# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
import mmcv
with open('/home/ubuntu/tp/annotations/tp_train.pkl', 'rb') as train:
    train_data = pickle.load(train)
len(train_data)
mmcv.dump(train_data[:500], '/home/ubuntu/tp/annotations/tp_train_small.pkl')
| nbs/explore_pickle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# <img src="https://upload.wikimedia.org/wikipedia/commons/4/47/Logo_UTFSM.png" width="200" alt="utfsm-logo" align="left"/>
#
# # MAT281
# ### Applications of Mathematics in Engineering
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# ## Module 04
# ## Class 06 Lab: Machine Learning Projects
# + [markdown] Collapsed="false"
# ### Instructions
#
#
# * Fill in your personal details (name and USM Rol) in the next cell.
# * The grading scale runs from 0 to 4, integer values only.
# * You must _push_ your changes to your personal course repository.
# * As a backup, you must send a .zip file named `mXX_cYY_lab_apellido_nombre.zip` to <EMAIL>; it must contain everything needed for each cell to run correctly: data, images, scripts, etc.
# * Grading covers:
#     - Solutions
#     - Code
#     - Binder being correctly configured.
#     - Pressing `Kernel -> Restart Kernel and Run All Cells` must run every cell without errors.
# * __The submission deadline is the end of this class.__
# + [markdown] Collapsed="false"
# __Name__: <NAME>
#
# __Rol__: 201610009-1
# + [markdown] Collapsed="false"
# ## GapMinder
# + Collapsed="false"
import pandas as pd
import altair as alt
from vega_datasets import data
alt.themes.enable('opaque')
# %matplotlib inline
# + Collapsed="false"
gapminder = data.gapminder_health_income()
gapminder.head()
# + [markdown] Collapsed="false"
# ### 1. Exploratory analysis (1 pt)
#
# At a minimum, run a `describe` on the dataframe and produce a suitable visualization: a _scatter matrix_ of the numeric columns.
# + Collapsed="false"
gapminder.describe()
# + Collapsed="false"
alt.Chart(gapminder).mark_circle(opacity=0.5).encode(
alt.X(alt.repeat("column"), type='quantitative'),
alt.Y(alt.repeat("row"), type='quantitative'),
color='country:N'
).properties(
width=150,
height=150
).repeat(
row=['income','health','population'],
column=['population','health','income']
)
# + [markdown] Collapsed="false"
# ### 2. Preprocessing (1 pt)
# + [markdown] Collapsed="false"
# Scale the data before applying our clustering algorithm. To do so, define the variable `X_raw` as a `numpy.array` holding the values of the `gapminder` dataframe for the columns _income_, _health_ and _population_. Then define the variable `X` as the scaled version of `X_raw`.
# + Collapsed="false"
from sklearn.preprocessing import StandardScaler
import numpy as np
# + Collapsed="false"
X_raw = np.array(gapminder[['income','health','population']])
X = StandardScaler().fit_transform(X_raw)
# + [markdown] Collapsed="false"
# ### 3. Clustering (1 pt)
# + Collapsed="false"
from sklearn.cluster import KMeans
# + [markdown] Collapsed="false"
# Define a `KMeans` _estimator_ with `k=3` and `random_state=42`, fit it with `X`, and add the resulting _labels_ as a new column of the `gapminder` dataframe named `cluster`. Finally, redo the plot from the beginning, this time colored by the obtained clusters.
#
#
# + Collapsed="false"
k = 3
kmeans = KMeans(n_clusters=k, random_state=42)
kmeans.fit(X)
clusters = kmeans.labels_
gapminder = gapminder.assign(cluster = clusters)
# + Collapsed="false"
alt.Chart(gapminder).mark_circle(opacity=0.5).encode(
alt.X(alt.repeat("column"), type='quantitative'),
alt.Y(alt.repeat("row"), type='quantitative'),
color='cluster:N'
).properties(
width=150,
height=150
).repeat(
row=['income','health','population'],
column=['population','health','income']
)
# + [markdown] Collapsed="true"
# ### 4. Elbow rule (1 pt)
# + [markdown] Collapsed="false"
# __How do we choose the best number of _clusters_?__
#
# In this exercise we used 3 clusters. The model fit always improves as the number of clusters grows, but that does not mean the number of clusters is appropriate. In fact, if we have $n$ points to fit, taking $n$ clusters clearly produces a perfect fit, yet it would not reveal whether real groupings exist in the data.
#
# When the number of clusters is not known a priori, the [elbow rule](https://jarroba.com/seleccion-del-numero-optimo-clusters/) is used: the most appropriate number is the one where the slope of the sum of each point's distance to its cluster "changes", as a function of the number of clusters.
#
# Below, the code is provided for the case of clustering on the standardized data, read directly from a specially prepared file.
# + Collapsed="false"
elbow = pd.Series(name="inertia", dtype=float).rename_axis(index="k")
for k in range(1, 10):
kmeans = KMeans(n_clusters=k, random_state=42).fit(X)
elbow.loc[k] = kmeans.inertia_ # Inertia: Sum of distances of samples to their closest cluster center
elbow = elbow.reset_index()
# + Collapsed="false"
alt.Chart(elbow).mark_line(point=True).encode(
x="k:O",
y="inertia:Q"
).properties(
height=600,
width=800
)
# + [markdown] Collapsed="false"
# __Question__
#
# Considering the data (countries) and the plot above, how many clusters would you choose?
# + [markdown] Collapsed="false"
# 3 or 4, since those are the points where the curve changes most abruptly. If a single value has to be chosen, it would be 4. The population plot alone would suggest that 3 is enough, but adding another focus within the main group yields the chosen value.
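A complementary check to the elbow rule is the silhouette score, which rewards tight, well-separated clusters (closer to 1 is better). This is a sketch on synthetic data standing in for the scaled `X`, not part of the lab's required solution:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the scaled feature matrix X
X_demo, _ = make_blobs(n_samples=200, centers=4, random_state=42)

# silhouette_score needs at least 2 clusters, so the sweep starts at k=2
for k in range(2, 7):
    labels = KMeans(n_clusters=k, random_state=42, n_init=10).fit_predict(X_demo)
    print(k, round(silhouette_score(X_demo, labels), 3))
```

The k with the highest silhouette is a reasonable cross-check against whatever the elbow suggests.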
| m04_machine_learning/m04_c06_ml_workflow/m04_c06_lab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
BASEDIR = "."
import pandas
data = pandas.read_csv(
    BASEDIR + '/multiple_soy_clusters.log',
    sep=r'\s+',  # delim_whitespace is deprecated in recent pandas
)
data2 = pandas.read_csv(
    BASEDIR + '/multiple_soy_clusters-1.log',
    sep=r'\s+',
)
# +
grouped = data.groupby(
['clustrs']
).agg(
{'elapsed_time':['mean','std']}
)
grouped.head()
# -
final = grouped.reset_index()
final.columns = ['clusters', 'mean', 'std']
final.head()
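The MultiIndex flattening above (aggregating with a dict, then overwriting `final.columns`) can be avoided with pandas named aggregation, which produces flat columns directly. A sketch on toy data standing in for the log file:

```python
import pandas as pd

# Toy stand-in for the whitespace-delimited log
toy = pd.DataFrame({
    "clustrs": [1, 1, 2, 2, 2],
    "elapsed_time": [1.0, 3.0, 2.0, 4.0, 6.0],
})

# Named aggregation: output column = (input column, aggregation function)
final = (
    toy.groupby("clustrs")
       .agg(mean=("elapsed_time", "mean"), std=("elapsed_time", "std"))
       .reset_index()
       .rename(columns={"clustrs": "clusters"})
)
print(final)
```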
# +
import numpy as np
from datetime import timedelta
import matplotlib.pyplot as plt
def create_sane_figure():
# You typically want your plot to be ~1.33x wider than tall.
# Common sizes: (10, 7.5) and (12, 9)
fig = plt.figure(figsize=(12, 6)) # a new figure window
ax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum)
# Put the axis behind the datapoints
ax.set_axisbelow(True)
    # Grey ticks, labels, and axes
for spine in ax.spines.values():
spine.set_color('black')
for line in ax.get_xticklines() + ax.get_yticklines():
line.set_color('black')
ax.set_xlabel(None, fontsize=16, color='black')
ax.set_ylabel(None, fontsize=16, color='black')
# Don't show a grid
ax.grid(False)
# remove top and right border of graph
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.tick_params(
axis='both', which='both',
labelsize=14, labelcolor='black',
color='black')
# Make the title big enough so it spans the entire plot, but don't make it
# so big that it requires two lines to show.
ax.set_title(None, fontsize=22, color='black')
return (fig, ax)
# +
# %matplotlib inline
def create_multiple_soy_clusters():
(fig, ax) = create_sane_figure()
ax.set_title("Creating multiple Spark-on-Hadoop topologies", fontsize=22)
ax.set_xlabel("number of clusters")
ax.set_ylabel("simulation time (s)")
ax.set_xticks(range(0, 101, 20))
ax.set_yticks(np.arange(0, 7, 1))
# # Now put the actual data in the plot
# ax.errorbar(
# final.clusters, final['mean'],
# yerr=final['std'],
# label="concurrency potential",
# linestyle='None',
# capsize=5,
# fmt='o',
# )
ax.plot(
data['clustrs'], data['elapsed_time'],'o',
)
# ax.plot(
# data2['clustrs'], data2['elapsed_time'],'o', color="black"
# )
# Save the plot to a file
# fig.savefig("multiple_soy_clusters.png", bbox_inches="tight")
fig.savefig("multiple_soy_clusters.pdf", bbox_inches="tight")
fig.show()
create_multiple_soy_clusters()
| results/Multiple SOY clusters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Final Capstone Project - Suggested Walkthrough:
#
# This is a suggested method for handling one of the Final Capstone Projects. We start by coding out the strictest requirements, and then build out from a working baseline model. Feel free to adapt this solution, and add features you think could help. Good luck!
#
#
# ## Bank Account Manager
#
# Under the Classes section in the list of suggested final capstone projects is a Bank Account Manager program. The goal is to create a class called Account which will be an abstract class for three other classes called CheckingAccount, SavingsAccount and BusinessAccount. Then you should manage credits and debits from these accounts through an ATM style program.
# ### Project Scope
# To tackle this project, first consider what has to happen.
# 1. There will be three different types of bank account (Checking, Savings, Business)
# 2. Each account will accept deposits and withdrawals, and will need to report balances
# ### Project Wishlist
# We might consider additional features, like:
# * impose a monthly maintenance fee
# * waive fees for minimum combined deposit balances
# * each account may have additional properties unique to that account:
# * Checking allows unlimited transactions, and may keep track of printed checks
# * Savings limits the number of withdrawals per period, and may earn interest
# * Business may impose transaction fees
# * automatically transfer the "change" for debit card purchases from Checking to Savings, <br>where "change" is the amount needed to raise a debit to the nearest whole dollar
# * permit savings autodraft overdraft protection
#
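The debit-card "change" feature in the wish list is just the round-up to the next whole dollar. A minimal sketch, using integer cents to sidestep float rounding (the helper name is ours, not part of the project spec):

```python
def change_to_savings(amount_cents: int) -> int:
    """Cents to transfer to savings: the round-up to the next whole dollar."""
    return (-amount_cents) % 100

print(change_to_savings(450))  # $4.50 debit -> 50 cents moved to savings
print(change_to_savings(700))  # whole-dollar debit -> nothing to move
```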
# ### Let's get started!
# #### Step 1: Establish an abstract Account class with features shared by all accounts.
# Note that abstract classes are never instantiated; they simply provide a base class with attributes and methods to be inherited by any derived class.
class Account:
# Define an __init__ constructor method with attributes shared by all accounts:
def __init__(self,acct_nbr,opening_deposit):
self.acct_nbr = acct_nbr
self.balance = opening_deposit
    # Define a __str__ method to return a recognizable string to any print() command
def __str__(self):
return f'${self.balance:.2f}'
# Define a universal method to accept deposits
def deposit(self,dep_amt):
self.balance += dep_amt
# Define a universal method to handle withdrawals
def withdraw(self,wd_amt):
if self.balance >= wd_amt:
self.balance -= wd_amt
else:
return 'Funds Unavailable'
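The walkthrough keeps Account as a plain base class, so nothing actually stops someone from instantiating it directly. If you want Python to enforce the "never instantiated" rule, the standard library's `abc` module can do it — a sketch of the same idea (the class names here are illustrative, not from the project):

```python
from abc import ABC, abstractmethod

class AbstractAccount(ABC):
    def __init__(self, acct_nbr, opening_deposit):
        self.acct_nbr = acct_nbr
        self.balance = opening_deposit

    @abstractmethod
    def __str__(self):
        """Subclasses must provide their own display string."""

class DemoChecking(AbstractAccount):
    def __str__(self):
        return f'Checking #{self.acct_nbr}: ${self.balance:.2f}'

print(DemoChecking(54321, 654.33))  # works: the abstract method is overridden
# AbstractAccount(54321, 654.33) would raise TypeError
```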
# #### Step 2: Establish a Checking Account class that inherits from Account, and adds Checking-specific traits.
class Checking(Account):
def __init__(self,acct_nbr,opening_deposit):
# Run the base class __init__
super().__init__(acct_nbr,opening_deposit)
# Define a __str__ method that returns a string specific to Checking accounts
def __str__(self):
return f'Checking Account #{self.acct_nbr}\n Balance: {Account.__str__(self)}'
# #### Step 3: TEST setting up a Checking Account object
x = Checking(54321,654.33)
print(x)
x.withdraw(1000)
x.withdraw(30)
x.balance
# #### Step 4: Set up similar Savings and Business account classes
# +
class Savings(Account):
def __init__(self,acct_nbr,opening_deposit):
# Run the base class __init__
super().__init__(acct_nbr,opening_deposit)
# Define a __str__ method that returns a string specific to Savings accounts
def __str__(self):
return f'Savings Account #{self.acct_nbr}\n Balance: {Account.__str__(self)}'
class Business(Account):
def __init__(self,acct_nbr,opening_deposit):
# Run the base class __init__
super().__init__(acct_nbr,opening_deposit)
# Define a __str__ method that returns a string specific to Business accounts
def __str__(self):
return f'Business Account #{self.acct_nbr}\n Balance: {Account.__str__(self)}'
# -
# **At this point** we've met the minimum requirement for the assignment. We have three different bank account classes. Each one can accept deposits, make withdrawals and report a balance, as they each inherit from an abstract Account base class.
#
# So now the fun part - let's add some features!
# #### Step 5: Create a Customer class
#
# For this next phase, let's set up a Customer class that holds a customer's name and PIN and can contain any number and/or combination of Account objects.
class Customer:
def __init__(self, name, PIN):
self.name = name
self.PIN = PIN
# Create a dictionary of accounts, with lists to hold multiple accounts
self.accts = {'C':[],'S':[],'B':[]}
def __str__(self):
return self.name
def open_checking(self,acct_nbr,opening_deposit):
self.accts['C'].append(Checking(acct_nbr,opening_deposit))
def open_savings(self,acct_nbr,opening_deposit):
self.accts['S'].append(Savings(acct_nbr,opening_deposit))
def open_business(self,acct_nbr,opening_deposit):
self.accts['B'].append(Business(acct_nbr,opening_deposit))
# rather than maintain a running total of deposit balances,
# write a method that computes a total as needed
def get_total_deposits(self):
total = 0
for acct in self.accts['C']:
print(acct)
total += acct.balance
for acct in self.accts['S']:
print(acct)
total += acct.balance
for acct in self.accts['B']:
print(acct)
total += acct.balance
print(f'Combined Deposits: ${total}')
# #### Step 6: TEST setting up a Customer, adding accounts, and checking balances
bob = Customer('Bob',1)
bob.open_checking(321,555.55)
bob.get_total_deposits()
bob.open_savings(564,444.66)
bob.get_total_deposits()
nancy = Customer('Nancy',2)
nancy.open_business(2018,8900)
nancy.get_total_deposits()
# **Wait!** Why don't Nancy's combined deposits show a decimal? <br>This is easily fixed in the class definition (mostly copied from above, with a change made to the last line of code):
class Customer:
def __init__(self, name, PIN):
self.name = name
self.PIN = PIN
self.accts = {'C':[],'S':[],'B':[]}
def __str__(self):
return self.name
def open_checking(self,acct_nbr,opening_deposit):
self.accts['C'].append(Checking(acct_nbr,opening_deposit))
def open_savings(self,acct_nbr,opening_deposit):
self.accts['S'].append(Savings(acct_nbr,opening_deposit))
def open_business(self,acct_nbr,opening_deposit):
self.accts['B'].append(Business(acct_nbr,opening_deposit))
def get_total_deposits(self):
total = 0
for acct in self.accts['C']:
print(acct)
total += acct.balance
for acct in self.accts['S']:
print(acct)
total += acct.balance
for acct in self.accts['B']:
print(acct)
total += acct.balance
print(f'Combined Deposits: ${total:.2f}') # added precision formatting here
# **So it's fixed, right?**
nancy.get_total_deposits()
# **Nope!** Changes made to the class definition do *not* affect objects created under different sets of instructions.<br>To fix Nancy's account, we have to build her record from scratch.
nancy = Customer('Nancy',2)
nancy.open_business(2018,8900)
nancy.get_total_deposits()
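This pitfall is easy to reproduce in isolation: rebinding a class name creates a brand-new class object, and instances built from the old one keep pointing at it. A tiny sketch:

```python
class Greeter:
    def hello(self):
        return "v1"

old = Greeter()           # instance of the first class object

class Greeter:            # rebinds the name; the first class object still exists
    def hello(self):
        return "v2"

print(old.hello())        # "v1" - old still uses the original definition
print(Greeter().hello())  # "v2" - new instances pick up the new class
```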
# #### This is why testing is so important!
# #### Step 7: Let's write some functions for making deposits and withdrawals.
#
# Be sure to include a docstring that explains what's expected by the function!
def make_dep(cust,acct_type,acct_num,dep_amt):
"""
make_dep(cust, acct_type, acct_num, dep_amt)
cust = variable name (Customer record/ID)
acct_type = string 'C' 'S' or 'B'
acct_num = integer
dep_amt = integer
"""
for acct in cust.accts[acct_type]:
if acct.acct_nbr == acct_num:
acct.deposit(dep_amt)
make_dep(nancy,'B',2018,67.45)
nancy.get_total_deposits()
def make_wd(cust,acct_type,acct_num,wd_amt):
"""
    make_wd(cust, acct_type, acct_num, wd_amt)
cust = variable name (Customer record/ID)
acct_type = string 'C' 'S' or 'B'
acct_num = integer
wd_amt = integer
"""
for acct in cust.accts[acct_type]:
if acct.acct_nbr == acct_num:
acct.withdraw(wd_amt)
make_wd(nancy,'B',2018,1000000)
nancy.get_total_deposits()
# **What happened??** We seemed to successfully make a withdrawal, but nothing changed!<br>This is because, at the very beginning, we had our Account class *return* the string 'Funds Unavailable' instead of print it. If we change that here, we'll have to also run the derived class definitions, and Nancy's creation, but *not* the Customer class definition. Watch:
class Account:
def __init__(self,acct_nbr,opening_deposit):
self.acct_nbr = acct_nbr
self.balance = opening_deposit
def __str__(self):
return f'${self.balance:.2f}'
def deposit(self,dep_amt):
self.balance += dep_amt
def withdraw(self,wd_amt):
if self.balance >= wd_amt:
self.balance -= wd_amt
else:
print('Funds Unavailable') # changed "return" to "print"
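Printing keeps the walkthrough simple, but a third option is to raise an exception so that calling code can decide how to react. A sketch (the exception class and helper are ours, not part of the walkthrough):

```python
class InsufficientFunds(Exception):
    """Raised when a withdrawal would overdraw the balance."""

def safe_withdraw(balance, wd_amt):
    # Hypothetical standalone helper illustrating the exception-based style
    if wd_amt > balance:
        raise InsufficientFunds(f"balance {balance:.2f} < withdrawal {wd_amt:.2f}")
    return balance - wd_amt

try:
    safe_withdraw(654.33, 1000)
except InsufficientFunds as e:
    print("Declined:", e)
```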
# +
class Checking(Account):
def __init__(self,acct_nbr,opening_deposit):
super().__init__(acct_nbr,opening_deposit)
def __str__(self):
return f'Checking Account #{self.acct_nbr}\n Balance: {Account.__str__(self)}'
class Savings(Account):
def __init__(self,acct_nbr,opening_deposit):
super().__init__(acct_nbr,opening_deposit)
def __str__(self):
return f'Savings Account #{self.acct_nbr}\n Balance: {Account.__str__(self)}'
class Business(Account):
def __init__(self,acct_nbr,opening_deposit):
super().__init__(acct_nbr,opening_deposit)
def __str__(self):
return f'Business Account #{self.acct_nbr}\n Balance: {Account.__str__(self)}'
# -
nancy = Customer('Nancy',2)
nancy.open_business(2018,8900)
nancy.get_total_deposits()
make_wd(nancy,'B',2018,1000000)
nancy.get_total_deposits()
# ## Good job!
| 12-Final Capstone Python Project/03-Final Capstone Suggested Walkthrough.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# Perform an EDA of the 'supermarket_sales_vn.csv' data:
# 1. What are the size and number of rows of the dataset?
# 2. Name, meaning, and data type of each field
# 3. Univariate (one continuous column: price, quantity, total, tax, cogs, rating):
# a. Descriptive statistics (min, max, range, mean, median, mode (if any), var,
# std, quantiles, 95% CI, ...)
# b. Plot the data distribution for each of the columns above
# 4. Univariate (categorical columns):
# a. Count orders by branch, gender, product category
# b. Draw bar, pie, or tree-map charts for the figures above
# 5. Multivariate (combining categorical and continuous columns):
# a. Repeat question 3 for each branch, gender, customer type
# b. Compare total order value by product category
# 6. Answer the following questions:
# a. Which time slot sells the most orders?
# b. Which time slot has the highest revenue?
# c. Top 5 best-selling items (by quantity)
# d. Top 5 items with the highest revenue
# 7. Correlation analysis:
# a. Between price and rating
# b. Between quantity and rating
# c. Between order total and rating
# 8. Why do we not look at the correlation between price and order total, tax and
# quantity, or quantity and cogs?
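Question 7 asks for correlations against rating; with scipy each pair is a single `pearsonr` call. A sketch on synthetic stand-in data (the real columns come from the `data` array built below):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
price = rng.uniform(10, 100, 200)   # stand-in for unit_price
rating = rng.uniform(4, 10, 200)    # stand-in for rating, independent by construction

r, p = stats.pearsonr(price, rating)
print(f"Pearson r = {r:.3f}, p-value = {p:.3f}")
```

With independent columns r should land near 0; a large |r| with a small p-value would indicate a real linear association.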
# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
np.set_printoptions(precision=3, suppress=True)
import squarify
# +
# Load data
import csv
def read_file(path):
with open(path, newline='', encoding='utf-8') as csv_file:
data_csv = csv.reader(csv_file, delimiter=',')
header = next(data_csv)
raw_data = np.array([row for row in data_csv])
return raw_data
path = "supermarket_sales_vn.csv"
raw_data = read_file(path)
# -
print(raw_data.shape)
print(raw_data.size)
print(raw_data.T[:,1])
data = raw_data[:,[5,6,8,7,12,13]].astype(float) # unit_price, quantity, total, tax, cogs, rating
print(data)
# Univariate analysis - continuous data
def occ(dat):
values, counts = np.unique(dat, return_counts=True)
return values, counts
def spread(dat):
min = np.min(dat)
max = np.max(dat)
ptp = np.ptp(dat)
var = np.var(dat)
std = np.std(dat)
return min, max, ptp, var, std
import statistics
def central(dat):
mean = np.mean(dat)
median = np.median(dat)
mode = statistics.mode(dat)
return mean, median, mode
def varb(dat):
quantile = np.quantile(dat, [0.25, 0.5, 0.75])
skew = stats.skew(dat)
kurtosis = stats.kurtosis(dat)
return quantile, skew, kurtosis
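Task 3a also asks for a 95% CI, which the helpers above don't compute; a common recipe uses the t distribution on the standard error of the mean (a sketch, assuming the sample mean is approximately normal):

```python
import numpy as np
from scipy import stats

def mean_ci95(dat):
    """95% t-based confidence interval for the mean of `dat`."""
    dat = np.asarray(dat, dtype=float)
    m = dat.mean()
    se = stats.sem(dat)  # standard error of the mean
    lo, hi = stats.t.interval(0.95, len(dat) - 1, loc=m, scale=se)
    return m, lo, hi

print(mean_ci95([52.0, 48.0, 50.0, 51.0, 49.0]))
```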
# Analysis of unit price
values1, counts1 = occ(data[:,0])
min1, max1, ptp1, var1, std1 = spread(data[:,0])
mean1, median1, mode1 = central(data[:,0])
quantile1, skew1, kurtosis1 = varb(data[:,0])
print('Mean unit price: {:.2f}'.format(mean1))  # use the sample mean, not the mean of unique values
print('Lowest unit price: {:.2f}'.format(min1))
print('Highest unit price: {:.2f}'.format(max1))
print('Lowest 25% of unit prices fall below: {:.2f}'.format(quantile1[0]))
print('Middle 50% of unit prices are around: {:.2f}'.format(quantile1[1]))
print('Highest 25% of unit prices fall above: {:.2f}'.format(quantile1[2]))
print('Median unit price: {:.2f}'.format(median1))
print('Mode of unit price: {:.2f}'.format(mode1))
# Plot
# +
fig,ax1 = plt.subplots(figsize=(10,4), num=1)
ax1.hist(data[:,0], bins=np.arange(100+1), alpha=0.75, width=0.8, color='red', edgecolor='black', linewidth=1.0)
xtick_labels = np.arange(9,100+1,3)
ax1.set_xticks(xtick_labels+0.4)
ax1.set_xticklabels(xtick_labels)
ax1.axvline(mean1, color='b', linestyle='dashed', linewidth=1.2)
ax1.text(mean1-7, 19, f"Mean\n{mean1:.2f}", color="k")
ax1.axvline(median1, color='g', linestyle='dashed', linewidth=1.2)
ax1.text(median1+2, 16, f"Median\n{median1:.2f}", color="k")
ax1.axvline(mode1, color='b', linestyle='dashed', linewidth=1.2)
ax1.text(mode1+1, 22, f"Mode\n{mode1:.2f}", color="k")
ax1.set_xlabel("Unit price")
ax1.set_title("Product unit price")
ax1 = plt.gca()
ax1.axes.yaxis.set_ticklabels([])
plt.show()
# -
# Analysis of order quantity
# +
values2, counts2 = occ(data[:,1].astype(np.int_))
min2, max2, ptp2, var2, std2 = spread(data[:,1].astype(np.int_))
mean2, median2, mode2 = central(data[:,1].astype(np.int_))
quantile2, skew2, kurtosis2 = varb(data[:,1].astype(np.int_))
print('Total quantity sold: ' + str(int(np.sum(data[:,1]))))  # sum over all rows, not unique values
print('Mean quantity per order: {:.2f}'.format(mean2))
print('Lowest order quantity: ' + str(min2))
print('Highest order quantity: ' + str(max2))
print('Lowest 25% of order quantities fall below: ' + str(quantile2[0]))
print('Middle 50% of order quantities are around: ' + str(quantile2[1]))
print('Highest 25% of order quantities fall above: ' + str(quantile2[2]))
print('Median order quantity: ' + str(median2))
print('Mode of order quantity: ' + str(mode2))
# +
fig,ax2 = plt.subplots(figsize=(6,4), num=1)
ax2.hist(data[:,1], bins=np.arange(10+1), alpha=0.75, width=0.8, color='yellow', edgecolor='black', linewidth=1.0)
xtick_labels = np.arange(1,10+1,1)
ax2.set_xticks(xtick_labels+0.4)
ax2.set_xticklabels(xtick_labels)
ax2.axvline(mean2, color='b', linestyle='dashed', linewidth=1.2)
ax2.text(mean2+.2, 150, f"Mean\n{mean2:.2f}", color="k")
ax2.axvline(median2, color='g', linestyle='dashed', linewidth=1.2)
ax2.text(median2-1.3, 125, f"Median\n{median2:.2f}", color="k")
ax2.axvline(mode2, color='b', linestyle='dashed', linewidth=1.2)
ax2.text(mode2+.1, 180, f"Mode\n{mode2}", color="k")
ax2.set_xlabel("Order quantity")
ax2.set_title("Order quantities")
ax2 = plt.gca()
ax2.axes.yaxis.set_ticklabels([])
plt.show()
# -
# Analysis of order total value
# +
values3, counts3 = occ(data[:,2])
min3, max3, ptp3, var3, std3 = spread(data[:,2])
mean3, median3, mode3 = central(data[:,2])
quantile3, skew3, kurtosis3 = varb(data[:,2])
print('Total value of all orders: {:.2f}'.format(np.sum(data[:,2])))  # sum over all rows, not unique values
print('Mean order value: {:.2f}'.format(mean3))
print('Lowest order value: {:.2f}'.format(min3))
print('Highest order value: {:.2f}'.format(max3))
print('Lowest 25% of order values fall below: {:.2f}'.format(quantile3[0]))
print('Middle 50% of order values are around: {:.2f}'.format(quantile3[1]))
print('Highest 25% of order values fall above: {:.2f}'.format(quantile3[2]))
print('Median order value: {:.2f}'.format(median3))
print('Mode of order value: {:.2f}'.format(mode3))
# +
fig,ax3 = plt.subplots(figsize=(10,4), num=1)
ax3.hist(data[:,2], bins=np.arange(10,1042+1,10), alpha=0.75, width=10, color='green', edgecolor='black', linewidth=1.0)
xtick_labels = np.arange(10,1042+1,50)
ax3.set_xticks(xtick_labels+0.5)
ax3.set_xticklabels(xtick_labels)
ax3.axvline(mean3, color='r', linestyle='dashed', linewidth=1.2)
ax3.text(mean3+10, 30, f"Mean\n{mean3:.2f}", color="k")
ax3.axvline(median3, color='purple', linestyle='dashed', linewidth=1.2)
ax3.text(median3-80, 26, f"Median\n{median3:.2f}", color="k")
ax3.axvline(mode3, color='b', linestyle='dashed', linewidth=1.2)
ax3.text(mode3+10, 35, f"Mode\n{mode3:.2f}", color="k")
ax3.set_xlabel("Order value")
ax3.set_title("Order total value")
ax3 = plt.gca()
ax3.axes.yaxis.set_ticklabels([])
plt.show()
# -
# Analysis of tax
# +
values_tax, counts_tax = occ(data[:,3])
min_tax, max_tax, ptp_tax, var_tax, std_tax = spread(data[:,3])
mean_tax, median_tax, mode_tax= central(data[:,3])
quantile_tax, skew_tax, kurtosis_tax = varb(data[:,3])
print('Total order tax (5%): {:.2f}'.format(np.sum(data[:,3])))  # sum over all rows, not unique values
print('Mean order tax: {:.2f}'.format(mean_tax))
print('Lowest order tax: {:.2f}'.format(min_tax))
print('Highest order tax: {:.2f}'.format(max_tax))
print('Lowest 25% of order taxes fall below: {:.2f}'.format(quantile_tax[0]))
print('Middle 50% of order taxes are around: {:.2f}'.format(quantile_tax[1]))
print('Highest 25% of order taxes fall above: {:.2f}'.format(quantile_tax[2]))
print('Median order tax: {:.2f}'.format(median_tax))
print('Mode of order tax: {:.2f}'.format(mode_tax))
# -
# Plot of order tax
# +
fig,ax_tax = plt.subplots(figsize=(10,4), num=1)
ax_tax.hist(data[:,3], bins=np.arange(0,max_tax+1,2), alpha=0.75, width=2, color='brown', edgecolor='black', linewidth=1.0)
xtick_labels = np.arange(0,max_tax+1,5)
ax_tax.set_xticks(xtick_labels+0.5)
ax_tax.set_xticklabels(xtick_labels)
ax_tax.axvline(mean_tax, color='r', linestyle='dashed', linewidth=1.2)
ax_tax.text(mean_tax+.5, 77, f"Mean\n{mean_tax:.2f}", color="k")
ax_tax.axvline(median_tax, color='k', linestyle='dashed', linewidth=1.2)
ax_tax.text(median_tax-4, 66, f"Median\n{median_tax:.2f}", color="k")
ax_tax.axvline(mode_tax, color='b', linestyle='dashed', linewidth=1.2)
ax_tax.text(mode_tax+.5, 89, f"Mode\n{mode_tax:.2f}", color="k")
ax_tax.set_xlabel("Order tax")
ax_tax.set_title("Total order tax")
ax_tax = plt.gca()
ax_tax.axes.yaxis.set_ticklabels([])
plt.show()
# -
# Analysis of cogs (cost of goods sold)
# +
values_cogs, counts_cogs = occ(data[:,4])
min_cogs, max_cogs, ptp_cogs, var_cogs, std_cogs = spread(data[:,4])
mean_cogs, median_cogs, mode_cogs= central(data[:,4])
quantile_cogs, skew_cogs, kurtosis_cogs = varb(data[:,4])
print('Total cogs of all orders: {:.2f}'.format(np.sum(data[:,4])))  # sum over all rows, not unique values
print('Mean cogs per order: {:.2f}'.format(mean_cogs))
print('Lowest order cogs: {:.2f}'.format(min_cogs))
print('Highest order cogs: {:.2f}'.format(max_cogs))
print('Lowest 25% of order cogs fall below: {:.2f}'.format(quantile_cogs[0]))
print('Middle 50% of order cogs are around: {:.2f}'.format(quantile_cogs[1]))
print('Highest 25% of order cogs fall above: {:.2f}'.format(quantile_cogs[2]))
print('Median order cogs: {:.2f}'.format(median_cogs))
print('Mode of order cogs: {:.2f}'.format(mode_cogs))
# -
# Distribution plot
# +
fig,ax_cogs = plt.subplots(figsize=(10,4), num=1)
ax_cogs.hist(data[:,4], bins=np.arange(10,993+1,10), alpha=0.75, width=10, color='purple', edgecolor='black', linewidth=1.0)
xtick_labels = np.arange(10,993+1,50)
ax_cogs.set_xticks(xtick_labels+0.5)
ax_cogs.set_xticklabels(xtick_labels)
ax_cogs.axvline(mean_cogs, color='r', linestyle='dashed', linewidth=1.2)
ax_cogs.text(mean_cogs+10, 30, f"Mean\n{mean_cogs:.2f}", color="k")
ax_cogs.axvline(median_cogs, color='k', linestyle='dashed', linewidth=1.2)
ax_cogs.text(median_cogs-80, 26, f"Median\n{median_cogs:.2f}", color="k")
ax_cogs.axvline(mode_cogs, color='b', linestyle='dashed', linewidth=1.2)
ax_cogs.text(mode_cogs+10, 35, f"Mode\n{mode_cogs:.2f}", color="k")
ax_cogs.set_xlabel("Order cogs")
ax_cogs.set_title("Total order cogs")
ax_cogs = plt.gca()
ax_cogs.axes.yaxis.set_ticklabels([])
plt.show()
# -
# Analysis of ratings
# +
values4, counts4 = occ(data[:,-1])
min4, max4, ptp4, var4, std4 = spread(data[:,-1])
mean4, median4, mode4 = central(data[:,-1])
quantile4, skew4, kurtosis4 = varb(data[:,-1])
print('Total number of ratings: ' + str(int(np.sum(counts4))))
print('Mean rating: {:.2f}'.format(mean4))  # use the sample mean, not the mean of unique values
print('Lowest rating: {:.2f}'.format(min4))
print('Highest rating: {:.2f}'.format(max4))
print('Lowest 25% of ratings fall below: {:.2f}'.format(quantile4[0]))
print('Middle 50% of ratings are around: {:.2f}'.format(quantile4[1]))
print('Highest 25% of ratings fall above: {:.2f}'.format(quantile4[2]))
print('Median rating: {:.2f}'.format(median4))
print('Mode of ratings: {:.2f}'.format(mode4))
# -
# Plot
# +
fig,ax4 = plt.subplots(figsize=(6,4), num=1)
ax4.hist(data[:,-1], bins=np.arange(max4+1), alpha=0.5, width=0.5,
         color='b', edgecolor='black', linewidth=1.0)  # plot the rating column, not quantity
xtick_labels = np.arange(1,max4+1,1)
ax4.set_xticks(xtick_labels+0.4)
ax4.set_xticklabels(xtick_labels)
ax4.axvline(mean4, color='b', linestyle='dashed', linewidth=1.2)
ax4.text(mean4-0.9, 120, f"Mean\n{mean4:.2f}", color="k")
ax4.axvline(median4, color='r', linestyle='dashed', linewidth=1.2)
ax4.text(median4-2.3, 145, f"Median\n{median4:.2f}", color="k")
ax4.axvline(mode4, color='g', linestyle='dashed', linewidth=1.2)
ax4.text(mode4+1.1, 180, f"Mode\n{mode4:.2f}", color="k")
ax4.set_xlabel("Rating Points")
ax4.set_title("Product ratings")
ax4.set_ylabel("Number of ratings")
plt.show()
# -
# Univariate analysis - categorical data
# a. Count orders by branch, gender, product category
# b. Draw bar, pie, or tree-map charts for the figures above
data2 = raw_data[:, [1,3,4]]
print(data2[:10])
# +
# city
values_cn, counts_cn = occ(data2[:,0])
freq_city = np.asarray((values_cn, counts_cn)).T
freq_city_sort = freq_city[np.argsort(freq_city[:, 1].astype(int))]  # sort counts numerically, not as strings
# gender
values_gd, counts_gd = occ(data2[:,1])
freq_gd = np.asarray((values_gd, counts_gd)).T
# print(freq_gd)
fig = plt.figure(figsize=(10,4), num=1)
# city plot
ax = fig.add_subplot(1,2,1)
ax.bar(x=freq_city_sort[:,0], height=freq_city_sort[:,1].astype(int),  # heights must follow the sorted order
       color='blue', alpha=.5, edgecolor='black', linewidth=.5)
ax.set_title('Orders per branch')
ax.set_xlabel('Branch')
ax.set_ylabel('Number of orders')
ax.set_xticks(freq_city_sort[:,0])
# gender plot
ax1 = fig.add_subplot(1,2,2)
ax1.pie(freq_gd[:,1], labels = freq_gd[:,0] , autopct = '%1.1f%%',shadow=True, startangle = 180)
ax1.axis('equal')
ax1.set_title("Orders by gender")
plt.tight_layout()
plt.show()
# +
# product categories
values_ct, counts_ct = occ(data2[:,2])
freq_ct = np.asarray((values_ct, counts_ct)).T
freq_ct_sort = freq_ct[np.argsort(freq_ct[:,1])]
# print(freq_ct_sort)
# categories: build "label \n count" strings for the tree-map
dict_ct = dict(zip(freq_ct[:, 0], freq_ct[:, 1]))
labels = ['{} \n {}'.format(k, v) for k, v in dict_ct.items()]
# print(labels)
color_list = ['#0f7216', '#b2790c','#f9d4d4', '#d35158', '#ea3033', '#0000ff']
fig = plt.figure(figsize=(10,4), num=1)
ax2 = fig.add_subplot(1,1,1)
squarify.plot(sizes = freq_ct[:,1].astype(float), label=labels, pad= True, color= color_list, alpha=.5)
ax2.set_title("The Product Categories")
ax2.axes.axis('off')
plt.tight_layout()
plt.show()
# -
# 5. Multivariate (combining categorical and continuous columns):
# a. Repeat question 3 for each branch, gender, and customer type
# b. Compare order totals by product line
# Cities
data3 = np.concatenate((data, data2), axis=1)
# print(data3[:1])
total_cogs_data = data3[:, [2,3,4]]
city_hn = total_cogs_data[np.where(np.any(data3 == 'Hà Nội', axis = 1))].astype(np.float32)
city_sg = total_cogs_data[np.where(np.any(data3 == 'TP HCM', axis = 1))].astype(np.float32)
city_dn = total_cogs_data[np.where(np.any(data3 == 'Đà Nẵng', axis = 1))].astype(np.float32)
print(city_hn[:2])
print(city_sg[:2])
print(city_dn[:2])
# Plotting cities' data associated with total, cogs, tax
# +
# total
to_plot_total = [city_hn[:,0], city_sg[:,0], city_dn[:,0]]
red_square = dict(markerfacecolor='r', marker='s')
fig = plt.figure(figsize =(8, 4), num=1)
ax = fig.add_subplot(111)
bp_total = ax.boxplot(to_plot_total, notch=True, flierprops=red_square, whis=.75, patch_artist=True,labels=['Ha Noi', 'Sai Gon', 'Da Nang'])
plt.show()
# +
# cogs
to_plot_cogs = [city_hn[:,1], city_sg[:,1], city_dn[:,1]]
yellow_diamond = dict(markerfacecolor='y', marker='D', markersize=5.0)
fig = plt.figure(figsize =(8, 6), num=1)
ax = fig.add_subplot(111)
bp_cogs = ax.boxplot(to_plot_cogs, flierprops=yellow_diamond, whis=.5,labels=['Ha Noi', 'Sai Gon', 'Da Nang'])
plt.show()
# +
# gender
female = data3[:, [0]][np.where(np.any(data3 == 'Nữ', axis = 1))].astype(np.float32).squeeze()
male = data3[:, [0]][np.where(np.any(data3 == 'Nam', axis = 1))].astype(np.float32).squeeze()
# print(female[:5])
# print(male[:5])
fig = plt.figure(figsize=(10,5), num=1)
ax = fig.add_subplot(111)
ax.hist(female, bins=50, facecolor='red', density=True, alpha=0.75)
ax.hist(male, bins=50, facecolor='blue', density=True, alpha=0.75)
ax.legend(["Nữ", "Nam"])  # legend order follows the plot order (female first)
ax.grid()
ax.set_title('Total and Gender')
plt.tight_layout()
plt.show()
# +
# Total, Categories
v, c = occ(data3[:,-1])
# print(v)
# print(c)
# print(data3[:1])
elec = data3[:, [0]][np.where(np.any(data3 == 'Electronic accessories', axis = 1))].astype(np.float32)
fash = data3[:, [0]][np.where(np.any(data3 == 'Fashion accessories', axis = 1))].astype(np.float32)
food = data3[:, [0]][np.where(np.any(data3 == 'Food and beverages', axis = 1))].astype(np.float32)
health = data3[:, [0]][np.where(np.any(data3 == 'Health and beauty', axis = 1))].astype(np.float32)
home = data3[:, [0]][np.where(np.any(data3 == 'Home and lifestyle', axis = 1))].astype(np.float32)
sports = data3[:, [0]][np.where(np.any(data3 == 'Sports and travel', axis = 1))].astype(np.float32)
# print(elec[:5])
# print(sports[:5])
# -
categories=['Electronic accessories','Fashion accessories','Food and beverages','Health and beauty','Home and lifestyle','Sports and travel']
to_plot_cate = [elec.squeeze(), fash.squeeze(), food.squeeze(), health.squeeze(), home.squeeze(), sports.squeeze()]
# print(to_plot_cate)
pink_round = dict(markerfacecolor='pink', marker='o')
fig = plt.figure(figsize =(10, 4), num=1)
ax = fig.add_subplot(111)
bp_cate = ax.boxplot(to_plot_cate, notch=True, flierprops=pink_round, whis=.75) #,labels=categories)
ax.set_xticklabels(categories)
ax.set_title("Total and Product lines")
plt.tight_layout()
plt.show()
# 6. Answer the following questions:
# a. Which time slot has the most orders
# b. Which time slot has the highest revenue
# c. Top 5 best-selling products (by quantity)
# d. Top 5 products by revenue
# +
# a. Time slot with the most orders
# print(raw_data.T[:,1])
qty_time_data = raw_data[:, [6, 10]]
# print(qty_time_data)
vals, count = occ(qty_time_data[:,0].astype(int))
# print(vals, count)
max_qty = vals[np.argmax(count)] # the most frequent quantity
# print(max_qty)
max_qty_idx = np.argwhere(qty_time_data == str(max_qty)) # (row, col) positions of matches
# print(max_qty_idx.shape)
max_qty_idx = max_qty_idx[:,0] # keep only the row indices
qty = qty_time_data[max_qty_idx, 1] # the times of those orders
# print(len(qty))
qty = set(qty)
# print(len(qty))
print('Time slot with the most orders: ', qty, sep='\n')
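# The cell above pulls out the times attached to the most frequent quantity value.
# A more direct reading of question (a) -- assuming the time column holds
# 'HH:MM'-style strings, which is an assumption worth checking against the raw
# data -- counts orders per hour slot:

```python
import numpy as np

def busiest_hour(times):
    # count orders per hour slot and return the hour with the most orders;
    # assumes each entry looks like 'HH:MM' (hypothetical format)
    hours = np.array([t.split(':')[0] for t in times])
    vals, counts = np.unique(hours, return_counts=True)
    return vals[np.argmax(counts)]
```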
# +
# b. Time slot with the highest revenue
# total_time_data = raw_data[:, [8, 10]]
# vals, count = occ(total_time_data[:,0].astype(float))
# max_total = vals[np.argmax(count)] # the most frequent total
# max_total_idx = np.argwhere(total_time_data == str(max_total)) # (row, col) positions
# max_total_idx = max_total_idx[:,0] # keep only the row indices
# total = total_time_data[max_total_idx, 1] # the times of those orders
# total = set(total)
# print('Time slot with the highest revenue: ', total, sep='\n')
# +
# c. Top 5 best-selling products (by quantity)
# prod_qty_data = raw_data[:, [4, 6]]
# vals, count = occ(prod_qty_data[:,1].astype(int))
# max_5prod_idx = vals[np.argsort(count)][::-1][:5] # the 5 most frequent quantities
# max_5prod_qty = prod_qty_data[:,0][np.where(max_5prod_idx)]
# # max_5prod_qty = set(max_5prod_qty)
# print('Top 5 best-selling products (by quantity): ', max_5prod_qty, sep='\n')
# -
# d. Top 5 products by revenue
# prod_total_data = raw_data[:, [4, 8]]
# top5prod_sort = np.argsort(prod_total_data[:,1].astype(float))
# top5prod_value = prod_total_data[:,1][top5prod_sort][::-1][:5]
# print(top5prod_value)
# top5prod_total = prod_total_data[:,0][np.where(top5prod_value)]
# print('Top 5 products by revenue: ', top5prod_total, sep='\n')
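# The commented sketches for (b) and (d) above never actually sum revenue per
# key. A hedged, working alternative groups a value column by key with
# `np.unique`; the column indices used in this section (e.g. product line in
# column 4, Total in column 8) are carried over as assumptions:

```python
import numpy as np

def top_k_by_sum(keys, values, k=5):
    # sum `values` per unique key, then return the k keys with the largest sums
    uniq, inv = np.unique(keys, return_inverse=True)
    sums = np.bincount(inv, weights=np.asarray(values, dtype=float))
    order = np.argsort(sums)[::-1][:k]
    return list(zip(uniq[order], sums[order]))
```

# e.g. top_k_by_sum(raw_data[:, 4], raw_data[:, 8]) would give the top-5
# product lines by revenue, under those column assumptions.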
| hw7/hw7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar
import matplotlib.font_manager as fm
import scipy.io as sio
from scipy.optimize import curve_fit
from cil.io import NEXUSDataReader
import os
from utils import cnr_spatial, K_edge_sub
# -
# **In this script we reproduce the main figures created for the accompanying paper, using the reconstructed lizard head data.**
#
# Note: Here we use .nxs files of the reconstructed datasets, produced using the additional scripts provided (`Lizard_Head_120s_60Proj_FDK_TVTGV.ipynb`). You will need to run that notebook first in order to create the reconstructed datasets used in creating the figures.
# First read in the .nxs data files using the `NEXUSDataReader`.
# +
reader = NEXUSDataReader(file_name = "HyperspectralData/Lizard_120s_60Proj_FDK.nxs")
lizard_fdk_recon = reader.load_data()
reader = NEXUSDataReader(file_name = "HyperspectralData/1000_iters_alpha_0.002_beta_0.25.nxs")
lizard_tv_tgv_recon = reader.load_data()
# Read Energy-Channel conversion
tmp_energy_channels = sio.loadmat("MatlabData/Energy_axis.mat")
ekeV = tmp_energy_channels['E_axis']
ekeV_crop = ekeV[0][59:159]
# -
# In the paper, we show all values in terms of attenuation value.
# Currently our reconstructed datasets are measured in terms of 'optical density'. We convert to attenuation by dividing by the voxel size. For this dataset, the voxel size is 137 $\mu$m.
# +
#%% Convert data from Optical density to attenuation
vox_size_um = 137
for i in range(lizard_tv_tgv_recon.shape[0]):
lizard_tv_tgv_recon.as_array()[i] = lizard_tv_tgv_recon.as_array()[i]/vox_size_um
lizard_fdk_recon.as_array()[i] = lizard_fdk_recon.as_array()[i]/vox_size_um
# -
# ## Figure 4
#
# Comparison of the FDK and TV-TGV reconstructed datasets, with qualitative and quantitative analysis.
# ### Figure 4a
#
# Reconstructed slices for the two reconstructed datasets, shown in two different image planes.
# +
from mpl_toolkits.axes_grid1 import AxesGrid
recons = [lizard_fdk_recon.as_array()[60,35,:,:], lizard_tv_tgv_recon.as_array()[60,35,:,:],
lizard_fdk_recon.as_array()[60,:,:,25], lizard_tv_tgv_recon.as_array()[60,:,:,25]]
labels_text = ["FDK", "TV-TGV"]
plt.rcParams['xtick.labelsize']=15
plt.rcParams['ytick.labelsize']=15
fig = plt.figure(figsize=(9, 10))
grid = AxesGrid(fig, 111,
nrows_ncols=(2, 2),
axes_pad=0.05,
cbar_mode='single',
cbar_location='right',
cbar_size = 0.5,
cbar_pad=0.1
)
fontprops = fm.FontProperties(size=15)
k = 0
for ax in grid:
scalebar = AnchoredSizeBar(ax.transData,
13.33, '2 mm', 'lower left',
pad=0.5,
color='white',
frameon=False,
size_vertical=2,
fontproperties=fontprops)
im = ax.imshow(recons[k], cmap="inferno", vmin = 0.0, vmax = 0.0035)
if k==0:
ax.set_title(labels_text[0],fontsize=30)
ax.add_artist(scalebar)
if k==1:
ax.set_title(labels_text[1],fontsize=30)
rect1 = patches.Rectangle((54,34),2,2,linewidth=1,edgecolor='b',facecolor='b')
ax.add_patch(rect1)
if k==2:
ax.add_artist(scalebar)
if k==3:
rect = patches.Rectangle((40,10),2,2,linewidth=1,edgecolor='b',facecolor='b')
ax.add_patch(rect)
rect2 = patches.Rectangle((35,6),4,4,linewidth=1,edgecolor='r',facecolor='r')
ax.add_patch(rect2)
rect3 = patches.Rectangle((62,20),4,4,linewidth=1,edgecolor='w',facecolor='w')
ax.add_patch(rect3)
ax.set_xticks([])
ax.set_yticks([])
k+=1
cbar = grid.cbar_axes[0].colorbar(im,ticks=[0.0,0.0010,0.0020,0.0030])
# -
# ### Figure 4b
#
# Spectral plots for two ROIs in the lizard head, corresponding to the Lens of the eye, and a section of the Jaw muscle.
# +
# Average over the ROIs for each soft tissue region, for each reconstructed dataset
# Lens
avg_y_x_FDK_lens = lizard_fdk_recon.as_array()[:,35,36:38,54:56]
avg_ROI_FDK_lens = np.mean(np.mean(avg_y_x_FDK_lens,axis=1),axis=1)
avg_y_x_TGV_lens = lizard_tv_tgv_recon.as_array()[:,35,36:38,54:56]
avg_ROI_TGV_lens = np.mean(np.mean(avg_y_x_TGV_lens,axis=1),axis=1)
# Jaw
avg_y_x_FDK_jaw = lizard_fdk_recon.as_array()[:,10:12,40:42,25]
avg_ROI_FDK_jaw = np.mean(np.mean(avg_y_x_FDK_jaw,axis=1),axis=1)
avg_y_x_TGV_jaw = lizard_tv_tgv_recon.as_array()[:,10:12,40:42,25]
avg_ROI_TGV_jaw = np.mean(np.mean(avg_y_x_TGV_jaw,axis=1),axis=1)
# Plot result
plt.figure(figsize=(12,8))
plt.plot(ekeV_crop,avg_ROI_FDK_jaw,label='FDK - Jaw',linestyle=':')
plt.plot(ekeV_crop,avg_ROI_FDK_lens,label='FDK - Lens',linestyle='-')
plt.plot(ekeV_crop,avg_ROI_TGV_jaw,label='TV-TGV - Jaw',linestyle='--')
plt.plot(ekeV_crop,avg_ROI_TGV_lens,label='TV-TGV - Lens',linestyle='-.')
plt.axvline(x=33.169, color = 'black', linestyle = "--")
plt.text(32.3, 0.003, "I K-edge", rotation=90, fontsize=15, color = "black")
plt.ylim(0.0,0.004)
plt.xlabel('Energy (keV)', fontsize=20)
plt.ylabel(r'Attenuation ($\mu$m$^{-1}$)', fontsize=20)
plt.legend(fontsize=20, loc='upper left')
# +
# Values for Jaw ROI
# FDK
avg_y_x_FDK_jaw = lizard_fdk_recon.as_array()[:,6:11,35:40,25]
avg_ROI_FDK_jaw = np.mean(np.mean(avg_y_x_FDK_jaw,axis=1),axis=1)
std_ROI_FDK_jaw = np.std(np.std(avg_y_x_FDK_jaw,axis=1),axis=1)
mean_val_FDK_jaw = np.mean(std_ROI_FDK_jaw)
std_ROI_FDK_jaw = np.ones(100)*mean_val_FDK_jaw
# TV-TGV
avg_y_x_TGV_jaw = lizard_tv_tgv_recon.as_array()[:,6:11,35:40,25]
avg_ROI_TGV_jaw = np.mean(np.mean(avg_y_x_TGV_jaw,axis=1),axis=1)
std_ROI_TGV_jaw = np.std(np.std(avg_y_x_TGV_jaw,axis=1),axis=1)
mean_val_TGV_jaw = np.mean(std_ROI_TGV_jaw)
std_ROI_TGV_jaw = np.ones(100)*mean_val_TGV_jaw
# Values for background ROI
# FDK
avg_y_x_FDK_bg = lizard_fdk_recon.as_array()[:,20:25,62:67,25]
avg_ROI_FDK_bg = np.mean(np.mean(avg_y_x_FDK_bg,axis=1),axis=1)
std_ROI_FDK_bg = np.std(np.std(avg_y_x_FDK_bg,axis=1),axis=1)
mean_val_FDK_bg = np.mean(std_ROI_FDK_bg)
std_ROI_FDK_bg = np.ones(100)*mean_val_FDK_bg
# TV-TGV
avg_y_x_TGV_bg = lizard_tv_tgv_recon.as_array()[:,20:25,62:67,25]
avg_ROI_TGV_bg = np.mean(np.mean(avg_y_x_TGV_bg,axis=1),axis=1)
std_ROI_TGV_bg = np.std(np.std(avg_y_x_TGV_bg,axis=1),axis=1)
mean_val_TGV_bg = np.mean(std_ROI_TGV_bg)
std_ROI_TGV_bg = np.ones(100)*mean_val_TGV_bg
# -
# Calculate CNR values using cnr_spatial function
cnr_FDK = cnr_spatial(avg_ROI_FDK_jaw,std_ROI_FDK_jaw,avg_ROI_FDK_bg,std_ROI_FDK_bg)
cnr_TGV = cnr_spatial(avg_ROI_TGV_jaw,std_ROI_TGV_jaw,avg_ROI_TGV_bg,std_ROI_TGV_bg)
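# `cnr_spatial` lives in the accompanying `utils.py`. As a hedge, a common
# definition of spatial contrast-to-noise ratio is sketched below -- this is an
# assumption about the formula, not necessarily the paper's exact implementation:

```python
import numpy as np

def cnr_spatial_sketch(mean_roi, std_roi, mean_bg, std_bg):
    # CNR = |mu_roi - mu_bg| / sqrt(sigma_roi^2 + sigma_bg^2), channel-wise
    return np.abs(mean_roi - mean_bg) / np.sqrt(std_roi**2 + std_bg**2)
```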
# +
# Plot results for each reconstructed dataset
plt.figure(figsize=(12,8))
plt.plot(ekeV_crop,cnr_FDK,'s',markersize=6,label = 'FDK - Jaw',color='C0')
plt.plot(ekeV_crop,cnr_TGV,'d',markersize=6,label = 'TV-TGV - Jaw',color='C2')
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('Energy (keV)',fontsize=20)
plt.ylabel('CNR',fontsize=20)
plt.legend(fontsize=20, loc='upper left')
# -
# ## Figure 5
#
# Figure 5a was produced using the step-size spectral analysis illustrated in Fig. 5b. The TomViz software was used to produce the images, so here we only show the method by which Fig. 5b was created.
# ### Figure 5b
#
# Example of step size analysis across the spectral range for an ROI in the jaw adductor muscle for both the FDK and TV-TGV reconstructed datasets.
# For this we require the energy-channel conversion parameters. These can be calculated from the `Energy_axis.mat` file directly, but for ease we have included the linear conversion values below, such that:
#
# Energy (keV) = Channel number x 0.2774 + 0.6619
#
# We must also adjust for the fact that we started our reduced channel subset from channel 60 [59-159], so we shift the calculated channel positions accordingly.
#
# Finally we must know the channel number corresponding to the known position of the K-edge, in this case for iodine (33.169 keV).
# +
# Define limits of linear regions either side of absorption edge
lin_reg = [21,30,35,44]
lin_reg = [float(x) for x in lin_reg]
def func_lin(x, a, b):
return (a * x) + b
# Define channel equivalents
# Linear conversion intercept and gradient
gradient = 0.2774
intercept = 0.6619
start_channel = 59
# Define position of known K-edges, convert to channel
edge = 33.169
edge_channel = int(((edge - intercept)/gradient) - start_channel)
lower1_channel = int(((lin_reg[0]-intercept)/gradient)-start_channel)
lower2_channel = int(((lin_reg[1]-intercept)/gradient)-start_channel)
upper1_channel = int(((lin_reg[2]-intercept)/gradient)-start_channel)
upper2_channel = int(((lin_reg[3]-intercept)/gradient)-start_channel)
regions = [lower1_channel,lower2_channel,upper1_channel,upper2_channel]
# +
# Analysis of Jaw
# Calculate average value across ROI
# TV TGV
avg_y_x_TGV_jaw = lizard_tv_tgv_recon.as_array()[:,10,40,24:27]
avg_ROI_TGV_jaw = np.mean(avg_y_x_TGV_jaw,axis=1)
PDHG_pixel = avg_ROI_TGV_jaw
# FDK
avg_y_x_180_jaw = lizard_fdk_recon.as_array()[:,10,40,24:27]
avg_ROI_180_jaw = np.mean(avg_y_x_180_jaw,axis=1)
FDK_pixel = avg_ROI_180_jaw
# Calculate data interpolation between defined channel regions
# TV-TGV
[popt1,pcov1] = curve_fit(func_lin, ekeV_crop[regions[0]:regions[1]], PDHG_pixel[regions[0]:regions[1]])
[popt2,pcov2] = curve_fit(func_lin, ekeV_crop[regions[2]:regions[3]], PDHG_pixel[regions[2]:regions[3]])
# FDK
[popt3,pcov3] = curve_fit(func_lin, ekeV_crop[regions[0]:regions[1]], FDK_pixel[regions[0]:regions[1]])
[popt4,pcov4] = curve_fit(func_lin, ekeV_crop[regions[2]:regions[3]], FDK_pixel[regions[2]:regions[3]])
# Calculate distance between known edge position and linear regions either side
shift1 = edge_channel - regions[1]
shift2 = regions[2] - edge_channel
# +
# Plot both FDK and TV-TGV fits
# Plot full FDK and TV-TGV data over spectral range
plt.figure(figsize=(10,8))
plt.plot(ekeV_crop,avg_ROI_TGV_jaw, label='TV-TGV')
plt.plot(ekeV_crop,avg_ROI_180_jaw, label='FDK',ls=':')
# Calculate linear fits and plot fits on top
# TV-TGV fit
lower_att = func_lin(ekeV_crop[regions[1]+shift1],*popt1)
upper_att = func_lin(ekeV_crop[regions[2]-shift2],*popt2)
plt.plot(ekeV_crop[regions[0]:regions[1]+shift1],
func_lin(ekeV_crop[regions[0]:regions[1]+shift1],*popt1),'k-', label='TV-TGV - fit')
plt.plot(ekeV_crop[regions[2]-shift2:regions[3]],
func_lin(ekeV_crop[regions[2]-shift2:regions[3]],*popt2),'k-')
# FDK fit
lower_att2 = func_lin(ekeV_crop[regions[1]+shift1],*popt3)
upper_att2 = func_lin(ekeV_crop[regions[2]-shift2],*popt4)
plt.plot(ekeV_crop[regions[0]:regions[1]+shift1],
func_lin(ekeV_crop[regions[0]:regions[1]+shift1],*popt3),'r--', label='FDK - fit')
plt.plot(ekeV_crop[regions[2]-shift2:regions[3]],
func_lin(ekeV_crop[regions[2]-shift2:regions[3]],*popt4),'r--')
# Add vertical arrow indicating how step size is measured
plt.arrow(33.169,lower_att,0,upper_att-lower_att,head_width=0.2,head_length=0.0001,length_includes_head=True,ls= "-",color='black')
plt.arrow(33.169,upper_att,0,lower_att-upper_att,head_width=0.2,head_length=0.0001,length_includes_head=True,ls= "-",color='black')
plt.text(33.3, 0.0008, r'$\Delta\mu_0$', fontsize=20, color = "black")
plt.xlim(20.0, 45.0)
plt.ylim(0.0, 0.0015)
plt.xlabel('Energy (keV)', fontsize=20)
plt.ylabel(r'Attenuation ($\mu$m$^{-1}$)', fontsize=20)
plt.legend(fontsize=20, loc='upper left')
# -
# ## Figure 6
#
# Figure 6 hyperspectral results were produced using the K-edge subtraction method, with final images created using the TomViz software. Here we simply show the process by which the K-edge subtraction method was applied to the data, before transferring over to TomViz.
# The `K_edge_sub` function in the `utils.py` file describes how the function works in more detail, but simply put, the function isolates the data corresponding to the absorption edge, such that we can segment out the chemical element causing the spectral marker, in this case iodine.
# Two parameters, known as `Width` and `Separation`, are needed to define the isolation region.
#
# We can also store the resulting data in two parts:
# - The isolated region belonging to the spectral marker
# - The remaining material which should contain none of the chemical element identified.
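# A minimal sketch of what a K-edge subtraction of this kind might do. The
# channel windowing below is an assumption for illustration; the real
# `K_edge_sub` in `utils.py` is more detailed:

```python
import numpy as np

def k_edge_sub_sketch(volume, edge_channel, width, sep):
    # `volume` has energy channels along axis 0; average `width` channels on
    # either side of the edge, offset by `sep` channels away from it
    below = volume[edge_channel - sep - width:edge_channel - sep].mean(axis=0)
    above = volume[edge_channel + sep:edge_channel + sep + width].mean(axis=0)
    subtracted = above - below  # signal attributable to the K-edge element
    remaining = below           # material without the edge contribution
    return subtracted, remaining
```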
# +
# Apply K-edge subtraction for each known K-edge in the sample (in this case one - Iodine at 33.169 keV)
# Recommended Width = 5, Sep = 2
KEdgeSubtracted_FDK = []
KEdgeRemaining_FDK = []
KEdgeSubtracted_TVTGV = []
KEdgeRemaining_TVTGV = []
width = 5
sep = 2
# FDK
print('FDK\n')
results_fdk = K_edge_sub(lizard_fdk_recon, edge_channel, width, sep)
KEdgeSubtracted_FDK.append(results_fdk[0])
KEdgeRemaining_FDK.append(results_fdk[1])
# TV-TGV
print('\nTV-TGV\n')
results_tvtgv = K_edge_sub(lizard_tv_tgv_recon, edge_channel, width, sep)
KEdgeSubtracted_TVTGV.append(results_tvtgv[0])
KEdgeRemaining_TVTGV.append(results_tvtgv[1])
| Python_Scripts/Lizard_Head/Reproduce_Figures_Lizard_Head.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Model
# Import libraries
import sys
import io
import re
import numpy as np
import pandas as pd
import nltk
from nltk.corpus import stopwords
# Read in the labeled data
data_labeled = pd.read_pickle('data_modified/tweets_labeled.pkl')
data_labeled.head()
# Define functions to remove stopwords & further clean the data
nltk.download('stopwords')
stopword_set = set(stopwords.words("german"))
def preprocess(raw_text):
res = raw_text
res = res.replace('ä', 'ae')
res = res.replace('ö', 'oe')
res = res.replace('ü', 'ue')
res = res.replace('Ä', 'Ae')
res = res.replace('Ö', 'Oe')
res = res.replace('Ü', 'Ue')
res = res.replace('ß', 'ss')
return " ".join([i for i in re.sub(r'[^a-zA-Z\s]', "", res).lower().split() if i not in stopword_set])
# Clean the data and define the X and y variables
texts=[]
for txt in data_labeled['Text']:
txt=preprocess(txt)
texts.append(txt)
labels = data_labeled['Label'].values
# +
# Create the TF-IDF document-term matrix
from sklearn.feature_extraction.text import CountVectorizer
count_vect_x = CountVectorizer(min_df=1)
X_counts= count_vect_x.fit_transform(texts)
X_counts.shape
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_tfidf = tfidf_transformer.fit_transform(X_counts)
X_tfidf
# -
# Define the train and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_tfidf, labels, test_size=0.5, random_state=14)
# +
# Run logistic regression
from sklearn import linear_model
model1 = linear_model.LogisticRegression()
model1.fit(X_train, y_train)
# -
# Evaluate the prediction accuracy
print("Accuracy: {}".format(model1.score(X_test, y_test)))
| 5_LinearModel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Elements Of Functional Programming
# Most of Python’s purely functional built-in functions return a list rather than an iterable in Python 2, making them less memory efficient than their Python 3.x equivalents. If you’re stuck with Python 2, check the `itertools` module. It provides an iterator-based version of many of these functions – `itertools.izip()`, `itertools.imap()`, `itertools.ifilter()`, and so on.
# ## `map()`
squares = map(lambda num: num * num, [0, 1, 2, 3, 4])
squares
list(squares)
# equivalent using list comprehensions
squares = [num * num for num in [0, 1, 2, 3, 4]]
squares
# ## `filter()`
hwords = filter(lambda word: word.startswith('H'), ['Hello', 'World'])
list(hwords)
# equivalent using list comprehensions
hwords = [word for word in ['Hello', 'World'] if word.startswith('H')]
hwords
# ## `enumerate()`
#
# This function is useful when you need to write code that refers to array indexes.
# +
words = ['I', 'like', 'Python']
for i, word in enumerate(words):
print(f'Word #{i}: "{word}"')
# -
# ## `sorted()`
items = [('a', 2), ('c', 1), ('b', 4)]
sorted(items)
sorted(items, key=lambda t: t[1])
# ## `any()` and `all()`
# +
nums = [0, 1, 3, -1]
if any(map(lambda n: n < 0, nums)):
print('At least one item is a negative number')
# -
if all(map(lambda n: n > 0, nums)):
print('All items are positive numbers')
# ## `zip()`
#
# It's useful when you need to combine a list of keys and a list of values into a dict.
keys = ['foo', 'bar']
map(len, keys)
zip(keys, map(len, keys))
list(zip(keys, map(len, keys)))
dict(zip(keys, map(len, keys)))
# ## Finding an item using `first()`
#
# `first` is a small Python package with a simple function that returns the first true value from an iterable, or None if there is none.
# +
from first import first
first([0, False, None, [], (), 42])
# -
nums = [-1, 0, 1, 2]
first(nums)
first(nums, key=lambda n: n > 0)
| 16_elements_of_functional_programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="EkNBVeiTcwSF"
# ## Transfer Learning ResNet50
# + [markdown] id="FI9mhQBzcwSU"
# Please download the dataset from the URL below
# + id="_4SJ_zkmcwSa"
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
# + id="CIKyb110cwSh"
# import the libraries as shown below
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
#import matplotlib.pyplot as plt
# + id="YOkXAtXHcwSm"
# re-size all the images to this
IMAGE_SIZE = [224, 224]
train_path = 'Datasets/train'
valid_path = 'Datasets/test'
# + id="ZwiwHDSAcwSq"
# Import the ResNet50 model as shown below and add a preprocessing layer to the front
# Here we will be using imagenet weights
resnet = ResNet50(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
# + id="42JKO29bcwSv"
# don't train existing weights
for layer in resnet.layers:
layer.trainable = False
# + id="3c46ae4KcwS1"
# useful for getting number of output classes
folders = glob('Datasets/train/*')
# + id="4Cna4cWVcwS6"
# our layers - you can add more if you want
x = Flatten()(resnet.output)
# + id="5TFIuKzkcwS-"
prediction = Dense(len(folders), activation='softmax')(x)
# create a model object
model = Model(inputs=resnet.input, outputs=prediction)
# + id="N8qMPaqScwTC" outputId="065c674b-4785-411c-cc41-c8b8c8a6f99e"
# view the structure of the model
model.summary()
# + id="rNf6Vk3fcwTI"
# tell the model what cost and optimization method to use
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
# + id="fDmNVOq1cwTK"
# Use the Image Data Generator to import the images from the dataset
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
# + id="Eis13ZYpcwTL" outputId="1f49780a-06a9-44dc-8fae-2c2d350c7a4c"
# Make sure you provide the same target size as initialized for the image size
training_set = train_datagen.flow_from_directory('Datasets/train',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical')
# + id="4blajZNXcwTO" outputId="3469177d-8636-45cf-c9e2-0bee0edfc0c8"
test_set = test_datagen.flow_from_directory('Datasets/test',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical')
# + id="KDBd5ePPcwTQ" outputId="860dfc29-2b6b-4d98-84f4-e50f1ddd6014"
# fit the model (Model.fit accepts generators; fit_generator is deprecated)
# Run the cell. It will take some time to execute
r = model.fit(
training_set,
validation_data=test_set,
epochs=20,
steps_per_epoch=len(training_set),
validation_steps=len(test_set)
)
# + id="329Rw6awcwTT"
import matplotlib.pyplot as plt
# + id="hGul--7AcwTV" outputId="dd1a2807-88be-467a-fd34-1213361deff3"
# plot the loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')  # save before show() so the figure isn't blank
plt.show()
# plot the accuracy
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.savefig('AccVal_acc')
plt.show()
# + id="l-raO4TVcwTX"
# save it as a h5 file
from tensorflow.keras.models import load_model
model.save('model_resnet50.h5')
# + id="bHqz2WMFcwTZ"
y_pred = model.predict(test_set)
# + id="mcrA8Ou-cwTa" outputId="a4de005b-e17c-4844-e5d3-4ab7ecc83f23"
y_pred
# + id="Qi4GFe20cwTc"
import numpy as np
y_pred = np.argmax(y_pred, axis=1)
# + id="x-2gr8nJcwTd" outputId="bc7382eb-f999-44ad-e0ce-28c5f1b001a9"
y_pred
# + id="XOneLLX_cwTf"
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
# + id="e4aXTrSTcwTg"
model=load_model('model_resnet50.h5')
# + id="KpYS7WZUcwTi"
img=image.load_img('Datasets/Test/Coffee/download (2).jpg',target_size=(224,224))
# + id="XvRenJdWcwTj" outputId="60aca1a3-d00c-4e43-c359-ebb2773e63f6"
x=image.img_to_array(img)
x
# + id="sjtln_2ccwTk" outputId="aff74d50-41e9-4dd6-935b-9d46e3c89529"
x.shape
# + id="Tpo0kP6xcwTl"
x=x/255
# + id="Y8HeWFT9cwTm" outputId="f11024d2-474a-4eff-dc96-35ae20f40c92"
import numpy as np
x=np.expand_dims(x,axis=0)
img_data=preprocess_input(x)
img_data.shape
# + id="IYc2g4g5cwTn" outputId="8df6edc5-7be1-4a1e-9988-804951a93078"
model.predict(img_data)
# + id="CKEHy-LkcwTo"
a=np.argmax(model.predict(img_data), axis=1)
# + id="cp2r4LN-cwTp" outputId="cb672d70-5158-445a-d362-d568576316ff"
a==1
# + id="jmEGtY9dcwTq"
import tensorflow as tf
# + id="C0W4HhLdcwTr" outputId="cfe9dd51-4c78-4769-b904-3421170a306b"
tf.__version__
| Transfer_Learning_Resnet_50.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:segregation]
# language: python
# name: conda-env-segregation-py
# ---
# # Single-group Segregation Indices
# %load_ext watermark
# %watermark -a 'eli knaap' -v -d -u -p segregation,geopandas,libpysal,pandana
# Single-group indices are calculated using the `singlegroup` module
# ### Data Prep
import geopandas as gpd
import matplotlib.pyplot as plt
from libpysal.examples import load_example
# read in sacramento data from libpysal and reproject into an appropriate CRS
sacramento = gpd.read_file(load_example("Sacramento1").get_path("sacramentot2.shp"))
sacramento = sacramento.to_crs(sacramento.estimate_utm_crs())
sacramento.head()
sacramento.plot('BLACK')
# ## Aspatial Segregation Indices
# To compute an aspatial segregation index, pass a dataframe, a group population variable, and total population variable to the index's class
from segregation.singlegroup import Dissim
dissim = Dissim(sacramento, group_pop_var='BLACK',
total_pop_var='TOT_POP')
# The `statistic` attribute holds the value of the segregation index, and the `data` attribute holds the data used to calculate the index
dissim.statistic
dissim.data.head()
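# For intuition, the classic dissimilarity index can be written in a few lines
# of numpy -- a sketch of the formula, not the package's implementation:

```python
import numpy as np

def dissim_sketch(group, total):
    # D = 0.5 * sum_i | g_i / G - (t_i - g_i) / (T - G) |
    group = np.asarray(group, dtype=float)
    other = np.asarray(total, dtype=float) - group
    return 0.5 * np.abs(group / group.sum() - other / other.sum()).sum()
```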
# ## Spatial Segregation Indices
# For calculating spatial segregation indices, the package implements two classes of indices: spatially-explicit and spatially-implicit.
#
# Spatially-explicit indices are those for which space was a formal consideration in the index's original formulation, whereas spatially-implicit indices are developed using the logic of [Reardon and O'Sullivan](http://doi.wiley.com/10.1111/j.0081-1750.2004.00150.x).
#
# For the latter,(otherwise called *generalized* spatial segregation indices) the package can incorporate spatial relationships represented by either a [`libpysal.W`](https://pysal.org/libpysal/api.html) weights object or a [`pandana.Network`](http://udst.github.io/pandana/network.html) network object, which means generalized spatial segregation indices can be computed according to many different spatial relationships which could include contiguity, distance, or network connectivity. This flexibility is particularly useful for specifying appropriate "neighborhood" definitions for different types of input data (which could be, e.g. housing units, census tracts, or counties)
# For spatially-explicit indices, they can be called like any other, though some may have additional arguments:
from segregation.singlegroup import AbsoluteCentralization, Gini
cent = AbsoluteCentralization(sacramento, group_pop_var='BLACK',
total_pop_var='TOT_POP')
cent.statistic
# ### Euclidean distance-based measures
# For generalized spatial indices, a `distance` parameter can be passed to the index of choice. Under the hood, the input data will be passed through a kernel function with the distance parameter as the kernel bandwidth.
#
# (note in this case because the CRS of the sacramento dataframe is UTM, the units are in meters)
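# What "passed through a kernel function" means, roughly: each unit's population
# becomes a distance-weighted sum of its neighbours' populations. A toy sketch
# with a triangular kernel (the package's actual kernel and weighting may differ):

```python
import numpy as np

def kernel_smooth(pop, coords, bandwidth):
    # pairwise euclidean distances between unit centroids
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # triangular kernel: weight falls linearly to zero at `bandwidth`
    w = np.clip(1.0 - d / bandwidth, 0.0, None)
    return w @ pop
```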
# aspatial gini index
aspatial_gini = Gini(sacramento, group_pop_var='BLACK',
total_pop_var='TOT_POP')
# generalized spatial gini index
gen_spatialgini = Gini(sacramento, group_pop_var='BLACK',
total_pop_var='TOT_POP', distance=2000)
gen_spatialgini.statistic
aspatial_gini.statistic
# Examining the `data` attribute of the fitted index shows how the input data are transformed
# kernelized data
gen_spatialgini.data.plot('BLACK')
# original data
sacramento.plot('BLACK')
# ### Network distance based measures
# Instead of a Euclidean distance-based kernel, each generalized spatial segregation index can be calculated using accessibility analysis on a transportation network. Since people can't fly, using a travel network to measure spatial separation is truer to the spirit of segregation indices.
import pandana as pdna
# A network can be created using the [urbanaccess](https://github.com/UDST/urbanaccess) package, or the built-in `get_osm_network` function from the `segregation.util` module. Alternatively, metropolitan-scale networks from OpenStreetMap are also available in the [CGS quilt bucket](https://open.quiltdata.com/b/spatial-ucr/tree/osm/) (named by CBSA FIPS code)
net = pdna.Network.from_hdf5('../40900.h5')
network_spatialgini = Gini(sacramento, group_pop_var='BLACK',
total_pop_var='TOT_POP', distance=2000,
network=net, decay='linear')
# Comparing spatial gini indices based on straight-line distance versus network distance:
network_spatialgini.statistic
gen_spatialgini.statistic
# The segregation statistic using network distance to construct neighborhoods is higher than the one using unrestricted Euclidean distance
# ## Batch-Computing Single-Group Measures
# To compute all single-group indices in one go, the package provides a wrapper function in the `batch` module:
from segregation.batch import batch_compute_singlegroup
all_singlegroup = batch_compute_singlegroup(sacramento, group_pop_var='BLACK', total_pop_var='TOT_POP')
all_singlegroup
| docs/notebooks/01_singlegroup_indices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/boscolio/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/LS_DS_214_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_H1iMZFlAjVW" colab_type="text"
# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 4*
#
# ---
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Logistic Regression
#
#
# ## Assignment 🌯
#
# You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?
#
# > We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.
#
# - [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
# - [ ] Begin with baselines for classification.
# - [ ] Use scikit-learn for logistic regression.
# - [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)
# - [ ] Get your model's test accuracy. (One time, at the end.)
# - [ ] Commit your notebook to your fork of the GitHub repo.
#
#
# ## Stretch Goals
#
# - [ ] Add your own stretch goal(s) !
# - [ ] Make exploratory visualizations.
# - [ ] Do one-hot encoding.
# - [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
# - [ ] Get and plot your coefficients.
# - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
# + colab_type="code" id="o9eSnDYhUGD7" colab={}
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# + id="BjfMlRoMAjVd" colab_type="code" colab={}
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv', index_col='Date', parse_dates=['Date'])
# + id="J7F-6liUAjVg" colab_type="code" colab={}
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# + id="wtF_H6xFAjVj" colab_type="code" colab={}
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
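# As an aside, the chained `.loc` assignments above can be written in a single pass with `numpy.select` (an equivalent alternative, not the assignment's required approach; note `np.select` takes the *first* matching condition, whereas chained assignments let later ones override):

```python
import numpy as np
import pandas as pd

# hypothetical mini-series standing in for df['Burrito']
s = pd.Series(['california burrito', 'carne asada', 'surf n turf', 'veggie']).str.lower()
conds = [s.str.contains('california'), s.str.contains('asada'),
         s.str.contains('surf'), s.str.contains('carnitas')]
labels = ['California', 'Asada', 'Surf & Turf', 'Carnitas']
out = np.select(conds, labels, default='Other')
print(list(out))  # ['California', 'Asada', 'Surf & Turf', 'Other']
```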
# + id="427stqqrAjVn" colab_type="code" colab={}
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# + id="E6qrLng9AjVq" colab_type="code" colab={}
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
# + id="a-zEq5WeAjVt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 346} outputId="2d747184-bf10-4788-96b2-88c2e8ef91fc"
df.head()
# + id="wKgSEZDrzsY4" colab_type="code" colab={}
df.dropna(subset=['Great'], inplace=True)
# + id="gn0iKofob8im" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="29e1965b-8f0b-4409-e345-3688810415b5"
df.shape
# + id="XrFpzBGpaSjM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="26c166fb-95cf-4fec-edb6-fca9fcccc481"
df.info()
# + id="W8Vy72GfbWxv" colab_type="code" colab={}
# Split Data
# Dropping anything with a large number of NaNs
target = 'Great'
y = df[target]
X = df.drop([target] + ['Yelp', 'Google', 'Chips', 'Mass (g)', 'Density (g/mL)'], axis=1)
# Doing .drop in two steps because a positional column range doesn't mix with label-based drops
X = X.drop(X.columns[15:52], axis=1)
# + id="_1pzcpcI00pv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d89fb02e-57ae-4cfa-f79e-500dd1fb2556"
y.isnull().sum()
# + id="DAXZ7Vo7pEgc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="cd51bbb7-73d8-43f4-a2d7-23dbeff1d628"
X.info()
# + id="G9ZB4wXMwgPB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 329} outputId="26099613-72bf-42e6-9b73-e1df6a13c758"
X.head()
# + id="Dhc9fcF4dfi7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="d49437e1-7bfd-44fd-918e-d6bebcfff969"
print(y.shape)
print(X.shape)
# + id="aTty33TC1ncx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="229ff9c8-3e70-483d-e1e0-0d75cfd35df3"
X.info()
# + id="2s2DfuMOlCQU" colab_type="code" colab={}
# Split *target vector* from our *feature matrix*
cutoff_1 = '1/01/2017'
cutoff_2 = '1/01/2018'
mask_1 = X.index < cutoff_1
mask_2 = (cutoff_1 <= X.index) & (X.index < cutoff_2)
mask_3 = X.index >= cutoff_2
X_train, y_train = X.loc[mask_1], y.loc[mask_1]
X_val, y_val = X.loc[mask_2], y.loc[mask_2]
# + id="wYnOimho1PiB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="09cb79e0-e9c0-4957-ed8f-eb40fe4c6baa"
X_val.shape
# + id="a_86lGpqs3ft" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9405e19e-b43a-4fd6-959c-f6a43d5f0c18"
# Establish our Baseline
print('Baseline Accuracy:', y_train.value_counts(normalize=True).max())
# + id="e0ip5XIAzIVa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 171} outputId="9ce7ae29-ab4e-404d-f748-52d0772cbae7"
# Pipeline time
from sklearn.pipeline import make_pipeline
from category_encoders import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
# Instantiate the pipeline
model = make_pipeline(
OneHotEncoder(),
SimpleImputer(),
LogisticRegression()
)
# Fit our model to the training data
model.fit(X_train, y_train);
# + id="FywtPZJb3GVE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="7f648479-ecc1-4c69-e065-542fc3af7371"
# Check Metrics
print('Training Accuracy:', model.score(X_train, y_train))
print('Validation Accuracy:', model.score(X_val, y_val))
# + id="Cl27Hpi53ZSm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c962fed6-e8c1-4e6c-9cd5-0249cf5ecedd"
from sklearn.metrics import accuracy_score
# Predict
y_pred = model.predict(X_val)
accuracy_score(y_val, y_pred)
# + id="dlCFpq0W6m9d" colab_type="code" colab={}
| module4-logistic-regression/LS_DS_214_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
raw_datasets = {
'Iris Dataset': datasets.load_iris(),
'Handwritten Digits Dataset': datasets.load_digits(),
'Wine Dataset': datasets.load_wine(),
}
for raw_label, raw_data in raw_datasets.items():
target = np.unique(raw_data.target).tolist()
target_names = raw_data.target_names.tolist()
# Plot a scatter of the first 2 dimensions of the samples in the original space
for i, c in enumerate(target):
x = [raw_data.data[j, 0] for j in range(raw_data.data.shape[0]) if raw_data.target[j] == c]
y = [raw_data.data[j, 1] for j in range(raw_data.data.shape[0]) if raw_data.target[j] == c]
plt.scatter(x, y, label=target_names[i])
plt.xlabel('First Dimension')
plt.ylabel('Second Dimension')
plt.title('Original space (%s)' % raw_label)
plt.legend()
plt.show()
# PCA dimensionality reduction
pca_reduced = PCA(n_components=2).fit_transform(raw_data.data)
# Plot the samples in the PCA-reduced subspace
for i, c in enumerate(target):
x = [pca_reduced[j, 0] for j in range(raw_data.data.shape[0]) if raw_data.target[j] == c]
y = [pca_reduced[j, 1] for j in range(raw_data.data.shape[0]) if raw_data.target[j] == c]
plt.scatter(x, y, label=target_names[i])
plt.xlabel('First Component')
plt.ylabel('Second Component')
plt.title('PCA subspace (%s)' % raw_label)
plt.legend()
plt.show()
# LDA dimensionality reduction
lda_reduced = LinearDiscriminantAnalysis(n_components=2).fit_transform(raw_data.data, raw_data.target)
# Plot a scatter of the samples in the LDA-reduced subspace
for i, c in enumerate(target):
x = [lda_reduced[j, 0] for j in range(raw_data.data.shape[0]) if raw_data.target[j] == c]
y = [lda_reduced[j, 1] for j in range(raw_data.data.shape[0]) if raw_data.target[j] == c]
plt.scatter(x, y, label=target_names[i])
plt.xlabel('First Dimension')
plt.ylabel('Second Dimension')
plt.title('LDA subspace (%s)' % raw_label)
plt.legend()
plt.show()
# Apply the classifiers LDF, QDF, 1-NN
clf_models = [LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis(),
KNeighborsClassifier(n_neighbors=1)]
clf_names = ['LDF', 'QDF', '1-NN']
X_train, X_test, y_train, y_test = train_test_split(raw_data.data, raw_data.target, test_size=0.4,
random_state=0)
for i, clf in enumerate(clf_models):
# PCA reduced
pca_model = PCA(n_components=2)
pca_model.fit(X_train)
pca_X_train = pca_model.transform(X_train)
pca_X_test = pca_model.transform(X_test)
clf.fit(pca_X_train, y_train)
score = clf.score(pca_X_test, y_test)
print('%s, PCA reduced, %s mean accuracy: %s. \n' % (raw_label, clf_names[i], score))
# LDA reduced
lda_model = LinearDiscriminantAnalysis(n_components=2)
lda_model.fit(X_train, y_train)
lda_X_train = lda_model.transform(X_train)
lda_X_test = lda_model.transform(X_test)
clf.fit(lda_X_train, y_train)
score = clf.score(lda_X_test, y_test)
print('%s, LDA reduced, %s mean accuracy: %s. \n' % (raw_label, clf_names[i], score))
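# A caveat on the LDA reduction used above: unlike PCA, LDA yields at most (n_classes - 1) discriminant components, so `n_components=2` is only valid here because each of the three datasets has at least 3 classes. A quick check on iris:

```python
from sklearn import datasets
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# LDA can produce at most (n_classes - 1) components; iris has 3 classes,
# so n_components=2 is the maximum allowed
iris = datasets.load_iris()
lda = LinearDiscriminantAnalysis(n_components=2).fit(iris.data, iris.target)
print(lda.transform(iris.data).shape)  # (150, 2)
```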
| lab_03/feature_extraction_small_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Rnhondova/attention-learn-to-route/blob/master/Attention.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="qHWBoJA3KA53" colab={"base_uri": "https://localhost:8080/"} outputId="1d1c7404-2842-4d5c-90e2-08c83261153c"
# This ensures that a gpu is being used by the current google colab session.
# gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
# + colab={"base_uri": "https://localhost:8080/"} id="ycwxgcUMAWQQ" outputId="09c56552-0c85-4a92-fc37-c739f4374edf"
from google.colab import drive
drive.mount('/content/drive')
# + id="zLd2Yfk653Gu" colab={"base_uri": "https://localhost:8080/"} outputId="f76d9183-0615-449f-bf09-168fca6f709a"
# This code block is used to access your google drive
from google.colab import drive
ROOT = "/content/drive"
drive.mount(ROOT)
# + colab={"base_uri": "https://localhost:8080/"} id="xO6Ir-cHwqnv" outputId="f28f6769-0698-4185-e291-9b98de784bed"
# !git clone https://github.com/Rnhondova/garage.git
# + colab={"base_uri": "https://localhost:8080/"} id="FPW7QCr8IK6v" outputId="b7b07272-8751-40b4-fadc-c7c63b7b3117"
# %cd garage/
# + colab={"base_uri": "https://localhost:8080/"} id="JFov2CnGyviD" outputId="6d83372f-9f01-459b-9986-745852127ce7"
# !git submodule update --init --recursive
# + colab={"base_uri": "https://localhost:8080/"} id="WnXWqHp0w-y3" outputId="10e31624-8bbb-4f63-c1b5-a4f12da06c2f"
# #%cd garage/src/garage/torch/algos/attention-learn-to-route/
# %cd src/attention-learn-to-route/
# + colab={"base_uri": "https://localhost:8080/"} id="OTNHf8woxkr9" outputId="5ff1050a-a35f-4fdc-ed27-891fcd475646"
# !ls
# + colab={"base_uri": "https://localhost:8080/"} id="mqFZroDoBh7T" outputId="74ae0096-6608-4b8c-ecf6-f690f3530c34"
# !pip install --upgrade pip
# !pip install -r garage_requirements.txt
# + colab={"base_uri": "https://localhost:8080/"} id="dGZLM66tz6lS" outputId="08083912-b42b-4ed6-938e-836114cc085a"
# !python run.py --graph_size 20 --batch_size 512 --problem cvrp --baseline rollout --run_name 'vrp100_rollout' --epoch_size 12800 --n_epochs 1
# + id="CwHxwKkL6qh3" colab={"base_uri": "https://localhost:8080/"} outputId="c6280537-3c16-4b25-8a6e-695f3c73292c"
# Make sure this points to the project folder
# %cd drive/'My Drive'/CORL
# + id="ZQXpBJPg0-V1" colab={"base_uri": "https://localhost:8080/"} outputId="b9d1f5ed-e87f-410e-e8b9-eba737d090c6"
# %cd attention
# + colab={"base_uri": "https://localhost:8080/", "height": 102} id="DJVJJzi9Wrzb" outputId="af67a606-70e8-4078-84bc-3864565067db"
import wandb
wandb.login()
# + id="3cgIK0U5M26a" colab={"base_uri": "https://localhost:8080/"} outputId="40f1ba7a-aba6-4c30-b2fb-1192a4c9a5e0"
# This block will run the originial attention code with the below settings
# The save_hrs are the checkpoint hours to save the model
# !python run.py --graph_size 100 --batch_size 64 --problem cvrp --baseline rollout --run_name 'vrp100_rollout' --save_hrs 5 10 --epoch_size 12800 --n_epochs 500
# this is an example of how to run evolution code
# #!python vrp_evolve.py --save_dir ../models/att_evo --save_hrs 2 3 4 5 6 8 10 --sigma 0.001 --lr 0.000001 --dataset_size 12800 --epochs 500
# + id="uEU7zEhHUqzV" colab={"base_uri": "https://localhost:8080/"} outputId="db68e877-7905-4546-f639-53e8a1d29f85"
# !git status
# + id="J_IWMkKGVbyp"
# !git add -A
# + id="sFZpTaAQVwLR"
# !git config --global user.email "<EMAIL>"
# + id="rFFHnmrXV5xd"
# !git config --global user.name "<NAME>"
# + colab={"base_uri": "https://localhost:8080/"} id="tQhvqoQIVfQ1" outputId="c0749741-6613-4824-c1f7-2a9212483c4d"
# !git commit -m "Add required files to set garage"
# + colab={"base_uri": "https://localhost:8080/"} id="Plx8L2EGV_vk" outputId="f0644d70-a0d8-4155-b7df-063c70c00ca2"
# !git push origin HEAD:master
# + colab={"base_uri": "https://localhost:8080/"} id="o796PPQWkpLF" outputId="af349777-091d-49ac-ac1f-961f5fa8ebca"
import numpy as np
test_ = np.array([1,2,3,4,5,6])
test_ = test_.reshape(test_.shape[0],1)
for reward in test_:
print(reward[::-1])
# + id="96-YALbflQCD"
import scipy.signal
def discount_cumsum(x, discount):
"""Discounted cumulative sum.
See https://docs.scipy.org/doc/scipy/reference/tutorial/signal.html#difference-equation-filtering # noqa: E501
Here, we have y[t] - discount*y[t+1] = x[t]
or rev(y)[t] - discount*rev(y)[t-1] = rev(x)[t]
Args:
x (np.ndarrary): Input.
discount (float): Discount factor.
Returns:
np.ndarrary: Discounted cumulative sum.
"""
return scipy.signal.lfilter([1], [1, float(-discount)], x[::-1],
axis=-1)[::-1]
def pad_tensor(x, max_len, mode='zero'):
"""Pad tensors.
Args:
x (numpy.ndarray): Tensors to be padded.
max_len (int): Maximum length.
mode (str): If 'last', pad with the last element, otherwise pad with 0.
Returns:
numpy.ndarray: Padded tensor.
"""
padding = np.zeros_like(x[0])
if mode == 'last':
padding = x[-1]
return np.concatenate(
[x, np.tile(padding, (max_len - len(x), ) + (1, ) * np.ndim(x[0]))])
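# The filter-based `discount_cumsum` above can be cross-checked against a naive backward recursion of y[t] = x[t] + discount * y[t+1]:

```python
import numpy as np
import scipy.signal

def discount_cumsum(x, discount):
    # filter-based version, as defined above
    return scipy.signal.lfilter([1], [1, float(-discount)], x[::-1],
                                axis=-1)[::-1]

def naive_discount_cumsum(x, discount):
    # explicit backward recursion: y[t] = x[t] + discount * y[t+1]
    y = np.zeros_like(x, dtype=float)
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + discount * running
        y[t] = running
    return y

x = np.array([1.0, 1.0, 1.0])
print(discount_cumsum(x, 0.5))        # y = [1.75, 1.5, 1.0]
print(naive_discount_cumsum(x, 0.5))  # same values
```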
# + colab={"base_uri": "https://localhost:8080/"} id="YyPUzDDhlgn_" outputId="f288fb84-b8ef-4554-d599-28eb8322b251"
for reward in test_:
(reward[::-1])
# + id="A-<KEY>"
import torch
rewards = torch.Tensor(test_)
returns = torch.Tensor(
np.stack([
discount_cumsum(reward, 0.5)
for reward in test_
]))
# + colab={"base_uri": "https://localhost:8080/"} id="FINI9erenDfK" outputId="31958b6b-369b-4e18-f23c-1f478f7065c3"
returns
# + colab={"base_uri": "https://localhost:8080/"} id="ZM7VWjgQt-ps" outputId="58f339ec-deb0-45f9-aba5-30cd3d360784"
pad_tensor(test_, len(test_), mode='last')
# + colab={"base_uri": "https://localhost:8080/"} id="0JiRC0KHcHVy" outputId="b3d386f3-608f-4013-88cc-7338c4049626"
[20*20]
| Attention.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
# # 2. Three Varieties of Iris: Can We Classify Them?
# ## 2-12. Project (2) load_wine: Let's Classify Wine
# ### (1) Import the required modules
# +
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import SGDClassifier
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
# -
# ### (2) Prepare the data
# Prepare the data
wine = load_wine()
# ### (3) Understand the data
# - Assign the feature data
wine_data = wine.data
print(wine_data)
# - Assign the label data
wine_label = wine.target
print(wine_label)
# - Print the target names
print(wine.target_names)
# - Describe the data
print(wine.DESCR)
# ### (4) Split into train and test data
# Split into train and test data
X_train, X_test, y_train, y_test = train_test_split(wine_data,
wine_label,
test_size=0.2,
random_state=7)
# ### (5) Train various models
# - Try a Decision Tree
# +
# Train the model and predict
decision_tree = DecisionTreeClassifier(random_state=16)
decision_tree.fit(X_train, y_train)
y_pred = decision_tree.predict(X_test)
print(classification_report(y_test, y_pred))
# -
# Measure accuracy
accuracy = accuracy_score(y_test, y_pred)
print(accuracy)
# - Try a Random Forest
# +
# Train the model and predict
random_forest = RandomForestClassifier(random_state=16)
random_forest.fit(X_train, y_train)
y_pred = random_forest.predict(X_test)
print(classification_report(y_test, y_pred))
# -
# Measure accuracy
accuracy = accuracy_score(y_test, y_pred)
print(accuracy)
# - Try SVM
# +
# Train the model and predict
svm_model = svm.SVC(random_state=16)
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)
print(classification_report(y_test, y_pred))
# -
# Measure accuracy
accuracy = accuracy_score(y_test, y_pred)
print(accuracy)
# - Try an SGD Classifier
# +
# Train the model and predict
sgd_model = SGDClassifier(random_state=16)
sgd_model.fit(X_train, y_train)
y_pred = sgd_model.predict(X_test)
print(classification_report(y_test, y_pred))
# -
# Measure accuracy
accuracy = accuracy_score(y_test, y_pred)
print(accuracy)
# - Try Logistic Regression
# +
# Train the model and predict
logistic_model = LogisticRegression(random_state=16, max_iter=10000) # NB: set max_iter high enough to avoid a convergence warning
logistic_model.fit(X_train, y_train)
y_pred = logistic_model.predict(X_test)
print(classification_report(y_test, y_pred))
# -
# Measure accuracy
accuracy = accuracy_score(y_test, y_pred)
print(accuracy)
# ### (6) Evaluate the models
print("Sample counts per class:\n",
      {n: v for n, v in zip(wine.target_names, np.bincount(wine.target))})
# 1. For every wine class (['class_0' 'class_1' 'class_2']), three models reach accuracy close to 1.00, while the remaining two score noticeably lower. Sorting the five models by accuracy in descending order gives Random Forest, Logistic Regression, Decision Tree, SGD Classifier, SVM.
#
# 2. The wine dataset has an unbalanced number of samples per class, so the f1 score is the better metric.
#
# 3. We must not judge positives (correct answers) as negatives, so recall, which requires a small FN, is the more important metric. Sorting the models by weighted-average f1-score over all wine classes in descending order again gives Random Forest, Logistic Regression, Decision Tree, SGD Classifier, SVM.
#
# 4. **Among the classifier models tried, the best-performing one is Random Forest.**
#
# ||DecisionTree|RandomForest|SVM|SGD|LogisticRegression|
# |:---:|:---:|:---:|:---:|:---:|:---:|
# |accuracy|0.92|1.00|0.61|0.64|0.97|
# |precision weighted avg|0.93|1.00|0.55|0.68|0.97|
# |recall weighted avg|0.92|1.00|0.61|0.64|0.97|
# |f1-score weighted avg|0.92|1.00|0.54|0.59|0.97|
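# The weighted averages reported in the table can be reproduced by weighting each per-class f1 by its class support (hypothetical toy labels below, not the wine results):

```python
import numpy as np
from sklearn.metrics import f1_score

# hypothetical 3-class labels, just to show how the weighted average is formed
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 2, 2])
per_class = f1_score(y_true, y_pred, average=None)  # one f1 per class
support = np.bincount(y_true)                       # samples per class
weighted = (per_class * support).sum() / support.sum()
print(round(weighted, 4))  # 0.7654, same as average='weighted'
```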
# ## Retrospective
# [See [E-02] retrospect.md]([E-02] retrospect.md)
| EXPLORATION/Node 2/[E-02] wine_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to XArray
# > This tutorial introduces XArray, a Python library for working with labeled multidimensional arrays.
#
# - toc: false
# - badges: true
# - comments: true
# - categories: [xarray]
# #### DEA uses XArray as its data model. To better understand what it is, let's first do a simple experiment on how we could pack remote sensing data using a combination of plain numpy arrays and Python dictionaries.
#
# #### Suppose we have a satellite image with three bands: Red, NIR and SWIR. These bands are represented as 2-dimensional numpy arrays. We could also store the latitude and longitude coordinates of each dimension using 1-dimensional arrays. Finally, we could also store some metadata to help describe our images.
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from check_answer import check_answer
red = np.random.rand(250,250)
nir = np.random.rand(250,250)
swir = np.random.rand(250,250)
lats = np.linspace(-23.5, -26.0, num=red.shape[0], endpoint=False)
lons = np.linspace(110.0, 112.5, num=red.shape[1], endpoint=False)
title = "Image of the desert"
date = "2019-11-10"
image = {"red": red,
"nir": nir,
"swir": swir,
"latitude": lats,
"longitude": lons,
"title": title,
"date": date}
# -
# #### All our data is conveniently packed in a dictionary. Now we can use this dictionary to work with it:
image["date"], image["latitude"][:4]
# #### We can address any variable inside this image dictionary and work directly with other functions. For example, to plot the nir band and calculate its mean:
# +
plt.imshow(image['nir'])
image["nir"].mean()
# -
# #### Still, the variables inside our dictionary are independent and we don't know how they are linked. For example, we have the variable `latitude` but we don't know which axis in the image arrays it refers to. We also need to use positional indices to select parts of the data in the numpy arrays containing the image data. Wouldn't it be convenient to be able to select data from the images using the coordinates of the pixels instead of their relative positions?
#
# #### This is exactly what XArray solves! Let's see how it works:
import xarray as xr
from datetime import datetime
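# Before opening the DEA file, here is a self-contained sketch of how the earlier dictionary example maps onto an `xr.Dataset` (random data, for illustration only):

```python
import numpy as np
import xarray as xr

red = np.random.rand(250, 250)
lats = np.linspace(-23.5, -26.0, num=250, endpoint=False)
lons = np.linspace(110.0, 112.5, num=250, endpoint=False)

ds_demo = xr.Dataset(
    {"red": (("latitude", "longitude"), red)},
    coords={"latitude": lats, "longitude": lons},
    attrs={"title": "Image of the desert", "date": "2019-11-10"},
)
# coordinates are now tied to the array axes, so we can select by value
subset = ds_demo.red.sel(latitude=slice(-23.5, -24.0))
print(subset.shape)
```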
# #### To explore XArray we have a file containing some reflectance data of Canberra that has been generated using the DEA library.
#
# #### The object that we get `ds` is a XArray `Dataset`, which in some ways is very similar to the dictionary that we created before, but with lots of convenient functionality available.
# +
ds = xr.open_dataset('data/canberra_ls8.nc')
ds
# -
# #### A `Dataset` can be seen as a dictionary structure packing up the data, dimensions and attributes all linked together.
#
# #### Variables in a `Dataset` object are called `DataArrays` and they share dimensions with the higher level `Dataset`
#
# <img src="data/dataset-diagram.png" alt="drawing" width="600" align="left"/>
# #### So far, we have been using 3-dimensional numpy arrays in which the third dimension represented the bands of images and remote sensing data. Numpy can store data in up to 32 dimensions, so we could, for example, use 4-dimensional arrays to store multispectral images with a temporal dimension, to perform time series analysis.
#
# #### To facilitate working with these data, DEA follows the convention of storing spectral bands as separate variables, each one a 3-dimensional cube containing the temporal dimension.
#
# #### To access a variable we can access as if it were a Python dictionary, or using the `.` notation, which is more convenient.
# +
ds["green"]
#or alternatively
ds.green
# -
# #### Dimensions are also stored as numerical arrays with the same size as the image axis they refer to.
# +
ds['time']
#or alternatively
ds.time
# -
# #### Metadata is referred as Attributes and is internally stored under `.attrs`, but the same convenient `.` notation applies to them.
# +
ds.attrs['Conventions']
#or alternatively
ds.Conventions
# -
# #### Exercise 7.1: Can you access to the `geospatial_bounds_crs` value in the attributes of this XArray Dataset?
# +
answ = ds.?
check_answer("7.1", answ)
# -
# #### DataArrays store their data internally as multidimensional numpy arrays. But these arrays carry dimensions or labels that make it easier to handle the data. To access the underlying numpy array of a `DataArray` we can use the `.values` notation.
# +
arr = ds.green.values
type(arr), arr.shape
# -
# #### Exercise 7.2: Can you store in the `answ` variable the underlying numpy array containing the longitude dimension in this Dataset?
# +
answ = ?
check_answer("7.2", int(answ[0]*1e6))
# -
# #### Selecting data and subsetting numpy arrays is done using positional indices to specify positions or ranges of values along the different axis of an array. When we use the `[:,:]` notation, we need to know beforehand what is the relative position of each axis in our arrays.
#
# #### XArray provides an abstraction in which we can refer to each axis by its name. Also we can select subsets of the data arrays using two modes or methods:
#
# * `isel()`: For selecting data based on its index (like numpy).
# * `sel()`: For selecting data based on its dimension (label) values.
#
# #### For example, for selecting the first element in the temporal dimension of the `green` variable we do:
# +
print("Initial time dimension values:", ds.green.time.values)
ss = ds.green.isel(time=0)
ss
# -
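# The contrast between the two selection modes can be seen on a tiny in-memory DataArray (illustrative data, not the DEA file):

```python
import numpy as np
import xarray as xr

# hypothetical 2x3 array with a 'time' and an 'x' dimension
da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=("time", "x"),
    coords={"time": ["2016-01-01", "2016-02-01"], "x": [10.0, 10.5, 11.0]},
)
a = da.isel(time=0)            # positional: first step along 'time'
b = da.sel(time="2016-01-01")  # label-based: same slice, by coordinate value
print(a.values.tolist(), b.values.tolist())  # [0, 1, 2] [0, 1, 2]
```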
# #### On the other hand, we can use the `.sel()` method to select parts of the array by their label or content. Note that in this case we do not refer to the data by its positional index but by its dimension value.
# +
ss = ds.green.sel(time=datetime(2016,1,1))
ss
# -
# #### Both methods `sel()` and `isel()` can receive as many arguments as the data array has dimensions. We can pass the dimensions in any order, and we can also define slices or ranges of values using the `slice()` notation. For example:
# +
ss = ds.green.sel(time=datetime(2016,1,1), latitude=slice(-35.30,-35.24))
ss
# -
# #### Exercise 7.3: Can you select the region of the red variable delimited by these coordinates:
# * latitude [-35.30,-35.29]
# * longitude [149.11,149.13]
# +
answ = ds.?
check_answer("7.3", answ.shape)
# -
# #### When we use the selection methods on Datasets and DataArrays we get an object of the same type.
# +
ss = ds.green.sel(time=datetime(2016,1,1), latitude=slice(-35.30,-35.24))
type(ss), type(ds.green)
# -
# #### Exercise 7.4: Use the `imshow` function to create an image of the first time of the red channel in the dataset.
#
# > Tip: Use the `.values` attribute to convert the DataArray object into a numpy array, so matplotlib can work with it.
# +
answ = ?
plt.imshow(answ)
check_answer("7.4", int(answ[0,0])),
# -
# #### Xarray exposes lots of functions to perform analysis on `Datasets` and `DataArrays` with a syntax similar to numpy's. For example, to calculate some statistics of the green band:
print("Mean of green band:", ds.green.mean())
print("Standard deviation of green band:", ds.green.std())
print("Sum of green band:", ds.green.sum())
# #### Exercise 7.5: Can you find the difference between the means of the red and nir channels?
# +
answ = ?
check_answer("7.5", int(answ.values))
# -
# #### Plotting is also conveniently integrated as a method on DataArrays.
#
# > Note: For plotting you need to pass a 2-dimensional DataArray object, so normally a temporal element needs to be selected.
ds["green"].isel(time=0).plot()
# #### We still can do things manually using numpy and matplotlib
# +
rgb = np.dstack((ds.red.isel(time=0).values, ds.green.isel(time=0).values, ds.blue.isel(time=0).values))
rgb = np.clip(rgb, 0, 2000) / 2000
plt.imshow(rgb)
# -
# #### The previous image is upside down, so we'd still need to flip the image vertically in numpy to represent it correctly. This has to do with how numerical arrays are stored in netCDF files.
#
# #### But compare to these chained operations within XArray (Well see more simple ways of doing this in DEA though)
# +
# Selection of the bands | time sel | numpy conv| plot (params for plotting function)
ds[['red', 'green', 'blue']].isel(time=0).to_array().plot.imshow(robust=True, figsize=(6, 6))
# -
# #### Exercise 7.6: Similarly to the previous image, create an RGB image using the `.sel()` functionality select the subset defined by the following dimension values:
#
# * time -> 2017-01-01
# * latitude -> [-35.29, -35.27]
# * longitude -> [149.1, 149.13]
# +
answ = ?
answ.to_array().plot.imshow(robust=True, figsize=(6, 6))
check_answer("7.6", answ.to_array().values.shape)
# -
# #### Exercise 7.7: Can you create an NDVI representation of the whole extent in `ds`?
# +
answ = ?
answ.isel(time=0).plot(figsize=(6, 6), cmap='summer_r')
check_answer("7.7", int(answ.values[0,100,100]*1000))
# -
| dea_materials/day1/7-intro_to_xarray.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import cmapPy.pandasGEXpress.parse as parse
import numpy as np
import matplotlib.pyplot as plt
input_dir = '../0A.download-data/data/'
output_dir = 'data/'
phase1_gctoo = parse.parse(input_dir + 'GSE92742_Broad_LINCS_Level5_COMPZ.MODZ_n473647x12328.gctx')
phase1_data_df = phase1_gctoo.data_df.transpose()
gene_info = pd.read_csv(input_dir + 'GSE92742_Broad_LINCS_gene_info.txt.gz', sep = "\t")
landmark_genes = np.char.mod('%d',gene_info[gene_info.pr_is_lm == 1].pr_gene_id)
#slicing the dataframe to only include landmark genes
phase1_data_df = phase1_data_df.loc[:,phase1_data_df.columns.isin(landmark_genes)]
#add pert_id and cell_id metadata column to phase1_data_df
phase1_sig_info = pd.read_csv(input_dir + 'GSE92742_Broad_LINCS_sig_info.txt.gz', sep = "\t").set_index('sig_id').reindex(index=phase1_data_df.index)
phase1_sig_info = phase1_sig_info.loc[:,phase1_sig_info.columns.isin(['pert_id','cell_id'])]
phase1_df = pd.concat([phase1_sig_info,phase1_data_df], axis=1).reset_index()
phase1_df.head()
#add inchikey metadata column to phase1_data_df
phase1_pert_info = pd.read_csv(input_dir + "GSE92742_Broad_LINCS_pert_info.txt.gz", sep = '\t').set_index('pert_id').reindex(index=phase1_df.pert_id)
phase1_pert_info = phase1_pert_info.loc[:,phase1_pert_info.columns == 'inchi_key_prefix']
phase1_df = phase1_df.set_index("pert_id")
phase1_df = pd.concat([phase1_pert_info, phase1_df], axis=1).reset_index()
phase1_df.head()
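The two metadata joins above follow the same pattern: reindex the metadata table to the data's index, then concatenate column-wise so the rows stay aligned. A toy version of that pattern (the column and index names here are illustrative, not the real LINCS fields):

```python
import pandas as pd

# Expression-like data indexed by signature id, and metadata in a different row order.
data = pd.DataFrame({'g1': [1.0, 2.0]}, index=['sig_a', 'sig_b'])
meta = pd.DataFrame({'pert_id': ['p2', 'p1']}, index=['sig_b', 'sig_a'])

# Align metadata rows to the data's index, then attach them as extra columns.
aligned = meta.reindex(index=data.index)
joined = pd.concat([aligned, data], axis=1)
```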
phase1_df.to_csv(output_dir + 'L1000_phase1.tsv.gz', sep = '\t', index = False)
| L1000/0B.process-data/1A.processONLYPHASE1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pid_controller_env
# language: python
# name: pid_controller_env
# ---
# %load_ext autoreload
# %autoreload 2
from context import pid_controller
from ipywidgets import interact
import matplotlib.font_manager
from pid_controller.main import run_pid_control
from pid_controller.visualization import setup_control_widgets
interact(run_pid_control, **setup_control_widgets());
| pid_contoller/notebooks/pid_controller.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
pip install efficientnet
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os,cv2
import numpy as np
import random
from IPython.display import Image
from keras.preprocessing import image
from keras import optimizers
from keras.models import Model , Sequential
from keras.applications.imagenet_utils import preprocess_input
import matplotlib.pyplot as plt
import seaborn as sns
from keras import regularizers
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping,ModelCheckpoint,ReduceLROnPlateau , LearningRateScheduler
from keras.applications.vgg16 import VGG16 , preprocess_input
from keras.layers import Conv2D,MaxPooling2D,Dropout,Activation, Flatten, Dense,GlobalAveragePooling2D,BatchNormalization
from keras.optimizers import Adam , RMSprop
import warnings
warnings.filterwarnings('ignore')
def reset_random_seeds():
    os.environ['PYTHONHASHSEED'] = str(42)
    np.random.seed(42)
    random.seed(42)  # note: TensorFlow's own RNG is not seeded here
reset_random_seeds()
# !pwd
# cd /kaggle/input/
train_dir="../input/eight-dance-forms/train"
test_dir="../input/eight-dance-forms/test"
train=pd.read_csv('../input/eight-dance-forms/train.csv')
test = pd.read_csv('../input/eight-dance-forms/test.csv')
print('no of training images ',train.shape[0])
print('no of test images ',test.shape[0])
train.head()
pwd
# +
train_fnames = os.listdir(train_dir)
test_fnames = os.listdir(test_dir)
print(train_fnames[:9])
print(test_fnames[:9])
# -
Image(os.path.join(train_dir,train.iloc[1,0]),width=250,height=250)
import efficientnet.tfkeras as enet
# +
datagen=ImageDataGenerator(rescale=1./255,validation_split=0.25)
# subset= is required here: without it both generators would iterate the full training set
train_generator=datagen.flow_from_dataframe(dataframe=train,directory=train_dir,x_col='Image',
                                            y_col='target',class_mode='categorical',batch_size=8,
                                            target_size=(224,224),color_mode='rgb',seed=42,subset='training')
validation_generator=datagen.flow_from_dataframe(dataframe=train,directory=train_dir,x_col='Image',
                                                 y_col='target',class_mode='categorical',batch_size=8,
                                                 target_size=(224,224),color_mode='rgb',seed=42,subset='validation')
test_datagen=ImageDataGenerator(rescale=1./255)
test_generator=test_datagen.flow_from_dataframe(dataframe=test,directory=test_dir,x_col="Image",y_col=None,
batch_size=32,seed=42,shuffle=False,
class_mode=None,target_size=(224,224),color_mode='rgb')
# +
base_model = enet.EfficientNetB2(include_top=False, pooling='avg', weights='imagenet')
x=base_model.output
x = BatchNormalization()(x)
x = Dense(256)(x)
x = Dropout(0.7)(x)
preds=Dense(8,activation='softmax')(x) #final layer with softmax activation
model=Model(inputs=base_model.input,outputs=preds)
# -
len(model.layers)
for layer in model.layers[:100]:
layer.trainable=False
for layer in model.layers[100:]:
layer.trainable=True
pwd
'''def scheduler(epoch, lr):
    if epoch < 10:
        return lr
    else:
        return lr * tf.math.exp(-0.1)'''
# cd /kaggle/working/
# +
early = EarlyStopping(monitor='val_loss',patience=5,verbose=1,restore_best_weights=True)
checkpoint = ModelCheckpoint('model.h5', monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5,
verbose=1, mode='min')
#lrs = LearningRateScheduler(scheduler)
model.compile(optimizer='adam',loss="categorical_crossentropy",metrics=["accuracy"])
# -
history=model.fit_generator(train_generator,epochs=30,
validation_data=validation_generator,
callbacks=[early,checkpoint, reduce_lr])
# +
loss=history.history['loss']
loss_val=history.history['val_loss']
epochs_ = range(len(loss))
plt.plot(epochs_,loss,label='training loss')
plt.xlabel('No of epochs')
plt.ylabel('loss')
## validation loss for each epoch
plt.plot(epochs_,loss_val,label="validation loss")
plt.title('no of epochs vs loss')
plt.legend()
# +
acc=history.history['accuracy']
acc_val=history.history['val_accuracy']
epochs_ = range(len(acc))
plt.plot(epochs_,acc,label='training accuracy')
plt.xlabel('No of epochs')
plt.ylabel('accuracy')
## validation accuracy for each epoch
plt.plot(epochs_,acc_val,label="validation accuracy")
plt.title('no of epochs vs accuracy')
plt.legend()
# -
model.load_weights('model.h5')
model.evaluate_generator(generator=validation_generator)
# +
from tensorflow.keras.applications import DenseNet201
img_size = 224
base_model = DenseNet201(include_top = False,
weights = 'imagenet',
input_shape = (img_size,img_size,3))
for layer in base_model.layers[:675]:
layer.trainable = False
for layer in base_model.layers[675:]:
layer.trainable = True
# -
image_size = 224
model = Sequential()
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dense(8, activation='softmax'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics=['accuracy'])
model.summary()
# +
filepath= "model_densenet.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max', save_weights_only=False)
early_stopping = EarlyStopping(monitor='val_loss',min_delta = 0, patience = 5, verbose = 1, restore_best_weights=True)
callbacks_list = [
checkpoint,
early_stopping]
# -
history=model.fit_generator(train_generator,epochs=50,
validation_data=validation_generator,
callbacks=callbacks_list)
model.load_weights('model_densenet.h5')
model.evaluate_generator(generator=validation_generator)
test_generator.reset()
pred2=model.predict_generator(test_generator,verbose=1)
pred2
predicted_class_indices=np.argmax(pred2,axis=1)
predicted_class_indices
labels = (train_generator.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions = [labels[k] for k in predicted_class_indices]
predictions
pwd
# cd /kaggle/working/
filenames=test_generator.filenames
results=pd.DataFrame({"Image":filenames,
"target":predictions})
results.to_csv("mobile_results.csv",index=False)
result = pd.read_csv('./mobile_results.csv')
result.head()
| 8 Dance form/Comparison Efficientnet & DenseNet on 8 Dance data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] colab_type="text" id="tDybPQiEFQuJ"
# In this notebook, we will show how to load pre-trained models and draw things with sketch-rnn
# + colab={} colab_type="code" id="k0GqvYgB9JLC"
# import the required libraries
import numpy as np
import time
import random
import pickle as cPickle
import codecs
import collections
import os
import math
import json
import tensorflow as tf
from six.moves import xrange
# + colab={} colab_type="code" id="UI4ZC__4FQuL"
# libraries required for visualisation:
from IPython.display import SVG, display
import PIL
from PIL import Image
import matplotlib.pyplot as plt
# set numpy output to something sensible
np.set_printoptions(precision=8, edgeitems=6, linewidth=200, suppress=True)
# + colab={} colab_type="code" id="D7ObpAUh9jrk"
# #!pip install -qU svgwrite
# or
# #!conda install -c omnia svgwrite=1.1.6
# + colab={} colab_type="code" id="4xYY-TUd9aiD"
import svgwrite # conda install -c omnia svgwrite=1.1.6
# + colab={} colab_type="code" id="LebxcF4p90OR"
# #!pip install -q magenta
# -
import magenta
# + colab={} colab_type="code" id="NkFS0E1zFQuU"
# import our command line tools
from magenta.models.sketch_rnn.sketch_rnn_train import *
from magenta.models.sketch_rnn.model import *
from magenta.models.sketch_rnn.utils import *
from magenta.models.sketch_rnn.rnn import *
# -
# By the way, this is a great diagnostic when you think you've installed your packages but they fail to import. Be sure that 'jupyterlab', and not just 'jupyter', is installed in your local conda env.
import sys
sys.executable
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="NzPSD-XRFQuP" outputId="daa0dd33-6d59-4d15-f437-d8ec787c8884"
tf.logging.info("TensorFlow Version: %s", tf.__version__)
# + colab={} colab_type="code" id="GBde4xkEFQuX"
# little function that displays vector images and saves them to .svg
def draw_strokes(data, factor=0.2, svg_filename = '/tmp/sketch_rnn/svg/sample.svg'):
tf.gfile.MakeDirs(os.path.dirname(svg_filename))
min_x, max_x, min_y, max_y = get_bounds(data, factor)
dims = (50 + max_x - min_x, 50 + max_y - min_y)
dwg = svgwrite.Drawing(svg_filename, size=dims)
dwg.add(dwg.rect(insert=(0, 0), size=dims,fill='white'))
lift_pen = 1
abs_x = 25 - min_x
abs_y = 25 - min_y
p = "M%s,%s " % (abs_x, abs_y)
command = "m"
for i in xrange(len(data)):
if (lift_pen == 1):
command = "m"
elif (command != "l"):
command = "l"
else:
command = ""
x = float(data[i,0])/factor
y = float(data[i,1])/factor
lift_pen = data[i, 2]
p += command+str(x)+","+str(y)+" "
the_color = "black"
stroke_width = 1
dwg.add(dwg.path(p).stroke(the_color,stroke_width).fill("none"))
dwg.save()
display(SVG(dwg.tostring()))
# generate a 2D grid of many vector drawings
def make_grid_svg(s_list, grid_space=10.0, grid_space_x=16.0):
def get_start_and_end(x):
x = np.array(x)
x = x[:, 0:2]
x_start = x[0]
x_end = x.sum(axis=0)
x = x.cumsum(axis=0)
x_max = x.max(axis=0)
x_min = x.min(axis=0)
center_loc = (x_max+x_min)*0.5
return x_start-center_loc, x_end
x_pos = 0.0
y_pos = 0.0
result = [[x_pos, y_pos, 1]]
for sample in s_list:
s = sample[0]
grid_loc = sample[1]
grid_y = grid_loc[0]*grid_space+grid_space*0.5
grid_x = grid_loc[1]*grid_space_x+grid_space_x*0.5
start_loc, delta_pos = get_start_and_end(s)
loc_x = start_loc[0]
loc_y = start_loc[1]
new_x_pos = grid_x+loc_x
new_y_pos = grid_y+loc_y
result.append([new_x_pos-x_pos, new_y_pos-y_pos, 0])
result += s.tolist()
result[-1][2] = 1
x_pos = new_x_pos+delta_pos[0]
y_pos = new_y_pos+delta_pos[1]
return np.array(result)
# + [markdown] colab_type="text" id="if7-UyxzFQuY"
# define the path of the model you want to load, and also the path of the dataset
# + colab={} colab_type="code" id="Dipv1EbsFQuZ"
data_dir = 'http://github.com/hardmaru/sketch-rnn-datasets/raw/master/aaron_sheep/'
models_root_dir = '/tmp/sketch_rnn/models'
model_dir = '/tmp/sketch_rnn/models/aaron_sheep/layer_norm'
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="eaSqI0fIFQub" outputId="06df45a6-cc86-4f50-802e-25ae185037f7"
download_pretrained_models(models_root_dir=models_root_dir)
# + colab={} colab_type="code" id="G4sRuxyn_1aO"
def load_env_compatible(data_dir, model_dir):
"""Loads environment for inference mode, used in jupyter notebook."""
# modified https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/sketch_rnn_train.py
# to work with deprecated tf.HParams functionality
model_params = sketch_rnn_model.get_default_hparams()
with tf.gfile.Open(os.path.join(model_dir, 'model_config.json'), 'r') as f:
data = json.load(f)
fix_list = ['conditional', 'is_training', 'use_input_dropout', 'use_output_dropout', 'use_recurrent_dropout']
for fix in fix_list:
data[fix] = (data[fix] == 1)
model_params.parse_json(json.dumps(data))
return load_dataset(data_dir, model_params, inference_mode=True)
def load_model_compatible(model_dir):
"""Loads model for inference mode, used in jupyter notebook."""
# modified https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/sketch_rnn_train.py
# to work with deprecated tf.HParams functionality
model_params = sketch_rnn_model.get_default_hparams()
with tf.gfile.Open(os.path.join(model_dir, 'model_config.json'), 'r') as f:
data = json.load(f)
fix_list = ['conditional', 'is_training', 'use_input_dropout', 'use_output_dropout', 'use_recurrent_dropout']
for fix in fix_list:
data[fix] = (data[fix] == 1)
model_params.parse_json(json.dumps(data))
model_params.batch_size = 1 # only sample one at a time
eval_model_params = sketch_rnn_model.copy_hparams(model_params)
eval_model_params.use_input_dropout = 0
eval_model_params.use_recurrent_dropout = 0
eval_model_params.use_output_dropout = 0
eval_model_params.is_training = 0
sample_model_params = sketch_rnn_model.copy_hparams(eval_model_params)
sample_model_params.max_seq_len = 1 # sample one point at a time
return [model_params, eval_model_params, sample_model_params]
# + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="9m-jSAb3FQuf" outputId="debc045d-d15a-4b30-f747-fa4bcbd069fd"
[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env_compatible(data_dir, model_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 479} colab_type="code" id="1pHS8TSgFQui" outputId="50b0e14d-ff0f-43bf-d996-90e9e6a1491e"
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
# + colab={} colab_type="code" id="1gxYLPTQFQuk"
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="bVlDyfN_FQum" outputId="fb41ce20-4c7f-4991-e9f6-559ea9b34a31"
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# + [markdown] colab_type="text" id="EOblwpFeFQuq"
# We define two convenience functions to encode a stroke into a latent vector, and decode from latent vector to stroke.
# + colab={} colab_type="code" id="tMFlV487FQur"
def encode(input_strokes):
strokes = to_big_strokes(input_strokes).tolist()
strokes.insert(0, [0, 0, 1, 0, 0])
seq_len = [len(input_strokes)]
draw_strokes(to_normal_strokes(np.array(strokes)))
return sess.run(eval_model.batch_z, feed_dict={eval_model.input_data: [strokes], eval_model.sequence_lengths: seq_len})[0]
# + colab={} colab_type="code" id="1D5CV7ZlFQut"
def decode(z_input=None, draw_mode=True, temperature=0.1, factor=0.2):
z = None
if z_input is not None:
z = [z_input]
sample_strokes, m = sample(sess, sample_model, seq_len=eval_model.hps.max_seq_len, temperature=temperature, z=z)
strokes = to_normal_strokes(sample_strokes)
if draw_mode:
draw_strokes(strokes, factor)
return strokes
# + colab={"base_uri": "https://localhost:8080/", "height": 123} colab_type="code" id="fUOAvRQtFQuw" outputId="c8e9a1c3-28db-4263-ac67-62ffece1e1e0"
# get a sample drawing from the test set, and render it to .svg
stroke = test_set.random_sample()
draw_strokes(stroke)
# + [markdown] colab_type="text" id="j114Re2JFQuz"
# Let's try to encode the sample stroke into latent vector $z$
# + colab={"base_uri": "https://localhost:8080/", "height": 123} colab_type="code" id="DBRjPBo-FQu0" outputId="e089dc78-88e3-44c6-ed7e-f1844471f47f"
z = encode(stroke)
# + colab={"base_uri": "https://localhost:8080/", "height": 124} colab_type="code" id="-37v6eZLFQu5" outputId="5ddac2f2-5b3b-4cd7-b81f-7a8fa374aa6b"
_ = decode(z, temperature=0.8) # convert z back to drawing at temperature of 0.8
# + [markdown] colab_type="text" id="M5ft6IEBFQu9"
# Create generated grid at various temperatures from 0.1 to 1.0
# + colab={"base_uri": "https://localhost:8080/", "height": 130} colab_type="code" id="BuhaZI0aFQu9" outputId="d87d4b00-30c2-4302-bec8-46566ef26922"
stroke_list = []
for i in range(10):
stroke_list.append([decode(z, draw_mode=False, temperature=0.1*i+0.1), [0, i]])
stroke_grid = make_grid_svg(stroke_list)
draw_strokes(stroke_grid)
# + [markdown] colab_type="text" id="4xiwp3_DFQvB"
# Latent Space Interpolation Example between $z_0$ and $z_1$
# + colab={"base_uri": "https://localhost:8080/", "height": 123} colab_type="code" id="WSX0uvZTFQvD" outputId="cd67af4e-5ae6-4327-876e-e1385dadbafc"
# get a sample drawing from the test set, and render it to .svg
z0 = z
_ = decode(z0)
# + colab={"base_uri": "https://localhost:8080/", "height": 194} colab_type="code" id="jQf99TxOFQvH" outputId="4265bd5f-8c66-494e-b26e-d3ac874d69bb"
stroke = test_set.random_sample()
z1 = encode(stroke)
_ = decode(z1)
# + [markdown] colab_type="text" id="tDqJR8_eFQvK"
# Now we interpolate between sheep $z_0$ and sheep $z_1$
# + colab={} colab_type="code" id="_YkPNL5SFQvL"
z_list = [] # interpolate spherically between z0 and z1
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z0, z1, t))
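`slerp` here comes from magenta's sketch-rnn utils; conceptually it is spherical linear interpolation, which walks along the great-circle arc between two vectors instead of the straight chord. A standalone sketch of the standard formula (not magenta's actual implementation):

```python
import numpy as np

def slerp_sketch(p0, p1, t):
    """slerp(p0, p1, t) = sin((1-t)*w)/sin(w) * p0 + sin(t*w)/sin(w) * p1,
    where w is the angle between p0 and p1."""
    omega = np.arccos(np.clip(np.dot(p0 / np.linalg.norm(p0),
                                     p1 / np.linalg.norm(p1)), -1.0, 1.0))
    so = np.sin(omega)
    if so == 0:
        return (1.0 - t) * p0 + t * p1  # parallel vectors: fall back to linear interpolation
    return np.sin((1.0 - t) * omega) / so * p0 + np.sin(t * omega) / so * p1

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp_sketch(a, b, 0.5)  # midpoint on the unit circle between a and b
```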
# + colab={} colab_type="code" id="UoM-W1tQFQvM"
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False), [0, i]])
# + colab={"base_uri": "https://localhost:8080/", "height": 122} colab_type="code" id="mTqmlL6GFQvQ" outputId="062e015f-29c6-4e77-c6db-e403d5cabd59"
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
# + [markdown] colab_type="text" id="vFwPna6uFQvS"
# Let's load the Flamingo Model, and try Unconditional (Decoder-Only) Generation
# + colab={} colab_type="code" id="HH-YclgNFQvT"
model_dir = '/tmp/sketch_rnn/models/flamingo/lstm_uncond'
# + colab={} colab_type="code" id="-Znvy3KxFQvU"
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="cqDNK1cYFQvZ" outputId="d346d57c-f51a-4286-ba55-705bc27d4d0d"
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
# + colab={} colab_type="code" id="7wzerSI6FQvd"
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="6mzk8KjOFQvf" outputId="c450a6c6-22ee-4a58-8451-443462b42d58"
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# + colab={} colab_type="code" id="X88CgcyuFQvh"
# randomly unconditionally generate 10 examples
N = 10
reconstructions = []
for i in range(N):
reconstructions.append([decode(temperature=0.5, draw_mode=False), [0, i]])
# + colab={"base_uri": "https://localhost:8080/", "height": 149} colab_type="code" id="k57REtd_FQvj" outputId="8bd69652-9d1d-475e-fc64-f205cf6b9ed1"
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
# + [markdown] colab_type="text" id="L-rJ0iUQFQvl"
# Let's load the owl model, and generate two sketches using two random IID gaussian latent vectors
# + colab={} colab_type="code" id="of4SWwGdFQvm"
model_dir = '/tmp/sketch_rnn/models/owl/lstm'
# + colab={"base_uri": "https://localhost:8080/", "height": 255} colab_type="code" id="jJiSZFQeFQvp" outputId="f84360ca-c2be-482f-db57-41b5ecc05768"
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 141} colab_type="code" id="vR4TDoi5FQvr" outputId="db08cb2c-952c-4949-d2b0-94c11351264b"
z_0 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_0)
# + colab={"base_uri": "https://localhost:8080/", "height": 124} colab_type="code" id="ZX23lTnpFQvt" outputId="247052f2-a0f3-4046-83d6-d08e0429fafb"
z_1 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_1)
# + [markdown] colab_type="text" id="7FjQsF_2FQvv"
# Let's interpolate between the two owls $z_0$ and $z_1$
# + colab={} colab_type="code" id="u6G37E8_FQvw"
z_list = [] # interpolate spherically between z_0 and z_1
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z_0, z_1, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False, temperature=0.1), [0, i]])
# + colab={"base_uri": "https://localhost:8080/", "height": 149} colab_type="code" id="OULjMktmFQvx" outputId="94b7b68e-9c57-4a1b-b216-83770fa4be81"
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
# + [markdown] colab_type="text" id="OiXNC-YsFQv0"
# Let's load the model trained on both cats and buses! catbus!
# + colab={} colab_type="code" id="SL7WpDDQFQv0"
model_dir = '/tmp/sketch_rnn/models/catbus/lstm'
# + colab={"base_uri": "https://localhost:8080/", "height": 255} colab_type="code" id="Cvk5WOqHFQv2" outputId="8081d53d-52d6-4d18-f973-a9dd44c897f2"
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 106} colab_type="code" id="icvlBPVkFQv5" outputId="f7b415fe-4d65-4b00-c0eb-fb592597dba2"
z_1 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_1)
# + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="uaNxd0LuFQv-" outputId="4de5ee9a-cf14-49f4-e5f5-399a0d0b8215"
z_0 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_0)
# + [markdown] colab_type="text" id="VtSYkS6mFQwC"
# Let's interpolate between a cat and a bus!!!
# + colab={} colab_type="code" id="qIDYUxBEFQwD"
z_list = [] # interpolate spherically between z_1 and z_0
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z_1, z_0, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False, temperature=0.15), [0, i]])
# + colab={"base_uri": "https://localhost:8080/", "height": 112} colab_type="code" id="ZHmnSjSaFQwH" outputId="38fe3c7e-698b-4b19-8851-e7f3ff037744"
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
# + [markdown] colab_type="text" id="flZ_OgzCFQwJ"
# Why stop here? Let's load the model trained on both elephants and pigs!!!
# + colab={} colab_type="code" id="S8WwK8FPFQwK"
model_dir = '/tmp/sketch_rnn/models/elephantpig/lstm'
# + colab={"base_uri": "https://localhost:8080/", "height": 255} colab_type="code" id="meOH4AFXFQwM" outputId="764938a7-bbdc-4732-e688-a8a278ab3089"
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 121} colab_type="code" id="foZiiYPdFQwO" outputId="a09fc4fb-110f-4280-8515-c9b673cb6b90"
z_0 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_0)
# + colab={"base_uri": "https://localhost:8080/", "height": 163} colab_type="code" id="6Gaz3QG1FQwS" outputId="0cfc279c-1c59-419f-86d4-ed74d5e38a26"
z_1 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_1)
# + [markdown] colab_type="text" id="oVtr7NnGFQwU"
# Tribute to an episode of [South Park](https://en.wikipedia.org/wiki/An_Elephant_Makes_Love_to_a_Pig): The interpolation between an Elephant and a Pig
# + colab={} colab_type="code" id="lJs9JbROFQwU"
z_list = [] # interpolate spherically between z_1 and z_0
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z_0, z_1, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False, temperature=0.15), [0, i]])
# + colab={} colab_type="code" id="0FOuNfJMFQwW"
stroke_grid = make_grid_svg(reconstructions, grid_space_x=25.0)
# + colab={"base_uri": "https://localhost:8080/", "height": 130} colab_type="code" id="bZ6zpdiMFQwX" outputId="70679bd1-4dba-4c08-b39f-bbde81d22019"
draw_strokes(stroke_grid, factor=0.3)
| Sketch_RNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ancka019/ComputationsMethods6sem/blob/main/hw6.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="8GdfKmYWBduF"
import pandas as pd
import numpy as np
import math
from copy import copy
from numpy.linalg import norm
from scipy.linalg import hilbert, eig
# + id="6MEQyx9sBhSz"
def max_abs(a): # largest off-diagonal element of the matrix by absolute value
i_max,j_max = 0,1
max_item = a[i_max,j_max]
for i in range(a.shape[0]):
for j in range(i+1, a.shape[0]):
if abs(max_item) < abs(a[i,j]):
max_item = a[i, j]
i_max, j_max = i, j
return i_max, j_max
# + id="Ls2f9Te-BkNa"
def jacobi_method(a,eps,strategy="circle"): # Jacobi eigenvalue method
iters = 0
i,j = 0,0
while True:
h = np.identity(a.shape[0])
if strategy == "abs":
i,j = max_abs(a)
else:
if (j < (a.shape[0]-1) and j+1!=i):
j+=1
elif j == a.shape[0]-1:
i+=1
j = 0
else:
j+=2
if i==a.shape[0]-1 and j==a.shape[0]:
return np.diag(a), iters
if abs(a[i, j]) < eps:
return np.diag(a), iters
iters += 1
phi = 0.5*(math.atan((2*a[i, j])/(a[i,i]-a[j,j])))
c,s = math.cos(phi), math.sin(phi)
h[i,i], h[j,j] = c,c
h[i,j], h[j,i] = -s, s
a = h.T@a@h
# + id="YCie4KA5Bm3a"
def gersh_circles(a): # compute the Gershgorin circles of the matrix
ans = []
for i in range(a.shape[0]):
ans.append((a[i,i],sum(abs(a[i]))-abs(a[i,i])))
return ans
def is_in_circle(gersh,lmda): # check that an eigenvalue belongs to at least one circle
return any([abs(c-lmda)<=r for c,r in gersh])
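The Gershgorin check above rests on the circle theorem: every eigenvalue lies in at least one disc centred at a diagonal entry, with radius equal to the sum of the absolute off-diagonal entries in that row. A self-contained sketch on a small symmetric matrix (the matrix is made up for illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Circle i: centre A[i, i], radius = sum of |off-diagonal| entries in row i.
circles = [(A[i, i], np.sum(np.abs(A[i])) - abs(A[i, i])) for i in range(A.shape[0])]

eigvals = np.linalg.eigvalsh(A)  # A is symmetric, so its eigenvalues are real
in_some_circle = [any(abs(c - lam) <= r for c, r in circles) for lam in eigvals]
```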
# + id="7rNfiI7MBqq0"
X0 = np.array([[-5.509882,1.870086,0.422908],
[0.287865,-11.811654,5.7119],
[0.049099,4.308033,-12.970687]]) # matrix from the textbook by N.V. Faddeeva and D.K. Faddeev
matrixes = [X0,*[hilbert(n) for n in range(3,6)],hilbert(20)]
# + id="PgaT74mUBuQC"
X = pd.DataFrame(columns=['eps=10^(-2),res','eps=10^(-2),iters',
'eps=10^(-3),res', 'eps=10^(-3),iters',
'eps=10^(-4),res','eps=10^(-4),iters',
'eps=10^(-5),res','eps=10^(-5),iters'])
Y = pd.DataFrame(columns=['eps=10^(-2),res','eps=10^(-2),iters',
'eps=10^(-3),res', 'eps=10^(-3),iters',
'eps=10^(-4),res','eps=10^(-4),iters',
'eps=10^(-5),res','eps=10^(-5),iters'])
for matrix in matrixes:
lambda_true = np.sort(eig(matrix)[0])
row_X,row_Y = [],[]
for i in range(2,6):
lambda_abs,abs_iters = jacobi_method(matrix,10**(-i),strategy="abs")
lambda_circle,circle_iters = jacobi_method(matrix,10**(-i),strategy="circle")
row_X.extend([norm(np.sort(lambda_abs)-lambda_true),abs_iters])
row_Y.extend([norm(np.sort(lambda_circle)-lambda_true),circle_iters])
X = X.append(pd.Series(row_X,index=X.columns),ignore_index=True)
Y = Y.append(pd.Series(row_Y,index=Y.columns),ignore_index=True)
X.index = ['X0','hilbert(3)','hilbert(4)','hilbert(5)','hilbert(20)']
Y.index = ['X0','hilbert(3)','hilbert(4)','hilbert(5)','hilbert(20)']
# + [markdown] id="lCpjMRgMB5N9"
# # Strategy: pivot on the maximum-absolute-value off-diagonal element
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="uhEucjsBB2SO" outputId="11c530b2-2acf-4731-a5cb-f07fc353489a"
X
# + [markdown] id="4fQQry-cB9S8"
# # Strategy: zero out elements in cyclic order
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="6MLno4D7B9Bg" outputId="36d9ce32-19bd-4657-d6ba-15791bf6940f"
Y
# + [markdown] id="0dakTdkbCCsB"
# # Check that the computed eigenvalues lie within the Gershgorin circles
# + colab={"base_uri": "https://localhost:8080/"} id="Tc4rUEafCHoU" outputId="dc509ace-e1a6-4257-a106-cc948486965d"
for matrix in matrixes:
lambda_abs = jacobi_method(matrix,10**(-5),strategy="abs")[0]
gersh = gersh_circles(matrix)
print(all(([is_in_circle(gersh,lmbd) for lmbd in lambda_abs])))
| hw6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import numpy as np
import pandas as pd
# import matchzoo as mz
import mzcn as mz
print('matchzoo version', mz.__version__)
ranking_task = mz.tasks.Ranking(losses=mz.losses.RankHingeLoss())
ranking_task.metrics = [
mz.metrics.NormalizedDiscountedCumulativeGain(k=3),
mz.metrics.NormalizedDiscountedCumulativeGain(k=5),
mz.metrics.MeanAveragePrecision()
]
print("`ranking_task` initialized with metrics", ranking_task.metrics)
def load_data(tmp_data,tmp_task):
df_data = mz.pack(tmp_data,task=tmp_task)
return df_data
print('data loading ...')
## load the datasets and apply the corresponding preprocessing
train=pd.read_csv('./data/train_data.csv').sample(100)
dev=pd.read_csv('./data/dev_data.csv').sample(50)
test=pd.read_csv('./data/test_data.csv').sample(30)
train_pack_raw = load_data(train,ranking_task)
dev_pack_raw = load_data(dev,ranking_task)
test_pack_raw=load_data(test,ranking_task)
# train_pack_raw = load_data(train,cls_task)
# dev_pack_raw = load_data(dev,cls_task)
# test_pack_raw=load_data(test,cls_task)
print('data loaded as `train_pack_raw` `dev_pack_raw` `test_pack_raw`')
# garbage collection
import gc
gc.collect()
| test/ranking/.ipynb_checkpoints/init-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
maxDist = 16000 #max distance for each truck
lijst = [814.0,
5060.0,
8722.0,
8814.0,
8814.0,
9580.0,
9580.0,
9580.0,
10308.0,
10308.0,
10315.0,
10315.0,
10315.0,
10315.0,
10676.0]
lijst = list(set(lijst))
df = pd.DataFrame(lijst, columns =['routes'])
dummy = [0 for L in lijst]
df['autoNr'] =dummy
df['ritNr'] =dummy
autoNr = 1
ritNr = 1
def getLongestRemainingRoute(df):
    longRide = df.loc[(df['autoNr'] == 0)].routes.max()
    if longRide != longRide:  # NaN != NaN: true only when no unassigned routes remain
        longRide = 0
    return longRide
def getRemainingDistanceForRoute(R):
return maxDist - R
def getNextDistanceToDrive(d):
    # smallest unassigned route that still fits in the remaining distance d
    nextD = df.loc[(df['autoNr'] == 0) & (df['routes'] <= d)].routes.min()
    if nextD != nextD:  # NaN check: nothing fits
        nextD = 0
    return nextD
def pasAanInTabel(df,autoNr,ritNr,afstand):
t = df.loc[df.routes==afstand,'routes'].index[0]
df.iloc[t, df.columns.get_loc('autoNr')] = autoNr
df.iloc[t, df.columns.get_loc('ritNr')] = ritNr
ritNr+=1
return df,ritNr
def maakAutoRoute(autoNr, df, ritNr):
    # Greedily add routes that still fit within the truck's remaining distance.
    # df is modified in place via pasAanInTabel, so nothing needs to be returned.
    while df.loc[df['autoNr'] == autoNr].routes.sum() < maxDist:
        d2 = getRemainingDistanceForRoute(df.loc[df['autoNr'] == autoNr].routes.sum())
        d3 = getNextDistanceToDrive(d2)
        if d3 > 0:
            df, ritNr = pasAanInTabel(df, autoNr, ritNr, d3)
        else:
            break
    return
while df.loc[(df['autoNr'] == 0)].count()[0]> 0:
d1=getLongestRemainingRoute(df)
df,ritNr = pasAanInTabel(df,autoNr,ritNr,d1)
maakAutoRoute(autoNr,df,ritNr)
autoNr+=1
ritNr = 1
df.head(20)
# -
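# The assignment above is a bin-packing heuristic: seed each truck with the longest
# unassigned route, then repeatedly add the smallest route that still fits. A minimal
# sketch of the same strategy without pandas (illustrative only, not the notebook's exact code):

```python
def pack_routes(routes, max_dist=16000):
    # Returns a list of trucks, each a list of route lengths summing to <= max_dist.
    remaining = sorted(routes)              # ascending
    trucks = []
    while remaining:
        load = [remaining.pop()]            # seed with the longest remaining route
        capacity = max_dist - load[0]
        while True:
            fits = [r for r in remaining if r <= capacity]
            if not fits:
                break
            r = min(fits)                   # smallest route that still fits
            remaining.remove(r)
            load.append(r)
            capacity -= r
        trucks.append(load)
    return trucks

print(pack_routes([814.0, 5060.0, 8722.0, 8814.0, 9580.0, 10308.0]))
```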
| truckRoutes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # CNN Model for Driver Drowsiness Detection
# ### CNN Model
# ## Import libraries
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator  # use tf.keras throughout rather than mixing in standalone keras
# ## Data Preprocessing
# ### Preprocessing the training & testing dataset
# +
train_datagen = ImageDataGenerator(
rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True
)
train_set = train_datagen.flow_from_directory(
'./dataset_new/train/eyes',
target_size = (64,64),
batch_size = 32,
class_mode = 'binary'
)
test_datagen = ImageDataGenerator(
rescale = 1./255
)
test_set = test_datagen.flow_from_directory(
'./dataset_new/test/eyes',
target_size = (64,64),
batch_size = 32,
class_mode = 'binary'
)
# -
print(train_set.class_indices)
# ## Building the CNN
# ### Initializing the CNN
cnn = tf.keras.models.Sequential()
# ### Convolution Operation
cnn.add(tf.keras.layers.Conv2D(filters = 32,kernel_size=3,activation = 'relu', input_shape = [64,64,3]))
# ### Pooling
cnn.add(tf.keras.layers.MaxPool2D(pool_size = 2, strides = 2))
# ### Adding a second Convolution Layer and Max Pooling Layer
cnn.add(tf.keras.layers.Conv2D(filters = 32, kernel_size = 3, activation = 'relu'))
cnn.add(tf.keras.layers.MaxPool2D(pool_size = 2, strides = 2))
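# As a sanity check on the architecture above, the spatial size after each conv/pool stage
# follows `out = floor((in - kernel) / stride) + 1` under the 'valid' padding that Keras
# applies by default. A small stand-alone helper (no Keras needed):

```python
def out_size(size, kernel, stride):
    # 'valid' padding: out = floor((in - kernel) / stride) + 1
    return (size - kernel) // stride + 1

s = 64
s = out_size(s, 3, 1)  # Conv2D 3x3, stride 1 -> 62
s = out_size(s, 2, 2)  # MaxPool 2x2, stride 2 -> 31
s = out_size(s, 3, 1)  # second Conv2D -> 29
s = out_size(s, 2, 2)  # second MaxPool -> 14
print(s, s * s * 32)   # 14 6272: flattened length feeding the Dense(128) layer
```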
# ### Flattening
cnn.add(tf.keras.layers.Flatten())
# ### Full Connection
cnn.add(tf.keras.layers.Dense(units = 128, activation = 'relu'))
# ### Output Layer
cnn.add(tf.keras.layers.Dense(units = 1, activation = 'sigmoid'))
# ### Training the CNN
cnn.compile(optimizer = 'adam',loss = 'binary_crossentropy', metrics = ['accuracy'])
cnn.fit(x = train_set, validation_data = test_set, epochs = 25)
# ### Saving CNN model
# serialize model to JSON
model_json = cnn.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
cnn.save_weights("model.h5")
print("Saved model to disk")
| Driver_Drowsiness_Detection/CNN_Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# jumanpp-batch
# =============
#
#
# **Apply JUMAN++ to batch input in parallel**
# <table align="left"><tr>
# <td>
# <a href="https://travis-ci.org/kota7/jumanpp-batch">
# <img src="https://travis-ci.org/kota7/jumanpp-batch.svg?branch=master" alt="Travis-CI Status">
# </a>
# </td>
# <td>
# <a href="https://badge.fury.io/py/jumanpp-batch">
# <img src="https://badge.fury.io/py/jumanpp-batch.svg" alt="PyPI Status">
# </a>
# </td>
# </tr></table>
#
#
# This Python package facilitates the use of the [juman++](http://nlp.ist.i.kyoto-u.ac.jp/index.php?JUMAN++) morphological analyzer by providing the functionality to apply the command (1) to batch input and (2) in parallel.
#
# ## Requirement
#
# - Python 2.7+, 3.4+
# - JUMAN++ 1.0.2, 2.0.0
# ## Installation
#
# ### JUMAN++
#
# #### Version 1
#
# Refer to the official document for the details ([Manual](http://lotus.kuee.kyoto-u.ac.jp/nl-resource/jumanpp/jumanpp-manual-1.01.pdf)).
#
# As of this writing, v1.02 can be installed with the following commands:
#
# ```bash
# wget http://lotus.kuee.kyoto-u.ac.jp/nl-resource/jumanpp/jumanpp-1.02.tar.xz
# tar xJvf jumanpp-1.02.tar.xz
# # cd jumanpp-1.02 && ./configure && make && sudo make install && cd ..
# ```
#
# Test:
#
# ```bash
# jumanpp -v
# #JUMAN++ 1.02
# ```
#
# ```bash
# # echo "すもももももももものうち" | jumanpp
# #すもも すもも すもも 名詞 6 普通名詞 1 * 0 * 0 "代表表記:酸桃/すもも 自動獲得:EN_Wiktionary"
# #@ すもも すもも すもも 名詞 6 普通名詞 1 * 0 * 0 "自動獲得:テキスト"
# #も も も 助詞 9 副助詞 2 * 0 * 0 NIL
# #もも もも もも 名詞 6 普通名詞 1 * 0 * 0 "代表表記:股/もも カテゴリ:動物-部位"
# #@ もも もも もも 名詞 6 普通名詞 1 * 0 * 0 "代表表記:桃/もも 漢字読み:訓 カテゴリ:植物;人工物-食べ物 ドメイン:料理・食事"
# #も も も 助詞 9 副助詞 2 * 0 * 0 NIL
# #もも もも もも 名詞 6 普通名詞 1 * 0 * 0 "代表表記:股/もも カテゴリ:動物-部位"
# #@ もも もも もも 名詞 6 普通名詞 1 * 0 * 0 "代表表記:桃/もも 漢字読み:訓 カテゴリ:植物;人工物-食べ物 ドメイン:料理・食事"
# #の の の 助詞 9 接続助詞 3 * 0 * 0 NIL
# #うち うち うち 名詞 6 副詞的名詞 9 * 0 * 0 "代表表記:うち/うち"
# #EOS
# ```
#
# #### Version 2
#
# Recent versions of JUMAN++ can be installed by the following commands ([Official Repository](https://github.com/ku-nlp/jumanpp)):
#
# ```bash
# VERSION="2.0.0-rc3"
# wget https://github.com/ku-nlp/jumanpp/releases/download/v2.0.0-rc3/jumanpp-$VERSION.tar.xz
# tar xfv jumanpp-$VERSION.tar.xz && cd jumanpp-$VERSION
# # mkdir bld && cd bld && \
# # cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$PWD" && \
# # make install
# ```
#
# The following command lets us invoke version 2 as `jumanpp2`.
# Change `/usr/local/bin` to any directory on the search path.
#
# ```bash
# ln -s "$PWD/bin/jumanpp" /usr/local/bin/jumanpp2
# ```
#
# Test:
#
# ```bash
# jumanpp2 -v
# #Juman++ Version: 2.0.0-rc3 / Dictionary: 20190731-356e143 / LM: K:20190430-7d143fb L:20181122-b409be68 F:20171214-9d125cb
# ```
#
# ```bash
# # echo "おめでとう🎉㊗️23歳かぁ〜若い〜✧" | jumanpp2
# #おめでとう おめでとう おめでとう 感動詞 12 * 0 * 0 * 0 "代表表記:おめでとう/おめでとう"
# #🎉 🎉 🎉 特殊 1 記号 5 * 0 * 0 "代表表記:🎉/* 絵文字種類:ACTIVITIES:EVENT 絵文字:PARTY_POPPER"
# #㊗️ ㊗️ ㊗️ 特殊 1 記号 5 * 0 * 0 "代表表記:㊗️/* 絵文字種類:SYMBOLS:ALPHANUM 絵文字:JAPANESE_CONGRATULATIONS_BUTTON"
# #23 23 23 名詞 6 数詞 7 * 0 * 0 "カテゴリ:数量 未知語:数字"
# #歳 さい 歳 接尾辞 14 名詞性名詞助数辞 3 * 0 * 0 "代表表記:歳/さい 準内容語"
# #かぁ〜 か か 助詞 9 接続助詞 3 * 0 * 0 "非標準表記:DPSL"
# #若い わかい 若い 形容詞 3 * 0 イ形容詞アウオ段 18 基本形 2 "代表表記:若い/わかい"
# #〜 〜 〜 特殊 1 記号 5 * 0 * 0 NIL
# #✧ ✧ ✧ 未定義語 15 その他 1 * 0 * 0 "未知語:その他 品詞推定:特殊"
# #EOS
# ```
# ### jumanpp-batch library
#
# The library can be downloaded from the [PyPI](https://pypi.org/) repository.
#
# ```bash
# pip install jumanpp-batch
# ```
#
# Or install the development version from GitHub.
# ```bash
# git clone https://github.com/kota7/jumanpp-batch.git
# pip install -U ./jumanpp-batch
# ```
# show the library version for running this notebook
# !pip list | grep juman
# ## Quick use
#
# This library provides two main functions:
#
# - `jumanpp_batch`: Execute juman++ jobs and save the results to file(s)
# - `parse_outfiles`: Process the output files
#
# We first apply juman++ software with `jumanpp_batch`, then parse the outputs using `parse_outfiles`.
from jumanpp_batch import jumanpp_batch, parse_outfiles
# `jumanpp_batch` takes a list of strings to analyze by JUMAN++.
# The function returns the list of files where the results are saved.
texts = ["すもももももももものうち", "隣の客はよく柿食う客だ", "犬も歩けば棒に当たる"]
outfiles = jumanpp_batch(texts, outfile_base="results/simple_{}.txt", show_progress=True)
print(outfiles)
# !cat {outfiles[0]}
# If we supply IDs for the texts to `jumanpp_batch`, the ID appears as a comment at the beginning of each text's analysis result. IDs must have the same length as the input texts.
# IDs help to identify which input text a result was generated from.
#
# IDs can be any type, but they are converted to strings during the process.
# IDs cannot contain spaces.
#
# *Note: IDs are not strictly needed since the results preserve the order of the input texts.*
texts = ["すもももももももものうち", "隣の客はよく柿食う客だ", "犬も歩けば棒に当たる"]
ids = ["sumomo", "kaki", "inu"]
outfiles = jumanpp_batch(texts, ids, outfile_base="results/simple_{}.txt", show_progress=True)
# !cat {outfiles[0]}
# `parse_outfiles` takes a single output file or a list of output files and returns a generator of `(id, token list)` pairs.
# Each token is a [namedtuple](https://docs.python.org/3/library/collections.html#collections.namedtuple) object containing the information from a single line of JUMAN++ output:
#
# 1. `surface` 表層形
# 1. `reading` 読み
# 1. `headword` 見出し語
# 1. `pos` 品詞大分類
# 1. `pos_id` 品詞大分類 ID
# 1. `pos2` 品詞細分類
# 1. `pos2_id` 品詞細分類 ID
# 1. `infltype` 活用型
# 1. `infltype_id` 活用型 ID
# 1. `inflform` 活用形
# 1. `inflform_id` 活用形 ID
# 1. `info` 意味情報
# 1. `is_alternative`
#
# The first 12 fields correspond to the columns of an output line.
# The last one, `is_alternative`, indicates that the line shows an alternative candidate (i.e. the line starts with '@').
# By default, alternative tokens are omitted in the generator created by `parse_outfiles`.
# Set `skip_alternatives=False` to show them.
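# To make the field layout concrete, here is a simplified sketch of parsing one output line
# into such a tuple. This is not the library's actual parser (which also handles quoting
# inside the `info` field and other edge cases):

```python
from collections import namedtuple

Token = namedtuple("Token", ["surface", "reading", "headword", "pos", "pos_id",
                             "pos2", "pos2_id", "infltype", "infltype_id",
                             "inflform", "inflform_id", "info", "is_alternative"])

def parse_line(line):
    # A leading '@ ' marks an alternative analysis for the previous token
    is_alt = line.startswith("@ ")
    if is_alt:
        line = line[2:]
    fields = line.rstrip("\n").split(" ", 11)  # the info field may contain spaces
    return Token(*fields, is_alternative=is_alt)

tok = parse_line("すもも すもも すもも 名詞 6 普通名詞 1 * 0 * 0 NIL")
print(tok.headword, tok.pos, tok.is_alternative)
```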
for id_, tokens in parse_outfiles(outfiles):
print(id_)
print(tokens)
print("***")
# There are several options to configure the parsing outputs:
# - `format_func`: Function to convert token
# - `pos_filter`: Specify the part-of-speeches to include
# - `filter_func`: Function to determine which token should be kept
for id_, tokens in parse_outfiles(outfiles,
format_func=lambda x: "{} ({})".format(x.headword, x.reading),
pos_filter=("名詞", "動詞"),
filter_func=lambda x: x.surface != "犬"):
print(id_)
print(tokens)
print("***")
# ### Note on JUMAN++ 2.0
#
# We can use a different version of JUMAN++ by passing its command name as `jumanpp_command`.
texts = ["すもももももももものうち", "おめでとう🎉㊗️23歳かぁ〜若い〜✧"]
ids = ["sumomo", "emoji"]
outfiles = jumanpp_batch(texts, ids, jumanpp_command="jumanpp2",
outfile_base="results/simple_{}.txt", show_progress=True)
# !cat {outfiles[0]}
for id_, tokens in parse_outfiles(outfiles,
format_func=lambda x: "{} ({})".format(x.headword, x.reading)):
print(id_)
print(tokens)
print("***")
# ## Run JUMAN++ in parallel
#
# Set `num_procs` option (default: 1) for `jumanpp_batch` to specify the number of concurrent processes to run.
# The input texts are split into chunks of roughly equal size and fed into separate JUMAN++ jobs.
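# The splitting step can be pictured as dividing the input list into `num_procs` chunks
# whose sizes differ by at most one — a generic sketch, not the library's internal implementation:

```python
def split_chunks(items, n):
    # Distribute items into n contiguous chunks of near-equal size.
    q, r = divmod(len(items), n)
    chunks, start = [], 0
    for i in range(n):
        size = q + (1 if i < r else 0)  # first r chunks get one extra item
        chunks.append(items[start:start + size])
        start += size
    return chunks

print([len(c) for c in split_chunks(list(range(3000)), 4)])  # [750, 750, 750, 750]
```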
# %%time
# single process
texts = ["すもももももももものうち", "隣の客はよく柿食う客だ", "犬も歩けば棒に当たる"] * 1000
outfiles1 = jumanpp_batch(texts, num_procs=1,
outfile_base="results/p1_{}.txt", show_progress=True)
print(outfiles1)
# check the number of "EOS" in the files
for f in outfiles1:
ct = 0
with open(f) as fin:
for line in fin:
if line.strip() == "EOS":
ct += 1
print("'{}': {} EOS".format(f, ct))
# %%time
# multiple processes
texts = ["すもももももももものうち", "隣の客はよく柿食う客だ", "犬も歩けば棒に当たる"] * 1000
outfiles2 = jumanpp_batch(texts, num_procs=4,
outfile_base="results/p4_{}.txt", show_progress=True)
print(outfiles2)
# check the number of "EOS" in the files
for f in outfiles2:
ct = 0
with open(f) as fin:
for line in fin:
if line.strip() == "EOS":
ct += 1
print("'{}': {} EOS".format(f, ct))
# +
# proof that the outputs are identical
o1 = ""
for path in outfiles1:
with open(path, "r") as f:
o1 += f.read()
o2 = ""
for path in outfiles2:
with open(path, "r") as f:
o2 += f.read()
print(o1==o2)
# +
# time comparison
import os
from datetime import datetime
texts = ["すもももももももものうち", "隣の客はよく柿食う客だ", "犬も歩けば棒に当たる"] * 5000
times = {}
for np in range(1, 9):
print("Start with num procs:", np)
t1 = datetime.now()
jumanpp_batch(texts, num_procs=np,
outfile_base="results/p%s_{}.txt" % np, show_progress=False)
t2 = datetime.now()
times[np] = (t2 - t1).seconds
# +
import matplotlib.pyplot as plt
# %matplotlib inline
fig, ax = plt.subplots()
num_procs = list(times.keys())
time_elapsed = [times[np] for np in num_procs]
ax.plot(num_procs, time_elapsed)
ax.grid()
ax.set_xlabel("Number of processes")
ax.set_ylabel("Seconds")
None
# -
| notebook/jumanpp-batch - Apply jumanpp to batch input in parallel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Exploratory Data Analytics
# +
# Importing relevant packages for all models
import numpy as np
import pandas as pd
# %matplotlib inline
# Load data
# %pwd
filepath = './WA_Fn-UseC_-Telco-Customer-Churn.csv'
churn_data = pd.read_csv(filepath)
# output for prediction
output= 'Churn'
# feature_list generated using list(churn_data.columns)
# customerID is left out as it does not contribute to accuracy of models
feature_list = ['gender', 'SeniorCitizen', 'Partner', 'Dependents', 'tenure', 'PhoneService', 'MultipleLines',
'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport',
'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod',
'MonthlyCharges', 'TotalCharges']
features = churn_data[feature_list] # A set of all features (except customerID)
churn_data.head()
# -
# ### 1.1. Understanding the data
# This section explores the data to see whether it needs modification
churn_data.isna().sum()
# We observe that there are no NA values, so none need to be removed or replaced
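# Note that `isna` only detects true missing values; the blank-string entries in
# `TotalCharges` pass this check, which is why they are dropped separately in section 1.3.
# A tiny demonstration on hypothetical data:

```python
import pandas as pd

demo = pd.DataFrame({"TotalCharges": ["29.85", " ", "1889.5"]})
print(int(demo.isna().sum().iloc[0]))            # 0 -- the blank string is not NaN
print(int((demo["TotalCharges"] == " ").sum()))  # 1 -- but it cannot be cast to float
```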
# Understanding the data: unique values of each feature
for col in feature_list:
    print(col + ": " + str(churn_data[col].unique()))
# ### 1.2. Transforming variables
# This section converts categorial variables into binary variables
# +
# Transforming output to binary variable 1 == Yes, 0 == No
churn_data[output] = churn_data[output].map(lambda x: 1 if x == "Yes" else 0)
# Transform features to binary features (only 2 categories)
# Note: gender takes the values "Female"/"Male", not "Yes"/"No", so mapping on "Yes" would set every row to 0
churn_data['gender'] = churn_data['gender'].map(lambda x: 1 if x == "Female" else 0)
churn_data['Partner'] = churn_data['Partner'].map(lambda x: 1 if x == "Yes" else 0)
churn_data['Dependents'] = churn_data['Dependents'].map(lambda x: 1 if x == "Yes" else 0)
churn_data['PhoneService'] = churn_data['PhoneService'].map(lambda x: 1 if x == "Yes" else 0)
churn_data['PaperlessBilling'] = churn_data['PaperlessBilling'].map(lambda x: 1 if x == "Yes" else 0)
# Transform categorical features to binary features (>2 categories)
MultipleLines_dummy = pd.get_dummies(churn_data['MultipleLines'], prefix='MultipleLines')
InternetService_dummy = pd.get_dummies(churn_data['InternetService'], prefix='InternetService')
OnlineSecurity_dummy = pd.get_dummies(churn_data['OnlineSecurity'], prefix='OnlineSecurity')
OnlineBackup_dummy = pd.get_dummies(churn_data['OnlineBackup'], prefix='OnlineBackup')
DeviceProtection_dummy = pd.get_dummies(churn_data['DeviceProtection'], prefix='DeviceProtection')
TechSupport_dummy = pd.get_dummies(churn_data['TechSupport'], prefix='TechSupport')
StreamingTV_dummy = pd.get_dummies(churn_data['StreamingTV'], prefix='StreamingTV')
StreamingMovies_dummy = pd.get_dummies(churn_data['StreamingMovies'], prefix='StreamingMovies')
Contract_dummy = pd.get_dummies(churn_data['Contract'], prefix='Contract')
PaymentMethod_dummy = pd.get_dummies(churn_data['PaymentMethod'], prefix='PaymentMethod')
# Store temporary copies for rearranging (shift all continuous variables to the right and the output to the last column)
output_temp = churn_data[output]
tenure_temp = churn_data['tenure']
MonthlyCharges_temp = churn_data['MonthlyCharges']
TotalCharges_temp = churn_data['TotalCharges']
# +
# Remove the original categorical columns (>2 categories), plus customerID, the output, and the continuous columns copied above
churn_data= churn_data.drop(['customerID', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup',
'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', 'PaymentMethod',
'Churn', 'TotalCharges', 'tenure', 'MonthlyCharges'], axis = 1)
# Add in all the newly created columns (categorical with >2 categories)
churn_data = churn_data.join([MultipleLines_dummy, InternetService_dummy, OnlineSecurity_dummy, OnlineBackup_dummy,
DeviceProtection_dummy, TechSupport_dummy, StreamingTV_dummy, StreamingMovies_dummy,
Contract_dummy, PaymentMethod_dummy, tenure_temp,MonthlyCharges_temp,
TotalCharges_temp,output_temp])
# -
# Verify that the data is correctly manipulated
churn_data.info()
# ### 1.3. Changing Data Type
# This section aims to change all entries to float or int instead of object
churn_data = churn_data.drop(churn_data[churn_data['TotalCharges'] == " "].index)
churn_data['TotalCharges'] = churn_data['TotalCharges'].astype(float)
churn_data.info()
# Verification again
# ### 1.4. Splitting the data
# Split into train (70%) and test (30%) data with the random seed set to 12345<br>
# +
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
train_feature = churn_data.iloc[:,:40]
train_target = churn_data.iloc[:,40]
X_train, X_test, y_train, y_test = train_test_split(train_feature, train_target, test_size=0.3, random_state=12345)
# -
# ### 1.5. Determining important features
# The below code is modified from https://www.kaggle.com/grfiv4/plotting-feature-importances <br>
# This section aims to help us determine which features to select for our models based on their importance.<br>
# <br>
# Output:
# <ol>
# <li>Chart that sorts the importance of attribute according to feature_importances_
# <li>A list of column headers that is sorted by feature_importances_
# </ol>
# +
# Source: https://www.kaggle.com/grfiv4/plotting-feature-importances
# Import libraries
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter(action="ignore", category=DeprecationWarning)
from sklearn.exceptions import ConvergenceWarning
warnings.simplefilter(action="ignore", category=ConvergenceWarning)
def plot_feature_importances(clf, X_train, y_train=None,
top_n=10, figsize=(8,8), print_table=False, title="Feature Importances"):
__name__ = "plot_feature_importances"
feat_imp_list = []
try:
if not hasattr(clf, 'feature_importances_'):
clf.fit(X_train.values, y_train.values.ravel())
if not hasattr(clf, 'feature_importances_'):
raise AttributeError("{} does not have feature_importances_ attribute".
format(clf.__class__.__name__))
except (ValueError):
clf.fit(X_train.values, y_train.values.ravel())
feat_imp = pd.DataFrame({'importance':clf.feature_importances_})
feat_imp['feature'] = X_train.columns
feat_imp.sort_values(by='importance', ascending=False, inplace=True)
feat_imp_list = feat_imp['feature']
feat_imp = feat_imp.iloc[:top_n]
feat_imp.sort_values(by='importance', inplace=True)
feat_imp = feat_imp.set_index('feature', drop=True)
feat_imp.plot.barh(title=title, figsize=figsize)
plt.xlabel('Feature Importance Score')
plt.show()
if print_table:
from IPython.display import display
print("Top {} features in descending order of importance".format(top_n))
display(feat_imp.sort_values(by='importance', ascending=False))
print (pd.Series(feat_imp_list))
return feat_imp
# Converting back to a dataframe with column headers for better label of y-axis
X_train_header = pd.DataFrame(X_train, columns=train_feature.columns)
y_train = pd.DataFrame(y_train)
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
clfs = [BaggingClassifier(), LogisticRegression(),
DecisionTreeClassifier(), AdaBoostClassifier(),
RandomForestClassifier()]
for clf in clfs:
try:
_ = plot_feature_importances(clf, X_train_header, y_train, top_n=X_train.shape[1],
title=clf.__class__.__name__)
except AttributeError as e:
print(e)
# -
# ## 1.6. Visualising the data
# ### 1.6.1. Visualising output
# Source: https://www.kaggle.com/pavanraj159/telecom-customer-churn-prediction <br>
# Purpose: To understand the proportion of customer churn
# +
import seaborn as sns
# Creating a new dataframe for visualisation
churn_data_v = pd.read_csv(filepath)
# Dropping missing values first
churn_data_v = churn_data_v.drop(churn_data_v[churn_data_v['TotalCharges'] == " "].index)
plt.figure(figsize=(13,6))
plt.subplot(121)
churn_data["Churn"].value_counts().plot.pie(autopct = "%1.0f%%",
fontsize = 12,
wedgeprops = {"linewidth" : 2,
"edgecolor" : "w"},
)
plt.title("Customer churn percentage in data")
plt.ylabel("")
plt.subplot(122)
ax = sns.countplot(y = churn_data["Churn"],linewidth = 2,
edgecolor = "k"*churn_data["Churn"].nunique())
for i,j in enumerate(churn_data["Churn"].value_counts().values) :
ax.text(.1,i,j,fontsize = 15,color = "k")
plt.title("Count of customer churn")
plt.grid(True,alpha = .1)
plt.show()
# -
# ### 1.6.2. Visualising categorical variables
# Source: https://www.kaggle.com/bandiatindra/telecom-churn-prediction
# +
services = ['gender', 'SeniorCitizen', 'Partner', 'Dependents', 'PhoneService',
'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup',
'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies',
'Contract']
fig, axes = plt.subplots(nrows = 4,ncols = 4,figsize = (15,15))
for i, item in enumerate(services):
if i < 4:
ax = churn_data_v[item].value_counts().plot(kind = 'bar',ax=axes[i,0],rot = 0)
elif i >=4 and i < 8:
ax = churn_data_v[item].value_counts().plot(kind = 'bar',ax=axes[i-4,1],rot = 0)
elif i >=8 and i < 12:
ax = churn_data_v[item].value_counts().plot(kind = 'bar',ax=axes[i-8,2],rot = 0)
elif i < 16:
ax = churn_data_v[item].value_counts().plot(kind = 'bar',ax=axes[i-12,3],rot = 0)
ax.set_title(item)
# -
# ### 1.6.3. Visualising categorical variables with churn ratio
# Source: https://www.kaggle.com/arunsankar/key-insights-from-telco-customer-churn
# +
churn_data_v['SeniorCitizen'] = churn_data_v['SeniorCitizen'].apply(lambda x: "Senior" if x==1 else ("Non-Senior" if x==0 else x))
cols = ['gender', 'SeniorCitizen', 'Partner', 'Dependents', 'PhoneService',
'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup',
'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies',
'Contract', 'PaperlessBilling', 'PaymentMethod']
fig, ax = plt.subplots(4,4,figsize=(18,8), sharex=True)
j=0
k=0
for i in cols:
temp = churn_data_v.pivot_table(churn_data_v, index=[i], columns=['Churn'], aggfunc=len).reset_index()[[i,'tenure']]
temp.columns=[i,'Churn_N','Churn_Y']
temp['Churn_ratio']=(temp['Churn_Y'])/(temp['Churn_Y']+temp['Churn_N'])
a = sns.barplot(x='Churn_ratio', y=i, data=temp, ax=ax[j][k], color="blue")
a.set_yticklabels(labels=temp[i])
for p in ax[j][k].patches:
ax[j][k].text(p.get_width() + .05, p.get_y() + p.get_height()/1.5, '{:,.1%}'.format(p.get_width()),
fontsize=8, color='black', ha='center', va='bottom')
ax[j][k].set_xlabel('', size=12, color="black")
ax[j][k].set_ylabel('', size=12, color="black", rotation=0, horizontalalignment='right')
ax[j][k].set_title(i, size=14, color="black")
#print(j,k)
if k==3:
j=j+1
k=0
else:
k=k+1
fig.suptitle("Churn ratio across attributes", fontsize=20, family='sans-serif', color="red")
plt.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=1, hspace=1)
plt.xlim(0,.5)
plt.show()
# -
# ### 1.6.4. Visualising continuous variables with churn ratio
# Source: https://www.kaggle.com/bandiatindra/telecom-churn-prediction
# Monthly Charges
ax = sns.kdeplot(churn_data_v.MonthlyCharges[(churn_data_v["Churn"] == 'No') ],
color="Red", shade = True)
ax = sns.kdeplot(churn_data_v.MonthlyCharges[(churn_data_v["Churn"] == 'Yes') ],
ax =ax, color="Blue", shade= True)
ax.legend(["Not Churn","Churn"],loc='upper right')
ax.set_ylabel('Density')
ax.set_xlabel('Monthly Charges')
ax.set_title('Distribution of monthly charges by churn')
# Total Charges
ax = sns.kdeplot(churn_data_v.TotalCharges[(churn_data_v["Churn"] == 'No') ],
color="Red", shade = True)
ax = sns.kdeplot(churn_data_v.TotalCharges[(churn_data_v["Churn"] == 'Yes') ],
ax =ax, color="Blue", shade= True)
ax.legend(["Not Churn","Churn"],loc='upper right')
ax.set_ylabel('Density')
ax.set_xlabel('Total Charges')
ax.set_title('Distribution of total charges by churn')
# Tenure
ax = sns.kdeplot(churn_data_v.tenure[(churn_data_v["Churn"] == 'No') ],
color="Red", shade = True)
ax = sns.kdeplot(churn_data_v.tenure[(churn_data_v["Churn"] == 'Yes') ],
ax =ax, color="Blue", shade= True)
ax.legend(["Not Churn","Churn"],loc='upper right')
ax.set_ylabel('Density')
ax.set_xlabel('Tenure')
ax.set_title('Distribution of tenure by churn')
# ### 1.6.5. Correlation of variables with output
# Source: https://www.kaggle.com/bandiatindra/telecom-churn-prediction
#Get Correlation of "Churn" with other variables:
plt.figure(figsize=(15,8))
churn_data.corr()['Churn'].sort_values(ascending = False).plot(kind='bar')
# ## 1.7. Storing all accuracy values
# We will be creating a data frame to store all accuracies of our models
def model_summary(name,accuracy,sensitivity,precision,kfold,rocauc):
results = pd.DataFrame({"Model" : [name],
"Accuracy" : [accuracy],
"Sensitivity" : [sensitivity],
"Precision" : [precision],
"k-fold" : [kfold],
"Area_under_curve": [rocauc],
})
return results
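# Hypothetical usage: each model contributes one row, and the rows can be concatenated into
# a single comparison table later on (metric values below are placeholders; the helper is
# redefined so the snippet runs standalone):

```python
import pandas as pd

def model_summary(name, accuracy, sensitivity, precision, kfold, rocauc):
    return pd.DataFrame({"Model": [name], "Accuracy": [accuracy],
                         "Sensitivity": [sensitivity], "Precision": [precision],
                         "k-fold": [kfold], "Area_under_curve": [rocauc]})

summary = pd.concat([model_summary("baseline tree", 0.73, 0.50, 0.49, 0.72, 0.66),
                     model_summary("improved tree", 0.78, 0.52, 0.60, 0.77, 0.69)],
                    ignore_index=True)
print(summary)
```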
# # 2. Decision tree classification
# Import libraries
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix
from math import sqrt, log
from collections import defaultdict
# %matplotlib inline
# ## 2.1. Decision tree with entropy as criterion
# ### 2.1.1. Baseline model (Decision tree - entropy) with all features
# +
# Fit the model on train data
decision_tree_entropy = DecisionTreeClassifier(criterion='entropy')
decision_tree_model_entropy = decision_tree_entropy.fit(X_train, y_train)
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = decision_tree_model_entropy.predict(X_test)
# -
# ### 2.1.2. Evaluation of baseline model (Decision tree - entropy) with confusion matrix
# +
# Create confusion matrix
cm_entropy = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm_entropy.ravel()
print(cm_entropy)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of decision tree model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc, roc_auc_score
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
score = decision_tree_model_entropy.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 2.1.3. Evaluation of baseline model (Decision tree - entropy) with K-fold validation
# Choice of number of folds: 10 (explained in write up)
# +
from sklearn.model_selection import KFold, cross_val_score
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on baseline model
cv = cross_val_score(decision_tree_entropy, # baseline model
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of baseline model
print("Baseline decision tree entropy k-fold accuracy: %s" %(cv.mean()))
model1 = model_summary("Decision tree baseline entropy",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv.mean(),roc_auc_score(y_test,score))
# -
# ### 2.1.4. Improved model (Decision tree - entropy) with feature selection
# +
# Feature list adapted from 1.5 Determining Important Features
feat_list = ['MonthlyCharges','TotalCharges','Contract_Month-to-month','tenure','InternetService_Fiber optic',
'Partner','PaperlessBilling','TechSupport_No','PaymentMethod_Electronic check','Dependents','SeniorCitizen',
'PaymentMethod_Mailed check','OnlineSecurity_Yes','DeviceProtection_Yes','MultipleLines_No','OnlineBackup_No',
'PaymentMethod_Bank transfer (automatic)','MultipleLines_Yes','TechSupport_Yes','StreamingMovies_Yes',
'DeviceProtection_No','StreamingTV_Yes','OnlineBackup_Yes','PaymentMethod_Credit card (automatic)',
'OnlineSecurity_No','PhoneService','StreamingMovies_No','StreamingTV_No','Contract_Two year',
'OnlineBackup_No internet service','MultipleLines_No phone service','Contract_One year',
'OnlineSecurity_No internet service','InternetService_DSL','StreamingMovies_No internet service',
'StreamingTV_No internet service','TechSupport_No internet service','InternetService_No',
'DeviceProtection_No internet service','gender']
N_features = range(2, 41) # from 2 to 40 predictors
accuracy_list_cm = [] # A list of confusion matrix accuracy for each number of top features selected
sensitivity_list_cm = [] # A list of confusion matrix sensitivity for each number of top features selected
precision_list_cm = [] # A list of confusion matrix precision for each number of top features selected
auc_list = [] # A list of ROC area under curve for each number of top features selected
accuracy_list_kf = [] # A list of k-fold accuracy for each number of top features selected
best_k = 0
best_acc = 0
best_sensi = 0
best_precision = 0
best_auc = 0
best_kf = 0
for k in N_features:
# Split train test data based on feature list
train_feature_temp = churn_data.loc[:,feat_list[:k]]
train_target_temp = churn_data.iloc[:,40]
X_train, X_test, y_train, y_test = train_test_split(train_feature_temp,
train_target_temp, test_size=0.3, random_state=12345)
# Fit the model on train data
decision_tree_entropy = DecisionTreeClassifier(criterion='entropy')
    decision_tree_model_entropy = decision_tree_entropy.fit(X_train, y_train) # Fit categorical data
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = decision_tree_model_entropy.predict(X_test)
    # List of scores
    acc = accuracy_score(y_test, y_pred)
    sensi = recall_score(y_test, y_pred)
    precision = precision_score(y_test, y_pred)
    # Computed from hard labels here; use a distinct name so the sklearn
    # `auc` function imported above is not shadowed.
    auc_score = roc_auc_score(y_test, y_pred)
    # Append to score list
    accuracy_list_cm.append(acc)
    sensitivity_list_cm.append(sensi)
    precision_list_cm.append(precision)
    auc_list.append(auc_score)
    # 10-Fold Cross Validation
    kf = KFold(n_splits=10, shuffle=True, random_state=12345)
    # Cross validation on the model with the top-k features
    cv = cross_val_score(decision_tree_entropy, # model
                 train_feature_temp, # Feature matrix
                 train_target_temp,  # Output vector
                 cv=kf,              # Cross-validation technique
                 scoring='accuracy'  # Model performance metric: accuracy
                )
    kf_score = cv.mean()
    accuracy_list_kf.append(kf_score)
    if (acc > best_acc):
        best_k = k
        best_acc = acc
        best_sensi = sensi
        best_precision = precision
        best_auc = auc_score
        best_kf = kf_score
# -
# ### 2.1.5. Evaluation of improved model (Decision tree - entropy) with CM and K-fold
# +
# Graph for CM
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(N_features, accuracy_list_cm, 'g*-')
ax.plot(N_features[15], accuracy_list_cm[15], marker='o', markersize=12, markeredgewidth=2, markeredgecolor='r', markerfacecolor='None')
plt.grid(True)
plt.xlabel('Number of features')
plt.ylabel('Accuracy of models')
plt.title('Test-set accuracy vs number of features')
plt.show()
print("Highest accuracy of entropy model with 17 features selected: " + str(accuracy_list_cm[15]))
model2 = model_summary("Decision tree improved entropy",best_acc,best_sensi,best_precision,best_kf,best_auc)
# -
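Rather than hard-coding the index of the best-scoring model (`accuracy_list_cm[15]` above), the peak can be located programmatically. A minimal sketch with a toy accuracy curve standing in for the lists built by the loop:

```python
import numpy as np

# Toy stand-in for the accuracy_list_cm built by the feature-selection loop
accuracy_list_cm = [0.71, 0.74, 0.78, 0.77, 0.79, 0.76]
N_features = range(2, 2 + len(accuracy_list_cm))

best_idx = int(np.argmax(accuracy_list_cm))  # position of the peak accuracy
best_num_features = N_features[best_idx]     # number of features at the peak
print(best_num_features, accuracy_list_cm[best_idx])  # → 6 0.79
```

The same two lines applied to the real lists would also keep the printed feature count in sync with the plotted marker.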
# ## 2.2. Decision tree with gini as criterion
# ### 2.2.1. Baseline model (Decision tree - gini) with all features
# +
# Fit the model on train data
decision_tree_gini = DecisionTreeClassifier(criterion='gini')
decision_tree_model_gini = decision_tree_gini.fit(X_train, y_train)
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = decision_tree_model_gini.predict(X_test)
# -
# ### 2.2.2. Evaluation of baseline model (Decision tree - gini) with confusion matrix
# +
# Create confusion matrix
cm_gini = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm_gini.ravel()
print(cm_gini)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of decision tree model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
score = decision_tree_model_gini.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 2.2.3 Evaluation of model (Decision tree - gini) with K-Fold validation
# +
from sklearn.model_selection import KFold, cross_val_score
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on baseline model (gini)
cv = cross_val_score(decision_tree_gini, # baseline model (gini)
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of Model 3
print("Baseline decision tree gini k-fold accuracy: %s" %(cv.mean()))
model3 = model_summary("Decision tree baseline gini",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv.mean(),roc_auc_score(y_test,score))
# -
# ### 2.2.4. Improved model (Decision tree - gini) with feature selection
# +
# Feature list adapted from 1.5 Determining Important Features
feat_list = ['MonthlyCharges','TotalCharges','Contract_Month-to-month','tenure','InternetService_Fiber optic',
'Partner','PaperlessBilling','TechSupport_No','PaymentMethod_Electronic check','Dependents','SeniorCitizen',
'PaymentMethod_Mailed check','OnlineSecurity_Yes','DeviceProtection_Yes','MultipleLines_No','OnlineBackup_No',
'PaymentMethod_Bank transfer (automatic)','MultipleLines_Yes','TechSupport_Yes','StreamingMovies_Yes',
'DeviceProtection_No','StreamingTV_Yes','OnlineBackup_Yes','PaymentMethod_Credit card (automatic)',
'OnlineSecurity_No','PhoneService','StreamingMovies_No','StreamingTV_No','Contract_Two year',
'OnlineBackup_No internet service','MultipleLines_No phone service','Contract_One year',
'OnlineSecurity_No internet service','InternetService_DSL','StreamingMovies_No internet service',
'StreamingTV_No internet service','TechSupport_No internet service','InternetService_No',
'DeviceProtection_No internet service','gender']
N_features = range(2, 41) # from 2 to 40 predictors
accuracy_list_cm = [] # A list of confusion matrix accuracy for each number of top features selected
sensitivity_list_cm = [] # A list of confusion matrix sensitivity for each number of top features selected
precision_list_cm = [] # A list of confusion matrix precision for each number of top features selected
auc_list = [] # A list of ROC area under curve for each number of top features selected
accuracy_list_kf = [] # A list of k-fold accuracy for each number of top features selected
best_k = 0
best_acc = 0
best_sensi = 0
best_precision = 0
best_auc = 0
best_kf = 0
for k in N_features:
# Split train test data based on feature list
train_feature_temp = churn_data.loc[:,feat_list[:k]]
train_target_temp = churn_data.iloc[:,40]
X_train, X_test, y_train, y_test = train_test_split(train_feature_temp,
train_target_temp, test_size=0.3, random_state=12345)
# Fit the model on train data
decision_tree_gini = DecisionTreeClassifier(criterion='gini')
decision_tree_model_gini = decision_tree_gini.fit(X_train, y_train) # Fit on the training data
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = decision_tree_model_gini.predict(X_test)
# List of scores
acc = accuracy_score(y_test, y_pred)
sensi = recall_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
auc = roc_auc_score(y_test, y_pred)
# Append to score list
accuracy_list_cm.append(acc)
sensitivity_list_cm.append(sensi)
precision_list_cm.append(precision)
auc_list.append(auc)
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on baseline model
cv = cross_val_score(decision_tree_gini, # baseline model
train_feature_temp, # Feature matrix
train_target_temp, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
kf_score = cv.mean()
accuracy_list_kf.append(kf_score)
if (acc > best_acc):
best_k = k
best_acc = acc
best_sensi = sensi
best_precision = precision
best_auc = auc
best_kf = kf_score
# -
# ### 2.2.5. Evaluation of improved model (Decision tree - gini) with CM and K-fold
# +
# Graph for CM
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(N_features, accuracy_list_cm, 'g*-')
ax.plot(N_features[30], accuracy_list_cm[30], marker='o', markersize=12, markeredgewidth=2, markeredgecolor='r', markerfacecolor='None')
plt.grid(True)
plt.xlabel('Number of features')
plt.ylabel('Accuracy of models')
plt.title('Test-set accuracy vs number of features')
plt.show()
print("Highest accuracy of gini model with 32 features selected: " + str(accuracy_list_cm[30]))
model4 = model_summary("Decision tree improved gini",best_acc,best_sensi,best_precision,best_kf,best_auc)
# -
# # 3.Logistic regression
# Linear regression is not explored here because the dependent variable is categorical ("Yes", "No").
# We therefore model the churn probability with logistic regression instead.
from sklearn.linear_model import LogisticRegression
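Logistic regression passes a linear score through the logistic (sigmoid) function, which squeezes it into a probability in (0, 1) — which is why it suits a binary churn target. A quick illustration, not part of the original analysis:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))                                  # → 0.5 (a zero score is a coin flip)
print(sigmoid(np.array([-4.0, 0.0, 4.0])).round(3))  # large |score| saturates toward 0 or 1
```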
# ## 3.1. Logistic regression with lasso regularization
# +
# Fit the model on train data: Using L1-regularization
lr1 = LogisticRegression(fit_intercept=True, max_iter=1000, tol=2e-9, penalty='l1', C=100, solver='liblinear', random_state=0)  # liblinear supports the l1 penalty (the default lbfgs solver does not)
lr1.fit(X=X_train, y=y_train)
# Get coefficients
print (lr1.intercept_, lr1.coef_)
# Predict outputs for test data
y_pred = lr1.predict(X_test)
# -
# ### 3.1.1.Evaluation of model (Logistic - l1) with confusion matrix
# +
# Create confusion matrix
from sklearn.metrics import confusion_matrix
cm_l1 = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm_l1.ravel()
print (cm_l1)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of the logistic regression model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
# lr.classes_
score = lr1.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 3.1.2. Evaluation of model (Logistic - l1) with K-fold validation
# +
# Deciding the number of folds to use
from sklearn.model_selection import KFold, cross_val_score
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on logistic model with l1 penalty
cv_lr1 = cross_val_score(lr1, # logistic model with l1 penalty
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of model lr1
print("Baseline logistic l1 k-fold accuracy: %s" %(cv_lr1.mean()))
model5 = model_summary("Logistic regression l1",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv_lr1.mean(),roc_auc_score(y_test,score))
# -
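A useful side effect of the L1 penalty is sparsity: with enough regularization some coefficients are driven exactly to zero, so the surviving features can be read straight off the fitted model. A sketch on synthetic data (not the churn dataset; all names here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Only the first two columns influence the label
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=200) > 0).astype(int)

# Small C = strong regularization = more zeroed-out coefficients
lasso = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
lasso.fit(X, y)
kept = np.flatnonzero(lasso.coef_[0])  # indices of features with nonzero weight
print(kept)
```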
# ## 3.2. Logistic regression with ridge regularization
# +
# Fit the model on train data: Using L2-regularization
lr2 = LogisticRegression(fit_intercept=True, max_iter=1000, tol=2e-9, penalty='l2', C=100, random_state=0)
lr2.fit(X=X_train, y=y_train)
# Predict outputs for test data
y_pred = lr2.predict(X_test)
# -
# ### 3.2.1. Evaluation of model (Logistic - l2) with confusion matrix
# +
cm_l2 = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm_l2.ravel()
print (cm_l2)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of the logistic regression model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
# lr.classes_
score = lr2.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 3.2.2. Evaluation of model (Logistic - l2) with K-fold validation
# +
# Deciding the number of folds to use
from sklearn.model_selection import KFold, cross_val_score
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on logistic model with l2 penalty
cv_lr2 = cross_val_score(lr2, # logistic model with l2 penalty
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of model lr2
print("Baseline logistic l2 k-fold accuracy: %s" %(cv_lr2.mean()))
model6 = model_summary("Logistic regression l2",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv_lr2.mean(),roc_auc_score(y_test,score))
# -
# # 4.Ensemble learning
# ## 4.1. Bagging
# +
# Bootstrap Aggregating Package
from sklearn.ensemble import BaggingClassifier
# Fit Bagging Model; A bundle of decision trees
BA = BaggingClassifier(n_estimators=100, random_state=12345)
BA_model = BA.fit(X_train, y_train)
print(BA_model.classes_)
# Validation
y_pred = BA_model.predict(X_test)
# -
# ### 4.1.1. Evaluation of model (Bagging) with confusion matrix
# +
# Create confusion matrix
cm = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm.ravel()
print(cm)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of the bagging model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
score = BA_model.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 4.1.2. Evaluation of model (Bagging) with K-fold validation
# +
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on bagging model
cv_BA = cross_val_score(BA, # BA model
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of bagging model
print("Bagging k-fold accuracy: %s" %(cv_BA.mean()))
model7 = model_summary("Bagging",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv_BA.mean(),roc_auc_score(y_test,score))
# -
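Bagging rests on bootstrap resampling: each tree sees a sample of the training rows drawn with replacement, so for large n any one sample contains roughly 1 − 1/e ≈ 63.2% of the distinct rows. A quick check of that figure:

```python
import numpy as np

rng = np.random.default_rng(12345)
n = 10_000
boot = rng.integers(0, n, size=n)        # one bootstrap sample of row indices
frac_unique = len(np.unique(boot)) / n   # fraction of distinct rows that were drawn
print(round(frac_unique, 3))             # close to 1 - 1/e ≈ 0.632
```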
# ## 4.2. Random forest using Entropy
# ### 4.2.1. Baseline model (Random Forest Entropy) with all features
# +
# Random Forest package
from sklearn.ensemble import RandomForestClassifier
# Fit Random Forest Model; Binary Splitting using Entropy
RF = RandomForestClassifier(criterion='entropy', n_estimators=100, random_state=12345)
RF_model = RF.fit(X_train, y_train)
print(RF_model.classes_)
# Validation
y_pred = RF_model.predict(X_test)
# -
# ### 4.2.2. Evaluation of baseline model (Random Forest Entropy) with confusion matrix
# +
# Create confusion matrix
cm = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm.ravel()
print(cm)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of the random forest model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
score = RF_model.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 4.2.3. Evaluation of baseline model (Random Forest Entropy) with K-fold validation
# +
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on random forest model using entropy criterion
cv_forest_entropy = cross_val_score(RF, # Model RF
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of random forest entropy
print("Random Forest(entropy) k-fold accuracy: %s" %(cv_forest_entropy.mean()))
model8 = model_summary("Random forest baseline entropy",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv_forest_entropy.mean(),roc_auc_score(y_test,score))
# -
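The ranked feature lists used below are taken from the earlier section 1.5; the same ranking can be produced directly from a fitted forest's `feature_importances_`. A sketch on synthetic data (the feature names are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           random_state=12345)
rf = RandomForestClassifier(n_estimators=50, random_state=12345).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]  # most important first
ranked = [f"feat_{i}" for i in order]
print(ranked)
```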
# ### 4.2.4. Improved model (Random Forest Entropy) with feature selection
# +
# Feature list adapted from 1.5 Determining Important Features
feat_list = ['TotalCharges','tenure','MonthlyCharges','Contract_Month-to-month','InternetService_Fiber optic',
'PaperlessBilling','TechSupport_No','Partner','SeniorCitizen','Contract_Two year','Dependents',
'PaymentMethod_Electronic check','OnlineSecurity_No','MultipleLines_No','OnlineBackup_Yes',
'OnlineBackup_No','DeviceProtection_No','Contract_One year','TechSupport_Yes',
'PaymentMethod_Credit card (automatic)','OnlineSecurity_Yes','MultipleLines_Yes','StreamingTV_No',
'PaymentMethod_Bank transfer (automatic)','StreamingMovies_Yes','PaymentMethod_Mailed check','StreamingTV_Yes',
'InternetService_DSL','OnlineBackup_No internet service','DeviceProtection_Yes','StreamingMovies_No',
'DeviceProtection_No internet service','StreamingMovies_No internet service','OnlineSecurity_No internet service',
'StreamingTV_No internet service','MultipleLines_No phone service','PhoneService','TechSupport_No internet service',
'InternetService_No','gender']
N_features = range(2, 41) # from 2 to 40 predictors
accuracy_list_cm = [] # A list of confusion matrix accuracy for each number of top features selected
sensitivity_list_cm = [] # A list of confusion matrix sensitivity for each number of top features selected
precision_list_cm = [] # A list of confusion matrix precision for each number of top features selected
auc_list = [] # A list of ROC area under curve for each number of top features selected
accuracy_list_kf = [] # A list of k-fold accuracy for each number of top features selected
best_k = 0
best_acc = 0
best_sensi = 0
best_precision = 0
best_auc = 0
best_kf = 0
for k in N_features:
# Split train test data based on feature list
train_feature_temp = churn_data.loc[:,feat_list[:k]]
train_target_temp = churn_data.iloc[:,40]
X_train, X_test, y_train, y_test = train_test_split(train_feature_temp,
train_target_temp, test_size=0.3, random_state=12345)
# Fit Random Forest Model; Binary Splitting using Entropy
RF = RandomForestClassifier(criterion='entropy', n_estimators=100, random_state=12345)
RF_model = RF.fit(X_train, y_train)
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = RF_model.predict(X_test)
# List of scores
acc = accuracy_score(y_test, y_pred)
sensi = recall_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
auc = roc_auc_score(y_test, y_pred)
# Append to score list
accuracy_list_cm.append(acc)
sensitivity_list_cm.append(sensi)
precision_list_cm.append(precision)
auc_list.append(auc)
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on baseline model
cv = cross_val_score(RF, # baseline model
train_feature_temp, # Feature matrix
train_target_temp, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
kf_score = cv.mean()
accuracy_list_kf.append(kf_score)
if (acc > best_acc):
best_k = k
best_acc = acc
best_sensi = sensi
best_precision = precision
best_auc = auc
best_kf = kf_score
# -
# ### 4.2.5. Evaluation of improved model (Random Forest Entropy) with CM and K-fold
# +
# Graph for CM
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(N_features, accuracy_list_cm, 'g*-')
ax.plot(N_features[14], accuracy_list_cm[14], marker='o', markersize=12, markeredgewidth=2, markeredgecolor='r', markerfacecolor='None')
plt.grid(True)
plt.xlabel('Number of features')
plt.ylabel('Accuracy of models')
plt.title('Test-set accuracy vs number of features')
plt.show()
print("Highest accuracy of entropy model with 16 features selected: " + str(accuracy_list_cm[14]))
model9 = model_summary("Random forest improved entropy",best_acc,best_sensi,best_precision,best_kf,best_auc)
# -
# ## 4.3. Random forest using Gini
# ### 4.3.1. Baseline model (Random Forest Gini) with all features
# +
# Random Forest package
from sklearn.ensemble import RandomForestClassifier
# Fit Random Forest Model; Binary Splitting using Gini
RF = RandomForestClassifier(criterion='gini', n_estimators=100, random_state=12345)
RF_model = RF.fit(X_train, y_train)
print(RF_model.classes_)
# -
# ### 4.3.2. Evaluation of baseline model (Random Forest Gini) with confusion matrix
# +
# Validation
y_pred = RF_model.predict(X_test)
# Create confusion matrix
cm = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm.ravel()
print(cm)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of the random forest model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
score = RF_model.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 4.3.3. Evaluation of baseline model (Random Forest Gini) with K-fold validation
# +
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on random forest model using gini criterion
cv_forest_gini = cross_val_score(RF, # Model RF
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of random forest gini
print("Random Forest(gini) k-fold accuracy: %s" %(cv_forest_gini.mean()))
model10 = model_summary("Random forest baseline gini",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv_forest_gini.mean(),roc_auc_score(y_test,score))
# -
# ### 4.3.4. Improved model (Random Forest Gini) with feature selection
# +
# Feature list adapted from 1.5 Determining Important Features
feat_list = ['TotalCharges','tenure','MonthlyCharges','Contract_Month-to-month','InternetService_Fiber optic',
'PaperlessBilling','TechSupport_No','Partner','SeniorCitizen','Contract_Two year','Dependents',
'PaymentMethod_Electronic check','OnlineSecurity_No','MultipleLines_No','OnlineBackup_Yes',
'OnlineBackup_No','DeviceProtection_No','Contract_One year','TechSupport_Yes',
'PaymentMethod_Credit card (automatic)','OnlineSecurity_Yes','MultipleLines_Yes','StreamingTV_No',
'PaymentMethod_Bank transfer (automatic)','StreamingMovies_Yes','PaymentMethod_Mailed check','StreamingTV_Yes',
'InternetService_DSL','OnlineBackup_No internet service','DeviceProtection_Yes','StreamingMovies_No',
'DeviceProtection_No internet service','StreamingMovies_No internet service','OnlineSecurity_No internet service',
'StreamingTV_No internet service','MultipleLines_No phone service','PhoneService','TechSupport_No internet service',
'InternetService_No','gender']
N_features = range(2, 41) # from 2 to 40 predictors
accuracy_list_cm = [] # A list of confusion matrix accuracy for each number of top features selected
sensitivity_list_cm = [] # A list of confusion matrix sensitivity for each number of top features selected
precision_list_cm = [] # A list of confusion matrix precision for each number of top features selected
auc_list = [] # A list of ROC area under curve for each number of top features selected
accuracy_list_kf = [] # A list of k-fold accuracy for each number of top features selected
best_k = 0
best_acc = 0
best_sensi = 0
best_precision = 0
best_auc = 0
best_kf = 0
for k in N_features:
# Split train test data based on feature list
train_feature_temp = churn_data.loc[:,feat_list[:k]]
train_target_temp = churn_data.iloc[:,40]
X_train, X_test, y_train, y_test = train_test_split(train_feature_temp,
train_target_temp, test_size=0.3, random_state=12345)
# Fit Random Forest Model; Binary Splitting using gini
RF = RandomForestClassifier(criterion='gini', n_estimators=100, random_state=12345)
RF_model = RF.fit(X_train, y_train)
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = RF_model.predict(X_test)
# List of scores
acc = accuracy_score(y_test, y_pred)
sensi = recall_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
auc = roc_auc_score(y_test, y_pred)
# Append to score list
accuracy_list_cm.append(acc)
sensitivity_list_cm.append(sensi)
precision_list_cm.append(precision)
auc_list.append(auc)
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on baseline model
cv = cross_val_score(RF, # baseline model
train_feature_temp, # Feature matrix
train_target_temp, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
kf_score = cv.mean()
accuracy_list_kf.append(kf_score)
if (acc > best_acc):
best_k = k
best_acc = acc
best_sensi = sensi
best_precision = precision
best_auc = auc
best_kf = kf_score
# -
# ### 4.3.5. Evaluation of improved model (Random Forest Gini) with CM and K-fold
# +
# Graph for CM
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(N_features, accuracy_list_cm, 'g*-')
ax.plot(N_features[30], accuracy_list_cm[30], marker='o', markersize=12, markeredgewidth=2, markeredgecolor='r', markerfacecolor='None')
plt.grid(True)
plt.xlabel('Number of features')
plt.ylabel('Accuracy of models')
plt.title('Test-set accuracy vs number of features')
plt.show()
print("Highest accuracy of gini model with 32 features selected: " + str(accuracy_list_cm[30]))
model11 = model_summary("Random forest improved gini",best_acc,best_sensi,best_precision,best_kf,best_auc)
# -
# ## 4.4. Adaboost
# ### 4.4.1. Baseline model (Adaboost) with all features
# +
# AdaBoost package
from sklearn.ensemble import AdaBoostClassifier
# Fit Adaboosting Model
Ada = AdaBoostClassifier(n_estimators=100, random_state=12345)
Ada_model = Ada.fit(X_train, y_train)
Ada_model.classes_
# Validation
y_pred = Ada_model.predict(X_test)
# -
# ### 4.4.2. Evaluation of baseline model (Adaboost) with confusion matrix
# +
# Create confusion matrix
cm = confusion_matrix(y_test, y_pred)
TN, FP, FN, TP = cm.ravel()
print(cm)
print ("TN: " + str(TN),"FP: " + str(FP), "FN: " + str(FN), "TP: " + str(TP))
# Performance of the AdaBoost model
print ("Accuracy: ", accuracy_score(y_test, y_pred))
print ("Sensitivity: ", recall_score(y_test, y_pred))
print ("Precision: ", precision_score(y_test, y_pred))
# ROC and AUC
from sklearn.metrics import roc_curve, auc
# Get predicted scores Pr(y=1): Used as thresholds for calculating TP Rate and FP Rate
score = Ada_model.predict_proba(X_test)[:, 1]
# Plot ROC Curve
fpr, tpr, thresholds = roc_curve(y_test, score) # fpr: FP Rate, tpr: TP Rate, thresholds: Pr(y=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Receiver operating characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
print ("Area under curve: ", roc_auc_score(y_test,score))
# -
# ### 4.4.3. Evaluation of baseline model (Adaboost) with K-fold validation
# +
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on Adaboost model
cv = cross_val_score(Ada, # Model Adaboost
train_feature, # Feature matrix
train_target, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
# Report performance of Adaboost model
print("Adaboost k-fold accuracy: %s" %(cv.mean()))
model12 = model_summary("Adaboost baseline",accuracy_score(y_test, y_pred),recall_score(y_test, y_pred),
precision_score(y_test, y_pred),cv.mean(),roc_auc_score(y_test,score))
# -
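AdaBoost builds its ensemble sequentially, so `staged_predict` exposes the test accuracy after every boosting round — useful for judging whether 100 estimators are actually needed. A sketch on synthetic data (not the churn dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=12345)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=12345)
ada = AdaBoostClassifier(n_estimators=50, random_state=12345).fit(X_tr, y_tr)
# Accuracy after each boosting round
curve = [accuracy_score(y_te, p) for p in ada.staged_predict(X_te)]
print(len(curve), round(curve[-1], 3))
```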
# ### 4.4.4. Improved model (Adaboost) with feature selection
# +
# Feature list adapted from 1.5 Determining Important Features
feat_list = ['TotalCharges','MonthlyCharges','tenure','Contract_Month-to-month','InternetService_Fiber optic',
'StreamingMovies_Yes','Contract_One year','OnlineSecurity_No','TechSupport_Yes','SeniorCitizen',
'PaymentMethod_Bank transfer (automatic)','StreamingTV_Yes','PaymentMethod_Electronic check',
'MultipleLines_No','PaperlessBilling','TechSupport_No','StreamingTV_No internet service','Contract_Two year',
'StreamingMovies_No','StreamingMovies_No internet service','PaymentMethod_Credit card (automatic)',
'PaymentMethod_Mailed check','StreamingTV_No','gender','TechSupport_No internet service',
'DeviceProtection_No internet service','DeviceProtection_No','OnlineBackup_Yes','OnlineBackup_No internet service',
'OnlineBackup_No','OnlineSecurity_Yes','OnlineSecurity_No internet service','InternetService_No',
'InternetService_DSL','MultipleLines_Yes','MultipleLines_No phone service','PhoneService','Dependents',
'Partner','DeviceProtection_Yes']
N_features = range(2, 41) # from 2 to 40 predictors
accuracy_list_cm = [] # A list of confusion matrix accuracy for each number of top features selected
sensitivity_list_cm = [] # A list of confusion matrix sensitivity for each number of top features selected
precision_list_cm = [] # A list of confusion matrix precision for each number of top features selected
auc_list = [] # A list of ROC area under curve for each number of top features selected
accuracy_list_kf = [] # A list of k-fold accuracy for each number of top features selected
best_k = 0
best_acc = 0
best_sensi = 0
best_precision = 0
best_auc = 0
best_kf = 0
for k in N_features:
# Split train test data based on feature list
train_feature_temp = churn_data.loc[:,feat_list[:k]]
train_target_temp = churn_data.iloc[:,40]
X_train, X_test, y_train, y_test = train_test_split(train_feature_temp,
train_target_temp, test_size=0.3, random_state=12345)
# Fit Adaboosting Model
Ada = AdaBoostClassifier(n_estimators=100, random_state=12345)
Ada_model = Ada.fit(X_train, y_train)
# Get predicted labels for test data. This is to be compared later on to build our confusion matrix
y_pred = Ada_model.predict(X_test)
# List of scores
acc = accuracy_score(y_test, y_pred)
sensi = recall_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
auc = roc_auc_score(y_test, y_pred)
# Append to score list
accuracy_list_cm.append(acc)
sensitivity_list_cm.append(sensi)
precision_list_cm.append(precision)
auc_list.append(auc)
# 10-Fold Cross Validation
kf = KFold(n_splits=10, shuffle=True, random_state=12345)
# Cross validation on baseline model
cv = cross_val_score(Ada, # baseline model
train_feature_temp, # Feature matrix
train_target_temp, # Output vector
cv=kf, # Cross-validation technique
scoring='accuracy' # Model performance metrics: accuracy
)
kf_score = cv.mean()
accuracy_list_kf.append(kf_score)
if (acc > best_acc):
best_k = k
best_acc = acc
best_sensi = sensi
best_precision = precision
best_auc = auc
best_kf = kf_score
# -
# ### 4.4.5. Evaluation of improved model (Adaboost) with CM and K-fold
# +
# Graph for CM
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(N_features, accuracy_list_cm, 'g*-')
ax.plot(N_features[16], accuracy_list_cm[16], marker='o', markersize=12, markeredgewidth=2, markeredgecolor='r', markerfacecolor='None')
plt.grid(True)
plt.xlabel('Number of features')
plt.ylabel('Accuracy of models')
plt.title('Test-set accuracy vs number of features')
plt.show()
print("Highest accuracy of Adaboost model with 18 features selected: " + str(accuracy_list_cm[16]))
model13 = model_summary("Adaboost improved",best_acc,best_sensi,best_precision,best_kf,best_auc)
# -
# # 5. Comparison of models
results = pd.concat([model1,model2,model3,model4,model5,model6,model7,model8,model9,model10,model11,model12,model13])
results.reset_index(drop = True, inplace = True)
results.head(13)
# +
# Source: https://www.kaggle.com/pavanraj159/telecom-customer-churn-prediction
import plotly.graph_objs as go
from plotly.offline import iplot  # render interactively inside the notebook
def output_tracer(metric,color) :
tracer = go.Bar(y = results["Model"] , x = results[metric],
orientation = "h",name = metric ,
marker = dict(line = dict(width =.7),color = color)
)
return tracer
layout = go.Layout(dict(title = "Model performances",
plot_bgcolor = "rgb(243,243,243)",
paper_bgcolor = "rgb(243,243,243)",
xaxis = dict(gridcolor = 'rgb(255, 255, 255)',title = "metric",
zerolinewidth=1,ticklen=5,gridwidth=2),
yaxis = dict(gridcolor = 'rgb(255, 255, 255)',
zerolinewidth=1,ticklen=5,gridwidth=2),
margin = dict(l = 250),
height = 780
)
)
trace1 = output_tracer("Accuracy","orange")
trace2 = output_tracer('Sensitivity',"red")
trace3 = output_tracer('Precision',"lightblue")
trace4 = output_tracer('k-fold',"lightgrey")
trace5 = output_tracer('Area_under_curve',"yellow")
data = [trace1,trace2,trace3,trace4,trace5]
fig = go.Figure(data=data,layout=layout)
iplot(fig)
# -
| Customer_Churn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Relation between Language and (Job Satisfaction and Salary)
# Our Questions to answer are:
# 1. Which Language gives you the most job satisfaction?
# 2. Which Language is paid best?
# ### Add modules to sys.path
# Throughout this notebook we may use some custom functions defined outside of this notebook. To be able to import and use them here, we append the project's root directory to the system path.
# +
from pathlib import Path
import sys
PROJECT_ROOT_DIR = Path() / '..'
sys.path.append(str(PROJECT_ROOT_DIR))
# now we can do, for example: "from utils import plot_utils" to import the plot_utils module present in the utils package.
# -
# ### Data Preparation
# To answer our questions we first load the public survey data (https://insights.stackoverflow.com/survey) of 2017 into a DataFrame
# +
from pathlib import Path
import pandas as pd
DATA_DIR = PROJECT_ROOT_DIR / 'data'
FILE_NAME_2017 = 'survey_results_public_2017.csv'
df_2017 = pd.read_csv(str(DATA_DIR / FILE_NAME_2017))
df_2017.head()
# -
# Besides the id column (*Respondent*), the columns of our interest are:
# - *HaveWorkedLanguage* (Which of the following languages have you done extensive development work in over the past year, and which do you want to work in over the next year?)
# - *JobSatisfaction* (Overall, how satisfied are you with your current job? (range from 0 to 10; 0 = “Not at all satisfied”; 10 = “Completely satisfied”))
# - *Salary* (What is your current annual base salary, before taxes, and excluding bonuses, grants, or other compensation?)
#
# We can drop the rows with NaN values for *HaveWorkedLanguage* since they do not hold any information relevant for our questions of interest.
df = df_2017[['Respondent', 'HaveWorkedLanguage', 'JobSatisfaction', 'Salary']]
df = df[df['HaveWorkedLanguage'].notna()]
df.describe()
df.head()
# As we can see above, there is one difficulty with the *HaveWorkedLanguage* column. Instead of just one value, it may also contain a list of semicolon-separated values. Speaking in relational-database terms, the data is in an unnormalized form (UNF). To handle that we will bring it into first normal form (1NF) by creating a separate row for each of the semicolon-separated values.
df = df.assign(HaveWorkedLanguage=df['HaveWorkedLanguage'].str.split(';')).explode('HaveWorkedLanguage')
df['HaveWorkedLanguage'] = df['HaveWorkedLanguage'].apply(lambda x: str(x).strip())
df.head(10)
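The `assign`/`explode` step above can also be sketched without pandas, to make the UNF to 1NF transformation explicit (toy rows, not the actual survey data):

```python
# Toy UNF rows: (respondent_id, semicolon-separated languages).
rows = [(1, "Python; SQL"), (2, "JavaScript")]

# 1NF: one (respondent_id, language) pair per row, whitespace stripped.
normalized = [
    (rid, lang.strip())
    for rid, langs in rows
    for lang in langs.split(";")
]
print(normalized)  # [(1, 'Python'), (1, 'SQL'), (2, 'JavaScript')]
```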
# ### Question 1 & Question 2 for all data
# Now we can compute the average *JobSatisfaction* and *Salary* per *HaveWorkedLanguage*. As we can see from the DataFrame's head, the *JobSatisfaction* and *Salary* columns have a lot of NaN values (also see the `df.describe()` call earlier). Because we are interested in computing the mean of those columns (per *HaveWorkedLanguage*), we need to drop the NaN values since they hold no valuable information for us. But this also means that our results do not represent all respondents, since we discard their NaN responses. In the case of *JobSatisfaction* this does not seem too bad because most respondents answered this question. But in the case of *Salary* only about a third of the respondents gave a non-NaN answer.
df_for_avg_job_satisfaction = df[['HaveWorkedLanguage', 'JobSatisfaction']].dropna()
job_satisfaction_per_lang = df_for_avg_job_satisfaction.groupby('HaveWorkedLanguage').mean().reset_index()
job_satisfaction_per_lang.head()
df_for_avg_salary = df[['HaveWorkedLanguage', 'Salary']].dropna()
salary_per_lang = df_for_avg_salary.groupby('HaveWorkedLanguage').mean().reset_index()
salary_per_lang.head()
# We now have the data we need to answer our two questions from the beginning. Let's scale it and put it in a nice chart.
# +
from pandas import DataFrame
from sklearn import preprocessing
from utils import plot_utils
chart_df = DataFrame()
chart_df['Language'] = job_satisfaction_per_lang['HaveWorkedLanguage']
chart_df['Avg Job Satisfaction'] = preprocessing.minmax_scale(job_satisfaction_per_lang['JobSatisfaction'])
chart_df['Avg Salary'] = preprocessing.minmax_scale(salary_per_lang['Salary'])
plot_utils.plot_df(df=chart_df,
title='Job Satisfaction and Salary per Language',
figsize=(27, 9),
column_to_sort_by='Avg Job Satisfaction',
ascending=False)
# -
# We can see that Smalltalk and Objective-C seem to be exceptionally satisfying. Similarly, Clojure and Smalltalk seem to be paid very well. In contrast, we find the VB family at the bottom of the chart for both satisfaction and salary. It is noticeable that a lot of the top and bottom values belong to languages that are not very commonly known (we will also see that in the next section). Therefore, the values for these languages are less representative and tend to vary more strongly than those for languages known by a lot of developers, like JavaScript or SQL. Consequently, we have to be careful with the values for these 'niche' languages. In the next section we will exclude them.
# ### Question 1 and Question 2 without bottom 4% of popularity
# The chart above is not very concise. Therefore, we disregard the languages that less than 4% of the developers have worked with last year.
# +
counts_per_lang = df['HaveWorkedLanguage'].value_counts()
percentage_per_lang = counts_per_lang / df['Respondent'].nunique()
to_remove = percentage_per_lang[percentage_per_lang < 0.04].index.tolist()
chart_df = chart_df[~chart_df['Language'].isin(to_remove)]
plot_utils.plot_df(df=chart_df,
title='Job Satisfaction and Salary per Language',
figsize=(27, 9),
column_to_sort_by='Avg Job Satisfaction',
ascending=False)
# -
# Now this chart looks far more concise than the one above. As we can see, Objective-C seems to satisfy developers very well and is not paid badly. Similarly, Go is very well paid and seems to satisfy developers. The very low placements of Java and SQL might be surprising, since they are among the most widely known programming languages in existence.
| notebooks/language_satisfaction_salary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Regular expressions and requests
#
# The last lecture concetrated on the basic usage of regular expressions when dealing with strings in Python or a locally imported .txt file. This lecture introduces a new library called **requests**, qhich allows to send a request to a given URL and receive the HTML source of the page as an output/response. The output/response can then be used for scraping purposes, and that's when the RegEx may come handy. This lecture also intrduces some nice methods for working with JSON files.
import re
import requests
# As you may remember, when introducing JSON files, we got acquainted with an API that has data on the people currently in space. The data was given, as is the case with most APIs, in JSON format.
url = "http://api.open-notify.org/astros.json"
# In order to get the data from the page, we will use the **get()** function from the **requests** library.
response = requests.get(url)
# The response was saved in a variable, which we decided to call directly **response** (but, of course, you may have called it anything you want). If you check the type, you will see that the **response** variable has a specific type called **Response**.
type(response)
# I believe you may have at least once experienced an error when trying to reach some website. Probably the most popular one is **"404 error - Website was not found"**. These are standardized status codes used in HTTP requests. The full list of HTTP status codes can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes). One of the codes is **"200 - OK"**, which basically tells us that everything went well. In our case, if we were able to successfully receive the data from the URL above, then we should receive the **200** status code. This can be seen by just printing the response.
print(response)
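The integer code is also available directly as `response.status_code`, and the code-to-reason-phrase table we linked above actually ships with Python's standard library. A small sketch of the latter, so no network access is needed:

```python
from http.client import responses

# requests exposes the integer as response.status_code; the standard
# library maps each code to its official reason phrase:
print(responses[200])  # 'OK'
print(responses[404])  # 'Not Found'
```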
# As you have seen above, printing the response results in the status code (e.g. OK, an error or anything else) but not in the content of the website. To receive the content/data, we must use one of the available methods from the **requests** library. In our case, as the data is given in JSON format, we will use the **json()** method, which can be called directly on our response object/variable.
data = response.json()
type(data)
# As you can see above the type of our data variable, which includes the data in JSON format, is a usual Python dictionary. Let's print it, to see the content.
print(data)
# When printing a dictionary, especially one with nested values, the output is not very readable and user friendly. For that purpose, one may use a standard-library module to make printing pretty. The pretty-printing module is called **pprint** and the function we need from it is again called **pprint**.
from pprint import pprint
pprint(data)
# Now the content is the same and the type of the variable is the same, yet the printed output is much more readable. As you can see, there are only 3 people in space currently.
#
# You may wonder why we have those **u**-s in front of each string. The **u** stands for **unicode**, indicating the encoding of the string. It is fine that you see it, but end users will not, so you may simply ignore that letter.
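Before reaching for regular expressions, note that for JSON-shaped data plain dictionary access is usually the more robust tool. A minimal sketch on a hand-written dictionary that mirrors the usual astros.json schema; the names here are invented, and the live payload will of course differ:

```python
# A toy payload with the same shape as the astros.json response.
sample = {
    "message": "success",
    "number": 3,
    "people": [
        {"name": "Anne", "craft": "ISS"},
        {"name": "Boris", "craft": "ISS"},
        {"name": "Carla", "craft": "ISS"},
    ],
}

# Dictionary access gets the count and names without any pattern matching.
names = [person["name"] for person in sample["people"]]
print(sample["number"])  # 3
print(names)             # ['Anne', 'Boris', 'Carla']
```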
# Let's now move to RegEx. Regular expressions deal with text, however our dataset is a JSON file loaded to Python as a dictionary. In order to be able to apply regular expressions on them, it is necessary to convert the dictionary (or part of it, that interests us) to a simple string.
converted = str(data)
type(converted)
print(converted)
# As before, the printing results are not very readable, but what do you think the **pprint()** function will result in?
pprint(converted)
# The latter produces almost the same output, again not very user friendly. The thing is that our variable of interest (**converted**) is no longer a dictionary, which makes it impossible for pprint to understand what the keys and values are.
#
# Anyway, let's apply RegEx to that non-user-friendly string. The first method we will apply from the **re** package is again **findall()**. As the only numeric value in the text is the number of humans currently in space, we can easily learn that number just by searching for digits in the whole output text.
output = re.findall('[0-9]',converted)
print output
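One caveat: `[0-9]` matches a single digit, so a multi-digit count would come back as separate characters. Quantifying the class with `+` keeps whole numbers together; a quick sketch on an invented sample string:

```python
import re

sample_text = "There are 12 people in space"
print(re.findall('[0-9]', sample_text))   # ['1', '2'] -- one match per digit
print(re.findall('[0-9]+', sample_text))  # ['12']     -- the whole number
```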
# Let's now extract the full names of "spacemen". The first letters of names and surnames will be the only letters in the text that are uppercase and are followed by lowercase. There will also be one whitespace character between a name and a surname.
output = re.findall('[A-Z][a-z]+\s[A-Z][a-z]+',converted)
print output
# Please note that we were lucky there was no Spanish or Mexican astronaut in space, as they might have a very long full name consisting of several elements separated by several whitespaces (e.g. <NAME>).
# Let's now move back to text files. Last time we went to the Project Gutenberg webpage and found the Theodore Dreiser book titled "The financier". The book was downloaded and then imported into Python. This time we will again use that book to apply regular expressions, yet we will not download it manually but rather read it directly from the URL.
url_book = "https://www.gutenberg.org/files/1840/1840-0.txt"
response_book = requests.get(url_book)
# The response we received about people in space was in JSON format. This time it is in .txt format. The correct attribute here is **text**, which provides the text of the received response.
financier_online = response_book.text
# If you check the type of the variable above (**financier_online**) you will see **unicode** as the outcome, which is the encoding standard of the received text.
type(financier_online)
# As the whole book was received as a string, we do not want to print all of it, so let's print part of it.
financier_online[70:205]
# Let's again see how many times <NAME> uses the $ sign in his book.
output = re.findall("\$",financier_online)
print output
# As you can see, the "u" again appeared in front of each string inside the list. But, as mentioned before, this will not affect the results and will not be visible to the end user. To feel safer, one can verify that by printing a single element from the output list and seeing that the "u" disappears.
print output[0]
# Let's see how many times and on what occasions the symbol "@" was used.
output = re.findall("@",financier_online)
print output
output = re.findall("\S+@\S+",financier_online)
print output
# Well then, let's move from The financier to my webpage and get the HTML source from it.
url_Hrant = "https://hrantdavtyan.github.io/"
response_Hrant = requests.get(url_Hrant)
# As before, we are interested in text content. HTML document is nothing else than text, so we can use the same method to get the data.
my_page = response_Hrant.text
my_page[:500]
# Let's now try to find my e-mail from the website.
output = re.findall('\S+@\S+',my_page)
print output
# It is fine, we were able to find my e-mail (twice!). However, unlike a simple .txt document, here we do not have a lot of whitespace because almost everything is inside HTML tags. This means we should actually get rid of the tags using RegEx. There are many possible approaches. One of them is to substitute all the tag elements with a whitespace.
output = re.findall('\S+@\S+',my_page)
print re.sub(r'<.*?>'," ",str(output))
# Much better, but we still get the "Email" text before the actual e-mail. The thing is, we replaced the tags only in the output, while it would have been much more beneficial to replace them in the whole text and then search for the e-mail match inside the new, tag-free text.
page_no_tags = re.sub(r'<.*?>'," ",my_page)
print re.findall('\S+@\S+',page_no_tags)
# Now it works. But this does not mean you should always drop the tags (e.g. replace them with a space). Sometimes tags can be useful to find the correct text you are interested in. For example, **```<a>```** tag or the **```href = " "```** element can be useful when searching for hyperlinks.
output = re.findall('<a href\s*=\s*"(.+)"\s*>',my_page)
print output
# It seems we successfully received all the links inside an **```<a>```** tag (but not the links inside a **```<link>```** tag). Let's check how many were received.
len(output)
# Excellent! If you manually open my page and look for a "href" element, you will find 33, and 4 of them are inside a **```<link>```** tag. So we correctly received all the necessary elements. But as you can see, we also received some text coming after the closing **```"```** sign. The reason is greediness. As we discussed last time, some expressions are greedy and try to match as much as they can, so the pattern matched the last possible **```"```** sign. Also, the matched URLs include some empty ones or ones containing only a **#** sign. We need only the URLs that are non-empty, i.e. non-whitespace.
output = re.findall(r'<a href\s*=\s*"(\S+)"\s*>',my_page)
print output
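The greediness at work here is easiest to see in isolation: `.*` grabs as much as it can before the last possible closing quote, while the lazy `.*?` stops at the first opportunity. A small sketch on a made-up snippet:

```python
import re

snippet = '<a href="first.html">one</a> <a href="second.html">two</a>'

# Greedy: one match spanning from the first opening quote to the last
# closing quote, swallowing everything in between.
print(re.findall(r'href="(.*)"', snippet))

# Lazy: stops at the nearest closing quote, one match per link.
print(re.findall(r'href="(.*?)"', snippet))  # ['first.html', 'second.html']
```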
# Fine, now we have only those URLs that are not followed by anything. This is the reason there is no place for FB or LinkedIn etc. To get them as well, we just need to replace the non-whitespace character class with any character (**.**).
output = re.findall(r'<a href\s*=\s*"(\S+)".*>',my_page)
print output
| Week 3/W3_RegEx_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Coase and Property
#
# > <NAME>. 1960. “The Problem of Social Cost.” *The Journal of Law and Economics* 3:1–44.
#
# > Coase, <NAME>. 1937. “The Nature of the Firm.” *Economica* 4 (16):386–405.
# + [markdown] slideshow={"slide_type": "skip"}
# **Slideshow mode**: this notebook can be viewed as a slideshow by pressing Alt-R if run on a server.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Coase (1960) The Problem of Social Cost
#
# ### A rancher and wheat farmer.
#
# Both are utilizing adjacent plots of land. No fence separates the lands.
# + [markdown] slideshow={"slide_type": "subslide"}
# **The Wheat Farmer:** chooses a production method that delivers a maximum profit of $\Pi_W =8$.
# - to keep this simple suppose this is the farmer's only production choice.
# + [markdown] slideshow={"slide_type": "subslide"}
# **The Rancher:** chooses herd size $x$ to maximize profits $\Pi_C(x) = P \cdot F(x) - c \cdot x^2$
#
# - $P$ is cattle price and $c$ is the cost of feeding each animal.
# + [markdown] slideshow={"slide_type": "fragment"}
# - The herd size $x^*$ that maximizes profits is given by the first-order condition:
#
# $$P \cdot F'(x^*) = 2 c x^*$$
# + [markdown] slideshow={"slide_type": "subslide"}
# **Example:** If $F(x) = x$ and $c=\frac{1}{2}$,
#
# the FOC gives $x^{*} = \frac{P_c}{2c}$
#
# With $P_c=4$ and $c=\frac{1}{2}$, the rancher's privately optimal herd size is $x^* = 4$.
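The interior optimum can be checked numerically by scanning herd sizes; a small sketch with the parameter values above ($P_c=4$, $c=\frac{1}{2}$, $F(x)=x$):

```python
P, c = 4, 0.5  # cattle price and feeding-cost parameter from the example

def rancher_profit(x):
    # Pi_C(x) = P * F(x) - c * x**2 with F(x) = x
    return P * x - c * x ** 2

# Scan integer herd sizes and keep the profit-maximizing one.
best_x = max(range(11), key=rancher_profit)
print(best_x, rancher_profit(best_x))  # 4 8.0
```

This agrees with the text: at $x^*=4$ the rancher earns \$8.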
# + [markdown] slideshow={"slide_type": "slide"}
# #### Missing Property Rights impose external costs
#
# With no effective barrier separating the fields, cattle sometimes stray into the wheat farmer's fields, damaging crops and reducing the wheat farmer's profits.
#
# Assume that if the rancher keeps a herd of size $x$, net profits in wheat are reduced from $\Pi_W$ to:
#
# $$\Pi_W(x) = \Pi_W - d \cdot x^2$$
# + [markdown] slideshow={"slide_type": "subslide"}
# **The external cost**
#
# Suppose $d=\frac{1}{2}$
#
# At the rancher's private optimum herd size of $x^*=4$, the farmer's profit is reduced from 8 to zero:
#
# $$\begin{align}
# \Pi_W(x) &= \Pi_W - d \cdot x^2 \\
# & = 8 - \frac{1}{2} \cdot 4^2 = 0
# \end{align}$$
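Joint profits make the deadweight loss concrete: the rancher's private optimum maximizes $\Pi_C$ alone, while the efficient herd size maximizes $\Pi_C(x) + \Pi_W(x)$. A small sketch with the values above ($P_c=4$, $c=d=\frac{1}{2}$, $\Pi_W=8$):

```python
P, c, d, PI_W = 4, 0.5, 0.5, 8  # parameter values from the example

def rancher_profit(x):
    return P * x - c * x ** 2

def farmer_profit(x):
    return PI_W - d * x ** 2

def joint_profit(x):
    return rancher_profit(x) + farmer_profit(x)

x_private = max(range(9), key=rancher_profit)  # rancher ignores crop damage
x_social = max(range(9), key=joint_profit)     # efficient herd size
print(x_private, x_social)                               # 4 2
print(joint_profit(x_social) - joint_profit(x_private))  # deadweight loss: 4.0
```

The \$4 gap is exactly the surplus the two parties can bargain over in the scenarios below.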
# + slideshow={"slide_type": "skip"}
from coase import *
from ipywidgets import interact, fixed
# + [markdown] slideshow={"slide_type": "slide"}
# At the private optimum the rancher earns \$8 but imposes external costs that drive the farmer's earnings to zero.
# + slideshow={"slide_type": "fragment"}
coaseplot1()
# + [markdown] slideshow={"slide_type": "slide"}
# Private and social marginal benefits and costs can be plotted to see deadweight loss (DWL) differently:
# + slideshow={"slide_type": "fragment"}
coaseplot2()
# + [markdown] slideshow={"slide_type": "slide"}
# ## The assignment of property rights (liability)
# + [markdown] slideshow={"slide_type": "slide"}
# **Scenario 1:** Farmer is given the right to enjoin (i.e. limit or prohibit) cattle herding.
#
# If the farmer enforces a prohibition on all cattle herding:
#
# - Rancher now earns \$0.
# - Farmer earns \$8.
# + [markdown] slideshow={"slide_type": "fragment"}
# - But this is not efficient! Total output is smaller than it could be.
# - If transactions costs are low the two parties can bargain to a more efficient outcome.
# + [markdown] slideshow={"slide_type": "slide"}
# **Scenario 1:** Farmer is given the right to enjoin (i.e. limit or prohibit) cattle herding.
#
# The rancher reasons that if she were permitted to herd 2 cattle she'd earn $\$6$ while imposing $\$2$ in damage.
# - She could offer $\$2$ in full compensation for the damage, pocketing the remaining $\$4$,
# - or they could bargain over how to divide the $\$4$ in gains from trade in other ways.
# + [markdown] slideshow={"slide_type": "slide"}
# **Scenario 2:** Rancher is granted right to graze with impunity.
#
# Farmer reasons that if herd size could be reduced from 4 to 2
# - farm profits would rise from $\$0$ to $\$6$
# - rancher's profits would fall from $\$8$ to $\$6$
# + [markdown] slideshow={"slide_type": "fragment"}
# - So the farmer could offer to fully compensate the rancher for the $\$2$ loss and pocket the remaining $\$4$,
# - or they could bargain to divide those $\$4$ in gains from trade in other ways.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Who causes the externality?
#
# - The rancher, because his cows trample the crops?
# - The farmer, for placing his field too close to the rancher?
# - Ronald Coase's point is that there is no clear answer to this question.
# - Hence Pigouvian tax/subsidy 'solutions' are not obvious. Should we tax the rancher, or subsidize them to keep their herd size down?
# - 'Externality' problem is due to the non-assignment of property rights.
# + [markdown] slideshow={"slide_type": "slide"}
# ## The 'Coase Theorem'
#
# ### With zero/low transactions costs
#
# - **The initial assignment of property rights does not matter for efficiency:** The parties traded to an efficient solution no matter who first got the rights.
# + [markdown] slideshow={"slide_type": "fragment"}
# - **The 'emergence' of property rights**: Even with no initial third-party assignment of property rights, it should be in the interests of the parties to create such rights and negotiate/trade to an efficient outcome.
# + [markdown] slideshow={"slide_type": "fragment"}
# - **The initial allocation does matter for the distribution of benefits between parties.** Legally tradable entitlements are valuable, generate income to those who can then sell.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Coase Theorem: True, False or Tautology?
#
# > "Costless bargaining is efficient tautologically; if I assume people can agree on socially efficient bargains, then of course they will... In the absence of property rights, a bargain *establishes* a contract between parties with novel rights that needn’t exist ex-ante."
# Cooter (1990)
#
# In the Farmer and Rancher example there was a missing market for legal entitlements.
#
# Once the market is made complete (by an assumed third party) then the First Welfare Theorem applies: complete competitive markets will lead to efficient allocations, regardless of initial allocation of property rights.
#
# The "Coase Theorem" makes legal entitlements tradable.
# + [markdown] slideshow={"slide_type": "slide"}
# In this view, ensuring efficiency is a matter of removing impediments to the free exchange of legal entitlements. However,
#
# >"The interesting case is when transaction costs make bargaining difficult. What you should take from Coase is that social efficiency can be enhanced by institutions (including the firm!) which allow socially efficient bargains to be reached by removing restrictive transaction costs, and particularly that the assignment of property rights to different parties can either help or hinder those institutions."
#
# Good further discussions can be found from [<NAME>](http://www.deirdremccloskey.com/docs/pdf/Article_306.pdf) and [here](https://afinetheorem.wordpress.com/2013/09/03/on-coases-two-famous-theorems/).
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## When initial rights allocations matters for efficiency
#
# - 'Coase Theorem' (Stigler) interpretation sweeps under the rug the complicated political question of who gets initial rights.
# - Parties may engage in costly conflict, expend real resources to try to establish control over initial allocation of rights.
# - The [Myerson-Satterthwaite theorem](https://en.wikipedia.org/wiki/Myerson%E2%80%93Satterthwaite_theorem) establishes that when parties are asymmetrically informed about each other's valuations (e.g. here about the value of damages or benefits), efficient exchange may become difficult or impossible. Each party may try to extract rents by trying to "hold up" the other.
# - Suppose we had many farmers and ranchers. It might be costly/difficult to bring all relevant ranchers and farmers together and to agree on bargain terms.
# - Coase himself thought transactions costs mattered and hence initial allocation mechanisms had to be thought through carefully (e.g. spectrum auctions).
# + [markdown] slideshow={"slide_type": "slide"}
# ## A Coasian view of land market development
#
# Suppose there is an open field. In the absence of a land market, whoever gets to the land first (possibly the more powerful in the village) will prepare/clear land until the marginal value product of the last unit of land is equal to the clearing cost. We contrast two situations:
#
# (1) Open frontier: where land is still abundant
#
# (2) Land Scarcity.
#
# There will be a misallocation in (2), shown by DWL in the diagram, but also an incentive for the parties to bargain to a more efficient outcome. A well-functioning land market would also deliver that outcome.
# + [markdown] slideshow={"slide_type": "slide"}
# #### Abundant land environment
#
# $\bar T$ units of land and $N$=2 households.
#
# Land clearing cost $c$. Frontier land not yet exhausted.
#
# Maximize profits at $P \cdot F_T(T) = c$
# + [markdown] slideshow={"slide_type": "slide"}
# Land demand for each farmer is given by $P\cdot F_T(T_i) = r$. For this production function $P \frac{1}{\sqrt{T_i}} = r$, so we can write
#
# $$T^*_i(r) = (P/r)^2$$
#
# If there is an open frontier, the sum of demands falls short of total land supply and the marginal cost of land is the clearing cost $r=c_l$.
# + [markdown] slideshow={"slide_type": "slide"}
# 'Land scarcity' results on the other hand when there is an equilibrium price of land $r>c_l$ where $r$ is found from
#
# $$\sum T^*_i(r) = \bar T$$
#
# Now a land rent $r-c_l$ can be charged on the right to access and use land. Trade in these legal entitlements can raise output and efficiency. But there may be conflict and a 'scramble' to establish those rights of first access.
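With $N$ identical farmers, the equilibrium rent has a closed form: $N (P/r)^2 = \bar T$ gives $r = P\sqrt{N/\bar T}$ whenever that exceeds the clearing cost $c_l$. A small sketch (illustrative parameter values, not necessarily the ones behind the plots):

```python
import math

def equilibrium_rent(P, N, T_bar, c_l):
    # Solve N * (P / r)**2 = T_bar for r, floored at the clearing
    # cost c_l, which prevails while the frontier is still open.
    r = P * math.sqrt(N / T_bar)
    return max(r, c_l)

print(equilibrium_rent(P=8, N=2, T_bar=2, c_l=3))  # land scarcity: r = 8.0 > c_l
print(equilibrium_rent(P=2, N=2, T_bar=8, c_l=3))  # open frontier: r = c_l = 3
```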
# + [markdown] slideshow={"slide_type": "slide"}
# #### 'Customary' land rights
#
# - Suppose norm is that all in the village can use as much land as they can farm
# - Higher status individuals get allocation first
# - As long as land is abundant everyone gets the land they want
# - No "land rent" -- cannot charge rent above $c$ since villagers are free to clear at cost $c$
# + slideshow={"slide_type": "slide"}
landmarket(P=5, cl = 3, title = 'Open Frontier')
# + [markdown] slideshow={"slide_type": "slide"}
# ### The closing of the frontier
# - Rising population or improving price or technology increases demand for land.
# - Suppose price at which product can be sold increases
# - demand for land increases.
# - Suppose total demand at clearing cost $c$ exceeds available land supply.
# - High-status individuals (who have first access) leave less land available than is needed to satisfy the remaining villagers' demand.
# - Inefficient allocation of land
# - marginal products of land not equalized across households.
# - output would increase if we establish a market for trading land
# + slideshow={"slide_type": "slide"}
landmarket(P=8, cl = 3, title = 'Land Scarcity')
# + [markdown] slideshow={"slide_type": "skip"}
# We can solve for the equilibrium rental rate $r$ given environment parameters including the price $P$, land endowment $\bar T$, population size $N$ and the technology parameter $A$.
# + [markdown] slideshow={"slide_type": "skip"}
# To do:
# (things to still do in this notebook)
# - indicate DWL on landmarket diagrams
# - create widget to see how diagram shifts with changing parameters
# -
interact(landmarket, P=(4,10,0.2), cl = (0,5,0.5),
title = fixed('Land'), A=fixed(1));
| notebooks/Coase.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# imports
import numpy as np
import pandas as pd
# +
# mean and median
incomes = [10000, 30000, 5000000]
print("Mean:", np.mean(incomes))
print("Median:", np.median(incomes))
# -
# Dataframe mean
df = pd.read_csv("../res/diamonds.csv")
df.mean(numeric_only=True)  # column-wise mean of the numeric columns only
# Get data
salaries = [40000, 40000, 41000, 50000, 54000, 70000, 90000]
# +
# Visualize data
# %matplotlib inline
import matplotlib.pyplot as plt
plt.hist(salaries, bins = 10)
plt.show()
# -
| common/tutorials/src/ML_17_statistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 1 (Day 6) - OPPs
# + active=""
# Question NO.1:
# For this challenge, Create a bank a/c class that has two attributes
# * Owner Name
# * Balance
# and two methods
# * Deposit
# * Withdraw
# As an added requirement, withdrawals may not exceed the available balance.
# Instantiate your class, make several deposits and withdrawals, and test to make sure the account can't be overdrawn.
# -
class BankAc():
    def __init__(self, ownerName, balance):
        self.ownerName = ownerName
        self.balance = balance
    def deposit(self, d):
        self.balance += d
        print("You have added " + str(d) + " to your account")
        print("Your current balance is", self.balance)
    def withdraw(self, w):
        # Withdrawals may not exceed the available balance
        if w > self.balance:
            print("You have insufficient balance")
        else:
            self.balance -= w
            print("Your withdrawal amount is " + str(w) + "; your current balance is " + str(self.balance))
ac = BankAc("sharmila", 50000)
ac.deposit(25000)
ac.withdraw(10000)
| Assignment 1(Day 6) - Bank acc - Sharmila M.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sadiatanjim/human-activity-recognition/blob/master/HAR_TSNE.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="A6md6HWpBriH" colab_type="text"
# # Load Dataset from Git
# + id="wGuaz8L1VLUF" colab_type="code" outputId="02eee279-a8aa-40a1-8957-df50205f4cad" colab={"base_uri": "https://localhost:8080/", "height": 125}
# !git clone "https://github.com/laxmimerit/Human-Activity-Recognition-Using-Accelerometer-Data-and-CNN"
# + id="eTEIbJGSVWdV" colab_type="code" colab={}
# Locate the cloned repository directory (its position in the listing may vary)
import os
path = os.listdir()[1]
# + id="g0xJlDmtB1yD" colab_type="code" outputId="b767e4b1-d98c-4709-de2c-436c9b3e20f7" colab={"base_uri": "https://localhost:8080/", "height": 35}
path
# + id="ZnaP6A-XB2px" colab_type="code" outputId="91b551b5-b321-4b59-a99f-d6cd5bd1bb2d" colab={"base_uri": "https://localhost:8080/", "height": 107}
os.listdir(path+'/WISDM_ar_v1.1')
# + [markdown] id="iwRxLiHSENkb" colab_type="text"
# # Data Preprocessing
# + id="ocv5I-6mEBbN" colab_type="code" outputId="eb68c1f0-d0c9-4e9b-a0a7-57fc88d1646f" colab={"base_uri": "https://localhost:8080/", "height": 71}
# Loading dataset to a 'Processed List'
with open(path + '/WISDM_ar_v1.1/WISDM_ar_v1.1_raw.txt') as file:
    lines = file.readlines()
processedList = []
for i, line in enumerate(lines):
try:
line = line.split(',')
last = line[5].split(';')[0]
last = last.strip()
if last == '':
            break
temp = [line[0], line[1], line[2], line[3], line[4], last]
processedList.append(temp)
    except IndexError:
        print('Error at line number: ', i)
# + [markdown] id="U9xr1cOWEb85" colab_type="text"
# # Loading Data Into Pandas DataFrame
# + id="WMgEDzUnERtJ" colab_type="code" outputId="d2ab87d5-3d72-4ddc-973c-32e20d168f13" colab={"base_uri": "https://localhost:8080/", "height": 204}
# Create Pandas DataFrame from Processed List
import pandas as pd
columns = ['user', 'activity', 'time', 'x', 'y', 'z']
data = pd.DataFrame(data = processedList, columns = columns)
data.head()
# + id="2-QGPI0uE553" colab_type="code" outputId="a6f3919e-817c-4cb4-b2c0-e59a59d07175" colab={"base_uri": "https://localhost:8080/", "height": 143}
data['activity'].value_counts()
# + [markdown] id="-wtS-fyEFAPn" colab_type="text"
# # Data type to Float
# + id="d0mRslNME9sM" colab_type="code" colab={}
data['x'] = data['x'].astype('float')
data['y'] = data['y'].astype('float')
data['z'] = data['z'].astype('float')
# + [markdown] colab_type="text" id="0LfmCK6Yqt5N"
# # Time Series Visualization:
# + colab_type="code" id="9kNJ6IgLqtz1" colab={}
import matplotlib.pyplot as plt
import numpy as np
Fs = 20 # Sampling Frequency
activities = data['activity'].value_counts().index # Activity names
# + colab_type="code" id="o7-ssNBhqts0" outputId="c621252f-1494-4621-a95b-5690db5d8911" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Helper functions for plotting activities
def plot_activity(activity, data):
fig, (ax0, ax1, ax2) = plt.subplots(nrows=3, figsize=(15, 7), sharex=True)
plot_axis(ax0, data['time'], data['x'], 'X-Axis')
plot_axis(ax1, data['time'], data['y'], 'Y-Axis')
plot_axis(ax2, data['time'], data['z'], 'Z-Axis')
plt.subplots_adjust(hspace=0.2)
fig.suptitle(activity)
plt.subplots_adjust(top=0.90)
plt.show()
def plot_axis(ax, x, y, title):
ax.plot(x, y, 'g')
ax.set_title(title)
ax.xaxis.set_visible(False)
ax.set_ylim([min(y) - np.std(y), max(y) + np.std(y)])
ax.set_xlim([min(x), max(x)])
ax.grid(True)
for activity in activities:
data_for_plot = data[(data['activity'] == activity)][:Fs*10]
plot_activity(activity, data_for_plot)
# + [markdown] id="ve4j75rrFk_q" colab_type="text"
# ## Dropping User/Time axes
# + id="l4TcohE_Fb5N" colab_type="code" outputId="0b10bd24-dc79-460b-b311-7c6d81701572" colab={"base_uri": "https://localhost:8080/", "height": 204}
df = data.drop(['user', 'time'], axis = 1).copy()
df.head()
# + [markdown] id="cGVAZhjBF2N9" colab_type="text"
# # Balancing Data
# + id="Msa_3ayrFuAn" colab_type="code" colab={}
# Taking the first 3555 samples from each class for balancing data
Walking = df[df['activity']=='Walking'].head(3555).copy()
Jogging = df[df['activity']=='Jogging'].head(3555).copy()
Upstairs = df[df['activity']=='Upstairs'].head(3555).copy()
Downstairs = df[df['activity']=='Downstairs'].head(3555).copy()
Sitting = df[df['activity']=='Sitting'].head(3555).copy()
Standing = df[df['activity']=='Standing'].copy()
# + id="BJ7QuqHNFwuU" colab_type="code" outputId="dbf87dc7-ac50-40f8-c39a-71b35b950191" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Creating balanced dataframe
balanced_data = pd.concat([Walking, Jogging, Upstairs, Downstairs, Sitting, Standing])
balanced_data.shape
# + id="qk4KX5g2Fy7e" colab_type="code" outputId="e76706ec-8d75-45ee-fc63-2189f881ccce" colab={"base_uri": "https://localhost:8080/", "height": 143}
balanced_data['activity'].value_counts()
# + [markdown] id="nWrlhM0hGBNl" colab_type="text"
# # Scaling Features
# + id="HTbsAo4IGHfo" colab_type="code" colab={}
from sklearn.preprocessing import StandardScaler
# + id="Wp9cijpNGiRV" colab_type="code" colab={}
X = balanced_data[['x', 'y', 'z']]
y = balanced_data['activity']
# + id="8p9nkQ0_Gj2v" colab_type="code" outputId="2e8ee658-bc68-4c17-b72b-f0cd613a99ed" colab={"base_uri": "https://localhost:8080/", "height": 419}
# Using Scikit-learn's Standard Scalers to scale the input data
scaler = StandardScaler()
X = scaler.fit_transform(X)
scaled_X = pd.DataFrame(data = X, columns = ['x', 'y', 'z'])
scaled_X['label'] = y.values
scaled_X
# + [markdown] id="6UUI6RR-GzPe" colab_type="text"
# # Creating Frames
# + id="BrabMSi3GmJM" colab_type="code" colab={}
import scipy.stats as stats
import numpy as np
# + id="KyWNqbLPG3GS" colab_type="code" colab={}
Fs = 20 # Sampling Frequency
frame_size = Fs*4 # Taking the frame size of 80 (4 times Sampling Frequency)
hop_size = Fs*2 # Hop size of 40 between frames (2 times Sampling Frequency)
# + id="IpnO-1IjG5vl" colab_type="code" colab={}
# Helper function for getting frames
def get_frames(df, frame_size, hop_size):
N_FEATURES = 3
frames = []
labels = []
for i in range(0, len(df) - frame_size, hop_size):
x = df['x'].values[i: i + frame_size]
y = df['y'].values[i: i + frame_size]
z = df['z'].values[i: i + frame_size]
# Retrieve the most often used label in this segment
label = stats.mode(df['label'][i: i + frame_size])[0][0]
frames.append([x, y, z])
labels.append(label)
# Bring the segments into a better shape
frames = np.asarray(frames).reshape(-1, frame_size, N_FEATURES)
labels = np.asarray(labels)
return frames, labels
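# A quick sanity check on the windowing arithmetic: with 6 balanced classes of 3555 samples each (21330 rows), a frame size of 80 and a hop size of 40 yield 532 frames, matching the (532, 80, 3) shape reported for X.

```python
# Number of window start positions produced by the loop in get_frames
n_rows = 3555 * 6      # 6 balanced activity classes
frame_size = 80        # 4 seconds at 20 Hz
hop_size = 40          # 2 seconds at 20 Hz

n_frames = len(range(0, n_rows - frame_size, hop_size))
print(n_frames)  # 532
```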
# + id="qg9uf9W7R2fN" colab_type="code" colab={}
X, Y = get_frames(scaled_X, frame_size, hop_size)
# + id="1aIBk3x_SJe3" colab_type="code" outputId="a285df44-8e87-4b03-ad08-0039859f5375" colab={"base_uri": "https://localhost:8080/", "height": 35}
X.shape
# + id="UcpXhPvwPMkO" colab_type="code" colab={}
x = X[:,:,0]
y = X[:,:,1]
z = X[:,:,2]
# + id="sKsoOTyXVUzK" colab_type="code" outputId="115936f7-59f9-4300-b899-9742f640f841" colab={"base_uri": "https://localhost:8080/", "height": 35}
Y.shape
# + [markdown] id="FAHghpwWp7Cn" colab_type="text"
# # t-SNE Reduction : (532,80,3) -> (532,5,3)
# + id="GcUc0hAyawwd" colab_type="code" outputId="3dada95d-e35f-481a-9f2c-a7d64b35d0a1" colab={"base_uri": "https://localhost:8080/", "height": 233}
# !pip install MulticoreTSNE
# + id="FF3p6y6kmjp2" colab_type="code" colab={}
#from sklearn.manifold import TSNE
# Using python module MulticoreTSNE instead of sklearn's TSNE because it supports higher dimensional TSNE
from MulticoreTSNE import MulticoreTSNE as TSNE
# Define TSNE class
tsne = TSNE(n_components = 5, verbose = 1, perplexity = 30, n_iter = 300)
# Apply TSNE individually on x,y and z axis data
tsne_x = tsne.fit_transform(x)
tsne_y = tsne.fit_transform(y)
tsne_z = tsne.fit_transform(z)
# + id="SBZ6RPGyqIdx" colab_type="code" outputId="d72d9a76-3451-44b0-d068-50adae698844" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Stack the TSNE results together to form an array called tsne_stack
tsne_stack = np.stack((tsne_x,tsne_y,tsne_z),axis = 2)
tsne_stack.shape
# + id="oKEs4jDwqJUQ" colab_type="code" outputId="c0895681-5a6e-411d-a82e-ed4acaca80cb" colab={"base_uri": "https://localhost:8080/", "height": 683}
tsne_stack
# + [markdown] id="hLSsNPuoTYje" colab_type="text"
# # Data Preparation
# + id="bCWcN5rzdEQ2" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
# + id="UP7V92ireE1b" colab_type="code" outputId="82d2841a-e634-4f69-b520-0d101b5ecefc" colab={"base_uri": "https://localhost:8080/", "height": 1000}
Y
# + [markdown] id="Cy3TkndHfNbj" colab_type="text"
# ## Label Encoding of y
# + id="4WWUXQqIfReW" colab_type="code" colab={}
from sklearn.preprocessing import LabelEncoder
# + id="MA-Pq9nzfxHm" colab_type="code" colab={}
# Define Label Encoder and transform Y labels to integer values
label = LabelEncoder()
Y_encoded = label.fit_transform(Y)
# + id="cRmg1ihef8Un" colab_type="code" outputId="8c017c03-02e8-481f-e7f0-ff92846b10cb" colab={"base_uri": "https://localhost:8080/", "height": 467}
Y_encoded
# + id="Hbij1eGqq0Kh" colab_type="code" colab={}
# Train-test-split of the Training Data
# X -> output of the 5-component TSNE, stacked together
# Y -> integer-encoded labels
X_train, X_test, y_train, y_test = train_test_split(tsne_stack, Y_encoded, test_size = 0.2, random_state = 42, stratify = Y_encoded)
# + id="H37a4fQOq4mg" colab_type="code" outputId="73cb8173-22a1-4b17-cb18-b2dfce9ee062" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_train.shape, X_test.shape
# + id="ECAjo-HRq6WB" colab_type="code" outputId="fffa944f-b9dc-4b0b-9572-96249da9dd19" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_train[0].shape, X_test[0].shape
# + id="Y_QTBV3cq8vJ" colab_type="code" colab={}
X_train = X_train.reshape(-1, 5, 3, 1)
X_test = X_test.reshape(-1, 5, 3, 1)
# + id="LLip7zV1rkDk" colab_type="code" outputId="bcbec321-0ec5-461a-d575-86afe1071a79" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_train[0].shape, X_test[0].shape
# + [markdown] id="I4kd7lB6SSpw" colab_type="text"
# # 2D CNN Model
# + id="PfTDRHBqhXDI" colab_type="code" outputId="3665fab1-6204-4cc7-cada-450b269f3489" colab={"base_uri": "https://localhost:8080/", "height": 82}
# Library imports
from keras.models import Sequential
from keras.layers import Conv2D,Dropout,Flatten,Dense
from keras.optimizers import Adam
# + [markdown] id="Xz5BRqzCo1mQ" colab_type="text"
# # Small CNN
# + id="9XSP-iWIzaXR" colab_type="code" colab={}
# Small CNN model using the Keras Sequential API.
# Conv2D -> convolutional layer: (number of filters, kernel size)
# Activation function: relu [Rectified Linear Unit]
# Dropout -> dropout layer to reduce overfitting
# Dense -> fully connected (dense) layer of neurons
model = Sequential()
model.add(Conv2D(32, (2, 2), activation = 'relu', input_shape = X_train[0].shape))
model.add(Dropout(0.1))
model.add(Conv2D(64, (2, 2), activation='relu'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(6, activation='softmax'))
# + [markdown] id="EtSKnesFo6JW" colab_type="text"
# # Deeper CNN 1
# + id="LIoUg9sfqEOq" colab_type="code" colab={}
X_train[0].shape
# + id="htd1i44Ro9xB" colab_type="code" colab={}
# Deeper Keras CNN built with the same Sequential API
model = Sequential()
model.add(Conv2D(16, (2, 2), activation = 'relu', padding = 'same', input_shape = X_train[0].shape))
model.add(Dropout(0.1))
model.add(Conv2D(32, (2, 2), activation='relu', padding = 'same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (2, 2), activation='relu', padding = 'same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (2, 2), activation='relu', padding = 'same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (2, 2), activation='relu', padding = 'same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (2, 2), activation='relu'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (2, 2), activation='relu'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(6, activation='softmax'))
model.summary()
# + id="Q5LyRewUSX5s" colab_type="code" colab={}
# Compiling model using Adam Optimizer
# Loss : Sparse Categorical Crossentropy
model.compile(optimizer='Adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# + id="K90OzQMHSdbF" colab_type="code" outputId="6d86d197-1411-47a4-93f1-bce857fe5f90" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Fit Model
history = model.fit(X_train, y_train, epochs = 400, validation_data= (X_test, y_test), verbose=1)
# + id="w23TO9J4SeaL" colab_type="code" colab={}
# Helper functions for plotting learning curve, training and validation loss.
def plot_learningCurve(history, epochs):
# Plot training & validation accuracy values
epoch_range = range(1, epochs+1)
plt.plot(epoch_range, history.history['acc'])
plt.plot(epoch_range, history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(epoch_range, history.history['loss'])
plt.plot(epoch_range, history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
# + id="Bt2MX-_dSblq" colab_type="code" outputId="50a8a27d-8cb4-4e44-80fd-7f61ef04a82f" colab={"base_uri": "https://localhost:8080/", "height": 573}
plot_learningCurve(history, 400)
# + [markdown] id="t6Gq9KrzpmWM" colab_type="text"
# ## Maximum Validation Accuracy:
# + id="ow8eL-rnpZ-L" colab_type="code" outputId="61219695-581f-4b1c-b27b-f8015d90ba58" colab={"base_uri": "https://localhost:8080/", "height": 35}
print("Maximum validation accuracy: " + str(max(history.history['val_acc'])))
# + id="gbcQhXUZauZT" colab_type="code" colab={}
| HAR_TSNE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div>
# <h1 style="margin-top: 50px; font-size: 33px; text-align: center"> Homework 4 </h1>
# <br>
# <div style="font-weight:200; font-size: 20px; padding-bottom: 15px; width: 100%; text-align: center;">
# <right><NAME>, <NAME>, <NAME></right>
# <br>
# </div>
# <hr>
# </div>
# <div>
# <h1 style="margin-top: -5px; font-size: 20px; text-align: center"> 2) Find the duplicates! </h1>
# <br>
# </div>
# Starting from the passwords2.txt file, in which each row is a string of 20 characters, we want to define a hash function that associates a value to each string.
# The goal is to *check whether there are some duplicate strings*.
# ### Part 1
# For the first part, two strings are duplicates when they contain the same characters, regardless of order. Thus "AABA" = "AAAB", and we count it as *one duplicate*.
# Since the amount of data is very large, we decided to process it with Apache Spark. This lets us exploit the full potential of the CPU by running on multiple cores, increasing execution speed, and it also reduces the amount of RAM needed.
# We start by importing the `pyspark` environment, which is an interface to use Spark on python:
import findspark
import pyspark
findspark.init()
sc = pyspark.SparkContext(appName="findDuplicates")
sc
# Then we tell the Spark environment which text file to take as input.
txt = sc.textFile("data/passwords2.txt")
# Let's now check that the data was imported properly by printing the first 5 lines:
txt.take(5)
# At this point we want to define the function that generates the hash function from the string. A hash function takes an item of a given type and generates an integer hash value within a given range. The input items can be anything: strings, compiled shader programs, files, even directories. The same input always generates the same hash value, and a good hash function tends to generate different hash values when given different inputs. A hash function has no awareness of “other” items in the set of inputs. It just performs some arithmetic and/or bit-magic operations on the input item passed to it. Therefore, there’s always a chance that two different inputs will generate the same hash value [[1]](https://preshing.com/20110504/hash-collision-probabilities/).
#
# Now let's build our first hash function. Since we do not need to encrypt the data, but just count duplicates, we will not focus on making the hash cryptographically secure, only on generating a reasonable code that won't take too much computational time or memory. In any case, this [link](http://www.metamorphosite.com/one-way-hash-encryption-sha1-data-software) gives an interesting example of using the XOR operator to build a cryptographic hash function.
#
# Our hash function is composed as follows: we take the ASCII code of each character in the string, cube it, and sum the cubes.
# Because addition is commutative, the hash ignores the position of the characters. Cubing each code before summing spreads the values further apart, which helps avoid collisions.
# Map each password to a (hash, 1) key-value pair for reduceByKey
def hashingMap(item):
    s = 0
    for c in item:
        s += ord(c) ** 3
    return (s, 1)
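# As a small illustration of the order-invariance (restating the function above): any two anagrams map to the same key, so they land in the same reduce bucket.

```python
def hashingMap(item):
    # Sum of cubed ASCII codes: commutative, so character order is ignored
    s = 0
    for c in item:
        s += ord(c) ** 3
    return (s, 1)

# Anagrams hash identically...
print(hashingMap("AABA") == hashingMap("AAAB"))  # True
# ...while a different multiset of characters (usually) does not
print(hashingMap("AABA") == hashingMap("AABB"))  # False
```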
# It's time to run the map and reduce steps. The reduce function is written inline, since it simply counts how many times each key repeats.
ntuples = txt.map(hashingMap).reduceByKey(lambda a, b: a+b)
ntuples.take(5)
reduced = ntuples
# Now we filter out all the elements that do not have duplicates:
duplicates = reduced.filter(lambda x: x[1]>1)
duplicates.take(10)
# And finally we want to count the number of duplicates:
n = duplicates.count()
n
# We got 9448196 duplicates, which is very close to the 10000000 expected. Raising the power to 4 or 5 would surely improve the algorithm by avoiding collisions better, but at the same time it would slow down the code. That's why we think the cube is the right trade-off between efficiency and collision probability.
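# To see why the exponent matters, here is a small hypothetical comparison (not part of the assignment data): summing the raw ASCII codes (power 1) already collides on two-character strings, while the cube separates the same pair.

```python
def char_sum(item, power):
    # Sum of ASCII codes raised to the given power
    return sum(ord(c) ** power for c in item)

# With power 1, {'A','D'} and {'B','C'} collide: 65+68 == 66+67 == 133
print(char_sum("AD", 1), char_sum("BC", 1))  # 133 133
# With power 3, the cubes break the tie
print(char_sum("AD", 3), char_sum("BC", 3))  # 589057 588259
```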
# ### Part 2
# For the second part, two strings are duplicates only when they have the same characters in the same order. Thus "AABA" does not equal "AAAB".
# We now need a function where position matters, so we process the data differently. We build a string containing the binary code of the whole password, pad it with zeros when it is too short, and truncate it when it is too long. This fixes the length of the integer we output; we chose 64 bits, a reasonable size that does not take too much space in memory.
# Map each password to an order-sensitive (hash, 1) pair:
# concatenate the binary codes of the characters, then pad/truncate to 64 bits
def hashingMapUnique(item):
    s = ''
    for c in item:
        s += "{0:b}".format(ord(c))
    return (int(s.ljust(64, '0')[:64], 2), 1)
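# Restating the function, a quick check that character order now changes the key, unlike the Part 1 hash:

```python
def hashingMapUnique(item):
    # Concatenate binary codes, pad/truncate to 64 bits, parse as an int
    s = ''
    for c in item:
        s += "{0:b}".format(ord(c))
    return (int(s.ljust(64, '0')[:64], 2), 1)

# Swapping two characters changes the bit pattern, hence the key
print(hashingMapUnique("AB")[0] != hashingMapUnique("BA")[0])  # True
```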
# Again we run the same map-reduce procedure as before, keeping only the duplicates and counting them.
ntuples = txt.map(hashingMapUnique).reduceByKey(lambda a, b: a+b)
ntuples.take(5)
reduced = ntuples
duplicates = reduced.filter(lambda x: x[1]>1)
duplicates.take(10)
n = duplicates.count()
n
# The number of duplicates is precisely 5000000. This means the choice of 64 bits was reasonable: the data stays compact while the count remains exact, avoiding hash collisions entirely.
| Homework_4_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A Sampling Of Monte Carlo Methods
# This notebook is an introduction to Monte Carlo methods. Monte Carlo methods are a class of techniques that use random sampling to simulate a draw from some distribution. By making repeated draws and calculating an aggregate on the distribution of those draws, it's possible to approximate a solution to a problem that may be very hard to calculate directly.
#
# Below we'll explore several examples of using Monte Carlo methods to model a domain and attempt to answer a question. These examples are intentionally basic. They're designed to illustrate the core concept without getting lost in problem-specific details. Consider these a starting point for learning how to apply Monte Carlo more broadly.
#
# One key point that's worth stating - Monte Carlo methods are an <b>approach</b>, not an algorithm. This was confusing to me at first. I kept looking for a "Monte Carlo" python library that implemented everything for me like scikit-learn does. There isn't one. It's a way of thinking about a problem, similar to dynamic programming. Each problem is different. There may be some patterns but they have to be learned over time. It isn't something that can be abstracted into a library.
#
# The application of Monte Carlo methods tends to follow a pattern. There are four general steps, and you'll see below that the problems we tackle pretty much adhere to this formula.
#
# 1) Create a model of the domain<br>
# 2) Generate random draws from the distribution over the domain<br>
# 3) Perform some deterministic calculation on the output<br>
# 4) Aggregate the results<br>
#
# This sequence informs us about the type of problems where the general application of Monte Carlo methods is useful. Specifically, when we have some <b>generative model</b> of a domain (i.e. something that we can use to generate data points from at will) and want to ask a question about that domain that isn't easily answered directly, we can use Monte Carlo to get the answer instead.
#
# To start off, let's tackle one of the simplest domains there is - rolling a pair of dice. This is very straightforward to implement.
# +
# %matplotlib inline
import random
import math
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
def roll_die():
return random.randint(1, 6)
def roll_dice():
return roll_die() + roll_die()
print(roll_dice())
print(roll_dice())
print(roll_dice())
# -
# Think of the dice as a probability distribution. On any given roll, there's some likelihood of getting each possible number. Collectively, these probabilites represent the distribution for the dice-rolling domain. Now imagine you want to know what this distribution looks like, having only the knowledge that you have two dice and each one can roll a 1-6 with equal probability. How would you calculate this distribution analytically? It's not obvious, even for the simplest of domains. Fortunately there's an easy way to figure it out - just roll the dice over and over, and count how many times you get each combination!
# +
def roll_histogram(samples):
rolls = []
for _ in range(samples):
rolls.append(roll_dice())
fig, ax = plt.subplots(figsize=(12, 9))
plt.hist(rolls, bins=11)
roll_histogram(100000)
# -
# The histogram gives us a visual sense of the likelihood of each roll, but what if we want something more targeted? Say, for example, that we wanted to know the probability of rolling a 6 or higher? Again, consider how you would solve this with an equation. It's not easy, right? But with a few very simple lines of code we can write a function that makes this question trivial.
def prob_of_roll_greater_than_equal_to(x, n_samples):
geq = 0
for _ in range(n_samples):
if roll_dice() >= x:
geq += 1
probability = float(geq) / n_samples
print('Probability of rolling greater than or equal to {0}: {1} ({2} samples)'.format(x, probability, n_samples))
# All we're doing is running a loop some number of times and rolling the dice, then recording if the result is greater than or equal to some number of interest. At the end we calculate the proportion of samples that matched our critera, and we have the probability we're interested in. Easy!
#
# You might notice that there's a parameter for the number of samples to draw. This is one of the tricky parts of Monte Carlo. We're relying on the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers) to get an accurate result, but how large is large enough? In practice it seems you just have to tinker with the number of samples and see where the result begins to stabilize (think of it as a hyperparameter that can be tuned).
#
# To make this more concrete, let's try calculating the probability of a 6 or higher with varying numbers of samples.
prob_of_roll_greater_than_equal_to(6, n_samples=10)
prob_of_roll_greater_than_equal_to(6, n_samples=100)
prob_of_roll_greater_than_equal_to(6, n_samples=1000)
prob_of_roll_greater_than_equal_to(6, n_samples=10000)
prob_of_roll_greater_than_equal_to(6, n_samples=100000)
prob_of_roll_greater_than_equal_to(6, n_samples=1000000)
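# One way to reason about "how large is enough": the standard error of a Monte Carlo proportion shrinks like 1/sqrt(n). For this dice problem the exact probability is known (26 of the 36 equally likely outcomes sum to 6 or more), so we can sketch roughly how much error to expect at each sample size.

```python
import math

p_true = 26 / 36  # exact probability of rolling >= 6 with two dice

# Standard error of the estimated proportion ~ sqrt(p(1-p)/n):
# each extra decimal digit of accuracy costs about 100x more samples
for n in (100, 10_000, 1_000_000):
    se = math.sqrt(p_true * (1 - p_true) / n)
    print(f"n={n:>9,}  expected error ~ {se:.4f}")
```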
# In this case 100 samples wasn't quite enough, but 1,000,000 was probably overkill. This is going to vary depending on the problem though.
#
# Let's move on to something slightly more complicated - calculating the value of pi. If you're not aware, pi is the ratio of a circle's circumference to its diameter. In other words, if you "un-rolled" a circle with a diameter of one you would get a line with a length of pi. There are analytical ways to derive the value of pi, but what if we didn't know that? What if all we knew was the definition above? Monte Carlo to the rescue!
#
# To understand the function below, imagine a unit circle inscribed in a unit square. We know that the area of a unit circle is pi/4, so if we generate a bunch of points randomly in a unit square and record how many of them "hit" in the circle's area, the ratio of "hits" to "misses" should be equal to pi/4. We then multiply by 4 to get an approximation of pi. This works with a full circle as well as a quarter circle (which we'll use below).
def estimate_pi(samples):
hits = 0
for _ in range(samples):
x = random.random()
y = random.random()
if math.sqrt((x ** 2) + (y ** 2)) < 1:
hits += 1
ratio = (float(hits) / samples) * 4
print('Estimate with {0} samples: {1}'.format(samples, ratio))
# Let's try it out with varying numbers of samples and see what we get.
estimate_pi(samples=10)
estimate_pi(samples=100)
estimate_pi(samples=1000)
estimate_pi(samples=10000)
estimate_pi(samples=100000)
estimate_pi(samples=1000000)
# We should observe that as we increase the number of samples, the result is converging on the value of pi. If the logic I described above for how we're getting this result isn't clear, a picture might help.
def plot_pi_estimate(samples):
hits = 0
x_inside = []
y_inside = []
x_outside = []
y_outside = []
for _ in range(samples):
x = random.random()
y = random.random()
if math.sqrt((x ** 2) + (y ** 2)) < 1:
hits += 1
x_inside.append(x)
y_inside.append(y)
else:
x_outside.append(x)
y_outside.append(y)
fig, ax = plt.subplots(figsize=(12, 9))
ax.set_aspect('equal')
ax.scatter(x_inside, y_inside, s=20, c='b')
ax.scatter(x_outside, y_outside, s=20, c='r')
fig.show()
ratio = (float(hits) / samples) * 4
print('Estimate with {0} samples: {1}'.format(samples, ratio))
# This function will plot randomly-generated numbers with a color coding indicating if the point falls inside (blue) or outside (red) the area of the unit circle. Let's try it with a moderate number of samples first and see what it looks like.
plot_pi_estimate(samples=10000)
# We can more or less see the contours of the circle forming. It should look much clearer if we raise the sample count a bit.
plot_pi_estimate(samples=100000)
# Better! It's worth taking a moment to consider what we're doing here. After all, approximating pi (at least to a few decimal points) is a fairly trivial problem. What's interesting about this technique, though, is that we didn't need to know anything other than basic geometry to get there. This concept generalizes to much harder problems where no other method of calculating an answer is known to exist (or where doing so would be computationally intractable). If sacrificing precision is an acceptable tradeoff, then using Monte Carlo techniques as a general problem-solving framework in domains involving randomness and uncertainty makes a lot of sense.
#
# A related use of this technique involves combining Monte Carlo methods with Markov Chains, and is called (appropriately) Markov Chain Monte Carlo (usually abbreviated MCMC). A full explanation of MCMC is well outside of our scope, but I encourage the reader to check out [this notebook](http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Ch3_IntroMCMC_PyMC3.ipynb) for more information (side note: it's part of a whole series on Bayesian methods that is really good, and well worth your time). In the interest of not adding required reading to understand the next part, I'll try to briefly summarize the idea behind MCMC.
#
# Like general Monte Carlo methods, MCMC is fundamentally about sampling from a distribution. But unlike before, MCMC is an approach to sampling an <b>unknown</b> distribution, given only some existing samples. MCMC involves using a Markov chain to "search" the space of possible distributions in a guided way. Rather than generating truly random samples, it uses the existing data as a starting point and then "walks" a Markov chain toward a state where the chain (hopefully) converges with the real posterior distribution (i.e. the same distribution that the original sample data came from).
#
# In a sense, MCMC inverts what we saw above. In the dice example, we began with a <b>distribution</b> and drew samples to answer some question about that distribution. With MCMC, we <b>begin</b> with samples from some <b>unknown</b> distribution, and our objective is to approximate, as best we can, the distribution that those samples came from. This way of thinking about it helps to clarify in what situations we need general Monte Carlo methods vs. MCMC. If you already have the "source" distribution and need to answer some question about it, it's a Monte Carlo problem. However, if all you have is some data but you don't know the "source", then MCMC can help you find it.
#
# Let's see an example to make this more concrete. Imagine we have the result of a series of coin flips and we want to know if the coin being used is unbiased (that is, equally likely to land on heads or tails). How would you determine this from the data alone? Let's generate a sequence of coin flips from a coin that we know to be biased so we have some data as a starting point.
# +
p_heads = 0.6
def biased_coin_flip():
if random.random() <= p_heads:
return 1
else:
return 0
n_trials = 100
coin_flips = [biased_coin_flip() for _ in range(n_trials)]
n_heads = sum(coin_flips)
print(n_heads)
# -
# In this case since we're producing the data ourselves we know it is biased, but imagine we didn't know where this data came from. All we know is we have 100 coin flips and 60 are heads. Obviously 60 is greater than 50, which is what we would guess if the coin was fair. On the other hand, it's definitely possible to get 60/100 heads with a fair coin just due to randomness. How do we move from a point estimate to a distribution of the likelihood that the coin is fair? That's where MCMC comes in.
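# As a quick sanity check before running MCMC, we can compute the exact binomial tail: the probability that a fair coin produces 60 or more heads in 100 flips. It comes out to roughly 2.8%, already hinting that the coin is biased.

```python
from math import comb

# P(X >= 60) for X ~ Binomial(n=100, p=0.5), computed exactly
n = 100
p_tail = sum(comb(n, k) for k in range(60, n + 1)) / 2 ** n
print(round(p_tail, 4))
```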
with pm.Model() as coin_model:
p = pm.Uniform('p', lower=0, upper=1)
obs = pm.Bernoulli('obs', p, observed=coin_flips)
step = pm.Metropolis()
trace = pm.sample(100000, step=step)
trace = trace[5000:]
# Understanding this code requires some background in Bayesian statistics as well as PyMC3. Very simply, we define a prior distribution (p) along with an observed variable (obs) representing our known data. We then configure the algorithm to use (Metropolis-Hastings) and initiate the chain. The result is a sequence of values that should, in aggregate, represent the most likely distribution that characterizes the original data.
#
# To see what we ended up with, we can plot the values in a histogram.
fig, ax = plt.subplots(figsize=(12, 9))
plt.title('Posterior distribution of $p$')
plt.vlines(p_heads, 0, n_trials / 10, linestyle='--', label='true $p$ (unknown)')
plt.hist(trace['p'], range=[0.3, 0.9], bins=25, histtype='stepfilled', density=True)
plt.legend()
# From this result, we can see that the overwhelming likelihood is that the coin is biased. To actually derive a concrete probability estimate though, we need to specify a range for which we would consider the result "fair" and integrate over the probability density function (basically the histogram above). For the sake of argument, let's say that anything between .45-.55 is fair. We can then compute the result using a simple count.
# +
n_fair = len(np.where((trace['p'] >= 0.45) & (trace['p'] < 0.55))[0])
n_total = len(trace['p'])
print(float(n_fair / n_total))
# -
# By our definition above, there's roughly a 16% chance that the coin is unbiased! Hopefully this provides a good illustration of the power and usefulness of MCMC, and Monte Carlo methods more generally.
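Because the Uniform(0, 1) prior is conjugate to the Bernoulli likelihood, the posterior here is also known in closed form: Beta(1 + heads, 1 + tails). As a sanity check on the Metropolis result (not part of the original notebook), we can sample that posterior directly with only the standard library:

```python
import random

random.seed(0)
heads, tails = 60, 40
# Posterior of p under a Uniform(0, 1) prior and 60/100 heads is Beta(61, 41)
draws = [random.betavariate(1 + heads, 1 + tails) for _ in range(200_000)]
p_fair = sum(0.45 <= d < 0.55 for d in draws) / len(draws)
print(round(p_fair, 3))  # roughly 0.16, in line with the ~16% from the chain
```

Agreement between the direct Beta samples and the Metropolis chain is a useful check that the sampler converged.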
| some-ml-examples/ipython-notebooks-master/notebooks/misc/MonteCarlo.ipynb |
# ---
# title: "Quick ioslides"
# subtitle: "Slides generated using R, python and ioslides"
# author: "<NAME>"
# date: "June 15, 2018"
# output:
# ioslides_presentation:
# widescreen: true
# smaller: true
# editor_options:
# chunk_output_type: console
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# ## What is ioslides?
#
# This is the default format in rstudio for building interactive HTML presentations.
#
# Enjoy the [manual](https://rmarkdown.rstudio.com/ioslides_presentation_format.html)!
#
# These slides can be turned to a single HTML file with either a click on 'knitr' in rstudio, or, command line:
# ```bash
# R -e 'rmarkdown::render("ioslides.Rmd")'
# ```
#
# ## A sample plot
#
# <div style="float: left; width: 30%;">
# Here we create a sample data set for plotting.
# + hide_input=false
import pandas as pd
x = pd.Series({'A':1, 'B':3, 'C':2})
# -
# Then, in another column we plot. The R notebook code chunks have many [options](https://yihui.name/knitr/options/).
# For this plot I chose not to display the source code.
# </div>
#
# <div style="float: right; width: 70%;">
# + fig.height=5 fig.width=8 hide_input=true
x.plot(kind='bar', title='Sample plot')
# -
# </div>
| tests/mirror/ioslides.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chemistry and Chemical Engineering Made Easy with Python
# # Chapter 6: Visualizing a Dataset
# ## 6.2 t-distributed Stochastic Neighbor Embedding (t-SNE)
# ## Summary of useful Jupyter Notebook shortcuts
# - <kbd>Esc</kbd>: switch to command mode (cell border turns blue)
# - <kbd>Enter</kbd>: switch to edit mode (cell border turns green)
# - In command mode, <kbd>M</kbd>: change the cell to a Markdown cell (for notes and explanations)
# - In command mode, <kbd>Y</kbd>: change the cell to a Code cell (for Python code)
# - In command mode, <kbd>H</kbd>: show help
# - In command mode, <kbd>A</kbd>: insert an empty cell **above**
# - In command mode, <kbd>B</kbd>: insert an empty cell **below**
# - In command mode, <kbd>D</kbd><kbd>D</kbd>: delete the cell
# - <kbd>Ctrl</kbd>+<kbd>Enter</kbd>: run the cell
# - <kbd>Shift</kbd>+<kbd>Enter</kbd>: run the cell and move to the next one
# If anything is unclear, try a web search for the relevant terms or the error message.
# ### Iris dataset (iris_with_species.csv)
# The famous [Fisher's Iris Data](https://en.wikipedia.org/wiki/Iris_flower_data_set). For 150 iris samples, the sepal length, sepal width, petal length, and petal width have been measured.
import pandas as pd  # import pandas
dataset = pd.read_csv('iris_with_species.csv', index_col=0, header=0)  # load the iris dataset
x = dataset.iloc[:, 1:]  # keep only the numeric features in x (in the iris dataset, column 0 is Species, a categorical feature, so we take everything else)
# Standardize (autoscale) the features
autoscaled_x = (x - x.mean()) / x.std()  # subtract the mean, then divide by the standard deviation. x is a DataFrame while x.mean() and x.std() are Series, but because the feature names match, the mean is subtracted from and the std divides every sample of x column-wise.
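As an aside (toy numbers, not taken from the iris file): after autoscaling, each feature has mean 0 and sample standard deviation 1, which can be verified with the standard library (`pandas`' `.std()` likewise uses the sample standard deviation):

```python
from statistics import mean, stdev

feature = [5.1, 4.9, 4.7, 4.6, 5.0]  # illustrative measurements
m, s = mean(feature), stdev(feature)
scaled = [(v - m) / s for v in feature]
assert abs(mean(scaled)) < 1e-9          # mean is 0 after centering
assert abs(stdev(scaled) - 1) < 1e-9     # std is 1 after scaling
```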
# ## Running t-SNE
from sklearn.manifold import TSNE  # import t-SNE from scikit-learn
perplexity = 30  # values between 5 and 50 are common
tsne = TSNE(perplexity=perplexity, n_components=2, init='pca', random_state=10)  # declare tsne, the object that runs t-SNE and stores its results
tsne.fit(autoscaled_x)  # run t-SNE on the standardized features
tsne.embedding_  # the t-SNE score matrix, returned as a NumPy array
score = pd.DataFrame(tsne.embedding_)  # convert to the familiar pandas DataFrame type
score  # check the result
score.index = x.index  # use the sample names of the original dataset as the score index
score.columns = ['t1', 't2']  # name the score columns t1, t2, ...
score  # check the result
score.to_csv('score_tsne.csv')  # save the scores to a csv file
# Open score_tsne.csv in Excel or similar and inspect its contents.
# Visualize the dataset
import matplotlib.pyplot as plt  # import for plotting
plt.rcParams['font.size'] = 18  # font size for axis labels and tick labels
plt.scatter(score.iloc[:, 0], score.iloc[:, 1])  # draw the scatter plot
plt.xlabel(score.columns[0])  # x-axis label (the name of the first score column)
plt.ylabel(score.columns[1])  # y-axis label (the name of the second score column)
plt.show()  # draw the plot with the settings above
# Try several values of `perplexity` and compare the resulting visualizations.
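One way to compare settings (a sketch with made-up data, since runtimes on the real dataset vary) is to loop over a few perplexity values and keep each embedding; note that perplexity must be smaller than the number of samples:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
data = rng.normal(size=(60, 4))  # stand-in for autoscaled_x
embeddings = {}
for perplexity in [5, 15, 30]:
    tsne = TSNE(perplexity=perplexity, n_components=2, init='pca', random_state=10)
    embeddings[perplexity] = tsne.fit_transform(data)  # one (60, 2) embedding per setting
```

Each entry of `embeddings` can then be drawn with the same scatter-plot code as above.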
# ### [Reference]
# As with the scatter plots in Chapter 4 and Section 6.1, you can color the samples by iris species as shown below.
iris_types = dataset.iloc[:, 0]  # iris species
plt.rcParams['font.size'] = 18  # font size for axis labels and tick labels
plt.scatter(score.iloc[:, 0], score.iloc[:, 1], c=pd.factorize(iris_types)[0], cmap=plt.get_cmap('jet'))  # scatter plot, with a different color for each iris species
plt.xlabel(score.columns[0])  # x-axis label
plt.ylabel(score.columns[1])  # y-axis label
plt.show()  # draw the plot with the settings above
# You can see that the samples cluster by iris species.
# If you have your own dataset, try the same steps on it as well.
# ### Exercise
#
# Load the dataset `descriptors_8_with_boiling_point.csv`, standardize the features, run t-SNE, and inspect the scatter plot of the scores. An example solution is given at the bottom.
# ### Boiling point dataset (descriptors_8_with_boiling_point.csv)
# A [boiling point dataset](https://pubs.acs.org/doi/abs/10.1021/ci960375x) collected by Hall and Story. Boiling points are available for 294 compounds, and each chemical structure is described by 8 features (descriptors): molecular weight (MolWt), molecular weight computed from non-hydrogen atoms (HeavyAtomMolWt), number of valence electrons (NumValenceElectrons), number of non-hydrogen atoms (HeavyAtomCount), number of nitrogen and oxygen atoms (NOCount), number of atoms other than hydrogen and carbon (NumHeteroatoms), number of rotatable bonds (NumRotatableBonds), and number of rings (RingCount).
# ### Exercise: example solution
import pandas as pd  # import pandas
dataset = pd.read_csv('descriptors_8_with_boiling_point.csv', index_col=0, header=0)  # load the boiling point dataset
x = dataset.iloc[:, 1:]  # keep only the molecular-structure features in x
# Standardize (autoscale) the features
autoscaled_x = (x - x.mean()) / x.std()  # subtract the mean, then divide by the standard deviation. x is a DataFrame while x.mean() and x.std() are Series, but because the feature names match, the mean is subtracted from and the std divides every sample of x column-wise.
# t-SNE
from sklearn.manifold import TSNE  # import t-SNE from scikit-learn
perplexity = 30  # values between 5 and 50 are common
tsne = TSNE(perplexity=perplexity, n_components=2, init='pca', random_state=10)  # declare tsne, the object that runs t-SNE and stores its results
tsne.fit(autoscaled_x)  # run t-SNE on the standardized features
score = pd.DataFrame(tsne.embedding_)  # convert to the familiar pandas DataFrame type
score.index = x.index  # use the sample names of the original dataset as the score index
score.columns = ['t1', 't2']  # name the score columns t1, t2, ...
score  # check the result
score.to_csv('score_tsne_bp.csv')  # save the scores to a csv file
# Visualize the dataset
import matplotlib.pyplot as plt  # import for plotting
plt.rcParams['font.size'] = 18  # font size for axis labels and tick labels
plt.scatter(score.iloc[:, 0], score.iloc[:, 1])  # draw the scatter plot
plt.xlabel(score.columns[0])  # x-axis label (the name of the first score column)
plt.ylabel(score.columns[1])  # y-axis label (the name of the second score column)
plt.show()  # draw the plot with the settings above
boiling_point = dataset.iloc[:, 0]  # boiling point
boiling_point  # check the result
plt.rcParams['font.size'] = 18  # font size for axis labels and tick labels
plt.scatter(score.iloc[:, 0], score.iloc[:, 1], c=boiling_point, cmap=plt.get_cmap('jet'))  # scatter plot, colored by boiling point
plt.xlabel(score.columns[0])  # x-axis label
plt.ylabel(score.columns[1])  # y-axis label
plt.colorbar()  # show the color bar
plt.show()  # draw the plot with the settings above
# You can see that compounds with similar boiling points tend to lie close together in the plot as well.
| sample_program_6_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Overview of the steps this code will take:
# ### 1. load in the data
# ### 2. convert data for training
# ### 3. perform train_test_split
# ### 4. fit the training data to the lda
# ### 5. plot the confusion matrix
# ### 6. get analysis of the fit
#import libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#import function to construct a train/test split of the data
from sklearn.model_selection import train_test_split
#import function to construct a confusion matrix
from sklearn.metrics import confusion_matrix
#import methods to run LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
#import function to get precision,recall, F1 score given true labels y and predicted labels y_pred
from sklearn.metrics import precision_recall_fscore_support
#import function to find the standardization factor of a given set of numbers
from sklearn.preprocessing import StandardScaler
#import this to do resampling
from sklearn.utils import resample
#import seaborn for plotting
import seaborn as sns
import os
# %matplotlib inline
# After importing all the necessary packages, load in the data as a csv.
# This file contains all the x,y points for each of the species
# +
#read in the procrustes data
df = pd.read_csv('Procrustes_all.csv')
#set the name of prediction type (column name): species, year, from_tip, from_base
label_name = 'year'
#set the name of prediction type for graphing purposes (not column name)
print_label = label_name
# -
# This data is uneven: each year has a different number of observations. To correct for this, we will upsample the years that have fewer observations than the year with the most.
# +
#set parameters
#set this to true to allow upsampling. Note resample needs to be true also.
upsample = True
#set cutoff value to upsample minority classes
#any class with fewer obs. than this will be upsampled to have this many obs.
#There are the most observations in 2013 with 2465, so all the other years will be upsampled to 2465
minority_cutoff = 2465
# +
# get counts of observations of each class
class_cnts = df[label_name].value_counts()
#do upsampling
if upsample == True:
    #find minority classes
    minority_class = class_cnts[class_cnts < minority_cutoff].index
    #remove the minority class data from the data frame
    #will put the resampled data from these classes back in
    data_resampled = df[~df[label_name].isin(minority_class)]
    #upsample each class below the minority cutoff
    for s in minority_class:
        # Separate minority class
        data_minority = df[df[label_name] == s]
        # Upsample minority class
        data_minority_upsampled = resample(data_minority,
                                           replace=True,              # sample with replacement
                                           n_samples=minority_cutoff, # to match majority class
                                           random_state=40)           # reproducible results
        #concat each upsampled minority obs. into one dataframe
        data_resampled = pd.concat([data_resampled, data_minority_upsampled])
    #set the main df now to the data of upsampled class obs., and the non-upsampled original obs.
    df = data_resampled
# -
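The upsampling above boils down to sampling with replacement until a class reaches the cutoff. A standard-library sketch of the same idea (toy row labels and illustrative counts, not the real data):

```python
import random

random.seed(40)
minority_rows = [f'obs{i}' for i in range(7)]  # a class with only 7 observations
minority_cutoff = 20                            # target count to match the majority class
upsampled = random.choices(minority_rows, k=minority_cutoff)  # sample with replacement
print(len(upsampled))  # → 20
```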
# After upsampling, we need to format the data for the `train_test_split` function. This involves building the feature matrix and the label vector, then running the split.
#
# We hold out 20% of the data for testing and train on the remaining 80%. This split ratio is arbitrary but conventional: with too little training data the model may fit poorly, and with too little test data the performance estimate becomes noisy.
# +
#set the input (X) and target data (y)
#set predictors (input / input variables / etc.)
X=df[['x1', 'y1', 'x2', 'y2', 'x3', 'y3', 'x4', 'y4', 'x5', 'y5',
'x6', 'y6', 'x7', 'y7', 'x8', 'y8', 'x9', 'y9', 'x10', 'y10', 'x11',
'y11', 'x12', 'y12', 'x13', 'y13', 'x14', 'y14', 'x15', 'y15', 'x16',
'y16', 'x17', 'y17', 'x18', 'y18', 'x19', 'y19', 'x20', 'y20', 'x21',
'y21']]
#set prediction values (labels/classes/response variables)
y=df[[label_name]]
#put prediction values in proper format for learning process
y=y.values
y=y.flatten()
#set the unique class labels
classes = np.unique(y)
#convert to string if needed (for integers etc.)
y=list(map(str, y))
classes = list(map(str,classes))
#define the data split for training and testing. This will be for the confusion matrix only.
#we will use cross validation for the other performance metrics
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
#standardize both the training and test data using the training standard scaler
sc = StandardScaler()
#get standardizing factor of the training data variables, and then standardize
X_train = sc.fit_transform(X_train)
#apply same transformation to the test data
X_test = sc.transform(X_test)
# -
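The split proportions can be checked on a toy array (made up for this sketch); with `test_size=0.2` and ten rows, `train_test_split` returns eight training and two test rows, and `random_state` makes the shuffle reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_toy = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
y_toy = list('aaaaabbbbb')             # 10 labels
X_tr, X_te, y_tr, y_te = train_test_split(X_toy, y_toy, test_size=0.2, random_state=42)
print(len(X_tr), len(X_te))  # → 8 2
```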
# Now its time to fit the trained data to the LDA.
#
# "A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule. The model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix. The fitted model can also be used to reduce the dimensionality of the input by projecting it to the most discriminative directions."
#
# -- https://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html
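A minimal toy illustration of that description (the points are made up for this sketch): LDA learns a linear boundary between two well-separated clusters and classifies new points by which side of the boundary they fall on.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_toy = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y_toy = ['A', 'A', 'A', 'B', 'B', 'B']
clf = LinearDiscriminantAnalysis().fit(X_toy, y_toy)
print(clf.predict([[0.5, 0.5], [5.5, 5.5]]))  # → ['A' 'B']
```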
# +
# set and initialize the LDA model
lda = LinearDiscriminantAnalysis(n_components=4)
#fit the training data, i.e. find mapping between training coordinates and training species
lda_fit = lda.fit(X_train, y_train)
# -
# Now we can visualize this LDA using a confusion matrix. We can also calculate how well the model does on the held-out test set (the misclassification rate).
# +
#find the predicted class labels on the testing data
y_pred=lda.predict(X_test)
#get confusion matrix for the test predictions
cm=confusion_matrix(y_test, y_pred, labels=classes)
n_class=cm.shape[0]
miss_classify_rate_list = []  # the misclassification rate for each class
for i in range(n_class):
    miss_classify_rate_list.append(round(1 - (cm[i, i] / cm[i, :].sum()), 3))
# find the classes whose misclassification rate is larger than 0.001
idx_miss = np.array(miss_classify_rate_list) > 0.001
classes_miss = np.array(classes)[idx_miss]
rate_miss = np.array(miss_classify_rate_list)[idx_miss]
# create a DataFrame showing the classes whose misclassification rate exceeds 0.001
miss_classified = pd.DataFrame({'misclassified species': classes_miss, 'Percentage of misclassification': rate_miss})
miss_classified
miss_classified
# -
# And we can see the confusion matrix as a plot.
# +
#confusion matrix (heat map of percentage classified)
#this makes the confusion matrix into percentage of a true class, classified as each class
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
#output confusion matrix heatmap showing the percentage of each class prediction for a given true class,
#over all true class observations
x_axis_labels = classes # labels for x-axis
y_axis_labels = np.flip(classes) # labels for y-axis
ax = sns.heatmap(np.flipud(cm_norm), cmap="Greens", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
#here the row-normalized matrix cm_norm is used to create the heatmap
sns.set(font_scale=1.5)
plt.figure(figsize=(25, 25))
figure = ax.get_figure()
#save the figure
plt.savefig('Final_LDA_ALL_Year.tif', bbox_inches="tight", dpi=600)
# -
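The normalization used for the heatmap divides each row of the confusion matrix by its row sum, so every row of `cm_norm` sums to 1 (the per-true-class classification fractions). A quick check with made-up counts:

```python
import numpy as np

cm = np.array([[8, 2], [1, 9]])  # made-up counts for a 2-class problem
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm_norm)               # rows: [0.8, 0.2] and [0.1, 0.9]
print(cm_norm.sum(axis=1))   # each row sums to 1
```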
# Here we can evaluate the precision, accuracy, recall, and F1 values for this model. The fold number is the number of times the train/test split is repeated; the performance metrics are averaged over all repeats.
# +
#set number of unique classes
num_classes=len(np.unique(y))
#set num of folds
folds=1000
#this keeps track of how many times each class was in a test set throughout the k-fold crossvalidation
fold_count = np.zeros(num_classes)
#this gets sets an array with all the indices corresponding to the unique classes
#we'll refer to this below
idx = np.array(list(range(num_classes)))
#initialize performance metric arrays to hold the
#respective values for each class
precision = np.zeros(num_classes)
recall = np.zeros(num_classes)
f1_score = np.zeros(num_classes)
accuracy_scores = np.zeros(num_classes)
for i in range(folds):
    #define the data split for training and testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=i)
    #standardize both the training and test data using the training standard scaler
    sc = StandardScaler()
    #get standardizing factor of the training data variables, and then standardize
    X_train = sc.fit_transform(X_train)
    #apply same transformation to the test data
    X_test = sc.transform(X_test)
    # set and initialize the LDA model
    lda = LinearDiscriminantAnalysis(n_components=4)
    #fit the model on the training data for this repeat
    lda_fit = lda.fit(X_train, y_train)
    #make predictions on the test fold
    y_pred = lda.predict(X_test)
    #find precision, recall, and f1 score
    P = precision_recall_fscore_support(y_test, y_pred, labels=classes)
    #if any of the performance metrics are nan, set to zero
    P[0][np.isnan(P[0])] = 0.0
    P[1][np.isnan(P[1])] = 0.0
    P[2][np.isnan(P[2])] = 0.0
    #sum the performance metrics so after cross-validation we can get an average performance value
    precision += P[0]
    recall += P[1]
    f1_score += P[2]
    #get confusion matrix for the test predictions
    cm = confusion_matrix(y_test, y_pred, labels=classes)
    #convert to percentage classified
    cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    #here accuracy for each class is the percentage of a given class
    #that were correctly identified, i.e. TP / P = TP / (TP+FN)
    acc = np.array(np.diag(cm_norm))
    acc[np.isnan(acc)] = 0.0
    accuracy_scores += acc
    #find which classes are not in the test set
    #in these cases the precision_recall_fscore_support function
    #simply sets the performance metrics to zero
    missing_classes = set(y) - set(y_test)
    #find the indices corresponding to the missing classes
    missing_idx = np.where(np.isin(classes, list(missing_classes)))
    #get non-missing indices
    update_idx = np.delete(idx, missing_idx)
    #update the fold count (add 1 each time) only for the classes that were actually
    #in the test set.
    #e.g. if a class was missing from the test set in some of the repeats, when averaging
    #the performance metrics we should divide by the number of repeats it actually
    #appeared in, not by the total number of repeats
    fold_count[update_idx] += 1.0
# +
#now average the scores over all the folds
precision /= fold_count
recall /= fold_count
f1_score /= fold_count
accuracy_scores /= fold_count
#define performance table with precsision, recall, and f1
performance_tbl = pd.DataFrame({'Class': classes, 'Precision': precision, 'Recall': recall, 'Accuracy': accuracy_scores, 'F1': f1_score} )
#save this as a csv for later use:
performance_tbl.to_csv('ALL_Year_LDA.csv')
performance_tbl
# -
# The End.
| REVISED_Code/Figure5D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("rawdata.csv")
# +
## df.groupby("location").sum()[200:215]
df[df["location"] == "United States"]
# -
# # Reading total deaths
# +
## Filter results for total deaths for last current date = 08.04.2021
data_continent_total_deaths = df[["total_deaths", "continent", "date", "location"]]
data_total_deaths_08_04_2021 = data_continent_total_deaths[data_continent_total_deaths["date"] == "2021-04-08"]
data_total_deaths_08_04_2021
# +
data_continents_total_deaths_08_04_2021 = data_total_deaths_08_04_2021.groupby("location").sum().sort_values(by=['total_deaths'])[192:198]
data_continents_total_deaths_08_04_2021
# -
data_continents_total_deaths_08_04_2021.plot.bar()
# ## Total Deaths in Europe
# +
data_countryEU_total_deaths = data_total_deaths_08_04_2021[data_total_deaths_08_04_2021["continent"] == "Europe"]
data_countryEU_total_deaths.head()
# -
data_countryEU_total_deaths.plot.bar(x="location")
# ### Total Deaths in EU sorted
# +
## Data for total deaths for each country in Europe, sorted by values
data_total_deaths_groupby_countryEU = data_countryEU_total_deaths.groupby("location").sum().sort_values(by=['total_deaths'])
data_total_deaths_groupby_countryEU.head(10)
# -
data_total_deaths_groupby_countryEU.plot.bar()
# ### Top x=10 Countries in EU
# +
## just look at top 10 Countries
data_total_deaths_groupby_eu_top20 = data_total_deaths_groupby_countryEU.sort_values("total_deaths", ascending=False).head(10)
## Another way to write this code is df.nlargest()
# data_total_deaths_groupby_eu_top20 = data_total_deaths_groupby_countryEU.nlargest(10,"total_deaths")
data_total_deaths_groupby_eu_top20
# +
## data_countryEU_total_deaths = data_country_total_deaths[data_country_total_deaths.continent=="Europe"]
## Other way to write the code is below
## data_countryEU_total_deaths = data_country_total_deaths[data_country_total_deaths["continent"] == "Europe"]
# +
## Data for total deaths in Europe
## data_countryEU_total_deaths
# +
## Data for total deaths for each country in Europe, sorted by values
data_total_deaths_groupby_countryEU = data_countryEU_total_deaths.groupby("location").sum().sort_values(by=['total_deaths'])
## old code below; did it first in 2 steps and then consolidated
# data_total_deaths_groupby_countryEU_sorted = data_total_deaths_groupby_countryEU.sort_values(by=['total_deaths'])
# -
data_total_deaths_groupby_eu_top20.plot.bar()
# +
data_total_deaths_groupby_countryEU.plot.bar()
## old code below; did it first in 2 steps and then consolidated
# data_total_deaths_groupby_countryEU_sorted.plot.bar()
# -
## Did this code, but found a better solution: see above, just take the top n values. Don't hard-code a fixed death count, since those numbers may rise in the future
## just look at countries with over 80,000 total deaths
data_total_deaths_groupby_eu_80k = data_total_deaths_groupby_countryEU[data_total_deaths_groupby_countryEU["total_deaths"] > 80000]
data_total_deaths_groupby_eu_80k.plot.bar()
df.head()
# # Calculating Smoking Rate
#
data_smokers = df[["total_deaths", "continent", "date", "location", "female_smokers", "male_smokers"]]
data_smokers
# ## Smokers in Europe
data_smokers_EU = data_smokers[data_smokers["continent"] == "Europe"]
data_smokers_EU_08_04_2021 = data_smokers_EU[data_smokers_EU["date"] == "2021-04-08"]
data_smokers_EU_08_04_2021.head()
## All EU Countries with male and female smokers
# +
## Just look at top 10 countries by total death
data_smokers_EU_top10 = data_smokers_EU_08_04_2021.groupby("location").sum().sort_values("total_deaths", ascending=False).head(10)
data_smokers_EU_top10
# -
data_smokers_EU_top10["mean_smokers"]=data_smokers_EU_top10["female_smokers"]/2 + data_smokers_EU_top10["male_smokers"]/2
data_smokers_EU_top10_mean = data_smokers_EU_top10
data_smokers_EU_top10_mean
data_smokers_EU_top10_mean.plot.bar(y="mean_smokers")
table_location_total_deaths_mean_smokers = data_smokers_EU_top10_mean[["total_deaths", "mean_smokers"]].head()
table_location_total_deaths_mean_smokers
x = data_smokers_EU_top10_mean[["mean_smokers"]]
y = data_smokers_EU_top10_mean["total_deaths"]
x
data_smokers_EU_top10
# # Creating bar and line graph
x= data_smokers_EU_top10
x
y = pd.DataFrame({'location': ["UK", "Italy" , "Russia","France","Germany"],
'mean_smokers': [22.35,23.80,40.85,32.85,30.65],
'total_deaths': [127224.0,112861.0,100158.0,98196.0,78049.0]
},
index=['1', '2',"3","4","5"])
y
df_13_14_target.head(1)
# +
ax = y[['location', 'mean_smokers']].plot(x='location', linestyle='-', marker='o', color="0")
y[['location', 'total_deaths']].plot(x='location', kind='bar',ax=ax)
plt.show()
# +
ax = df_13_14_target[['location', '2013_val']].plot(x='location', linestyle='-', marker='o',color="0")
df_13_14_target[['location', '2014_target_val']].plot(x='location', kind='bar',ax=ax)
plt.show()
# -
# +
# create figure and axis objects with subplots()
fig,ax = plt.subplots()
# make a plot
ax.plot(x, color="red", marker="o")
# set x-axis label
ax.set_xlabel("location",fontsize=14)
# set y-axis label
ax.set_ylabel("mean_smokers",color="red",fontsize=14)
# twin object for two different y-axis on the sample plot
ax2=ax.twinx()
# make a plot with different y-axis using second axis object
ax2.plot(y, color="blue",marker="+")#, kind='bar')
ax2.set_ylabel("total_deaths",color="blue",fontsize=14)
plt.show()
# -
countries = data_smokers_EU_08_04_2021.groupby("location")["location"].count()
# +
left_2013 = pd.DataFrame(
{'location': ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep',
'oct', 'nov', 'dec'],
'2013_val': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 6]})
right_2014 = pd.DataFrame({'location': ['jan', 'feb'], '2014_val': [3, 5]})
right_2014_target = pd.DataFrame(
{'location': ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep',
'oct', 'nov', 'dec'],
'2014_target_val': [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]})
right_2014
# -
x=data_countryEU_total_deaths[["total_deaths","location"]]
#y=data_countryEU_total_deaths[["total_deaths","location"]]
x.head()
x = data_smokers_EU_08_04_2021.head()
# +
# x["mean_smokers"] = x["female_smokers"]/ x["male_smokers"]
# x["mean_smokers"] = x["female_smokers"]
# -
df_13_14 = pd.merge(left_2013, right_2014, how='outer')
df_13_14
table_location_total_deaths_mean_smokers.head()
# +
df_13_14 = pd.merge(left_2013, right_2014, how='outer')
df_13_14_target = pd.merge(df_13_14, right_2014_target, how='outer')
df_13_14_target
# -
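With `how='outer'`, `pd.merge` keeps keys present in either frame and fills the gaps with NaN; a small check with hypothetical month frames:

```python
import pandas as pd

left = pd.DataFrame({'location': ['jan', 'feb', 'mar'], 'v2013': [1, 2, 3]})
right = pd.DataFrame({'location': ['jan', 'feb'], 'v2014': [3, 5]})
merged = pd.merge(left, right, how='outer')
print(merged)
print(merged['v2014'].isna().sum())  # mar has no 2014 value, so one NaN
```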
# +
ax = df_13_14_target[['location', '2013_val']].plot(
x='location', linestyle='-', marker='o')
df_13_14_target[['location', '2014_val']].plot(x='location', kind='bar',ax=ax)
plt.show()
# -
data_total_deaths_groupby_eu_80k.plot.bar()
data_smokers_EU_index_country["mean_smokers"] = (data_smokers_EU_index_country["female_smokers"] + data_smokers_EU_index_country["male_smokers"]) / 2
data_smokers_EU_index_country_mean = data_smokers_EU_index_country
data_smokers_EU_index_country_mean.head()
data_smokers_EU_index_country.plot(y="mean_smokers")
data_smokers_EU_index_country = data_smokers_EU_08_04_2021.set_index("location")#[["female_smokers","male_smokers"]]
data_smokers_EU_index_country.head()
| Correlation_total_deaths_smoking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src='../img/EU-Copernicus-EUM_3Logos.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='50%'></img>
# <br>
# <br>
# # Harmonized Data Access API - Functions
# This notebook lists all `functions` that are defined and used to access data from `WEkEO` with the HDA API. You find the workflow describing how to access data with the HDA API [here](./12_ltpy_WEkEO_harmonized_data_access_api.ipynb).
#
# The following functions are available:
# - [generate_api_key](#generate_api_key)
# - [init](#init)
# - [get_access_token](#get_access_token)
# - [query_metadata](#query_metadata)
# - [accept_TandC](#accept_tc)
# - [get_job_id](#get_job_id)
# - [get_request_status](#get_request_status)
# - [get_results_list](#results_list)
# - [get_order_ids](#get_order_ids)
# - [get_order_status](#get_order_status)
# - [downloadFile](#download_file)
# - [get_filename_from_cd](#get_filename_from_cd)
# - [get_filenames](#get_filenames)
# - [download_data](#download_data)
#
# <hr>
# #### Load required libraries
import requests, re, json
import base64
import shutil
import time, os
import urllib.parse
# <hr>
# ### `generate_api_key`
def generate_api_key(username, password):
"""
Generates a Base64-encoded api key, based on the WEkEO user credentials username:password.
Parameters:
username: WEkEO username
password: <PASSWORD>
Returns:
Returns a string of the Base64-encoded api key
"""
s = username+':'+password
return (base64.b64encode(str.encode(s), altchars=None)).decode()
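A quick round-trip check of the encoding (the credentials here are obviously made up):

```python
import base64

api_key = base64.b64encode(b'user:pass').decode()
print(api_key)                              # → dXNlcjpwYXNz
print(base64.b64decode(api_key).decode())   # → user:pass
```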
# ### <a id='init'></a> `init`
def init(dataset_id, api_key, download_dir_path):
"""
Initiates a dictionary with keys needed to use the HDA API.
Parameters:
dataset_id: String representing the WEkEO collection id
api_key: Base64-encoded string
download_dir_path: directory path where data shall be downloaded to
Returns:
Returns the initiated dictionary.
"""
hda_dict = {}
# Data broker address
hda_dict["broker_endpoint"] = "https://wekeo-broker.apps.mercator.dpi.wekeo.eu/databroker"
# Terms and conditions
hda_dict["acceptTandC_address"]\
= hda_dict["broker_endpoint"]\
+ "/termsaccepted/Copernicus_General_License"
# Access-token address
hda_dict["accessToken_address"] = hda_dict["broker_endpoint"]\
+ '/gettoken'
# Dataset id
hda_dict["dataset_id"] = dataset_id
# API key
hda_dict["api_key"] = api_key
# set HTTP success code
hda_dict["CONST_HTTP_SUCCESS_CODE"] = 200
# download directory
hda_dict["download_dir_path"] = download_dir_path
if not os.path.exists(download_dir_path):
os.makedirs(download_dir_path)
return hda_dict
# ### <a id='get_access_token'></a> `get_access_token`
def get_access_token(hda_dict):
"""
Requests an access token to use the HDA API and stores it as separate key in the dictionary
Parameters:
hda_dict: dictionary initialized with the function init, that stores all required information to be able to interact with the HDA API
Returns:
Returns the dictionary including the access token
"""
headers = {
'Authorization': 'Basic ' + hda_dict['api_key']
}
data = [
('grant_type', 'client_credentials'),
]
print("Getting an access token. This token is valid for one hour only.")
response = requests.get(hda_dict['accessToken_address'], headers=headers, verify=False)
# If the HTTP response code is 200 (i.e. success), then retrieve the token from the response
if (response.status_code == hda_dict["CONST_HTTP_SUCCESS_CODE"]):
access_token = json.loads(response.text)['access_token']
print("Success: Access token is " + access_token)
else:
print("Error: Unexpected response {}".format(response))
print(response.headers)
hda_dict['access_token'] = access_token
hda_dict['headers'] = {'Authorization': 'Bearer ' + hda_dict["access_token"], 'Accept': 'application/json'}
return hda_dict
# ### <a id='query_metadata'></a> `query_metadata`
def query_metadata(hda_dict):
"""
Requests metadata for the given dataset id and stores the response of the request in the dictionary.
Parameters:
hda_dict: dictionary initialized with the function init, that stores all required information to be able to interact with the HDA API
Returns:
Returns the dictionary including the query response
"""
response = requests.get(hda_dict['broker_endpoint'] + '/querymetadata/' + hda_dict['dataset_id'], headers=hda_dict['headers'])
print('Getting query metadata, URL is ' + hda_dict['broker_endpoint'] + '/querymetadata/' + hda_dict['dataset_id'] +"?access_token=" + hda_dict['access_token'])
print("************** Query Metadata for " + hda_dict['dataset_id'] +" **************")
if (response.status_code == hda_dict['CONST_HTTP_SUCCESS_CODE']):
parsedResponse = json.loads(response.text)
print(json.dumps(parsedResponse, indent=4, sort_keys=True))
print("**************************************************************************")
else:
print("Error: Unexpected response {}".format(response))
hda_dict['parsedResponse']=parsedResponse
return hda_dict
# ### <a id='accept_tc'></a> `acceptTandC`
def acceptTandC(hda_dict):
"""
Checks if the Terms and Conditions have been accepted and, if not, accepts them. \
The response is stored in the dictionary
Parameters:
hda_dict: dictionary initialized with the function init, that stores all required information to be able to interact with the HDA API
Returns:
Returns the dictionary including the response from the Terms and Conditions statement.
"""
response = requests.get(hda_dict['acceptTandC_address'], headers=hda_dict['headers'])
isTandCAccepted = json.loads(response.text)['accepted']
if isTandCAccepted is False:
print("Accepting Terms and Conditions of Copernicus_General_License")
response = requests.put(hda_dict['acceptTandC_address'], headers=hda_dict['headers'])
else:
print("Copernicus_General_License Terms and Conditions already accepted")
isTandCAccepted = json.loads(response.text)['accepted']
hda_dict['isTandCAccepted']=isTandCAccepted
return hda_dict
# ### <a id='get_job_id'></a> `get_job_id`
def get_job_id(hda_dict,data):
"""
Assigns a job id for the data request.
Parameters:
hda_dict: dictionary initialized with the function init, that stores all required information to be able to interact with the HDA API
data: dictionary containing the dataset description
Returns:
Returns the dictionary including the assigned job id.
"""
response = requests.post(hda_dict['broker_endpoint'] + '/datarequest', headers=hda_dict['headers'], json=data, verify=False)
if (response.status_code == hda_dict['CONST_HTTP_SUCCESS_CODE']):
job_id=json.loads(response.text)['jobId']
print ("Query successfully submitted. Job ID is " + job_id)
else:
job_id=""
print("Error: Unexpected response {}".format(response))
hda_dict['job_id']=job_id
get_request_status(hda_dict)
return hda_dict
# ### <a id='get_request_status'></a> `get_request_status`
def get_request_status(hda_dict):
"""
Requests the status of the process to assign a job ID.
Parameters:
hda_dict: dictionary initialized with the function init; stores all required information needed to interact with the HDA API
"""
status = "not started"
while (status != "completed"):
response = requests.get(hda_dict['broker_endpoint'] + '/datarequest/status/' + hda_dict['job_id'], headers=hda_dict['headers'])
if (response.status_code == hda_dict['CONST_HTTP_SUCCESS_CODE']):
status = json.loads(response.text)['status']
print ("Query successfully submitted. Status is " + status)
else:
print("Error: Unexpected response {}".format(response))
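The status loops in `get_request_status` and `get_order_status` poll the endpoint as fast as the server answers. A small generic helper (hypothetical, not part of the HDA API) shows how a sleep and a retry limit could be added; `fetch_status` stands in for the `requests.get` call:

```python
import time

def poll_until(fetch_status, target="completed", interval=2.0, max_tries=300):
    """Call fetch_status() every `interval` seconds until it returns `target`
    or `max_tries` attempts have been made. Sketch only; the loops above
    poll without sleeping."""
    for _ in range(max_tries):
        status = fetch_status()
        print("Status is " + status)
        if status == target:
            return status
        time.sleep(interval)
    raise TimeoutError("gave up after %d tries, last status %r" % (max_tries, status))

# Simulated status sequence standing in for the HDA endpoint
states = iter(["not started", "running", "completed"])
result = poll_until(lambda: next(states), interval=0.0)
print(result)  # completed
```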
# ### <a id='get_results_list'></a> `get_results_list`
def get_results_list(hda_dict):
"""
Generates a list of filenames to be available for download
Parameters:
hda_dict: dictionary initialized with the function init; stores all required information needed to interact with the HDA API
Returns:
Returns the dictionary including the list of filenames to be downloaded.
"""
# params = {'page':'0', 'size':'5'}
response = requests.get(hda_dict['broker_endpoint'] + '/datarequest/jobs/' + hda_dict['job_id'] + '/result', headers=hda_dict['headers'])
results = json.loads(response.text)
hda_dict['results']=results
print("************** Results *******************************\n")
print(json.dumps(results, indent=5, sort_keys=True))
print("*******************************************")
return hda_dict
# ### <a id='get_order_ids'></a> `get_order_ids`
def get_order_ids(hda_dict):
"""
Assigns each file to be downloaded a unique order ID.
Parameters:
hda_dict: dictionary initialized with the function init; stores all required information needed to interact with the HDA API
Returns:
Returns the dictionary including the list of order IDs and the request status of assigning the order IDs.
"""
i = 0
order_ids = []
order_sizes = []
for result in hda_dict['results']['content']:
data = {
"jobId": hda_dict['job_id'],
"uri": result['url']
}
order_sizes.append(result['size'])
response = requests.post(hda_dict['broker_endpoint'] + '/dataorder', headers=hda_dict['headers'], json=data, verify=False)
if (response.status_code == hda_dict['CONST_HTTP_SUCCESS_CODE']):
order_ids.append(json.loads(response.text)['orderId'])
print ("Query successfully submitted. Order ID is " + order_ids[i])
response = get_order_status(hda_dict,order_ids[i])
else:
print("Error: Unexpected response {}".format(response))
i += 1
hda_dict['order_ids']=order_ids
hda_dict['order_sizes']=order_sizes
hda_dict['order_status_response']=response
return hda_dict
# ### `get_order_status`
def get_order_status(hda_dict,order_id):
"""
Requests the status of assigning an order ID for a data file.
Parameters:
hda_dict: dictionary initialized with the function init; stores all required information needed to interact with the HDA API
order_id: the order id for the data file
Returns:
Returns the response of assigning an order ID.
"""
status = "not started"
while (status != "completed"):
response = requests.get(hda_dict['broker_endpoint'] + '/dataorder/status/' + order_id, headers=hda_dict['headers'])
if (response.status_code == hda_dict['CONST_HTTP_SUCCESS_CODE']):
status = json.loads(response.text)['status']
print ("Query successfully submitted. Status is " + status)
else:
print("Error: Unexpected response {}".format(response))
return response
# ### <a id='download_file'></a> `downloadFile`
def downloadFile(url, headers, directory, file_name, total_length = 0):
"""
Downloads a single data file.
Parameters:
url: the download URL, which includes the unique order ID
headers: HTTP headers for the request
directory: download directory where the data file will be stored
file_name: name of the data file
Returns:
Returns the time needed to download the data file.
"""
r = requests.get(url, headers=headers, stream=True)
print('OK')
print('directory')
if r.status_code == 200:
filename = os.path.join(directory, file_name)
print("Downloading " + filename)
with open(filename, 'wb') as f:
start = time.process_time()
print("File size is: %8.2f MB" % (total_length/(1024*1024)))
dl = 0
for chunk in r.iter_content(64738):
dl += len(chunk)
f.write(chunk)
if total_length is not None: # draw a progress bar when the content length is known
done = int(50 * dl / total_length)
#sys.stdout.write("\r[%s%s] %s kbps" % ('=' * done, ' ' * (50-done), dl//(time.process_time() - start)))
print("\r[%s%s] %8.2f Mbps" % ('=' * done, ' ' * (50-done), (dl//(time.process_time() - start))/(1024*1024)), end='', flush=True)
#print ('')
else:
if( dl % (1024) == 0 ):
print("[%8.2f] MB downloaded, %8.2f kbps" % (dl / (1024 * 1024), (dl//(time.process_time() - start))/1024))
print("[%8.2f] MB downloaded, %8.2f kbps" % (dl / (1024 * 1024), (dl//(time.process_time() - start))/1024))
return (time.process_time() - start)
# ### <a id='get_filename_from_cd'></a> `get_filename_from_cd`
def get_filename_from_cd(cd):
"""
Get the filename from content disposition
Parameters:
cd: content disposition (from https://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html: \
The Content-Disposition response-header field has been proposed as a means for the origin server to suggest a default filename
if the user requests that the content is saved to a file. This usage is derived from the definition of Content-Disposition in RFC 1806 [35].
Returns:
The filename extracted from the content disposition
"""
if not cd:
return None
fname = re.findall('filename=(.+)', cd)
if len(fname) == 0:
return None
return fname[0][2:-1]
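The `[2:-1]` slice above assumes the matched value begins with a space and an opening quote and ends with a closing quote, i.e. a header of the form `filename= "name"`. A self-contained check with a hypothetical header value (the filename is made up):

```python
import re

def get_filename_from_cd(cd):
    # Same logic as above, repeated so the example is self-contained.
    if not cd:
        return None
    fname = re.findall('filename=(.+)', cd)
    if len(fname) == 0:
        return None
    # Strip the leading space + quote and the trailing quote.
    return fname[0][2:-1]

# Hypothetical Content-Disposition header in the format the slice expects:
cd = 'attachment; filename= "S5P_OFFL_L2__NO2.nc"'
print(get_filename_from_cd(cd))  # S5P_OFFL_L2__NO2.nc
```

If the server omitted the space or the quotes, the slice would cut into the filename, so this helper is tied to the exact header format the broker returns.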
# ### <a id='get_filenames'></a> `get_filenames`
def get_filenames(hda_dict):
"""
Generates a list of filenames taken from the results dictionary, retrieved with the function request_results_list.
Parameters:
hda_dict: dictionary initialized with the function init; stores all required information needed to interact with the HDA API
Returns:
Returns a list of filenames for each entry stored in the dictionary returned by the function request_results_list.
"""
fileName = []
for file in hda_dict['results']['content']:
fileName.append(file['filename'])
return fileName
# ### <a id='download_data'></a> `download_data`
def download_data(hda_dict):
"""
Downloads for each of the order IDs the associated data file.
Parameters:
hda_dict: dictionary initialized with the function init; stores all required information needed to interact with the HDA API
"""
fileName = get_filenames(hda_dict)
print(fileName)
i=0
for order_id in hda_dict['order_ids']:
print (order_id)
file_name=fileName[i]
download_url = hda_dict['broker_endpoint'] + '/dataorder/download/' + order_id
product_size = hda_dict['order_sizes'][i]
print (download_url)
print (hda_dict['headers'])
print (type(hda_dict['download_dir_path']))
print (product_size)
time_elapsed = downloadFile(download_url, hda_dict['headers'], hda_dict['download_dir_path'], file_name, product_size)
print( "Download complete...")
print ("Time Elapsed: " + str(time_elapsed) + " seconds")
response=hda_dict['order_status_response']
if (response.status_code == hda_dict['CONST_HTTP_SUCCESS_CODE']):
order_file = "./" + file_name
if os.path.isfile(order_file):
print ("Query successfully submitted. Response is " + str(response))
else:
print("Error: Unexpected response {}".format(response))
i += 1
# <hr>
# <p><img src='../img/copernicus_logo.png' align='left' alt='Logo EU Copernicus' width='25%'></img></p>
# <br clear=left>
# <p style="text-align:left;">This project is licensed under the <a href="../LICENSE">MIT License</a> <span style="float:right;"><a href="https://gitlab.eumetsat.int/eumetlab/atmosphere/atmosphere">View on GitLab</a> | <a href="https://training.eumetsat.int/">EUMETSAT Training</a>
| 10_data_access/hda_api_functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''deeplearning'': conda)'
# name: python3
# ---
import numpy as np
# Defining the Perceptron model
class Perceptron():
def __init__(self, num_features):
self.num_features = num_features
self.weights = np.zeros((num_features, 1), dtype = float)
self.bias = np.zeros(1, dtype = float)
def predict(self, x):
activation = np.dot(x, self.weights) + self.bias
predictions = np.where(activation > 0., 1, 0)
return predictions
def evaluate(self, x, y):
prediction = self.predict(x)
if prediction == y:
return 1
else:
return 0
def accuracy(self, x, y):
predictions = self.predict(x).reshape(-1)
acc = np.sum(predictions == y) / y.shape[0]
return acc
def calc_errors(self, x, y):
y_hat = self.predict(x)
errors = y - y_hat
return errors
def train(self, x, y, epochs, lr = 1):
for e in range(epochs):
for i in range(y.shape[0]):
error = self.calc_errors(x[i].reshape(1, self.num_features), y[i]).reshape(-1)
self.weights += (lr * error * x[i]).reshape(self.num_features, 1)
self.bias += lr * error
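A minimal self-contained restatement of the same update rule, applied to the linearly separable AND function (a toy check, not part of the notebook's dataset):

```python
import numpy as np

# Toy check of the perceptron update rule on the AND function.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
for epoch in range(10):
    for xi, yi in zip(X, y):
        y_hat = 1 if xi @ w + b > 0 else 0
        error = yi - y_hat          # +1, 0 or -1
        w += error * xi             # lr = 1, as in train() above
        b += error

preds = np.where(X @ w + b > 0., 1, 0)
print(preds)  # converges to [0 0 0 1] for AND
```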
# +
# Load the dataset
data = np.genfromtxt('toy_data.txt', delimiter='\t')
X, y = data[:, :2], data[:, 2]
y = y.astype(int)
# Shuffling & train/test split
shuffle_idx = np.arange(y.shape[0])
shuffle_rng = np.random.RandomState(123)
shuffle_rng.shuffle(shuffle_idx)
X, y = X[shuffle_idx], y[shuffle_idx]
n_train = 80  # the data were already shuffled above; take the first 80 rows for training
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]
# Normalize (mean zero, unit variance)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_test = (X_test - mu) / sigma
# +
print(X_train.mean(axis=0))
print(X_train.std(axis=0))
print(X.shape[0])
print(X_train.shape[0])
print(X_test.shape[0])
# +
# plotting data
import matplotlib.pyplot as plt
plt.scatter(X[y==0, 0], X[y==0, 1], label='class 0', marker='o')
plt.scatter(X[y==1, 0], X[y==1, 1], label='class 1', marker='s')
plt.title('Training set')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend()
plt.show()
plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
plt.title('Training set')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend()
plt.show()
# +
pm = Perceptron(num_features=2)
pm.train(X_train, y_train, epochs=10)
print('Model parameters:\n\n')
print('Weights: %s\n' % pm.weights)
print('Bias: %s\n' % pm.bias)
# -
train_acc = pm.accuracy(X_train, y_train)
print(train_acc*100)
test_acc = pm.accuracy(X_test, y_test)
print(test_acc*100)
# +
# training the Perceptron model
all_weights = []
all_biases = []
my_perceptron = Perceptron(num_features=2)
acc = 0
for epoch in range(10):
for i in range(X.shape[0]):
all_weights.append(my_perceptron.weights.copy())
all_biases.append(my_perceptron.bias.copy())
my_perceptron.train(X[i].reshape(1, -1), y[i].reshape(-1), epochs=1)
acc = my_perceptron.accuracy(X, y)
if acc == 1.0:
break
if acc == 1.0:
all_weights.append(my_perceptron.weights.copy())
all_biases.append(my_perceptron.bias.copy())
break
# -
acc
# +
# Lets check how the weights were updated
import imageio
scatter_highlight_defaults = {'facecolors': 'none',
'edgecolor': 'k',
'alpha': 1.0,
'linewidths': 2,
'marker': 'o',
's': 150}
def plot(i):
fig, ax = plt.subplots()
w, b = all_weights[i], all_biases[i]
x_min = -20
y_min = ( (-(w[0] * x_min) - b[0])
/ w[1] )
x_max = 20
y_max = ( (-(w[0] * x_max) - b[0])
/ w[1] )
ax.set_xlim([-5., 5])
ax.set_ylim([-5., 5])
ax.set_xlabel('Iteration %d' % i)
ax.plot([x_min, x_max], [y_min, y_max], color='k')
ax.scatter(X[y==0, 0], X[y==0, 1], label='class 0', marker='o')
ax.scatter(X[y==1, 0], X[y==1, 1], label='class 1', marker='s')
ax.scatter(X[i][0], X[i][1], **scatter_highlight_defaults)
fig.canvas.draw()
image = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8')
image = image.reshape(fig.canvas.get_width_height()[::-1] + (3,))
return image
kwargs_write = {'fps':5.0, 'quantizer':'nq'}
imageio.mimsave('training.gif', [plot(i) for i in range(len(all_weights))], fps=1)
# +
# Load the dataset
data_diabetes = np.genfromtxt('diabetes.csv', delimiter=',')
X = data_diabetes[:, :8]
y = data_diabetes[:, 8]
y = y.astype(int)
# Shuffling & train/test split
shuffle_idx = np.arange(y.shape[0])
shuffle_rng = np.random.RandomState(123)
shuffle_rng.shuffle(shuffle_idx)
X, y = X[shuffle_idx], y[shuffle_idx]
# +
all_weights = []
all_biases = []
my_perceptron = Perceptron(num_features=8)
acc = 0
for epoch in range(10):
for i in range(X.shape[0]):
all_weights.append(my_perceptron.weights.copy())
all_biases.append(my_perceptron.bias.copy())
my_perceptron.train(X[i].reshape(1, -1), y[i].reshape(-1), epochs=1)
acc = my_perceptron.accuracy(X, y)
if acc == 1.0:
break
if acc == 1.0:
all_weights.append(my_perceptron.weights.copy())
all_biases.append(my_perceptron.bias.copy())
break
# -
# lets check how does the model perform
acc
| perceptron/perceptron585.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="_LTDn4aqFlOP" colab_type="text"
# A linked list is said to contain a cycle if any node is visited more than once while traversing the list. For example, a cycle is formed when a node's next pointer points back to an earlier node in the list.
# image
# Function Description
# Complete the function has_cycle in the editor below. It must return a boolean true if the list contains a cycle, or false otherwise.
# has_cycle has the following parameter(s):
# head: a pointer to a Node object that points to the head of a linked list.
# Note: If the list is empty, head will be null.
# Input Format
# There is no input for this challenge. A random linked list is generated at runtime and passed to your function.
# Constraints
#
# Output Format
# If the list contains a cycle, your function must return true. If the list does not contain a cycle, it must return false. The binary integer corresponding to the boolean value returned by your function is printed to stdout by our hidden code checker.
# Sample Input
# The following linked lists are passed as arguments to your function:
# image
# image
# Sample Output
# 0
# 1
# Explanation
# The first list has no cycle, so we return false and the hidden code checker prints to stdout.
# The second list has a cycle, so we return true and the hidden code checker prints to stdout.
# + id="PAsR54gLFdEl" colab_type="code" colab={}
"""
Detect a cycle in a linked list. Note that the head pointer may be 'None' if the list is empty.
A Node is defined as:
class Node(object):
def __init__(self, data = None, next_node = None):
self.data = data
self.next = next_node
"""
def has_cycle(head):
slow_move = head
fast_move = head
while slow_move and fast_move and fast_move.next:
slow_move = slow_move.next
fast_move = fast_move.next.next
if (slow_move == fast_move):
return 1
return 0
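A quick self-contained check of the two-pointer (Floyd) logic above, with the Node class from the docstring; the two toy lists mirror the samples (no cycle → 0, cycle → 1):

```python
class Node(object):
    def __init__(self, data=None, next_node=None):
        self.data = data
        self.next = next_node

def has_cycle(head):
    # Floyd's tortoise-and-hare: the fast pointer advances two nodes per
    # step, so it can only meet the slow pointer inside a cycle.
    slow_move = head
    fast_move = head
    while slow_move and fast_move and fast_move.next:
        slow_move = slow_move.next
        fast_move = fast_move.next.next
        if slow_move == fast_move:
            return 1
    return 0

# Acyclic list: 1 -> 2 -> 3
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_cycle(a))  # 0

# Cyclic list: 1 -> 2 -> 3 -> 2 -> ...
c.next = b
print(has_cycle(a))  # 1
```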
| easy/Linked_Lists_Detect_a_Cycle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Notebook for plotting Corona data from SSI
# A few more plots than in the main SSI_plots notebook
#
# Run SSI_get_data first to get a dataset of the right format.
#
# Data are read from subfolder : data
# Date of current dataset is read from: data_date.dat
#
#Limit dates to plot
N_days=60 # number of days to include
#N_days=150 # number of days to include
bad_cutoff=25000 # minimum number of tests to consider good
#bad_cutoff=100
# %matplotlib notebook
import matplotlib.pyplot as plt
import pandas as pd
from pathlib import Path
from datetime import date, timedelta
import pickle
# +
# Read date of last data-download
f = open("data_date.dat", 'rb')
date_str=pickle.load(f)
f.close()
print("Dataset is from: " + date_str)
# -
#define the file to read
datafolder=Path("data/")
datafile=datafolder / "Test_pos_over_time.csv"
# Read datafile
# Skips last two lines (which does not convert to date) and converts index to date
# Note the handling of the Danish number format (both decimal and thousands separators)
df=pd.read_csv(datafile, sep=';', parse_dates=['Date'], index_col=['Date'],error_bad_lines=False, engine='python', skipfooter=2, decimal=',', thousands='.')
# +
# calculate some more numbers
# Positive emperical scaled by number of tests to power of 0.7
# This scaling is based on results in
# SSI "Ekspertrapport af d. 23. oktober 2020 Incidens og fremskrivning af COVID-19 tilfælde"
# https://www.ssi.dk/-/media/ssi-files/ekspertrapport-af-den-23-oktober-2020-incidens-og-fremskrivning-af-covid19-tilflde.pdf?la=da
def calcScaledNumber (row):
if row.NotPrevPos > 0 :
return row.NewPositive / (row.NotPrevPos**0.7) * 50000**0.7 / 50000 * 100  # positive percent normalized to 50,000 tests
else:
return 0
df['ScaledNumber']=df.apply(lambda row: calcScaledNumber(row), axis=1)
# Recalculate Positiv procent to get more decimals for plotting
def calcPosPct (row):
if row.NotPrevPos > 0 :
return row.NewPositive / row.NotPrevPos * 100
else:
return 0
df['PosPct']=df.apply(lambda row: calcPosPct(row), axis=1)
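The two `apply` calls above work row by row; the same numbers can be computed vectorized, which is faster on long series. A self-contained sketch on a toy frame (the column names match the SSI file; the values are made up):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'NewPositive': [100, 0], 'NotPrevPos': [50000, 0]})

# PosPct: positive percent, guarded against division by zero
toy['PosPct'] = np.where(toy.NotPrevPos > 0,
                         toy.NewPositive / toy.NotPrevPos * 100, 0)

# ScaledNumber: positive percent normalized to 50,000 tests via the ^0.7 scaling
toy['ScaledNumber'] = np.where(toy.NotPrevPos > 0,
                               toy.NewPositive / toy.NotPrevPos**0.7
                               * 50000**0.7 / 50000 * 100, 0)

print(toy[['PosPct', 'ScaledNumber']])
```

At exactly 50,000 tests the scaling factors cancel, so `ScaledNumber` equals the raw positive percent; the second row shows the zero-test guard.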
# +
# for easy plot make a sub data frame with selected number of days
df_sel=df[date.today()-timedelta(days=N_days):]
# and make index for "bad" datapoints
bad_idx=df_sel['NotPrevPos']<bad_cutoff
# -
# define a common title including date from file
title_str='SSI COVID-19 data, cases by sampling date \n'
title_str += date_str
ax=df_sel.plot(y='NewPositive',title=title_str,style='.');
df_sel[bad_idx].plot(ax=ax,y='NewPositive',style='.',color='red',label='NewPositive (Tested<'+ str(bad_cutoff) + ')');
ax=df_sel.plot(y='NotPrevPos',label='Tested (NotPrevPos)',title=title_str,style='.');
df_sel[bad_idx].plot(ax=ax,y='NotPrevPos',style='.',color='red',label='Tested<'+ str(bad_cutoff) + '');
ax=df_sel.plot(y='PosPct',title=title_str,label='NewPositive / NotPrevPosTested * 100',style='.');
df_sel[bad_idx].plot(ax=ax,y='PosPct',style='.',color='red',label='NewPositive / NotPrevPosTested * 100 (Tested<'+ str(bad_cutoff) + ')');
ax.set_ylabel("%");
ax.set_ylim(0,3)
ax=df_sel.plot(y='ScaledNumber',title=title_str,label='NewPositive/NotPrevPosTested^0.7',style='.');
df_sel[bad_idx].plot(ax=ax,y='ScaledNumber',style='.',color='red', label='NewPositive/NotPrevPosTested^0.7 (Tested<'+ str(bad_cutoff) + ')');
# +
axs=[None]*3 # axs as a list with 3 empty entries
fig = plt.figure(figsize=(7, 10))
axs[0] = plt.subplot(311)
axs[1] = plt.subplot(312,sharex=axs[0])
axs[2] = plt.subplot(313,sharex=axs[0])
df_sel.plot(ax=axs[0],y='PosPct',title=title_str,label='NewPositive / NotPrevPosTested * 100',style='.');
df_sel[bad_idx].plot(ax=axs[0],y='PosPct',style='.',color='red',label='NewPositive / NotPrevPosTested * 100 (Tested<'+ str(bad_cutoff) + ')');
axs[0].set_ylabel("%");
axs[0].set_ylim(0,2)
df_sel.plot(ax=axs[1],y='NewPositive',style='.');
df_sel[bad_idx].plot(ax=axs[1],y='NewPositive',style='.',color='red',label='NewPositive (Tested<'+ str(bad_cutoff) + ')');
df_sel.plot(ax=axs[2],y='NotPrevPos',label='Tested (NotPrevPos)',style='.');
df_sel[bad_idx].plot(ax=axs[2],y='NotPrevPos',style='.',color='red',label='Tested<'+ str(bad_cutoff) + '');
# +
axs=[None]*4 # axs as a list with 4 empty entries
fig = plt.figure(figsize=(7, 15))
axs[0] = plt.subplot(411)
axs[1] = plt.subplot(412,sharex=axs[0])
axs[2] = plt.subplot(413,sharex=axs[0])
axs[3] = plt.subplot(414,sharex=axs[0])
df_sel.plot(ax=axs[0],y='PosPct',title=title_str,label='NewPositive / NotPrevPosTested * 100',style='.');
df_sel[bad_idx].plot(ax=axs[0],y='PosPct',style='.',color='red',label='NewPositive / NotPrevPosTested * 100 (Tested<'+ str(bad_cutoff) + ')');
axs[0].set_ylabel("%");
axs[0].set_ylim(0,4.5)
axs[0].tick_params(which='both', bottom=True, top=True, left=True, right=True, direction='in')
df_sel.plot(ax=axs[1], y='ScaledNumber',label='NewPositive/NotPrevPosTested^0.7 * 50.000^0.7 / 50.000 *100',style='.');
df_sel[bad_idx].plot(ax=axs[1],y='ScaledNumber',style='.',color='red', label=' (Tested<'+ str(bad_cutoff) + ')');
axs[1].set_ylabel("Positiv Procent [Estimated for 50.000 tests]");
axs[1].tick_params(which='both', bottom=True, top=True, left=True, right=True, direction='in')
axs[1].set_ylim(0,4.5)
df_sel.plot(ax=axs[2],y='NewPositive',style='.');
df_sel[bad_idx].plot(ax=axs[2],y='NewPositive',style='.',color='red',label='NewPositive (Tested<'+ str(bad_cutoff) + ')');
axs[2].tick_params(which='both', bottom=True, top=True, left=True, right=True, direction='in')
df_sel.plot(ax=axs[3],y='NotPrevPos',label='Tested (NotPrevPos)',style='.');
df_sel[bad_idx].plot(ax=axs[3],y='NotPrevPos',style='.',color='red',label='Tested<'+ str(bad_cutoff) + '');
axs[3].tick_params(which='both', bottom=True, top=True, left=True, right=True, direction='in')
#save a pdf for printing
#plt.savefig('All4.pdf')
# -
| SSI_more_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3 research env
# language: python
# name: py3_research
# ---
# + [markdown] id="izA3-6kffbdT"
# ## Practice: A Visual Notebook to Using BERT for the First Time
#
# *Credits: first part of this notebook belongs to <NAME> and his [great blog post](http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/) (while it has minor changes). His blog is a great way to dive into the DL and NLP concepts.*
#
# <img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-sentence-classification.png" />
#
# In this notebook, we will use a pre-trained deep learning model to process some text. We will then use the output of that model to classify the text. The text is a list of sentences from film reviews, and we will classify each sentence as either speaking "positively" or "negatively" about its subject.
#
# ### Models: Sentence Sentiment Classification
# Our goal is to create a model that takes a sentence (just like the ones in our dataset) and produces either 1 (indicating the sentence carries a positive sentiment) or a 0 (indicating the sentence carries a negative sentiment). We can think of it as looking like this:
#
# <img src="https://jalammar.github.io/images/distilBERT/sentiment-classifier-1.png" />
#
# Under the hood, the model is actually made up of two models.
#
# * DistilBERT processes the sentence and passes along some information it extracted from it on to the next model. DistilBERT is a smaller version of BERT developed and open sourced by the team at HuggingFace. It’s a lighter and faster version of BERT that roughly matches its performance.
# * The next model, a basic Logistic Regression model from scikit learn will take in the result of DistilBERT’s processing, and classify the sentence as either positive or negative (1 or 0, respectively).
#
# The data we pass between the two models is a vector of size 768. We can think of this vector as an embedding for the sentence that we can use for classification.
#
#
# <img src="https://jalammar.github.io/images/distilBERT/distilbert-bert-sentiment-classifier.png" />
#
# ## Dataset
# The dataset we will use in this example is [SST2](https://nlp.stanford.edu/sentiment/index.html), which contains sentences from movie reviews, each labeled as either positive (has the value 1) or negative (has the value 0):
#
#
# <table class="features-table">
# <tr>
# <th class="mdc-text-light-green-600">
# sentence
# </th>
# <th class="mdc-text-purple-600">
# label
# </th>
# </tr>
# <tr>
# <td class="mdc-bg-light-green-50" style="text-align:left">
# a stirring , funny and finally transporting re imagining of beauty and the beast and 1930s horror films
# </td>
# <td class="mdc-bg-purple-50">
# 1
# </td>
# </tr>
# <tr>
# <td class="mdc-bg-light-green-50" style="text-align:left">
# apparently reassembled from the cutting room floor of any given daytime soap
# </td>
# <td class="mdc-bg-purple-50">
# 0
# </td>
# </tr>
# <tr>
# <td class="mdc-bg-light-green-50" style="text-align:left">
# they presume their audience won't sit still for a sociology lesson
# </td>
# <td class="mdc-bg-purple-50">
# 0
# </td>
# </tr>
# <tr>
# <td class="mdc-bg-light-green-50" style="text-align:left">
# this is a visually stunning rumination on love , memory , history and the war between art and commerce
# </td>
# <td class="mdc-bg-purple-50">
# 1
# </td>
# </tr>
# <tr>
# <td class="mdc-bg-light-green-50" style="text-align:left">
# <NAME> 's bartleby should have been the be all end all of the modern office anomie films
# </td>
# <td class="mdc-bg-purple-50">
# 1
# </td>
# </tr>
# </table>
#
# ## Installing the transformers library
# Let's start by installing the huggingface transformers library so we can load our deep learning NLP model.
# + id="To9ENLU90WGl" outputId="cb00a129-004a-4654-9e63-5a9a27dd7797" colab={"base_uri": "https://localhost:8080/", "height": 360}
# !pip install transformers
# + id="fvFvBLJV0Dkv"
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import torch
import transformers as ppb
import warnings
warnings.filterwarnings('ignore')
# + [markdown] id="zQ-42fh0hjsF"
# ## Part 1. Using BERT for text classification.
#
# ### Importing the dataset
# We'll use pandas to read the dataset and load it into a dataframe.
# + id="cyoj29J24hPX"
df = pd.read_csv(
'https://github.com/clairett/pytorch-sentiment-classification/raw/master/data/SST2/train.tsv',
delimiter='\t',
header=None
)
# + [markdown] id="dMVE3waNhuNj"
# For performance reasons, we'll only use 2,000 sentences from the dataset
# + id="gTM3hOHW4hUY"
batch_1 = df[:2000]
# + [markdown] id="PRc2L89hh1Tf"
# We can ask pandas how many sentences are labeled as "positive" (value 1) and how many are labeled "negative" (having the value 0)
# + id="jGvcfcCP5xpZ" outputId="d9af5a28-7af9-46be-f0ef-ac75a2c611cb" colab={"base_uri": "https://localhost:8080/", "height": 68}
batch_1[1].value_counts()
# + [markdown] id="7_MO08_KiAOb"
# ## Loading the Pre-trained BERT model
# Let's now load a pre-trained BERT model.
# + id="q1InADgf5xm2"
# For DistilBERT:
model_class, tokenizer_class, pretrained_weights = (ppb.DistilBertModel, ppb.DistilBertTokenizer, 'distilbert-base-uncased')
## Want BERT instead of distilBERT? Uncomment the following line:
#model_class, tokenizer_class, pretrained_weights = (ppb.BertModel, ppb.BertTokenizer, 'bert-base-uncased')
# Load pretrained model/tokenizer
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
# + [markdown] id="lZDBMn3wiSX6"
# Right now, the variable `model` holds a pretrained distilBERT model -- a version of BERT that is smaller, but much faster and requiring a lot less memory.
#
# ### Step #1: Preparing the Dataset
# Before we can hand our sentences to BERT, we need to do some minimal processing to put them in the format it requires.
#
# ### Tokenization
# Our first step is to tokenize the sentences -- break them up into words and subwords in the format BERT is comfortable with.
# + id="Dg82ndBA5xlN"
tokenized = batch_1[0].apply((lambda x: tokenizer.encode(x, add_special_tokens=True)))
# + id="kf6noS-lYHxV"
# tokenizer.encode(batch_1[0][0], add_special_tokens=True)
# + [markdown] id="mHwjUwYgi-uL"
# <img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-tokenization-2-token-ids.png" />
#
# ### Padding
# After tokenization, `tokenized` is a list of sentences -- each sentence is represented as a list of tokens. We want BERT to process our examples all at once (as one batch). It's just faster that way. For that reason, we need to pad all lists to the same size, so we can represent the input as one 2-d array, rather than a list of lists (of different lengths).
# + id="URn-DWJt5xhP"
max_len = 0
for i in tokenized.values:
if len(i) > max_len:
max_len = len(i)
padded = np.array([i + [0]*(max_len-len(i)) for i in tokenized.values])
# + [markdown] id="Mdjg306wjjmL"
# Our dataset is now in the `padded` variable, we can view its dimensions below:
# + id="jdi7uXo95xeq" outputId="b4b0d628-0e2a-43c3-f2d5-7d1722966d06" colab={"base_uri": "https://localhost:8080/", "height": 34}
np.array(padded).shape
# + [markdown] id="sDZBsYSDjzDV"
# ### Masking
# If we directly send `padded` to BERT, that would slightly confuse it. We need to create another variable to tell it to ignore (mask) the padding we've added when it's processing its input. That's what attention_mask is:
# + id="4K_iGRNa_Ozc" outputId="ef280283-0565-4225-8a64-87a34f058e92" colab={"base_uri": "https://localhost:8080/"}
attention_mask = np.where(padded != 0, 1, 0)
attention_mask.shape
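The padding and masking steps can be seen on a toy batch of two token lists (the token ids are made up, not real BERT vocabulary ids):

```python
import numpy as np

toy_tokenized = [[101, 2023, 2003, 102], [101, 102]]  # made-up token ids

# Pad every list with zeros up to the longest one, as above.
max_len = max(len(t) for t in toy_tokenized)
toy_padded = np.array([t + [0] * (max_len - len(t)) for t in toy_tokenized])

# The mask is 1 over real tokens and 0 over the padding.
toy_mask = np.where(toy_padded != 0, 1, 0)

print(toy_padded.shape)   # (2, 4)
print(toy_mask.tolist())  # [[1, 1, 1, 1], [1, 1, 0, 0]]
```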
# + id="RIe9yzPPZ82S"
import matplotlib.pyplot as plt
# + id="f6fbH2shZ5DC" outputId="e7cf00c2-6c21-4f93-fc8c-0dd03555d707" colab={"base_uri": "https://localhost:8080/"}
plt.pcolormesh(attention_mask)
# + [markdown] id="jK-CQB9-kN99"
# ### Step #2: And Now, Deep Learning!
# Now that we have our model and inputs ready, let's run our model!
#
# <img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-tutorial-sentence-embedding.png" />
#
# The `model()` function runs our sentences through BERT. The results of the processing will be returned into `last_hidden_states`.
# + id="39UVjAV56PJz"
input_ids = torch.tensor(padded)
attention_mask = torch.tensor(attention_mask)
with torch.no_grad():
last_hidden_states = model(input_ids, attention_mask=attention_mask)
# + [markdown] id="FoCep_WVuB3v"
# Let's slice only the part of the output that we need. That is the output corresponding to the first token of each sentence. The way BERT does sentence classification is that it adds a token called `[CLS]` (for classification) at the beginning of every sentence. The output corresponding to that token can be thought of as an embedding for the entire sentence.
#
# <img src="https://jalammar.github.io/images/distilBERT/bert-output-tensor-selection.png" />
#
# We'll save those in the `features` variable, as they'll serve as the features to our logistic regression model.
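The slicing itself just takes position 0 along the sequence axis of the `(batch, seq_len, hidden)` output; a dummy tensor of random numbers standing in for BERT's output makes the shapes concrete:

```python
import numpy as np

batch, seq_len, hidden = 4, 10, 768
dummy_hidden = np.random.rand(batch, seq_len, hidden)  # stand-in for last_hidden_states[0]

# Keep only the first token ([CLS]) of every sentence.
cls_features = dummy_hidden[:, 0, :]
print(cls_features.shape)  # (4, 768)
```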
# + id="n_-FBPUSU9O3" outputId="2960f55e-1240-4c0c-fcda-fb79cf24e4d0" colab={"base_uri": "https://localhost:8080/", "height": 34}
input_ids.shape
# + id="JcL8y0VTU9O7" outputId="d7d193ce-e77f-4093-af7f-7968ce93e44c" colab={"base_uri": "https://localhost:8080/", "height": 34}
last_hidden_states[0].shape
# + id="C9t60At16PVs"
features = last_hidden_states[0][:,0,:].numpy()
# + id="QIhq_FESbC6o" outputId="1acc731a-1d6b-45a2-b002-ffe72052e2cf" colab={"base_uri": "https://localhost:8080/", "height": 286}
plt.pcolormesh(features)
# + [markdown] id="_VZVU66Gurr-"
# The labels indicating which sentence is positive and negative now go into the `labels` variable
# + id="JD3fX2yh6PTx"
labels = batch_1[1]
# + [markdown] id="iaoEvM2evRx1"
# ### Step #3: Train/Test Split
# Let's now split our dataset into a training set and testing set (even though we're using 2,000 sentences from the SST2 training set).
# + id="ddAqbkoU6PP9"
train_features, test_features, train_labels, test_labels = train_test_split(features, labels)
# + [markdown] id="B9bhSJpcv1Bl"
# <img src="https://jalammar.github.io/images/distilBERT/bert-distilbert-train-test-split-sentence-embedding.png" />
#
# ### [Extra] Grid Search for Parameters
# We can dive into Logistic regression directly with the Scikit Learn default parameters, but sometimes it's worth searching for the best value of the C parameter, which determines regularization strength.
# + id="cyEwr7yYD3Ci"
# parameters = {'C': np.linspace(0.0001, 100, 20)}
# grid_search = GridSearchCV(LogisticRegression(), parameters)
# grid_search.fit(train_features, train_labels)
# print('best parameters: ', grid_search.best_params_)
# print('best scores: ', grid_search.best_score_)
# + [markdown] id="KCT9u8vAwnID"
# We now train the LogisticRegression model. If you've done the grid search, you can plug the value of C into the model declaration (e.g. `LogisticRegression(C=5.2)`).
# + id="gG-EVWx4CzBc" outputId="2cc25784-f768-42e5-94d9-4946b100a30e" colab={"base_uri": "https://localhost:8080/", "height": 102}
lr_clf = LogisticRegression()
lr_clf.fit(train_features, train_labels)
# + [markdown] id="3rUMKuVgwzkY"
# <img src="https://jalammar.github.io/images/distilBERT/bert-training-logistic-regression.png" />
#
# ### Step #4: Evaluating Model
# So how well does our model do in classifying sentences? One way is to check the accuracy against the testing dataset:
# + id="iCoyxRJ7ECTA" outputId="28df18ee-de75-4faa-dade-8bd3b72bd6fc" colab={"base_uri": "https://localhost:8080/", "height": 34}
lr_clf.score(test_features, test_labels)
# + [markdown] id="75oyhr3VxHoE"
# How good is this score? What can we compare it against? Let's first look at a dummy classifier:
# + id="lnwgmqNG7i5l" outputId="e974fc75-227d-4628-ba34-a844a0951257" colab={"base_uri": "https://localhost:8080/", "height": 34}
from sklearn.dummy import DummyClassifier
clf = DummyClassifier()
scores = cross_val_score(clf, train_features, train_labels)
print("Dummy classifier score: %0.3f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
# + [markdown] id="7Lg4LOpoxSOR"
# So our model clearly does better than a dummy classifier. But how does it compare against the best models?
#
# ### Proper SST2 scores
# For reference, the [highest accuracy score](http://nlpprogress.com/english/sentiment_analysis.html) for this dataset is currently **96.8**. DistilBERT can be trained to improve its score on this task – a process called **fine-tuning**, which updates BERT’s weights so it achieves better performance on this sentence classification task (the downstream task). The fine-tuned DistilBERT turns out to achieve an accuracy score of **90.7**. The full size BERT model achieves **94.9**.
#
#
#
# And that’s it! That’s a good first contact with BERT. The next step would be to head over to the documentation and try your hand at [fine-tuning](https://huggingface.co/transformers/examples.html#glue). You can also go back and switch from distilBERT to BERT and see how that works.
# + [markdown] id="J_sSWQIpbaJy"
# ## Part 2: Looking back.
#
# __Now it is your turn to reproduce the steps above.__
#
# We shall revisit the first homework and see whether we can improve the results a little more. The average ROC-AUC on the test set was around $0.9$ (using the word embeddings).
#
# __Let's see whether we can beat it.__
# + id="_aO4XqMhU9Pf" outputId="13b0c541-3599-4d5a-d903-de76c08d751d" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Loading data
try:
data = pd.read_csv('./datasets/comments_small_dataset/comments.tsv', sep='\t')
except FileNotFoundError:
# ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/comments_small_dataset/comments.tsv -nc
data = pd.read_csv("comments.tsv", sep='\t')
# + [markdown] id="L0zGkJWuU9Pi"
# Example output, just like before.
# + id="j209rCGUU9Pi" outputId="ad25edc3-89e0-4d33-db92-3a431a24e6db" colab={"base_uri": "https://localhost:8080/", "height": 204}
texts = data['comment_text'].values
target = data['should_ban'].values
data[::200]
# + [markdown] id="HdvCF3V8U9Pn"
# Splitting the data
# + id="Sx-XdA1yU9Po"
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
# + [markdown] id="NrlmR3mwU9Pr"
# Now, tokenize the train and test parts of the dataset.
# + id="_Kd_31LPU9Pr"
texts_train_tokenized = [tokenizer.encode(x, max_length=512, truncation=True, add_special_tokens=True) for x in texts_train]
# + id="LXQ8QZjNU9Pu"
texts_test_tokenized = [tokenizer.encode(x, max_length=512, truncation=True, add_special_tokens=True) for x in texts_test]
# + id="P6dciSSTU9Px"
def pad_texts(texts_tokenized):
max_len = 0
for i in texts_tokenized:
if len(i) > max_len:
max_len = len(i)
padded = np.array([i + [0]*(max_len-len(i)) for i in texts_tokenized])
return padded
# + id="MOlgOB6ZU9P1"
padded_train = pad_texts(texts_train_tokenized)
padded_test = pad_texts(texts_test_tokenized)
# + id="3pqr6wc6U9P4"
def attention_mask(padded_text):
return np.where(padded_text != 0, 1, 0)
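A quick sanity check of the two helpers above, using compact standalone copies (the token ids here are arbitrary placeholders, with `0` as the padding id):

```python
import numpy as np

# Standalone copies of the helpers defined above, for a quick sanity check.
def pad_texts(texts_tokenized):
    max_len = max(len(t) for t in texts_tokenized)
    return np.array([t + [0] * (max_len - len(t)) for t in texts_tokenized])

def attention_mask(padded_text):
    # 1 for real tokens, 0 for padding positions
    return np.where(padded_text != 0, 1, 0)

padded = pad_texts([[101, 7592, 102], [101, 102]])
mask = attention_mask(padded)
print(padded.tolist())  # [[101, 7592, 102], [101, 102, 0]]
print(mask.tolist())    # [[1, 1, 1], [1, 1, 0]]
```

The mask tells BERT to ignore the zero-padded positions when computing attention.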
# + id="WCgxSNj1U9P7" outputId="b797c536-869c-4510-d914-14bd72d1469a" colab={"base_uri": "https://localhost:8080/", "height": 34}
attention_mask_train = attention_mask(padded_train)
attention_mask_train.shape
# + id="7oK2J4lfU9P-" outputId="943cd195-783d-4c8b-f43b-1d3119fa6c72" colab={"base_uri": "https://localhost:8080/", "height": 34}
attention_mask_test = attention_mask(padded_test)
attention_mask_test.shape
# + [markdown] id="ESMYHu0XU9QB"
# Now move the model to the GPU and put it in `eval` mode.
# + id="ZF_Aw9OBdsaA"
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
# + id="xtzc5iRvcvAc" outputId="014da196-a055-405c-8593-339b31699319" colab={"base_uri": "https://localhost:8080/", "height": 1000}
model = model_class.from_pretrained(pretrained_weights)
model.eval()
model.to(device)
# + id="pUx7lJGccerO"
# + [markdown] id="m6qQE4RnU9QF"
# Finally, process all the data with the BERT model:
# + id="ETCHJuqsU9QF"
import tqdm
# + id="QOsGDfYyc052"
torch.cuda.empty_cache()
# + id="hnZvfeIAc4OZ" outputId="77bdf11d-d690-4e24-d26d-395b9105cea6" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["c07a44f4964840e285d47953d7d9eaed", "362ad79fc8ca49a8ac17147b2be71009", "<KEY>", "<KEY>", "<KEY>", "52e449fcbead4b3d9c4a785aacec23db", "1ca78f33231b4ce0a608af2e4fd9fb21", "<KEY>"]}
output = []
batch_size = 16
for idx in tqdm.tnrange(0, 500, batch_size):
batch = torch.tensor(padded_train[idx:idx+batch_size]).to(device)
local_attention_mask = torch.tensor(attention_mask_train[idx:idx+batch_size]).to(device)
with torch.no_grad():
last_hidden_states = model(batch, attention_mask=local_attention_mask)[0][:,0,:].cpu().numpy()
output.append(last_hidden_states)
# + id="8lJOoV4QdnpB" outputId="091edcb1-9715-46ab-c139-40748fe07df6" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["4acbf8e05b70498bae84961b71a7c4f4", "f4b85d0dc324470ba24527a682ca7352", "873acb75087c4e86b524b03b626d06f1", "5bbeb6800cd245a581e8861b8e46c85e", "<KEY>", "89c957ac1fac4d339f903ee113779da3", "7c5e793c7de845b6a632587bfe6fe112", "39d4969a800049e1a7f8bb1e9adf4290"]}
output_test = []
batch_size = 16
for idx in tqdm.tnrange(0, 500, batch_size):
batch = torch.tensor(padded_test[idx:idx+batch_size]).to(device)
local_attention_mask = torch.tensor(attention_mask_test[idx:idx+batch_size]).to(device)
with torch.no_grad():
last_hidden_states = model(batch, attention_mask=local_attention_mask)[0][:, 0, :].cpu().numpy()
output_test.append(last_hidden_states)
# + [markdown] id="9ifp7V7dU9QI"
# Stack the results into a single NumPy array:
# + id="Pm5uzRmmU9QJ"
train_features = np.vstack(output)
# + id="90QGDLe1U9QL"
test_features = np.vstack(output_test)
# + id="wo8N4j6jU9QO" outputId="9f1d779a-8391-47af-9403-4912524a5336" colab={"base_uri": "https://localhost:8080/", "height": 102}
lr_clf = LogisticRegression()
lr_clf.fit(train_features, y_train)
# + id="PIXK4rC4U9QT" outputId="6491d42e-485e-4c7f-d084-f32e284bc1d0" colab={"base_uri": "https://localhost:8080/", "height": 34}
lr_clf.score(test_features, y_test)
# + id="QveETDX4U9QY"
from sklearn.metrics import roc_auc_score, roc_curve
# + id="P24bSMl0U9Qb" outputId="f9d87a49-884d-40c1-f090-2fff8a5792f7" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_train.shape
# + id="1OLh5zq3U9Qg"
from matplotlib import pyplot as plt
# + id="xSA5V5vjU9Ql" outputId="4689c598-2272-4d42-c5a3-4a4848fd9d93" colab={"base_uri": "https://localhost:8080/", "height": 282}
proba = lr_clf.predict_proba(train_features)[:, 1]
auc = roc_auc_score(y_train, proba)
plt.plot(*roc_curve(y_train, proba)[:2], label='%s AUC=%.4f' % ('train', auc))
proba = lr_clf.predict_proba(test_features)[:, 1]
auc = roc_auc_score(y_test, proba)
plt.plot(*roc_curve(y_test, proba)[:2], label='%s AUC=%.4f' % ('test', auc))
plt.legend()
# + [markdown] id="5ojO0kiTU9Qp"
# So, how does it look? Did we achieve better results?
#
# Here are some further ideas:
#
# * Try using the larger BERT (e.g. BERT-base or BERT-large) and compare the results (be careful, they require more memory).
#
# * Using BERT output for translation? Why not ;)
# + [markdown] id="7NjMrjyGrthS"
# __How to use output hidden states?__
# + id="JItae26Mg5DF"
config = ppb.DistilBertConfig.from_pretrained(pretrained_weights, output_hidden_states=True)
model = model_class.from_pretrained(pretrained_weights, config=config).to(device)
# + id="rMvwzBKaq0ys" outputId="15b58967-69aa-4166-e787-78bffd103d4d" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["555fc833379d430c94e3a17f873fe3ab", "09ed34eb269d413ebf50d1564441178c", "<KEY>", "<KEY>", "<KEY>", "8cb84f4f9fde4b76a774801e28160e69", "537972550dc144beaf9adb5b46fafbb4", "8de54719c4d04663b88c259129ac5dbc"]}
output = []
batch_size = 16
for idx in tqdm.tnrange(0, 16, batch_size):
batch = torch.tensor(padded_train[idx:idx+batch_size]).to(device)
local_attention_mask = torch.tensor(attention_mask_train[idx:idx+batch_size]).to(device)
with torch.no_grad():
example = model(batch, attention_mask=local_attention_mask)
# + id="Hl-EDeF2rWH8" outputId="e50030b4-b80c-4498-c791-814711037147" colab={"base_uri": "https://localhost:8080/", "height": 34}
# The final hidden state equals the last element of the returned hidden-states tuple
torch.allclose(example[0], example[1][6])
# + id="zxmfzJlcrZ7b"
| sequence-classification/a-visual-notebook-to-using-bert-for-the-first-time.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="aokMFYco5Dki"
import random
import math
import time
import pandas as pd
import numpy as np
import torch
import torch.utils.data as data
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F
import torch.optim as optim
# + id="wvHo-7O-5Dkl"
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
# + id="1lKxvOS65Dko"
from utils.dataloader import make_datapath_list, DataTransform, VOCDataset
rootpath = "./data/VOCdevkit/VOC2012/"
train_img_list, train_anno_list, val_img_list, val_anno_list = make_datapath_list(
rootpath=rootpath)
color_mean = (0.485, 0.456, 0.406)
color_std = (0.229, 0.224, 0.225)
train_dataset = VOCDataset(train_img_list, train_anno_list, phase="train", transform=DataTransform(
input_size=475, color_mean=color_mean, color_std=color_std))
val_dataset = VOCDataset(val_img_list, val_anno_list, phase="val", transform=DataTransform(
input_size=475, color_mean=color_mean, color_std=color_std))
batch_size = 16
train_dataloader = data.DataLoader(
train_dataset, batch_size=batch_size, shuffle=True)
val_dataloader = data.DataLoader(
val_dataset, batch_size=batch_size, shuffle=False)
dataloaders_dict = {"train": train_dataloader, "val": val_dataloader}
# + id="h8AkC5NH5Dkr" outputId="9d661cec-bbc4-455c-ff02-9e6639385f11"
from utils.pspnet import PSPNet
net = PSPNet(n_classes=150)
state_dict = torch.load("./weights/pspnet50_ADE20K.pth")
net.load_state_dict(state_dict)
# Replace the classification convolution layer so it outputs 21 classes.
n_classes = 21
net.decode_feature.classification = nn.Conv2d(
in_channels=512, out_channels=n_classes, kernel_size=1, stride=1, padding=0)
net.aux.classification = nn.Conv2d(
in_channels=256, out_channels=n_classes, kernel_size=1, stride=1, padding=0)
# Initialize the replaced convolution layers; since the activation function is a sigmoid, Xavier initialization is used.
def weights_init(m):
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight.data)
if m.bias is not None:
nn.init.constant_(m.bias, 0.0)
net.decode_feature.classification.apply(weights_init)
net.aux.classification.apply(weights_init)
# + id="gXuKq8VP5Dku" outputId="7257e247-3e56-4f49-e72c-8ceb68357433"
net
# + id="C0Y6_cZh5Dkx"
class PSPLoss(nn.Module):
def __init__(self, aux_weight=0.4):
super(PSPLoss, self).__init__()
self.aux_weight = aux_weight
def forward(self, outputs, targets):
loss = F.cross_entropy(outputs[0], targets, reduction='mean')
loss_aux = F.cross_entropy(outputs[1], targets, reduction='mean')
return loss+self.aux_weight*loss_aux
criterion = PSPLoss(aux_weight=0.4)
# + id="1HQuH4465Dk0"
optimizer = optim.SGD([
{'params': net.feature_conv.parameters(), 'lr': 1e-3},
{'params': net.feature_res_1.parameters(), 'lr': 1e-3},
{'params': net.feature_res_2.parameters(), 'lr': 1e-3},
{'params': net.feature_dilated_res_1.parameters(), 'lr': 1e-3},
{'params': net.feature_dilated_res_2.parameters(), 'lr': 1e-3},
{'params': net.pyramid_pooling.parameters(), 'lr': 1e-3},
{'params': net.decode_feature.parameters(), 'lr': 1e-2},
{'params': net.aux.parameters(), 'lr': 1e-2},
], momentum=0.9, weight_decay=0.0001)
def lambda_epoch(epoch):
max_epoch = 30
return math.pow((1-epoch/max_epoch), 0.9)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_epoch)
# + id="QDo8kP3r5Dk3"
def train_model(net, dataloaders_dict, criterion, scheduler, optimizer, num_epochs):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
net.to(device)
torch.backends.cudnn.benchmark = True
num_train_imgs = len(dataloaders_dict["train"].dataset)
num_val_imgs = len(dataloaders_dict["val"].dataset)
batch_size = dataloaders_dict["train"].batch_size
    # set up the iteration counter
iteration = 1
logs = []
# multiple minibatch
batch_multiplier = 3
for epoch in range(num_epochs):
t_epoch_start = time.time()
t_iter_start = time.time()
epoch_train_loss = 0.0
epoch_val_loss = 0.0
print('-------------')
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-------------')
        # training and validation loop for each epoch
for phase in ['train', 'val']:
if phase == 'train':
                net.train()  # set the model to training mode
                scheduler.step()  # update the learning-rate scheduler
optimizer.zero_grad()
print('(train)')
else:
if((epoch+1) % 5 == 0):
                    net.eval()  # set the model to evaluation mode
print('-------------')
print('(val)')
else:
continue
count = 0 # multiple minibatch
for imges, anno_class_imges in dataloaders_dict[phase]:
                # skip minibatches of size 1, which would raise an error in batch normalization
if imges.size()[0] == 1:
continue
imges = imges.to(device)
anno_class_imges = anno_class_imges.to(device)
                # update parameters once per group of accumulated minibatches
if (phase == 'train') and (count == 0):
optimizer.step()
optimizer.zero_grad()
count = batch_multiplier
                # forward pass
with torch.set_grad_enabled(phase == 'train'):
outputs = net(imges)
loss = criterion(
outputs, anno_class_imges.long()) / batch_multiplier
                    # backpropagate during training
if phase == 'train':
                        loss.backward()  # compute gradients
count -= 1 # multiple minibatch
if (iteration % 10 == 0):
t_iter_finish = time.time()
duration = t_iter_finish - t_iter_start
print('{} || Loss: {:.4f} || 10iter: {:.4f} sec.'.format(
iteration, loss.item()/batch_size*batch_multiplier, duration))
t_iter_start = time.time()
epoch_train_loss += loss.item() * batch_multiplier
iteration += 1
else:
epoch_val_loss += loss.item() * batch_multiplier
t_epoch_finish = time.time()
print('-------------')
print('epoch {} || Epoch_TRAIN_Loss:{:.4f} ||Epoch_VAL_Loss:{:.4f}'.format(
epoch+1, epoch_train_loss/num_train_imgs, epoch_val_loss/num_val_imgs))
print('timer: {:.4f} sec.'.format(t_epoch_finish - t_epoch_start))
t_epoch_start = time.time()
log_epoch = {'epoch': epoch+1, 'train_loss': epoch_train_loss /
num_train_imgs, 'val_loss': epoch_val_loss/num_val_imgs}
logs.append(log_epoch)
df = pd.DataFrame(logs)
df.to_csv("log_output.csv")
torch.save(net.state_dict(), 'weights/pspnet50_' +
str(epoch+1) + '.pth')
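The `batch_multiplier` bookkeeping inside `train_model` implements gradient accumulation: losses from several minibatches are backpropagated before a single optimizer step, emulating a batch three times larger than what fits in memory. A minimal standalone sketch of the same pattern (the linear model and random data here are placeholders, not part of the PSPNet setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)                    # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch_multiplier = 3                       # accumulate over 3 minibatches
n_updates = 0

opt.zero_grad()
for step in range(6):
    x = torch.randn(8, 4)
    y = torch.randint(0, 2, (8,))
    # dividing by batch_multiplier keeps the summed gradient at minibatch scale
    loss = nn.functional.cross_entropy(model(x), y) / batch_multiplier
    loss.backward()                        # gradients accumulate in .grad
    if (step + 1) % batch_multiplier == 0:
        opt.step()                         # one update per 3 minibatches
        opt.zero_grad()
        n_updates += 1
print(n_updates)  # 2
```

Scaling the loss by `1 / batch_multiplier` before `backward()` is what makes the accumulated gradient equivalent to averaging over the larger effective batch.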
# + id="1XEy-KBb5Dk5" outputId="9f1b844f-178f-45f0-80fb-b992b8b452dc"
num_epochs = 30
train_model(net, dataloaders_dict, criterion, scheduler, optimizer, num_epochs=num_epochs)
| 3_semantic_segmentation/3-7_PSPNet_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Compute HCQTs from the audio files for each dataset in the test set.
#
# *Running this requires a folder containing all the audio files from all test datasets. Not present in this github repository :(*
import sox
import mir_eval
import os
import json
import numpy as np
import glob
import shutil
import medleydb as mdb
# cd ../deepsalience/
import compute_training_data as C
import glob
import os
import numpy as np
def compute_test_inputs(audio_files, test_dir):
for f in audio_files:
print(f)
prefix = os.path.basename(f).split('.')[0]
save_path = os.path.join(test_dir, "{}_input.npy".format(prefix))
if os.path.exists(save_path):
print(" > already done!")
continue
hcqt = C.compute_hcqt(f)
np.save(save_path, hcqt.astype(np.float32))
print(" > done!")
# # Melody Datasets
# +
# MedleyDB Melody
mdb_mel_path = "../../multif0_ismir2017/multitask_test_data/medleydb_melody"
with open("../outputs/data_splits.json", 'r') as fhandle:
splits = json.load(fhandle)
test_tracks = splits['test']
# # copy relevant files from medleydb folder to this folder
for trackid in test_tracks:
mtrack = mdb.MultiTrack(trackid)
current_audio_path = mtrack.mix_path
target_audio_path = os.path.join(mdb_mel_path, os.path.basename(current_audio_path))
melody_file = mtrack.melody2_fpath
target_mel_file = os.path.join(mdb_mel_path, os.path.basename(melody_file))
if not os.path.exists(target_audio_path) and os.path.exists(melody_file):
print(trackid)
shutil.copyfile(current_audio_path, target_audio_path)
if not os.path.exists(target_mel_file):
shutil.copyfile(melody_file, target_mel_file)
# compute npy features
audio_files = glob.glob(os.path.join(mdb_mel_path, '*.wav'))
compute_test_inputs(audio_files, mdb_mel_path)
# -
# Orchset
orchset_path = "../../multif0_ismir2017/multitask_test_data/orchset"
audio_files = glob.glob(os.path.join(orchset_path, '*.wav'))
compute_test_inputs(audio_files, orchset_path)
# WJD - melody
wjd_mel_path = '../../multif0_ismir2017/multitask_test_data/weimar_jazz/melody/'
audio_files = glob.glob(os.path.join(wjd_mel_path, '*.wav'))
compute_test_inputs(audio_files, wjd_mel_path)
# # Bass Dataset
# WJD - bass
wjd_bass_path = '../../multif0_ismir2017/multitask_test_data/weimar_jazz/bass/'
audio_files = glob.glob(os.path.join(wjd_bass_path, '*.wav'))
compute_test_inputs(audio_files, wjd_bass_path)
# # Multif0 Datasets
# Bach10
bach10_path = '../../multif0_ismir2017/multitask_test_data/bach10'
audio_files = glob.glob(os.path.join(bach10_path, '*.wav'))
compute_test_inputs(audio_files, bach10_path)
# Su
su_path = '../../multif0_ismir2017/multitask_test_data/su'
audio_files = glob.glob(os.path.join(su_path, '*.wav'))
compute_test_inputs(audio_files, su_path)
# MAPS
maps_path = '../../multif0_ismir2017/multitask_test_data/maps'
audio_files = glob.glob(os.path.join(maps_path, '*.wav'))
compute_test_inputs(audio_files, maps_path)
# +
import mir_eval
import csv
import matplotlib.pyplot as plt
# %matplotlib inline
def time_freq_to_ragged_time_series(times, freqs):
max_time = np.max(times)
t_uniform = np.arange(0, max_time, 256./22050.)
time_idx = np.digitize(times, t_uniform) - 1
idx = time_idx < len(t_uniform)
time_idx = time_idx[idx]
freq_list = [[] for _ in t_uniform]
for i, f in zip(time_idx, freqs):
freq_list[i].append(f)
freq_arrays = [np.array(lst) for lst in freq_list]
return t_uniform, freq_arrays
maps_path = '../../multif0_ismir2017/multitask_test_data/maps/orig_txt_files/'
save_path = '../../multif0_ismir2017/multitask_test_data/maps'
txt_files = glob.glob(os.path.join(maps_path, '*.txt'))
step = 256./22050.
for txt_file in txt_files:
intervals = []
labels = []
with open(txt_file, 'r') as fhandle:
print(txt_file)
reader = csv.reader(fhandle, delimiter='\t')
first_line = True
for line in reader:
if first_line:
first_line = False
continue
intervals.append([float(line[0]), float(line[1])])
labels.append(int(line[2]))
hz_labels = mir_eval.util.midi_to_hz(np.array(labels))
times_list = []
freqs_list = []
for interval, hz_val in zip(intervals, hz_labels):
interval_extended = np.arange(
interval[0], interval[1] + step, step)
labels_extended = hz_val * np.ones(interval_extended.shape)
times_list.append(interval_extended)
freqs_list.append(labels_extended)
times = np.concatenate(times_list)
freqs = np.concatenate(freqs_list)
t_uniform, freq_arrays = time_freq_to_ragged_time_series(times, freqs)
save_file = os.path.join(save_path, os.path.basename(txt_file))
with open(save_file, 'w') as fhandle:
writer = csv.writer(fhandle, delimiter='\t')
for t, f_array in zip(t_uniform, freq_arrays):
line = np.concatenate([np.array([t]), f_array]).tolist()
writer.writerow(line)
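A tiny worked example of the `time_freq_to_ragged_time_series` helper above (copied standalone here): annotations at arbitrary times are binned onto a uniform grid with hop `256/22050` seconds, and frequencies falling into the same frame are collected into one ragged row.

```python
import numpy as np

# Standalone copy of the helper defined above.
def time_freq_to_ragged_time_series(times, freqs):
    max_time = np.max(times)
    t_uniform = np.arange(0, max_time, 256. / 22050.)
    time_idx = np.digitize(times, t_uniform) - 1
    idx = time_idx < len(t_uniform)
    time_idx = time_idx[idx]
    freq_list = [[] for _ in t_uniform]
    for i, f in zip(time_idx, freqs):
        freq_list[i].append(f)
    return t_uniform, [np.array(lst) for lst in freq_list]

# Three annotations at 10 ms spacing collapse onto a ~11.6 ms uniform grid:
t, f = time_freq_to_ragged_time_series(np.array([0.0, 0.01, 0.02]),
                                       np.array([100., 200., 300.]))
print(len(t))                    # 2 frames
print([a.tolist() for a in f])   # [[100.0, 200.0], [300.0]]
```

The first two annotations land in frame 0 and the third in frame 1, so each output frame carries a variable-length frequency array.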
# +
mdb_synth_txt_files = glob.glob(
"../../multif0_ismir2017/multitask_test_data/medleydb_multif0/orig_text_files/*.txt")
for txt_file in mdb_synth_txt_files:
print(txt_file)
times, freqs = mir_eval.io.load_time_series(txt_file, delimiter='\t')
t_uniform, freq_array = time_freq_to_ragged_time_series(times, freqs)
save_file = os.path.join(
"../../multif0_ismir2017/multitask_test_data/medleydb_multif0",
os.path.basename(txt_file))
with open(save_file, 'w') as fhandle:
writer = csv.writer(fhandle, delimiter='\t')
for t, f_array in zip(t_uniform, freq_array):
f_array_nonzero = f_array[f_array > 0]
line = np.concatenate([np.array([t]), f_array_nonzero]).tolist()
writer.writerow(line)
# -
data_path = "../../multif0_ismir2017/multitask_test_data/medleydb_multif0/"
trackid = "CelestialShore_DieForUs"
X = np.load(
os.path.join(data_path,
"{}_MIX_complete_nosynth_input.npy".format(trackid)))
times, freqs = mir_eval.io.load_ragged_time_series(
os.path.join(data_path,
"{}_multif0_nosynth_annotation.txt".format(trackid)))
plt.figure(figsize=(15, 8))
plt.subplot(2, 1, 1)
plt.imshow(X[0], origin='lower')
plt.axis('auto')
plt.subplot(2, 1, 2)
texp = []
fexp = []
for t, flist in zip(times, freqs):
for f in flist:
if f > 0:
texp.append(t)
fexp.append(f)
plt.plot(texp, fexp, '.')
plt.show()
# +
# mir_eval.io.load_valued_intervals?
# +
# MedleyDB Multif0 (Synth)
mdb_mf0_path = "../../multif0_ismir2017/multitask_test_data/medleydb_multif0"
with open("../outputs/data_splits.json", 'r') as fhandle:
splits = json.load(fhandle)
test_tracks = splits['test']
# # copy relevant files from medleydb synth to this folder
mdb_synth_folder = "/scratch/rmb456/multif0_ismir2017/multitask_data"
for trackid in test_tracks:
current_audio_path = os.path.join(
mdb_synth_folder, "{}_MIX_complete_nosynth.wav".format(trackid))
target_audio_path = os.path.join(mdb_mf0_path, os.path.basename(current_audio_path))
if not os.path.exists(current_audio_path):
print("{} not found".format(current_audio_path))
continue
multif0_file = os.path.join(
mdb_synth_folder, "{}_multif0_nosynth_annotation.txt".format(trackid))
target_mf0_file = os.path.join(mdb_mf0_path, os.path.basename(multif0_file))
if not os.path.exists(target_audio_path) and os.path.exists(multif0_file):
print(trackid)
shutil.copyfile(current_audio_path, target_audio_path)
if not os.path.exists(target_mf0_file):
shutil.copyfile(multif0_file, target_mf0_file)
# compute npy features
audio_files = glob.glob(os.path.join(mdb_mf0_path, '*.wav'))
compute_test_inputs(audio_files, mdb_mf0_path)
# -
# # Vocal Datasets
# iKala
ikala_path = '../../multif0_ismir2017/multitask_test_data/ikala'
audio_files = glob.glob(os.path.join(ikala_path, '*.wav'))
compute_test_inputs(audio_files, ikala_path)
| notebooks/2 - Build Multitask Test Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import math
from matplotlib.colors import LogNorm
from matplotlib import cm
df=pd.read_csv('/Users/jeff/simcore_files/reduced_pf0.2/condensed_results'
'/soft_pf0.2_sp010_lp100_condensed_filament.polar_order',
skiprows=2,
delim_whitespace=True,
header=None)
data = df.replace(0,1)
data=data/data.sum().sum()
min_data = data.min().min()
if (min_data == 0):
min_data = 1
max_data = data.max().max()
log_norm = LogNorm(vmin=min_data, vmax=max_data)
cbar_ticks = [math.pow(10, i) for i in range(math.floor(math.log10(min_data)),
1+math.ceil(math.log10(max_data)))]
sns.heatmap(
data,
norm=log_norm,
cbar_kws={"ticks": cbar_ticks},
cmap=cm.viridis
)
data.min().min()
math.log10(max_data)
range(math.floor(math.log10(min_data)), 1+math.ceil(math.log10(max_data)))
df=pd.read_csv('./soft_pf0.4_sp025_lp020_reload009_reduced10_filament.polar_order',skiprows=2,delim_whitespace=True,header=None)
# +
data = df.replace(0,1)
data=data/data.sum().sum()
min_data = data.min().min()
if (min_data == 0):
min_data = 1
max_data = data.max().max()
log_norm = LogNorm(vmin=min_data, vmax=max_data)
cbar_ticks = [math.pow(10, i) for i in range(math.floor(math.log10(min_data)), 1+math.ceil(math.log10(max_data)))]
sns.heatmap(
data,
norm=log_norm,
cbar_kws={"ticks": cbar_ticks},
cmap=cm.jet
)
# -
df=pd.read_csv('./soft_pf0.4_sp015_lp020_reload010_reduced10_filament.polar_order',skiprows=2,delim_whitespace=True,header=None)
data = df.replace(0,1)
data=data/data.sum().sum()
min_data = data.min().min()
if (min_data == 0):
min_data = 1
max_data = data.max().max()
log_norm = LogNorm(vmin=min_data, vmax=max_data)
cbar_ticks = [math.pow(10, i) for i in range(math.floor(math.log10(min_data)), 1+math.ceil(math.log10(max_data)))]
sns.heatmap(
data,
norm=log_norm,
cbar_kws={"ticks": cbar_ticks},
cmap=cm.jet
)
df=pd.read_csv('soft_pf0.4_sp025_lp020_reload006_reduced10.overlaps',delim_whitespace=True)
df.n_overlaps.mean()
df=pd.read_csv('soft_pf0.4_sp015_lp020_reload006_reduced10.overlaps',delim_whitespace=True)
df.n_overlaps.mean()
data = data/(data.sum().sum())
data=data.drop(0,axis=1)
data
| notebooks/polarOrder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/handling-missing-values).**
#
# ---
#
# In this exercise, you'll apply what you learned in the **Handling missing values** tutorial.
#
# # Setup
#
# The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex1 import *
print("Setup Complete")
# # 1) Take a first look at the data
#
# Run the next code cell to load in the libraries and dataset you'll use to complete the exercise.
# +
# modules we'll use
import pandas as pd
import numpy as np
# read in all our data
sf_permits = pd.read_csv("../input/building-permit-applications-data/Building_Permits.csv")
# set seed for reproducibility
np.random.seed(0)
# -
# Use the code cell below to print the first five rows of the `sf_permits` DataFrame.
sf_permits.head()
# Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.
# Check your answer (Run this code cell to receive credit!)
q1.check()
# +
# Line below will give you a hint
#q1.hint()
# -
# # 2) How many missing data points do we have?
#
# What percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. (If 1/4 of the values in the dataset are missing, the answer is 25.)
# +
# get the number of missing data points per column
missing_values_count = sf_permits.isnull().sum()
# how many total missing values do we have?
total_cells = np.prod(sf_permits.shape)
total_missing = missing_values_count.sum()
# percent of data that is missing
percent_missing = (total_missing/total_cells) * 100
q2.check()
# +
# Lines below will give you a hint or solution code
#q2.hint()
#q2.solution()
# -
# # 3) Figure out why the data is missing
#
# Look at the columns **"Street Number Suffix"** and **"Zipcode"** from the [San Francisco Building Permits dataset](https://www.kaggle.com/aparnashastry/building-permit-applications-data). Both of these contain missing values.
# - Which, if either, are missing because they don't exist?
# - Which, if either, are missing because they weren't recorded?
#
# Once you have an answer, run the code cell below.
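One way to investigate, sketched on a toy stand-in frame (the real `sf_permits` data isn't reproduced here): compare how often each column is null and ask whether missingness is plausible given what the field means. Most addresses genuinely have no street-number suffix, while every address has a zipcode.

```python
import pandas as pd
import numpy as np

# Toy stand-in for sf_permits: illustrative values only.
toy = pd.DataFrame({
    "Street Number Suffix": [np.nan, "A", np.nan, np.nan],
    "Zipcode": [94103.0, np.nan, 94110.0, 94107.0],
})
null_share = toy.isnull().mean()
print(null_share)
# Street Number Suffix    0.75  -> mostly missing: likely "doesn't exist"
# Zipcode                 0.25  -> every address has one: likely "not recorded"
```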
# Check your answer (Run this code cell to receive credit!)
q3.check()
# +
# Line below will give you a hint
#q3.hint()
# -
# # 4) Drop missing values: rows
#
# If you removed all of the rows of `sf_permits` with missing values, how many rows are left?
#
# **Note**: Do not change the value of `sf_permits` when checking this.
# TODO: Your code here!
sf_permits.dropna()
# Once you have an answer, run the code cell below.
# Check your answer (Run this code cell to receive credit!)
q4.check()
# +
# Line below will give you a hint
#q4.hint()
# -
# # 5) Drop missing values: columns
#
# Now try removing all the columns with empty values.
# - Create a new DataFrame called `sf_permits_with_na_dropped` that has all of the columns with empty values removed.
# - How many columns were removed from the original `sf_permits` DataFrame? Use this number to set the value of the `dropped_columns` variable below.
# +
# TODO: Your code here
sf_permits_with_na_dropped = sf_permits.dropna(axis=1)
cols_in_original_dataset = sf_permits.shape[1]
cols_in_na_dropped = sf_permits_with_na_dropped.shape[1]
dropped_columns = cols_in_original_dataset - cols_in_na_dropped
# Check your answer
q5.check()
# +
# Lines below will give you a hint or solution code
#q5.hint()
#q5.solution()
# -
# # 6) Fill in missing values automatically
#
# Try replacing all the NaN's in the `sf_permits` data with the one that comes directly after it and then replacing any remaining NaN's with 0. Set the result to a new DataFrame `sf_permits_with_na_imputed`.
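To see what backfilling does before applying it to the full frame, here is a tiny sketch on a toy series (using `.bfill()`, the non-deprecated spelling of `fillna(method='bfill')`):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0, np.nan])
# each NaN takes the next valid value; trailing NaNs have no successor -> 0
filled = s.bfill().fillna(0)
print(filled.tolist())  # [1.0, 4.0, 4.0, 4.0, 0.0]
```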
# +
# TODO: Your code here
sf_permits_with_na_imputed = sf_permits.fillna(method='bfill', axis=0).fillna(0)
# Check your answer
q6.check()
# +
# Lines below will give you a hint or solution code
#q6.hint()
#q6.solution()
# -
# # More practice
#
# If you're looking for more practice handling missing values:
#
# * Check out [this noteboook](https://www.kaggle.com/alexisbcook/missing-values) on handling missing values using scikit-learn's imputer.
# * Look back at the "Zipcode" column in the `sf_permits` dataset, which has some missing values. How would you go about figuring out what the actual zipcode of each address should be? (You might try using another dataset. You can search for datasets about San Francisco on the [Datasets listing](https://www.kaggle.com/datasets).)
#
# # Keep going
#
# In the next lesson, learn how to [**apply scaling and normalization**](https://www.kaggle.com/alexisbcook/scaling-and-normalization) to transform your data.
# ---
#
#
#
#
# *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/172650) to chat with other Learners.*
| Data Cleaning/1.Handling Missing Values.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/python
import os
import pickle
import re
import sys
sys.path.append( "../tools/" )
from parse_out_email_text import parseOutText
"""
Starter code to process the emails from Sara and Chris to extract
the features and get the documents ready for classification.
The list of all the emails from Sara are in the from_sara list
likewise for emails from Chris (from_chris)
The actual documents are in the Enron email dataset, which
you downloaded/unpacked in Part 0 of the first mini-project. If you have
not obtained the Enron email corpus, run startup.py in the tools folder.
The data is stored in lists and packed away in pickle files at the end.
"""
from_sara = open("from_sara.txt", "r")
from_chris = open("from_chris.txt", "r")
from_data = []
word_data = []
### temp_counter is a way to speed up the development--there are
### thousands of emails from Sara and Chris, so running over all of them
### can take a long time
### temp_counter helps you only look at the first 200 emails in the list so you
### can iterate your modifications quicker
temp_counter = 0
for name, from_person in [("sara", from_sara), ("chris", from_chris)]:
for path in from_person:
### only look at first 200 emails when developing
### once everything is working, remove this line to run over full dataset
#temp_counter += 1
if temp_counter < 200:
path = os.path.join('..', path[:-1])
#print (path)
email = open(path, "r")
### use parseOutText to extract the text from the opened email
text = parseOutText(email)
            ### use str.replace() to remove any instances of the words
            ### ["sara", "shackleton", "chris", "germani"]
            for word in ["sara", "shackleton", "chris", "germani",
                         "sshacklensf", "cgermannsf"]:
                text = text.replace(word, "")
### append the text to word_data
word_data.append(text)
if name == "sara":
from_data.append(0)
elif name == "chris":
from_data.append(1)
### append a 0 to from_data if email is from Sara, and 1 if email is from Chris
email.close()
print ("emails processed")
print(word_data[152])
from_sara.close()
from_chris.close()
pickle.dump( word_data, open("your_word_data.pkl", "wb") )
pickle.dump( from_data, open("your_email_authors.pkl", "wb") )
### in Part 4, do TfIdf vectorization here
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer( smooth_idf = False,stop_words='english')
vectorizer.fit_transform(word_data)
words = vectorizer.get_feature_names()
#len(vectorizer.get_feature_names())
print(words[34597])
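With `smooth_idf=False`, scikit-learn weights each term by `idf(t) = ln(n/df(t)) + 1` before normalising. A hand computation on a toy corpus (my own made-up documents, not the Enron data) shows how common terms get the minimum weight:

```python
import math

docs = [["stock", "meeting", "stock"], ["stock", "price"]]
n = len(docs)

def idf(term):
    df = sum(term in doc for doc in docs)  # document frequency
    return math.log(n / df) + 1.0          # smooth_idf=False formula

# "stock" appears in both documents, so its idf is ln(2/2) + 1 = 1.0,
# while "price" (one document) gets the larger weight ln(2) + 1
```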
| text_learning/text_processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# +
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect = True)
# -
# We can view all of the classes that automap found
hawaii = Base.classes.keys()
hawaii
# Save references to each table
measurement = Base.classes.measurement
station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Exploratory Climate Analysis
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Perform a query to retrieve the data and precipitation scores
climate = session.query(measurement.date,measurement.prcp).\
filter(measurement.date.between(dt.date(2016,8,23),dt.date(2017,8,23))).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
results = pd.DataFrame(climate, columns=['date', 'prcp'])
results.set_index('date', inplace = True)
# Sort the dataframe by date
results = results.sort_index()
# Use Pandas Plotting with Matplotlib to plot the data
results.plot(rot=90)
plt.title("Exploratory Climate Analysis")
plt.xlabel("Date")
plt.ylabel("Precipitation (In)")
# -
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation = results.describe()
precipitation
# Design a query to show how many stations are available in this dataset?
num_stations = session.query(measurement.station).distinct().count()
num_stations
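The ORM call above compiles down to a `COUNT(DISTINCT ...)` query. A self-contained sqlite3 sketch with made-up rows shows the equivalent raw SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurement (station TEXT, tobs REAL)")
con.executemany(
    "INSERT INTO measurement VALUES (?, ?)",
    [("USC00519281", 70.0), ("USC00519281", 71.0), ("USC00519397", 68.0)],
)
# equivalent of session.query(measurement.station).distinct().count()
(num_stations,) = con.execute(
    "SELECT COUNT(DISTINCT station) FROM measurement"
).fetchone()
# num_stations == 2
```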
# What are the most active stations? (i.e. what stations have the most rows)?
active_stations = session.query(measurement.station, func.count(measurement.station))
# List the stations and the counts in descending order.
active_stations.group_by(measurement.station).order_by(func.count(measurement.station).desc()).all()
# +
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
most_active_temp = session.query(func.min(measurement.tobs),
func.max(measurement.tobs),
func.avg(measurement.tobs)).filter(measurement.station == 'USC00519281').all()
print(f"Most Active Station: USC00519281")
print(f"Min Temp: {most_active_temp[0][0]}")
print(f"Max Temp: {most_active_temp[0][1]}")
print(f"Avg Temp: {round(most_active_temp[0][2],1)}")
# -
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station
obvs = session.query(measurement.station, measurement.tobs).\
filter(measurement.station == 'USC00519281').\
filter(measurement.date.between(dt.date(2016,8,23),dt.date(2017,8,23))).all()
# Plot the results as a histogram
temp_histogram = pd.DataFrame(obvs, columns=['date','temperature'])
temp_histogram.set_index('date', inplace=True)
temp_histogram.plot.hist(bins = 12)
plt.xlabel("Temperature")
plt.show()
| climate_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Nodebooks: Introducing Node.js Data Science Notebooks
#
# Notebooks are where data scientists process, analyse, and visualise data in an iterative, collaborative environment. They typically run environments for languages like Python, R, and Scala. For years, data science notebooks have served academics and research scientists as a scratchpad for writing code, refining algorithms, and sharing and proving their work. Today, it's a workflow that lends itself well to web developers experimenting with data sets in Node.js.
#
# To that end, pixiedust_node is an add-on for Jupyter notebooks that allows Node.js/JavaScript to run inside notebook cells. Not only can web developers use the same workflow for collaborating in Node.js, but they can also use the same tools to work with existing data scientists coding in Python.
#
# pixiedust_node is built on the popular PixieDust helper library. Let’s get started.
#
# > Note: Run one cell at a time or unexpected results might be observed.
#
#
# ## Part 1: Variables, functions, and promises
#
#
# ### Installing
# Install the [`pixiedust`](https://pypi.python.org/pypi/pixiedust) and [`pixiedust_node`](https://pypi.python.org/pypi/pixiedust-node) packages using `pip`, the Python package manager.
# install or upgrade the packages
# restart the kernel to pick up the latest version
# !pip install pixiedust --upgrade
# !pip install pixiedust_node --upgrade
# ### Using pixiedust_node
# Now we can import `pixiedust_node` into our notebook:
import pixiedust_node
# And then we can write JavaScript code in cells whose first line is `%%node`:
# %%node
// get the current date
var date = new Date();
# It’s that easy! We can have Python and Node.js in the same notebook. Cells are Python by default, but simply starting a cell with `%%node` indicates that the next lines will be JavaScript.
# ### Displaying HTML and images in notebook cells
# We can use the `html` function to render HTML code in a cell:
# %%node
var str = '<h2>Quote</h2><blockquote cite="https://www.quora.com/Albert-Einstein-reportedly-said-The-true-sign-of-intelligence-is-not-knowledge-but-imagination-What-did-he-mean">"Imagination is more important than knowledge"\n<NAME></blockquote>';
html(str)
# If we have an image we want to render, we can do that with the `image` function:
# %%node
var url = 'https://github.com/IBM/nodejs-in-notebooks/blob/master/notebooks/images/pixiedust_node_schematic.png?raw=true';
image(url);
# ### Printing JavaScript variables
#
# Print variables using `console.log`.
# %%node
var x = { a:1, b:'two', c: true };
console.log(x);
# Calling the `print` function within your JavaScript code is the same as calling `print` in your Python code.
# %%node
var y = { a:3, b:'four', c: false };
print(y);
# ### Visualizing data using PixieDust
# You can also use PixieDust’s `display` function to render data graphically. Configured as a line chart, the visualization looks as follows:
# + pixiedust={"displayParams": {"aggregation": "SUM", "chartsize": "99", "handlerId": "lineChart", "keyFields": "x", "rowCount": "500", "valueFields": "cos,sin"}}
# %gui tk
# + pixiedust={"displayParams": {"aggregation": "SUM", "chartsize": "99", "handlerId": "lineChart", "keyFields": "x", "rowCount": "500", "valueFields": "cos,sin"}}
# %%node
var data = [];
for (var i = 0; i < 1000; i++) {
var x = 2*Math.PI * i/ 360;
var obj = {
x: x,
i: i,
sin: Math.sin(x),
cos: Math.cos(x),
tan: Math.tan(x)
};
data.push(obj);
}
// render data
display(data);
# -
# PixieDust presents visualisations of DataFrames using Matplotlib, Bokeh, Brunel, d3, Google Maps, and MapBox. No code is required on your part because PixieDust presents simple pull-down menus and a friendly point-and-click interface, allowing you to configure how the data is presented:
#
# <img src="https://github.com/IBM/nodejs-in-notebooks/blob/master/notebooks/images/pd_chart_types.png?raw=true"></img>
# ### Adding npm modules
# There are thousands of libraries and tools in the npm repository, Node.js’s package manager. It’s essential that we can install npm libraries and use them in our notebook code.
# Let’s say we want to make some HTTP calls to an external API service. We could deal with Node.js’s low-level HTTP library, or an easier option would be to use the ubiquitous `request` npm module.
# Once we have pixiedust_node set up, installing an npm module is as simple as running `npm.install` in a Python cell:
npm.install('request');
# Once installed, you may `require` the module in your JavaScript code:
# %%node
var request = require('request');
var r = {
method:'GET',
url: 'http://api.open-notify.org/iss-now.json',
json: true
};
request(r, function(err, req, body) {
console.log(body);
});
# As an HTTP request is an asynchronous action, the `request` library calls our callback function when the operation has completed. Inside that function, we can call print to render the data.
# We can organise our code into functions to encapsulate complexity and make it easier to reuse code. We can create a function to get the current position of the International Space Station in one notebook cell:
# %%node
var request = require('request');
var getPosition = function(callback) {
var r = {
method:'GET',
url: 'http://api.open-notify.org/iss-now.json',
json: true
};
request(r, function(err, req, body) {
var obj = null;
if (!err) {
obj = body.iss_position
obj.latitude = parseFloat(obj.latitude);
obj.longitude = parseFloat(obj.longitude);
obj.time = new Date().getTime();
}
callback(err, obj);
});
};
# And use it in another cell:
# %%node
getPosition(function(err, data) {
console.log(data);
});
# ### Promises
# If you prefer to work with JavaScript Promises when writing asynchronous code, then that’s okay too. Let’s rewrite our `getPosition` function to return a Promise. First we're going to install the `request-promise` module from npm:
npm.install( ('request', 'request-promise') )
# Notice how you can install multiple modules in a single call. Just pass in a Python `list` or `tuple`.
# Then we can refactor our function a little:
# %%node
var request = require('request-promise');
var getPosition = function(callback) {
var r = {
method:'GET',
url: 'http://api.open-notify.org/iss-now.json',
json: true
};
return request(r).then(function(body) {
var obj = null;
obj = body.iss_position;
obj.latitude = parseFloat(obj.latitude);
obj.longitude = parseFloat(obj.longitude);
obj.time = new Date().getTime();
return obj;
});
};
# And call it in the Promises style:
# %%node
getPosition().then(function(data) {
console.log(data);
}).catch(function(err) {
console.error(err);
});
# Or call it in a more compact form:
# %%node
getPosition().then(console.log).catch(console.error);
# In the next part of this notebook we'll illustrate how you can access local and remote data sources from within the notebook.
# ***
# # Part 2: Working with data sources
#
# You can access any data source using your favorite public or home-grown packages. In the second part of this notebook you'll learn how to retrieve data from an Apache CouchDB (or Cloudant) database and visualize it using PixieDust or third-party libraries.
#
# ## Accessing Cloudant data sources
#
#
# To access data stored in an Apache CouchDB or Cloudant database, we can use the [`cloudant-quickstart`](https://www.npmjs.com/package/cloudant-quickstart) npm module:
npm.install('cloudant-quickstart')
# With our Cloudant URL, we can start exploring the data in Node.js. First we make a connection to the remote Cloudant database:
# %%node
// connect to Cloudant using cloudant-quickstart
const cqs = require('cloudant-quickstart');
const cities = cqs('https://56953ed8-3fba-4f7e-824e-5498c8e1d18e-bluemix.cloudant.com/cities');
# > For this code pattern example a remote database has been pre-configured to accept anonymous connection requests. If you wish to explore the `cloudant-quickstart` library beyond what is covered in this nodebook, we recommend you create your own replica and replace above URL with your own, e.g. `https://myid:mypassword@mycloudanthost/mydatabase`.
#
# Now we have an object named `cities` that we can use to access the database.
#
# ### Exploring the data using Node.js in a notebook
#
# We can retrieve all documents using `all`.
# %%node
// If no limit is specified, 100 documents will be returned
cities.all({limit:3}).then(console.log).catch(console.error)
# Specifying the optional `limit` and `skip` parameters we can paginate through the document list:
#
# ```
# cities.all({limit:10}).then(console.log).catch(console.error)
# cities.all({skip:10, limit:10}).then(console.log).catch(console.error)
# ```
# If we know the IDs of documents, we can retrieve them singly:
# %%node
cities.get('2636749').then(console.log).catch(console.error);
# Or in bulk:
# %%node
cities.get(['5913490', '4140963','3520274']).then(console.log).catch(console.error);
# Instead of just calling `print` to output the JSON, we can bring PixieDust's `display` function to bear by passing it an array of data to visualize. Using Mapbox as the renderer and satellite as the basemap, we can display the location and population of the selected cities:
# + pixiedust={"displayParams": {"basemap": "satellite-v9", "chartsize": "76", "coloropacity": "53", "colorrampname": "Orange to Purple", "handlerId": "mapView", "keyFields": "latitude,longitude", "kind": "simple-cluster", "legend": "false", "mapboxtoken": "<KEY>", "rendererId": "mapbox", "rowCount": "500", "valueFields": "population,name"}}
# %%node
cities.get(['5913490', '4140963','3520274']).then(display).catch(console.error);
# -
# We can also query a subset of the data using the `query` function, passing it a [Cloudant Query](https://cloud.ibm.com/docs/services/Cloudant/api/cloudant_query.html#query) statement. Using Mapbox as the renderer, the customizable output looks as follows:
# + pixiedust={"displayParams": {"basemap": "outdoors-v9", "colorrampname": "Yellow to Blue", "handlerId": "mapView", "keyFields": "latitude,longitude", "mapboxtoken": "<KEY>", "rowCount": "500", "valueFields": "name,population"}}
# %%node
// fetch cities in UK above latitude 54 degrees north
cities.query({country:'GB', latitude: { "$gt": 54}}).then(display).catch(console.error);
# -
# ### Aggregating data
# The `cloudant-quickstart` library also allows aggregations (sum, count, stats) to be performed in the Cloudant database.
# Let’s calculate the sum of the population field:
# %%node
cities.sum('population').then(console.log).catch(console.error);
# Or compute the sum of the `population`, grouped by the `country` field, displaying 10 countries with the largest population:
# + pixiedust={"displayParams": {"aggregation": "SUM", "handlerId": "barChart", "keyFields": "name", "mapboxtoken": "<KEY>", "orientation": "vertical", "rendererId": "google", "rowCount": "100", "sortby": "Values DESC", "valueFields": "population"}}
# %%node
// helper function
function top10(data) {
// convert input data structure to array
var pop_array = [];
Object.keys(data).forEach(function(n,k) {
pop_array.push({name: n, population: data[n]});
});
// sort array by population in descending order
pop_array.sort(function(a,b) {
return b.population - a.population;
});
// display top 10 entries
pop_array.slice(0,10).forEach(function(e) {
console.log(e.name + ' ' + e.population.toLocaleString());
});
}
// fetch aggregated data and invoke helper routine
cities.sum('population','country').then(top10).catch(console.error);
# -
# The `cloudant-quickstart` package is just one of several Node.js libraries that you can use to access Apache CouchDB or Cloudant. Follow [this link](https://medium.com/ibm-watson-data-lab/choosing-a-cloudant-library-d14c06f3d714) to learn more about your options.
# ### Visualizing data using custom charts
#
# If you prefer, you can also use third-party Node.js charting packages to visualize your data, such as [`quiche`](https://www.npmjs.com/package/quiche).
npm.install('quiche');
# +
# %%node
var Quiche = require('quiche');
var pie = new Quiche('pie');
// fetch cities in UK
cities.query({name: 'Cambridge'}).then(function(data) {
var colors = ['ff00ff','0055ff', 'ff0000', 'ffff00', '00ff00','0000ff'];
for(i in data) {
var city = data[i];
pie.addData(city.population, city.name + '(' + city.country +')', colors[i]);
}
var imageUrl = pie.getUrl(true);
image(imageUrl);
});
# -
# ***
# # Part 3: Sharing data between Python and Node.js cells
#
# You can share variables between Python and Node.js cells. Why would you want to do that? Read on.
#
# The Node.js library ecosystem is extensive. Perhaps you need to fetch data from a database and prefer the syntax of a particular Node.js npm module. You can use Node.js to fetch the data, move it to the Python environment, and convert it into a Pandas or Spark DataFrame for aggregation, analysis and visualisation.
#
# PixieDust and pixiedust_node give you the flexibility to mix and match Python and Node.js code to suit the workflow you are building and the skill sets you have in your team.
#
# Mixing Node.js and Python code in the same notebook is a great way to integrate the work of your software development and data science teams to produce a collaborative report or dashboard.
#
#
# ### Sharing data
#
# Define variables in a Python cell.
# define a couple variables in Python
a = 'Hello from Python!'
b = 2
c = False
d = {'x':1, 'y':2}
e = 3.142
f = [{'a':1}, {'a':2}, {'a':3}]
# Access or modify their values in Node.js cells.
# +
# %%node
// print variable values
console.log(a, b, c, d, e, f);
// change variable value
a = 'Hello from Node.js!';
// define a new variable
var g = 'Yes, it works both ways.';
# -
# Inspect the manipulated data.
# display modified variable and the new variable
print('{} {}'.format(a,g))
# **Note:** PixieDust natively supports [data sharing between Python and Scala](https://ibm-watson-data-lab.github.io/pixiedust/scalabridge.html), extending the loop for some data types:
# ```
# # %%scala
# println(a,b,c,d,e,f,g)
#
# (Hello from Node.js!,2,null,null,null,null,Yes, it works both ways.)
# ```
# ### Sharing data from an asynchronous callback
#
# If you wish to transfer data from Node.js to Python from an asynchronous callback, make sure you write the data to a global variable.
#
# Load a csv file from a GitHub repository.
# +
# %%node
// global variable
var sample_csv_data = '';
// load csv file from GitHub and store data in the global variable
request.get('https://github.com/ibm-watson-data-lab/open-data/raw/master/cars/cars.csv').then(function(data) {
sample_csv_data = data;
console.log('Fetched sample data from GitHub.');
});
# -
# Create a Pandas DataFrame from the downloaded data.
# + pixiedust={"displayParams": {}}
import pandas as pd
import io
# create DataFrame from shared csv data
pandas_df = pd.read_csv(io.StringIO(sample_csv_data))
# display first five rows
pandas_df.head(5)
# -
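The same `StringIO` pattern works for any CSV text held in a Python string; a standalone sketch with made-up rows:

```python
import io
import pandas as pd

csv_text = "name,mpg\ncivic,32\naccord,29\n"  # hypothetical data
df = pd.read_csv(io.StringIO(csv_text))
# df has 2 rows and the columns ["name", "mpg"]
```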
# **Note**: The above example is for illustrative purposes only. A much easier solution is to use [PixieDust's sampleData method](https://ibm-watson-data-lab.github.io/pixiedust/loaddata.html#load-a-csv-using-its-url) if you want to create a DataFrame from a URL.
# #### References:
# * [Nodebooks: Introducing Node.js Data Science Notebooks](https://medium.com/ibm-watson-data-lab/nodebooks-node-js-data-science-notebooks-aa140bea21ba)
# * [Nodebooks: Sharing Data Between Node.js & Python](https://medium.com/ibm-watson-data-lab/nodebooks-sharing-data-between-node-js-python-3a4acae27a02)
# * [Sharing Variables Between Python & Node.js in Jupyter Notebooks](https://medium.com/ibm-watson-data-lab/sharing-variables-between-python-node-js-in-jupyter-notebooks-682a79d4bdd9)
| notebooks/Web APIs/DOM/nodebook_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import altair as alt
import numpy as np
import pandas as pd
import pickle
def results_dataframe(systems, fnames):
frames = []
for sys, fname in zip(systems, fnames):
with open(fname, 'rb') as fp:
scaling_dict = pickle.load(fp)
df = pd.DataFrame(scaling_dict)
df['system'] = sys
frames.append(df)
results = pd.concat(frames)
results['max_job_time'] = results['max_job_time'] / np.timedelta64(1,'s')
return results
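The division by `np.timedelta64(1, 's')` in `results_dataframe` converts a pandas timedelta column into float seconds; a minimal standalone sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series(pd.to_timedelta(["90s", "2min"]))
seconds = s / np.timedelta64(1, "s")  # timedelta64 -> float seconds
# seconds holds 90.0 and 120.0
```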
# +
sys_names = [
'pywren',
'cloudknot (default)',
'cloudknot (custom)',
]
nargs_fnames = [
'pywren_nargs_scaling.pkl',
'cloudknot_nargs_scaling_default_params.pkl',
'cloudknot_nargs_scaling.pkl',
]
syssize_fnames = [
'pywren_syssize_scaling.pkl',
'cloudknot_syssize_scaling_default_params.pkl',
'cloudknot_syssize_scaling.pkl',
]
nargs_scaling_results = results_dataframe(sys_names, nargs_fnames)
syssize_scaling_results = results_dataframe(sys_names, syssize_fnames)
# -
nargs_scaling_results
syssize_scaling_results
# +
lines = alt.Chart(nargs_scaling_results).mark_line().encode(
alt.X("npoints:Q", scale=alt.Scale(base=2, type="log"), axis=alt.Axis(title="Number of arguments (log scale)")),
alt.Y('max_job_time:Q', scale=alt.Scale(base=10, type="log"), axis=alt.Axis(title='Execution Time (s)')),
color=alt.Color("system", legend=alt.Legend(orient="top-left", title="System"))
)
lines = lines.configure_legend(fillColor='#ffffff', strokeColor='black', cornerRadius=5,
padding=5)
lines
# +
lines = alt.Chart(syssize_scaling_results).mark_line().encode(
alt.X("side_len:Q", axis=alt.Axis(title='Side Length')),
alt.Y('max_job_time:Q', scale=alt.Scale(base=10, type="log"), axis=alt.Axis(title='Execution Time (s)')),
color=alt.Color("system", legend=alt.Legend(orient="bottom-right", title="System"))
)
data2 = pd.DataFrame([{"ThresholdValue": 125, "Threshold": "cloudknot only"}])
data3 = pd.DataFrame([{"start": 125, "end": 180,}])
rule = alt.Chart(data2).mark_rule().encode(
x='ThresholdValue:Q'
)
text = alt.Chart(data2).mark_text(
align='left', baseline='middle',
dx=261, dy=10
).encode(
alt.Y('ThresholdValue:Q'),
text=alt.value('cloudknot only')
)
rect = alt.Chart(data3).mark_rect(color="#807f80", opacity=0.6).encode(
x="start:Q",
x2="end:Q",
)
chart = rect + rule + lines + text
chart = chart.configure_legend(fillColor='#ffffff', strokeColor='black', cornerRadius=5,
padding=5)
chart
# -
| examples/scipy-2018-paper-examples/compare_heateq_results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python
# coding: utf8
from __future__ import unicode_literals, print_function
import plac
import random
from pathlib import Path
import spacy
from spacy.util import minibatch, compounding
from tqdm.auto import tqdm
# -
from spacy.gold import GoldCorpus
CFG = {'device': 0, 'cpu_count': 4}
TESTS = False
spacy.require_gpu()
g=GoldCorpus('../../data/UD_Russian-SynTagRus/ru_syntagrus-ud-train.json',
'../../data/UD_Russian-SynTagRus/ru_syntagrus-ud-test.json')
g.limit=None
# +
# from importlib import reload
# import utils.corpus
# reload(utils.corpus)
# -
from utils.corpus import Corpus, tag_morphology
SynTagRus = Corpus.from_gold('ru', g)
display(sorted(SynTagRus.ds_train.pos)[:5])
display(sorted(SynTagRus.ds_train.dep)[:5])
display(sorted(SynTagRus.ds_train.ner)[:5])
import pandas
nlp = spacy.blank('ru')
for d, gp in tqdm(SynTagRus.ds_train.iter(nlp, limit=2)):
print(gp.is_projective)
pos_list = [tag_morphology(x)['POS'] for x in gp.tags]
display(pandas.DataFrame(zip(d, gp.heads, gp.labels, pos_list, gp.tags, gp.ner)))
break
for x in dir(gp):
if not '__' in x and not callable(getattr(gp, x)):
print(x, ':', getattr(gp, x))
| notebooks/corpora/load_syntagrus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="CIrc-n1lqkaz" outputId="7a655804-ebf7-4f59-9510-abc3de872fe6" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="hw-EtdUNHXPq"
# Move into the lip_reading folder
# + id="YkkSYEhIsSTM" outputId="5ddf5fb6-27fa-4a1f-b134-c59c765fdbba" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd /content/drive/'My Drive'/lip_reading
# + [markdown] id="eeCEYvCwDm_P"
# import
# + id="QZH5IMQnBDCA" outputId="2725861f-d9c1-4a54-c6de-38c0bd969c2c" colab={"base_uri": "https://localhost:8080/", "height": 34}
import numpy as np
import glob
import time
import os.path as osp
from PIL import Image
import torch
from torch import nn, optim
from torchvision import transforms
from sklearn.model_selection import train_test_split
from torch.nn.modules import TransformerEncoder, TransformerEncoderLayer
from torchvision.models import MobileNetV2
from tqdm import tqdm
torch.cuda.is_available()
# + [markdown] id="TbGokNlUI_3w"
# Choose which model to train
# + id="Z8DkuIWUI9kA"
LEARN_NETWORK = "Transformer"
# LEARN_NETWORK = "LSTM"
# + [markdown] id="osygTkYsC33Q"
# Model definitions
# + id="5Wo1K1zmBdy8"
class MobileNetV2NotClassify(MobileNetV2):
def __init__(self):
super().__init__()
def _forward_impl(self, x):
x = self.features(x)
# Cannot use "squeeze" as batch-size can be 1 => must use reshape with x.shape[0]
x = nn.functional.adaptive_avg_pool2d(x, 1).reshape(x.shape[0], -1)
return x
class Model_Transformer(nn.Module):
def __init__(self, device):
super().__init__()
self.device = device
self.mobilenet = MobileNetV2NotClassify()
encoder_layer = TransformerEncoderLayer(1280, 8)
self.encoder = TransformerEncoder(encoder_layer, 1)
self.last_dropout = nn.Dropout(0.2)
self.last_linear = nn.Linear(1280, 4)
print(self.last_linear.weight)
nn.init.kaiming_normal_(self.last_linear.weight)
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
x = self.each_apply(x)
x = self.encoder(x)
x = self.last_dropout(x[:, 0, :])
x = self.last_linear(x)
output = self.softmax(x)
return output
    def each_apply(self, x):
        # Apply MobileNet to each of the 20 frames independently
        result = torch.zeros(x.size()[0], 20, 1280).to(self.device)
        for i in range(20):
            result[:, i] = self.mobilenet(x[:, i])
        return result
class Model_LSTM(nn.Module):
def __init__(self, device):
super().__init__()
self.device = device
self.mobilenet = MobileNetV2NotClassify()
self.lstm = nn.LSTM(1280, 1280)
self.dropout = nn.Dropout(0.2)
self.linear = nn.Linear(1280, 4)
print(self.linear.weight)
nn.init.kaiming_normal_(self.linear.weight)
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
x = self.each_apply(x)
        _, (x, _) = self.lstm(x.permute(1, 0, 2))  # (batch, seq, feat) -> (seq, batch, feat)
x = self.dropout(x[0])
x = self.linear(x)
output = self.softmax(x)
return output
    def each_apply(self, x):
        # Apply MobileNet to each of the 20 frames independently
        result = torch.zeros(x.size()[0], 20, 1280).to(self.device)
        for i in range(20):
            result[:, i] = self.mobilenet(x[:, i])
        return result
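The comment in `MobileNetV2NotClassify._forward_impl` notes why `squeeze` is unsafe when the batch size is 1. A NumPy sketch of the shape difference (the same rule applies to torch tensors):

```python
import numpy as np

x = np.zeros((1, 1280, 1, 1))        # pooled features for a batch of 1
squeezed = x.squeeze()               # drops every size-1 axis, batch included
reshaped = x.reshape(x.shape[0], -1) # keeps the batch axis explicit
# squeezed.shape == (1280,) but reshaped.shape == (1, 1280)
```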
# + [markdown] id="-MFZdLjBDblH"
# Define the data loader, utility classes, and functions
# + id="plfrZABKC0tQ"
class ImageTransform():
def __init__(self, size, mean, std):
self.data_transform = transforms.Compose([
transforms.Resize(size),
transforms.ToTensor(),
transforms.Normalize(mean,std)
])
def __call__(self, data):
result = torch.Tensor(20, 3, 224, 224)
for i in range(0, 20):
if data[i, 0, 0, 0] == -1:
prev_img = self.data_transform(Image.fromarray(data[i - 1].astype(np.uint8)))
for j in range(i, 20):
result[j] = prev_img
return result
else:
result[i] = self.data_transform(Image.fromarray(data[i].astype(np.uint8)))
return result
class MyDataset(torch.utils.data.Dataset):
def __init__(self, file_list, transform=None):
self.file_list = file_list
self.transform = transform
def __len__(self):
return len(self.file_list)
def __getitem__(self, index):
img_path = self.file_list[index]
img = np.load(img_path)
img_transformed = self.transform(img)
label = int(img_path[10]) - 1
return img_transformed, label
def make_datapath_list():
rootpath = "./dataset/"
path_list = []
for i in range(1,5):
target_path = osp.join(rootpath + str(i) + '/*.npy')
for path in glob.glob(target_path):
path_list.append(path)
return train_test_split(np.array(path_list), train_size=0.8)
def train_model(net, device, dataloaders_dict, criterion, optimizer, num_epochs):
start = None
end = None
net.to(device)
torch.backends.cudnn.benchmark = True
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-----------------------------')
for phase in ['train', 'test']:
if phase == 'train':
start = time.time()
net.train()
else:
net.eval()
epoch_loss = 0.0
epoch_corrects = 0.0
for inputs, labels in tqdm(dataloaders_dict[phase]):
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
outputs = net(inputs)
loss = criterion(outputs, labels)
_, preds = torch.max(outputs, dim=1)
if phase == 'train':
loss.backward()
optimizer.step()
epoch_loss += loss.item() * inputs.size(0)
epoch_corrects += torch.sum(preds == labels.data)
if phase == 'train':
end = time.time()
                print("Epoch {}| {}s".format(epoch+1, end - start))
epoch_loss = epoch_loss / len(dataloaders_dict[phase].dataset)
epoch_acc = epoch_corrects.double() / len(dataloaders_dict[phase].dataset)
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
if LEARN_NETWORK == "Transformer":
save_path = './transformer_weights.pth'
else:
save_path = './lstm_weights.pth'
torch.save(net.state_dict(), save_path)
# + [markdown] id="PHR39Q2LDZm0"
# Execution
# + id="BKAi7ktwDY0x" outputId="9df8a84d-a32c-415b-ce7b-0f1f13a05203" colab={"base_uri": "https://localhost:8080/", "height": 799}
size = (224, 224)
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
train_path, test_path = make_datapath_list()
batch_size = 8
transform = ImageTransform(size, mean, std)
train_dataset = MyDataset(train_path, transform)
test_dataset = MyDataset(test_path, transform)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
dataloaders_dict = {"train": train_dataloader, "test": test_dataloader}
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("Device in use:", device)
if LEARN_NETWORK == "Transformer":
    net = Model_Transformer(device)  # Model using a Transformer
else:
    net = Model_LSTM(device)  # Model using an LSTM
optimizer = optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()
num_epochs = 20
train_model(net, device, dataloaders_dict, criterion, optimizer, num_epochs=num_epochs)
| transformer_model_learn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Aggregating and joining data
#
# This is the second introductory tutorial to Ibis. If you are new to Ibis, you may want to start
# with the first tutorial, _01-Introduction-to-Ibis_.
#
# In the first tutorial, we saw how to operate on the data of a table. We will work again with
# the `countries` table as we did previously.
# !curl -LsS -o $TEMPDIR/geography.db 'https://storage.googleapis.com/ibis-tutorial-data/geography.db'
# +
import os
import tempfile
import ibis
ibis.options.interactive = True
connection = ibis.sqlite.connect(
os.path.join(tempfile.gettempdir(), 'geography.db')
)
countries = connection.table('countries')
countries['name', 'continent', 'area_km2', 'population']
# -
# ## Expressions
#
# We will continue by exploring the data by continent. We will start by creating an expression
# with the continent names, since our table only contains the abbreviations.
#
# An expression is one or more operations performed over the data. They can be used to retrieve the
# data or to build more complex operations.
#
# In this case we will use a `case` conditional statement to replace values depending on a condition.
# Calling `case` returns a case builder, which must be followed by one or more `when` calls,
# optionally an `else_` call, and must end with a call to `end` to complete the full expression.
# The expression on which `case` is called (`countries['continent']` here) is compared against the
# first argument of each `when` call; when they match, the second argument of that `when` call is
# returned. If the value matches none of the `when` values, the value of `else_` is returned.
continent_name = (
countries['continent']
.case()
.when('NA', 'North America')
.when('SA', 'South America')
.when('EU', 'Europe')
.when('AF', 'Africa')
.when('AS', 'Asia')
.when('OC', 'Oceania')
    .when('AN', 'Antarctica')
.else_('Unknown continent')
.end()
.name('continent_name')
)
continent_name
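# The semantics of the `case`/`when`/`else_` chain mirror a plain Python dictionary lookup with a default value (an illustrative analogy only, not Ibis API):

```python
# Each when(value, result) pair acts like a dictionary entry;
# else_ supplies the default when no entry matches.
continent_names = {
    'NA': 'North America',
    'SA': 'South America',
    'EU': 'Europe',
    'AF': 'Africa',
    'AS': 'Asia',
    'OC': 'Oceania',
    'AN': 'Antarctica',
}

def to_continent_name(code):
    return continent_names.get(code, 'Unknown continent')

assert to_continent_name('EU') == 'Europe'
assert to_continent_name('XX') == 'Unknown continent'
```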
# What we did is take the values of the column `countries['continent']`, and we created a calculated
# column with the names of the continents, as defined in the `when` methods.
#
# This calculated column is an expression. The computations didn't happen when defining the `continent_name`
# variable, and the results are not stored. They have been computed when we printed its content.
#
# We can see that by checking the type of `continent_name`:
type(continent_name)
# In the next tutorial we will see more about eager and lazy mode, and when operations are being
# executed. For now we can think that the query to the database happens only when we want to see
# the results.
#
# The important part is that now we can use our `continent_name` expression in other expressions.
# For example, since this is a column (a `StringColumn` to be specific), we can use it as a column
# to query the countries table.
#
# Note that when we created the expression we added `.name('continent_name')` to it, so the column
# has a name when being returned.
countries['name', continent_name, 'area_km2', 'population']
# Just for illustration, let's repeat the same query, but renaming the expression to `continent`
# when using it in the list of columns to fetch.
countries['name', continent_name.name('continent'), 'area_km2', 'population']
# ## Aggregating data
#
# Now, let's group our data by continent, and let's find the total population of each.
countries.group_by(continent_name).aggregate(
countries['population'].sum().name('total_population')
)
# We can see that Asia is the most populated continent, followed by Africa. Antarctica is the least populated,
# as we would expect.
#
# The code to aggregate has two main parts:
# - The `group_by` method, which receives the column, expression, or list of them to group by
# - The `aggregate` method, which receives an expression with the reduction we want to apply
#
# To make things a bit clearer, let's first save the reduction.
total_population = countries['population'].sum().name('total_population')
total_population
# As we can see, if we perform the operation directly, we will get the sum of the total in the column.
#
# But if we take the `total_population` expression as the parameter of the `aggregate` method, then the total is computed
# over every group defined by the `group_by` method.
countries.group_by(continent_name).aggregate(total_population)
# If we want to compute two aggregates at the same time, we can pass a list to the `aggregate` method.
#
# For illustration, we use the `continent` column instead of the `continent_name` expression. We can
# use both column names and expressions, and also a list with any of them (e.g. `[continent_name, 'name']`).
countries.group_by('continent').aggregate(
[total_population, countries['area_km2'].mean().name('average_area')]
)
# ## Joining data
#
# Now we are going to get the total gross domestic product (GDP) for each continent. In this case, the GDP data
# is not in the same table `countries`, but in a table `gdp`.
gdp = connection.table('gdp')
gdp
# The table contains information for different years; we can easily check the range with:
gdp['year'].min(), gdp['year'].max()
# Now, we are going to join this data with the `countries` table so we can obtain the continent
# of each country. The `countries` table has several different codes for the countries. Let's find out which
# one matches the three letter code in the `gdp` table.
countries['iso_alpha2', 'iso_alpha3', 'iso_numeric', 'fips', 'name']
# The `country_code` in `gdp` corresponds to `iso_alpha3` in the `countries` table. We can also see
# how the `gdp` table has `10,000` rows, while `countries` has `252`. We will start joining the
# two tables by the codes that match, discarding the codes that do not exist in both tables.
# This is called an inner join.
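# To make the inner-join semantics concrete, here is the same idea sketched in plain Python with toy data (illustrative only; Ibis generates the equivalent SQL for us): only keys present in both tables survive.

```python
countries_toy = {'USA': 'North America', 'BRA': 'South America', 'ATA': 'Antarctica'}
gdp_toy = {'USA': 19.5e12, 'BRA': 2.1e12, 'XXX': 1.0}

# Inner join: keep only the codes that appear in both tables.
joined = {
    code: (countries_toy[code], gdp_toy[code])
    for code in countries_toy.keys() & gdp_toy.keys()
}

assert sorted(joined) == ['BRA', 'USA']  # 'ATA' and 'XXX' are discarded
```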
countries_and_gdp = countries.inner_join(
gdp, predicates=countries['iso_alpha3'] == gdp['country_code']
)
countries_and_gdp[countries, gdp]
# We joined the table with the information for all years. Now we will take only the information for the last available year, 2017.
gdp_2017 = gdp.filter(gdp['year'] == 2017)
gdp_2017
# Joining with the new expression we get:
countries_and_gdp = countries.inner_join(
gdp_2017, predicates=countries['iso_alpha3'] == gdp_2017['country_code']
)
countries_and_gdp[countries, gdp_2017]
# We have called the `inner_join` method of the `countries` table and passed
# the `gdp` table as a parameter. The method receives a second parameter, `predicates`, that is used to specify
# how the join will be performed. In this case we want the `iso_alpha3` column in `countries` to
# match the `country_code` column in `gdp`. This is specified with the expression
# `countries['iso_alpha3'] == gdp['country_code']`.
#
| docs/tutorial/02-Aggregates-Joins.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="p54E_pCY9ghz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="e449d5f5-3cf8-49ab-933b-46c5a7f9e9be"
# !pip install vaderSentiment
import pandas as pd
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
# + id="IeKmV2hO6pZJ" colab_type="code" colab={}
df = pd.read_csv('kickstarter - Sheet1 (1).csv')
# + id="SPc5AxTg67lm" colab_type="code" colab={}
def dfcleaner(data):
# changing datatype of start and end to date time
# adding column length of campaign
data['deadline'] = pd.to_datetime(data['deadline'])
data['launched'] = pd.to_datetime(data['launched'])
data['length_of_campaign'] = (data['deadline'] - data['launched']).dt.days
    # Using VADER, a lexicon- and rule-based sentiment analyzer, to score
    # each title, storing the compound scores in a new column
    sentiments = []
analyzer = SentimentIntensityAnalyzer()
for sentence in data['name']:
vs = analyzer.polarity_scores(sentence)
sentiments.append(vs['compound'])
data['sentiments'] = sentiments
# Defining success and fail for binary classification
success = ['successful', 'live']
failed = ['failed',
'canceled',
'suspended',
'undefined']
    # adding the binary values in their own column, project_success
col = 'state'
conditions = [df[col].isin(success), df[col].isin(failed)]
choices = ['1', '0']
data['project_success'] = np.select(conditions, choices, default=np.nan)
# changing the datatypes of object columns to numerics
data['goal'] = (data['goal'].str.split()).apply(lambda x: float(x[0].replace(',', '')))
data['pledged'] = (data['pledged'].str.split()).apply(lambda x: float(x[0].replace(',', '')))
data['backers']= data['backers'].astype(str).astype(int)
df1 = data.filter(['name', 'main_category', 'deadline', 'launched', 'goal', 'backers', 'length_of_campaign', 'project_success', 'sentiments'], axis=1)
    category_map = {
        'Publishing': 1, 'Film & Video': 2, 'Music': 3, 'Food': 4,
        'Design': 5, 'Crafts': 6, 'Games': 7, 'Comics': 8,
        'Fashion': 9, 'Theatre': 10, 'Art': 11, 'Photography': 12,
        'Technology': 13, 'Dance': 14, 'Journalism': 15,
    }
    df1['main_category'] = df1['main_category'].replace(category_map)
df1.to_csv('cleaned_kickstarter_data.csv', index=False)
return df1
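# As a side note, the `np.select` call inside `dfcleaner` evaluates a list of boolean conditions and picks the matching choice, falling back to `default` when nothing matches. A minimal standalone example (with a string default for simplicity):

```python
import numpy as np

states = np.array(['successful', 'failed', 'live', 'mystery'])
conditions = [
    np.isin(states, ['successful', 'live']),   # -> '1'
    np.isin(states, ['failed', 'canceled']),   # -> '0'
]
labels = np.select(conditions, ['1', '0'], default='?')

assert list(labels) == ['1', '0', '1', '?']
```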
# + id="rJq0jXII6_p5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 402} outputId="b5178c09-ac3a-4663-8c60-e706d095b98d"
dfcleaner(df)
# + id="68GHXj_-9eCt" colab_type="code" colab={}
| notebooks/datacleaner_for_kickstarter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
# <font style="font-size:28px;" align="left"><b> Python: Drawing </b></font>
# <br>
# _prepared by <NAME>_
# <br><br>
# Here we list certain tools from the Python library "matplotlib.pyplot" that we will use throughout the tutorial when solving certain tasks.
# <u><b>Importing</b></u> some useful tools for drawing figures in Python:
#
# from matplotlib.pyplot import plot, figure, arrow, Circle, gca, text, bar
# <u><b>Drawing a figure</b></u> with a specified size and dpi value:
#
# figure(figsize=(6,6), dpi=60)
# A higher dpi value makes the figure larger.
# <u><b>Drawing a <font color="blue">blue</font> point</b></u> at (x,y):
#
# plot(x,y,'bo')
# For <font color="red">red</font> or <font color="green">green</font> points, 'ro' or 'go' can be used, respectively.
# <u><b>Drawing a line</b></u> from (x,y) to (x+dx,y+dy):
#
# arrow(x,y,dx,dy)
# Additional parameters:
# <ul>
# <li>color='red'</li>
# <li>linewidth=1.5</li>
# <li>linestyle='dotted' ('dashed', 'dashdot', 'solid')</li>
# </ul>
# <u><b>Drawing a <font color="blue">blue</font> arrow</b></u> from (x,y) to (x+dx,y+dy) with a specifed size head:
#
# arrow(x,y,dx,dy,head_width=0.04,head_length=0.08,color="blue")
# <u><b>Drawing the axes</b></u> on 2-dimensional plane:
#
# arrow(0,0,1.1,0,head_width=0.04,head_length=0.08)
# arrow(0,0,-1.1,0,head_width=0.04,head_length=0.08)
# arrow(0,0,0,-1.1,head_width=0.04,head_length=0.08)
# arrow(0,0,0,1.1,head_width=0.04,head_length=0.08)
# <b><u>Drawing a circle</u></b> centered at (x,y) with radius r on 2-dimensional plane:
#
# gca().add_patch( Circle((x,y),r,color='black',fill=False) )
# <b><u>Placing text</u></b> at (x,y):
#
# text(x,y,string)
# Additional parameters:
# <ul>
# <li>rotation=90 (numeric degree values)</li>
# <li>fontsize=12 </li>
# </ul>
# <b><u>Drawing a bar</u></b>:
#
# bar(list_of_labels,list_of_data)
# <hr>
# <h3> Some of our pre-defined functions </h3>
#
# We include our predefined functions by using the following line of code from the quantum-related notebooks:
#
# # %run quantum.py
# <table align="left"><tr><td>
# The file "qworld/include/drawing.py" contains our predefined functions for drawing.
# </td></tr></table>
# <u><b>Drawing the axes</b></u> on 2-dimensional plane:
#
# import matplotlib
# def draw_axes():
# # dummy points for zooming out
# points = [ [1.3,0], [0,1.3], [-1.3,0], [0,-1.3] ]
# # coordinates for the axes
# arrows = [ [1.1,0], [0,1.1], [-1.1,0], [0,-1.1] ]
#
# # drawing dummy points
# for p in points: matplotlib.pyplot.plot(p[0],p[1]+0.2)
# # drawing the axes
# for a in arrows: matplotlib.pyplot.arrow(0,0,a[0],a[1],head_width=0.04, head_length=0.08)
# <u><b>Drawing the unit circle</b></u> on 2-dimensional plane:
#
# import matplotlib
# def draw_unit_circle():
# unit_circle= matplotlib.pyplot.Circle((0,0),1,color='black',fill=False)
# matplotlib.pyplot.gca().add_patch(unit_circle)
# <u><b>Drawing a quantum state</b></u> on 2-dimensional plane:
#
# import matplotlib
# def draw_quantum_state(x,y,name):
# # shorten the line length to 0.92
# # line_length + head_length should be 1
# x1 = 0.92 * x
# y1 = 0.92 * y
# matplotlib.pyplot.arrow(0,0,x1,y1,head_width=0.04,head_length=0.08,color="blue")
# x2 = 1.15 * x
# y2 = 1.15 * y
# matplotlib.pyplot.text(x2,y2,name)
# <u><b>Drawing a qubit</b></u> on 2-dimensional plane:
#
# import matplotlib
# def draw_qubit():
# # draw a figure
# matplotlib.pyplot.figure(figsize=(6,6), dpi=60)
# # draw the origin
# matplotlib.pyplot.plot(0,0,'ro') # a point in red color
# # drawing the axes by using one of our predefined functions
# draw_axes()
# # drawing the unit circle by using one of our predefined functions
# draw_unit_circle()
# # drawing |0>
# matplotlib.pyplot.plot(1,0,"o")
# matplotlib.pyplot.text(1.05,0.05,"|0>")
# # drawing |1>
# matplotlib.pyplot.plot(0,1,"o")
# matplotlib.pyplot.text(0.05,1.05,"|1>")
# # drawing -|0>
# matplotlib.pyplot.plot(-1,0,"o")
# matplotlib.pyplot.text(-1.2,-0.1,"-|0>")
# # drawing -|1>
# matplotlib.pyplot.plot(0,-1,"o")
# matplotlib.pyplot.text(-0.2,-1.1,"-|1>")
| Bronze/python/Python06_Drawing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Copyright (c) 2021, salesforce.com, inc.
# All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
# For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
# ### Colab
#
# Try this notebook on [Colab](http://colab.research.google.com/github/salesforce/ai-economist/blob/master/tutorials/multi_agent_gpu_training_with_warp_drive.ipynb).
# # ⚠️ PLEASE NOTE:
# This notebook runs on a GPU runtime.\
# If running on Colab, choose Runtime > Change runtime type from the menu, then select `GPU` in the 'Hardware accelerator' dropdown menu.
# # Introduction
#
# Welcome! In this tutorial, we describe how to train multi-agent economic simulations built using [Foundation](https://github.com/salesforce/ai-economist/tree/master/ai_economist/foundation) with [WarpDrive](https://github.com/salesforce/warp-drive), an open-source library we built for extremely fast multi-agent reinforcement learning (MARL) on a single GPU. For the purposes of exposition, we specifically consider the [COVID-19 and economy simulation](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/scenarios/covid19/covid19_env.py), which models health and economic dynamics amidst the COVID-19 pandemic and comprises 52 agents.
#
# We put together this tutorial with these goals in mind:
# - Describe how we train multi-agent simulations from scratch, starting with just a Python implementation of the environment on a CPU.
# - Provide reference starting code to help perform extremely fast MARL training so the AI Economist community can focus more on contributing multi-agent simulations to Foundation.
#
# We will cover the following concepts:
# 1. Building a GPU-compatible environment.
# 2. CPU-GPU environment consistency checker.
# 3. Adding an *environment wrapper*.
# 4. Creating a *trainer* object, and perform training.
# 5. Generate a rollout using the trainer object and visualize it.
# ### Prerequisites
# It is helpful to be familiar with [Foundation](https://github.com/salesforce/ai-economist/tree/master/ai_economist/foundation), a multi-agent economic simulator, and also the COVID-19 and Economic simulation ([paper here](https://arxiv.org/abs/2108.02904)). We recommend taking a look at the following tutorials:
#
# - [Foundation: the Basics](https://github.com/salesforce/ai-economist/blob/master/tutorials/economic_simulation_basic.ipynb)
# - [Extending Foundation](https://github.com/salesforce/ai-economist/blob/master/tutorials/economic_simulation_advanced.ipynb)
# - [COVID-19 and Economic Simulation](https://github.com/salesforce/ai-economist/blob/master/tutorials/covid19_and_economic_simulation.ipynb)
#
# It is also important to get familiarized with [WarpDrive](https://github.com/salesforce/warp-drive), a framework we developed for extremely fast end-to-end reinforcement learning on a single GPU. We also have a detailed tutorial on how to [create custom environments](https://github.com/salesforce/warp-drive/blob/master/tutorials/tutorial-4-create_custom_environments.md) and integrate with WarpDrive.
# # Dependencies
# You will need to install the [AI Economist](https://github.com/salesforce/ai-economist) and [WarpDrive](https://github.com/salesforce/warp-drive) pip packages.
# +
import os, signal, sys, time
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
# !git clone https://github.com/salesforce/ai-economist.git
# %cd ai-economist
# !pip install -e .
# Restart the Python runtime to automatically use the installed packages
print("\n\nRestarting the Python runtime! Please (re-)run the cells below.")
time.sleep(1)
os.kill(os.getpid(), signal.SIGKILL)
else:
# !pip install ai-economist
# -
# Ensure that a GPU is present.
import GPUtil
num_gpus_available = len(GPUtil.getAvailable())
assert num_gpus_available > 0, "This notebook needs a GPU machine to run!!"
# !pip install rl-warp-drive
# +
import ai_economist
import numpy as np
import os
import yaml
import matplotlib.pyplot as plt
from matplotlib import dates as mdates
from datetime import timedelta
from timeit import Timer
from ai_economist.foundation.scenarios.covid19.covid19_env import (
CovidAndEconomyEnvironment,
)
from ai_economist.foundation.env_wrapper import FoundationEnvWrapper
from warp_drive.env_cpu_gpu_consistency_checker import EnvironmentCPUvsGPU
from warp_drive.training.trainer import Trainer
from warp_drive.training.utils.data_loader import create_and_push_data_placeholders
from warp_drive.utils.env_registrar import EnvironmentRegistrar
_PATH_TO_AI_ECONOMIST_PACKAGE_DIR = ai_economist.__path__[0]
# Set font size for the matplotlib figures
plt.rcParams.update({'font.size': 22})
# +
# Set logger level e.g., DEBUG, INFO, WARNING, ERROR
import logging
logging.getLogger().setLevel(logging.ERROR)
# -
# # 1. Building a GPU-Compatible Environment.
# We start with a Python environment that has the [Gym](https://gym.openai.com/docs/)-style `__init__`, `reset` and `step` APIs. For example, consider the [COVID-19 economic simulation](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/scenarios/covid19/covid19_env.py). To build a GPU-compatible environment that can be trained with WarpDrive, you will need to first implement the simulation itself in CUDA C. While there are other alternatives for GPU-based simulations such as [Numba](https://numba.readthedocs.io/en/stable/cuda/index.html) and [JAX](https://jax.readthedocs.io/en/latest/), CUDA C provides the most flexibility for building complex multi-agent simulation logic, and also the fastest performance. However, implementing the simulation in
# CUDA C also requires the GPU memory and threads to be carefully managed. Some pointers on building and testing the simulation in CUDA C are provided in this WarpDrive [tutorial](https://github.com/salesforce/warp-drive/blob/master/tutorials/tutorial-4-create_custom_environments.md).
#
# Important: when writing the step function using CUDA C, the function names should follow the conventions below so that they can be used with WarpDrive APIs.
# - The scenario class needs to have a 'name' attribute, and the scenario step function must be named "Cuda{scenario_name}Step".
# - Every component class needs to have a 'name' attribute, and the step function for each component in the scenario must be named "Cuda{component_name}Step".
# - The function used to compute the rewards must be named "CudaComputeReward".
#
# The code for the COVID-19 economic simulation's step function is [here](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/scenarios/covid19/covid19_env_step.cu).
#
# To use an existing Python Environment with WarpDrive, one needs to add two augmentations (see below) to the Python code. First, a `get_data_dictionary()` method that pushes all the data arrays and environment parameters required to run the simulation to the GPU. Second, the step-function should invoke the `cuda_step` kernel with the data arrays that the CUDA C step function should have access to passed as arguments.
#
# ```python
# class Env:
# def __init__(self, **env_config):
# ...
#
# def reset(self):
# ...
# return obs
#
# def get_data_dictionary(self):
# # Specify the data that needs to be
# # pushed to the GPU.
# data_feed = DataFeed()
# data_feed.add_data(
# name="variable_name",
# data=self.variable,
# save_copy_and_apply_at_reset
# =True,
# )
# ...
# return data_feed
#
# def step(self, actions):
# if self.use_cuda:
# self.cuda_step(
# # Pass the relevant data
# # feed keys as arguments
# # to cuda_step.
# # Note: cuda_data_manager
# # is created by the
# # EnvWrapper.
# self.cuda_data_manager.
# device_data(...),
# ...
# )
# else:
# ...
# return obs, rew, done, info
# ```
# The complete Python code is [here](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/scenarios/covid19/covid19_env.py).
# # 2. CPU-GPU Environment Consistency Checker
# Before we train the simulation on the GPU, we will need to ensure consistency between the Python and CUDA C versions of the simulation. For this purpose, Foundation provides an [EnvironmentCPUvsGPU class](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/env_cpu_gpu_consistency_checker.py). This module essentially instantiates environment objects corresponding to the two versions of the simulation. It then steps through the two environment objects for a specified number of environment replicas `num_envs` and a specified number of episodes `num_episodes`, and verifies that the observations, actions, rewards and the “done” flags are the same after each step. We have created a testing [script](https://github.com/salesforce/ai-economist/blob/master/tests/run_covid19_cpu_gpu_consistency_checks.py) that performs the consistency checks.
# First, we will create an environment configuration to test with. For more details on what the configuration parameters mean, please refer to the simulation [code](https://github.com/salesforce/ai-economist/blob/master/ai_economist/foundation/scenarios/covid19/covid19_env.py).
env_config = {
'collate_agent_step_and_reset_data': True,
'components': [
{'ControlUSStateOpenCloseStatus': {'action_cooldown_period': 28}},
{'FederalGovernmentSubsidy': {'num_subsidy_levels': 20,
'subsidy_interval': 90,
'max_annual_subsidy_per_person': 20000}},
{'VaccinationCampaign': {'daily_vaccines_per_million_people': 3000,
'delivery_interval': 1,
'vaccine_delivery_start_date': '2021-01-12'}}
],
'economic_reward_crra_eta': 2,
'episode_length': 540,
'flatten_masks': True,
'flatten_observations': False,
'health_priority_scaling_agents': 0.3,
'health_priority_scaling_planner': 0.45,
'infection_too_sick_to_work_rate': 0.1,
'multi_action_mode_agents': False,
'multi_action_mode_planner': False,
'n_agents': 51,
'path_to_data_and_fitted_params': '',
'pop_between_age_18_65': 0.6,
'risk_free_interest_rate': 0.03,
'world_size': [1, 1],
'start_date': '2020-03-22',
'use_real_world_data': False,
'use_real_world_policies': False
}
# Next, we will need to register the environment
env_registrar = EnvironmentRegistrar()
env_registrar.add_cuda_env_src_path(
CovidAndEconomyEnvironment.name,
os.path.join(
_PATH_TO_AI_ECONOMIST_PACKAGE_DIR,
"foundation/scenarios/covid19/covid19_build.cu"
)
)
# We will also need to set some variables: `policy_tag_to_agent_id_map`, `separate_placeholder_per_policy` (defaults to False) and `obs_dim_corresponding_to_num_agents` (defaults to "first"). The variable descriptions are below in comments.
# The policy_tag_to_agent_id_map dictionary maps
# policy model names to agent ids.
policy_tag_to_agent_id_map = {
"a": [str(agent_id) for agent_id in range(env_config["n_agents"])],
"p": ["p"],
}
# Flag indicating whether separate obs, actions and rewards placeholders have to be created for each policy.
# Set "create_separate_placeholders_for_each_policy" to True here
# since the agents and the planner have different observation and action spaces.
separate_placeholder_per_policy = True
# Flag indicating the observation dimension corresponding to 'num_agents'.
# Note: WarpDrive assumes that all the observation are shaped
# (num_agents, *feature_dim), i.e., the observation dimension
# corresponding to 'num_agents' is the first one. Instead, if the
# observation dimension corresponding to num_agents is the last one,
# we will need to permute the axes to align with WarpDrive's assumption
obs_dim_corresponding_to_num_agents = "last"
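# The axis permutation described in the comment above can be sketched with NumPy: an observation batch with the agents axis last is moved to WarpDrive's expected (num_agents, *feature_dim) layout (illustrative shapes only):

```python
import numpy as np

num_agents, feature_dim = 51, 7
obs_agents_last = np.random.rand(feature_dim, num_agents)  # agents axis is last

# Move the agents axis to the front, matching WarpDrive's assumption.
obs_agents_first = np.moveaxis(obs_agents_last, -1, 0)

assert obs_agents_first.shape == (num_agents, feature_dim)
```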
# The consistency tests may be performed using the `test_env_reset_and_step()` API as shown below.
EnvironmentCPUvsGPU(
dual_mode_env_class=CovidAndEconomyEnvironment,
env_configs={"test": env_config},
num_envs=3,
num_episodes=2,
env_wrapper=FoundationEnvWrapper,
env_registrar=env_registrar,
policy_tag_to_agent_id_map=policy_tag_to_agent_id_map,
    create_separate_placeholders_for_each_policy=separate_placeholder_per_policy,
    obs_dim_corresponding_to_num_agents=obs_dim_corresponding_to_num_agents
).test_env_reset_and_step()
# If the two implementations are consistent, you should see `The CPU and the GPU environment outputs are consistent within 1 percent` at the end of the previous cell run.
# # 3. Adding an *Environment Wrapper*.
# Once the Python and CUDA C implementations are consistent with one another, we use an environment wrapper to wrap the environment object and run the simulation on the GPU. Accordingly, we need to set the `use_cuda` argument to True. Under this setting, only the first environment reset happens on the CPU. Following that, the data arrays created at reset and the simulation parameters are copied over (a one-time operation) to the GPU memory. All the subsequent steps (and resets) happen only on the GPU. In other words, there is no back-and-forth data copying between the CPU and the GPU, and all the data arrays on the GPU are modified in place. The environment wrapper also uses the `num_envs` argument (defaults to $1$) to instantiate multiple replicas of the environment on the GPU.
#
# Note: for running the simulation on a CPU, simply set use_cuda=False; this is then no different from running the Python simulation on a CPU - the reset and step calls also happen on the CPU.
#
# The environment wrapper essentially performs the following tasks that are required to run the simulation on the GPU:
# - Registers the CUDA step kernel, so that the step function can be invoked from the CPU (host).
# - Pushes all the data listed in the data dictionary to the GPU when the environment is reset for the very first time.
# - Automatically resets every environment when it reaches its done state.
# - Adds the observation and action spaces to the environment, which are required when training the environment.
# The CPU and GPU versions of the environment object may be created via setting the appropriate value of the `use_cuda` flag.
cpu_env = FoundationEnvWrapper(
CovidAndEconomyEnvironment(**env_config),
)
# Instantiating the GPU environment also initializes the data and function managers and loads the CUDA kernels.
gpu_env = FoundationEnvWrapper(
CovidAndEconomyEnvironment(**env_config),
use_cuda=True,
env_registrar=env_registrar,
)
# # 4. Creating a Trainer Object and Perform Training
# Next, we will prepare the environment for training on a GPU. We will need to define a run configuration (which comprises the environment, training, policy and saving configurations), and create a *trainer* object.
#
# We will load the run configuration from a saved yaml file.
config_path = os.path.join(
_PATH_TO_AI_ECONOMIST_PACKAGE_DIR,
"training/run_configs/",
    "covid_and_economy_environment.yaml",
)
with open(config_path, "r", encoding="utf8") as f:
run_config = yaml.safe_load(f)
# ### Instantiating the Trainer
# Next, we will create and instantiate the trainer object.
trainer = Trainer(
env_wrapper=gpu_env,
config=run_config,
policy_tag_to_agent_id_map=policy_tag_to_agent_id_map,
create_separate_placeholders_for_each_policy=separate_placeholder_per_policy,
obs_dim_corresponding_to_num_agents=obs_dim_corresponding_to_num_agents,
)
# ### CPU vs GPU Performance Comparison
# Before performing training, let us see how the simulation speed on the GPU compares with that of the CPU. We will generate a set of random actions, and step through both versions of the simulation a few times.
def generate_random_actions(env):
actions = {
str(agent_id): np.random.randint(
low=0,
high=env.env.action_space[str(agent_id)].n,
dtype=np.int32,
)
for agent_id in range(env.n_agents-1)
}
actions["p"] = np.random.randint(
low=0,
high=env.env.action_space["p"].n,
dtype=np.int32,
)
return actions
def env_reset_and_step(env):
env.reset()
actions = generate_random_actions(env)
for t in range(env.episode_length):
env.step(actions)
Timer(lambda: env_reset_and_step(gpu_env)).timeit(number=1)
Timer(lambda: env_reset_and_step(cpu_env)).timeit(number=1)
# Notice that with just $1$ replica of the environment, the environment step on the GPU is over 5x faster (on an A100 machine). When running training, it is typical to use several environment replicas, and that provides an even higher performance boost for the GPU, since WarpDrive runs all the environment replicas in parallel on separate GPU blocks, and the CPU cannot achieve the same amount of parallelization.
# ### Perform Training
# We perform training by invoking `trainer.train()`. The speed performance stats and metrics for the trained policies are printed on screen.
# Note: In this notebook, we only run training for $200$ iterations. You may run it for longer by setting the `num_episodes` configuration parameter [here](https://github.com/salesforce/ai-economist/blob/master/ai_economist/training/run_configs/covid_and_economy_environment.yaml).
trainer.train()
# # 5. Visualize the Environment
# Post training, it is useful to visualize some of the environment's actions and observations to gain more insight into the kinds of policies the RL agents learn, and the resulting environment dynamics. For the COVID-19 and economic simulation, the actions - "stringency level" and "subsidy level" - control observables such as "susceptible", "infected", "recovered", "deaths", "unemployed", "vaccinated" and "productivity" for each of the US states.
#
# Incidentally, these actions and observables also correspond to the names of arrays that were pushed to the GPU after the very first environment reset. At any time, the arrays can be fetched back to the CPU via the WarpDrive trainer's API `fetch_episode_states`, and visualized for the duration of an entire episode. Below, we also provide a helper function to perform the visualizations. Note that in this notebook, we only performed a few iterations of training, so the policies will not be fully trained at this point, and the plots seen in the visualization are going to be essentially arbitrary. You may run training with a different set of configurations or for longer by setting the `num_episodes` configuration parameter [here](https://github.com/salesforce/ai-economist/blob/master/ai_economist/training/run_configs/covid_and_economy_environment.yaml), and visualize the policies afterwards using the same code provided below.
# +
# Helper function: visualizations
def visualize_states(
entity="USA",
episode_states=None,
trainer=None,
ax=None
):
assert trainer is not None
assert episode_states is not None
# US state names to index mapping
us_state_name_to_idx = {v: k for k, v in trainer.cuda_envs.env.us_state_idx_to_state_name.items()}
us_state_name_to_idx["USA"] = "p"
agent_id = us_state_name_to_idx[entity]
assert entity is not None
assert entity in us_state_name_to_idx.keys(), f"entity should be in {list(us_state_name_to_idx.keys())}"
for key in episode_states:
# Use the collated data at the last valid time index
last_valid_time_index = np.isnan(np.sum(episode_states[key], axis = (1, 2))).argmax() - 1
episode_states[key] = episode_states[key][last_valid_time_index]
if agent_id == "p":
for key in episode_states:
if key in ["subsidy_level", "stringency_level"]:
episode_states[key] = np.mean(episode_states[key], axis=-1) # average across all the US states
else:
episode_states[key] = np.sum(episode_states[key], axis=-1) # sum across all the US states
episode_states[key] = episode_states[key].reshape(-1, 1) # putting back the agent_id dimension
agent_id = 0
else:
agent_id = int(agent_id)
if ax is None:
if len(episode_states) < 3:
cols = len(episode_states)
else:
cols = 3
scale = 8
rows = int(np.ceil(len(episode_states) / cols))
        h, w = scale * max(rows, cols), scale * rows
fig, ax = plt.subplots(rows, cols, figsize=(h, w), sharex=True, squeeze=False)
else:
rows, cols = ax.shape
start_date = trainer.cuda_envs.env.start_date
for idx, key in enumerate(episode_states):
row = idx // cols
col = idx % cols
dates = [start_date + timedelta(day) for day in range(episode_states[key].shape[0])]
ax[row][col].plot(dates, episode_states[key][:, agent_id], linewidth=3)
ax[row][col].set_ylabel(key)
ax[row][col].grid(b=True)
ax[row][col].xaxis.set_major_locator(mdates.MonthLocator(interval=3))
ax[row][col].xaxis.set_major_formatter(mdates.DateFormatter("%b'%y"))
fig.tight_layout()
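The `last_valid_time_index` logic above relies on a small NumPy trick: `np.isnan(...).argmax()` returns the index of the first all-NaN time step, so subtracting one gives the last valid step. A minimal self-contained sketch, with a hypothetical toy array (note that if no NaN padding is present at all, `argmax` returns 0 and the index would be -1):

```python
import numpy as np

# Toy array: 5 time steps of shape (2, 3), with the last 2 steps NaN-padded
# (mimicking the episode-state buffers fetched from the trainer).
states = np.ones((5, 2, 3))
states[3:] = np.nan

# Sum over the non-time axes; any step containing NaN sums to NaN.
per_step = np.sum(states, axis=(1, 2))
last_valid_time_index = np.isnan(per_step).argmax() - 1
print(last_valid_time_index)  # -> 2
```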
# +
# Fetch the key state indicators for an episode.
episode_states = trainer.fetch_episode_states(
[
"stringency_level",
"subsidy_level",
"susceptible",
"infected",
"recovered",
"deaths",
"unemployed",
"vaccinated",
"productivity",
"postsubsidy_productivity",
]
)
# Visualize the fetched states for the USA.
# Feel free to modify the 'entity' argument to visualize the curves for the US states (e.g., California, Utah) too.
visualize_states(
entity="USA",
episode_states=episode_states,
trainer=trainer,
)
# -
# And that's it for this tutorial. Happy training with Foundation and WarpDrive!
| tutorials/multi_agent_gpu_training_with_warp_drive.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Jp9JqDRqxVWU" colab_type="text"
# # Overview
# + [markdown] id="b8eIstD1wuol" colab_type="text"
# This is a simple demo of how to use the Mesh plugin for TensorBoard. The demo will load a static triangulated mesh (in PLY format), create a mesh summary from it, and then display it in TensorBoard.
# + [markdown] id="PPRzYdGDaVUH" colab_type="text"
# # Setup Imports
# + id="1a_Qfm_OZrtf" colab_type="code" colab={}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Uninstall tensorboard and tensorflow
# !pip uninstall -q -y tensorboard
# !pip uninstall -q -y tensorflow
# Install nightly TensorFlow with nightly TensorBoard.
# !pip install tf-nightly
# Install trimesh lib to read .PLY files.
# !pip freeze | grep -qF 'trimesh==' || pip install trimesh
# %load_ext tensorboard
import os
import numpy as np
import tensorflow as tf
import trimesh
import tensorboard
from tensorboard.plugins.mesh import summary as mesh_summary
sample_mesh = 'https://storage.googleapis.com/tensorflow-graphics/tensorboard/test_data/ShortDance07_a175_00001.ply'
log_dir = '/tmp/mesh_demo'
batch_size = 1
# !rm -rf /tmp/mesh_demo
# + [markdown] id="GeQsREQFabgx" colab_type="text"
# # Read sample .PLY files
# + id="WEJe1bebajDX" colab_type="code" colab={}
# Camera and scene configuration.
config_dict = {
'camera': {'cls': 'PerspectiveCamera', 'fov': 75},
'lights': [
{
'cls': 'AmbientLight',
'color': '#ffffff',
'intensity': 0.75,
}, {
'cls': 'DirectionalLight',
'color': '#ffffff',
'intensity': 0.75,
'position': [0, -1, 2],
}],
'material': {
'cls': 'MeshStandardMaterial',
'roughness': 1,
'metalness': 0
}
}
# Read the sample .PLY file.
mesh = trimesh.load_remote(sample_mesh)
vertices = np.array(mesh.vertices)
# Currently only supports RGB colors.
colors = np.array(mesh.visual.vertex_colors[:, :3])
faces = np.array(mesh.faces)
# Add batch dimension, so our data will be of shape BxNxC.
vertices = np.expand_dims(vertices, 0)
colors = np.expand_dims(colors, 0)
faces = np.expand_dims(faces, 0)
# + [markdown] id="5N_SfPiia0zn" colab_type="text"
# # Create summaries and session
# + id="ODi7QiLPa1AR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="443a9fe8-4b6e-4b0c-d931-8960c0e2565e"
# Create data placeholders of the same shape as data itself.
vertices_tensor = tf.placeholder(tf.float32, vertices.shape)
faces_tensor = tf.placeholder(tf.int32, faces.shape)
colors_tensor = tf.placeholder(tf.int32, colors.shape)
meshes_summary = mesh_summary.op(
'mesh_color_tensor', vertices=vertices_tensor, faces=faces_tensor,
colors=colors_tensor, config_dict=config_dict)
# Create summary writer and session.
writer = tf.summary.FileWriter(log_dir)
sess = tf.Session()
# + [markdown] id="wUVbZ-I-a76w" colab_type="text"
# # Run the model, save summaries to disk
# + id="sBcDE1jRa8FZ" colab_type="code" colab={}
summaries = sess.run([meshes_summary], feed_dict={
vertices_tensor: vertices,
faces_tensor: faces,
colors_tensor: colors,
})
# Save summaries.
for summary in summaries:
writer.add_summary(summary)
# + [markdown] id="huft1mbibF21" colab_type="text"
# # TensorBoard
# + id="4rLrgb3EbGAp" colab_type="code" colab={}
# %tensorboard --logdir=/tmp/mesh_demo
| tensorboard/plugins/mesh/Mesh_Plugin_Tensorboard.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import tensorflow as tf
import seaborn as sns
import json
import csv
from pandas import DataFrame
import time
import gc
from IPython.display import Image
from IPython.core.display import HTML
from scipy.sparse import csr_matrix
from sklearn import preprocessing
# %matplotlib inline
# +
# Load data
import glob
import pickle
FEATURES_LOCATION = './data/features/'
F_CORE = 'cnn_features_'
def get_label_from_path(file):
return file.split('\\')[1].split('.')[0]
def load_data(mode):
if(mode == 'test'):
pickle_path = F_CORE + mode
data = pickle.load(open(FEATURES_LOCATION + pickle_path + '.pkl', 'rb'))
to_return = {}
for key, value in list(data.items()):
to_return[get_label_from_path(key)] = value.reshape(1,-1)
return to_return, None
pickle_path = F_CORE + mode + '_'
data = {}
for i in range(1,129):
data[i] = pickle.load(open(FEATURES_LOCATION + pickle_path + str(i) + '.pkl', 'rb'))
X = []
y = []
for key, value in list(data.items()):
the_class = key
features = np.array(list(value.values()))
for feature in features:
y.append(the_class)
X.append(feature)
return np.array(X), np.array(y)
# +
# Load data
X, y = load_data('train')
X_val, y_val = load_data('valid')
# Extract number of labels in the training data
num_labels = np.unique(y).shape[0]
num_features = X.shape[1]
num_trainobs = X.shape[0]
# Create one hot encoding for training and validation features
lb = preprocessing.LabelBinarizer()
lb.fit(y)
y = lb.transform(y)
y_val = lb.transform(y_val)
# +
# Load test data
X_test, _ = load_data('test')
len(X_test)
X_test_arr = np.array(list(X_test.values()))
X_test_arr = X_test_arr.reshape(-1,2048)
# -
# Tensorflow graph set up
graph = tf.Graph()
with graph.as_default():
# Variables
batch_size = 10000 # mini batch for SGD
lamb = 0.002 # regularization (0.001 - 0.01 seems good)
learn_rate = 0.25 # learning rate (0.2 - 0.3 seems good with regularization)
# Input data
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, num_features))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(X_val)
tf_test_dataset = tf.constant(X_test_arr)
# Initial weights and biases for output/logit layer
w_logit = tf.Variable(tf.random_normal([num_features, num_labels]))
b_logit = tf.Variable(tf.random_normal([num_labels]))
def model(data):
return tf.matmul(data, w_logit) + b_logit
# Training computations
logits = model(tf_train_dataset)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
regularized_loss = tf.nn.l2_loss(w_logit)
total_loss = loss + lamb * regularized_loss
# Optimizer
optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(total_loss)
# Predictions for training, validation and test data
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
def accuracy(predictions, labels):
return(100 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0])
# +
num_steps = 15001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Generate minibatch
ind = np.random.choice(num_trainobs, size = batch_size, replace = False)
batch_data = X[ind, :]
batch_labels = y[ind, :]
# Prepare a dictionary telling the session where to feed the minibatch
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 1000 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), y_val))
#print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), y_test))
predictionstf = test_prediction.eval()
# +
# Convert predictions from one-hot to actual labels and print csv
y_pred = lb.inverse_transform(predictionstf)
predictions = {}
for i, index in enumerate(X_test.keys()):
predictions[int(index)] = y_pred[i]
from collections import Counter
counted = Counter(predictions.values())
most_common_class = counted.most_common()[0][0]
for index in range(1, 12801):
if(index not in predictions.keys()):
predictions[index] = most_common_class
ids = []
values = []
for key, value in predictions.items():
ids.append(key)
values.append(value)
out_dict = {}
out_dict['id'] = ids
out_dict['predicted'] = values
keys = sorted(out_dict.keys())
COL_WIDTH = 6
FMT = "%%-%ds" % COL_WIDTH
with open('predictions_v2.csv', 'w') as csv:
# Write keys
csv.write(','.join([k for k in keys]) + '\n')
# Assume all values of dict are equal
for i in range(len(out_dict[keys[0]])):
csv.write(','.join([FMT % out_dict[k][i] for k in keys]) + '\n')
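As an aside, the manual column-width formatting above can be avoided entirely; a simpler sketch (the helper name is hypothetical) builds a DataFrame and lets pandas handle the CSV writing:

```python
import pandas as pd

def write_predictions_csv(predictions, path):
    """Write a {id: predicted_label} dict to a two-column CSV, sorted by id."""
    out = pd.DataFrame(sorted(predictions.items()), columns=["id", "predicted"])
    out.to_csv(path, index=False)

# Example usage (with the `predictions` dict built above):
# write_predictions_csv(predictions, "predictions_v2.csv")
```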
| logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="86Rn47p9egKR" colab_type="text"
# ## Tutorial - Extracting Text and Analysing Sentiment from Digitized Documents in the UCSD Library's Digital Archive
# + [markdown] id="J307CtPlepUk" colab_type="text"
# ### Part 4 - Analyzing Categories
#
# In this section of the tutorial, we'll look into the relationship between the documents we've extracted, the accuracy and completion rates, and the categories assigned to each document by the GCP natural language processing API's pre-trained classification model.
# + [markdown] id="2SweiEkzfLya" colab_type="text"
# ### Normalizing the dataset
#
# The first thing you might notice is that this is a difficult dataset to query. Let's read the classification data into a dataframe.
# + id="Z9b_Avx7epuz" colab_type="code" colab={}
import pandas as pd
from google.colab import drive
# + [markdown] id="zrsy9NzTgv2I" colab_type="text"
# as before, we need to mount the drive to get access to the csv file created and saved in part 2
# + id="LaZ5UDrpfciD" colab_type="code" outputId="bb7668ad-5793-42aa-d8c9-654b1d0d66e7" executionInfo={"status": "ok", "timestamp": 1585257840377, "user_tz": 420, "elapsed": 27856, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}} colab={"base_uri": "https://localhost:8080/", "height": 124}
drive.mount('/content/gdrive')
# + id="VkdLmxkpfmCs" colab_type="code" colab={}
df_categories = pd.read_csv('gdrive/My Drive//No-More-Silence/glbths_2005-13_001_001_categories.csv')
# + [markdown] id="VHuGR-f2g7Fz" colab_type="text"
# Take a look at the Categories column. The data returned from the API is in JSON format, which corresponds to a dictionary in Python.
#
# Unfortunately, this will make it difficult to query records based on a particular category (in relational database design, this means the table is not in first normal form - each cell does not represent a "single, atomic (indivisible) value"[1]).
#
# [1] https://en.wikipedia.org/wiki/First_normal_form
# + id="rr5Nr7ungswS" colab_type="code" outputId="bbdec764-e6e5-4115-de23-2d3d6580d1bd" executionInfo={"status": "ok", "timestamp": 1585257845199, "user_tz": 420, "elapsed": 1047, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
df_categories
# + [markdown] id="pFjnJ4-Ejqp2" colab_type="text"
# ### Create a table with one category per line.
#
# Database design is an interesting topic. For this tutorial, we'll create a separate row for each unique document-page-id, category, and probability (in RDBMS terms, we'll get this into first normal form).
# + id="VLBPLeZfr8Fz" colab_type="code" colab={}
import json
# + id="KKMrSWEpkpzp" colab_type="code" colab={}
document_page_ids = []
category_names = []
category_confidences = []
for d in df_categories.itertuples():
    if isinstance(d[5], str):
json_acceptable_string = d[5].replace("'", "\"")
jsn = json.loads(json_acceptable_string)
for c in jsn['categories']:
document_page_ids.append(d[2])
category_names.append(c['name'])
category_confidences.append(c['confidence'])
# + id="t-rHVjBKqzk8" colab_type="code" colab={}
df_categories_1f = pd.DataFrame({"document_page_id": document_page_ids, "category": category_names, "confidence": category_confidences})
# + id="yAfaF4P7vGkQ" colab_type="code" outputId="fc90dca7-082a-4f75-f09c-bbc5b519af3e" executionInfo={"status": "ok", "timestamp": 1585257851324, "user_tz": 420, "elapsed": 500, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}} colab={"base_uri": "https://localhost:8080/", "height": 793}
df_categories_1f
# + [markdown] id="J2k9fFrbyI4q" colab_type="text"
# ### Investigating the relationship between records and categories, retention rate, and confidence
# + [markdown] id="OyximHfCzDhd" colab_type="text"
# Outer join to get all rows
# + id="pGLCPe14vI_j" colab_type="code" colab={}
df = pd.read_csv('gdrive/My Drive//No-More-Silence/glbths_2005-13_001_001.csv')
# + id="6tBw57qS6NDJ" colab_type="code" outputId="0e00002b-55d6-4f19-c736-76d0c5e39017" executionInfo={"status": "ok", "timestamp": 1585257863379, "user_tz": 420, "elapsed": 940, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}} colab={"base_uri": "https://localhost:8080/", "height": 85}
print(df_categories_1f.columns)
print(df.columns)
# + id="yn2YU-j3z8pA" colab_type="code" colab={}
#df_categories['document_page_id']
# + id="FkftsihMopZ2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 331} outputId="789342b9-f221-4303-f155-c62f3fce989a" executionInfo={"status": "ok", "timestamp": 1585257878329, "user_tz": 420, "elapsed": 7105, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}}
# !pip install pandasql
# + id="qhuxABVizu8o" colab_type="code" colab={}
from pandasql import sqldf
pysqldf = lambda q: sqldf(q, globals())
# + id="pv-syDJBz62l" colab_type="code" colab={}
df_join = pysqldf("SELECT d.document_page_id, d.retained, dc.document_page_id, dc.category, dc.confidence FROM \
df d LEFT OUTER JOIN df_categories_1f dc ON d.document_page_id = dc.document_page_id")
# + id="VGKdxzk-0HqR" colab_type="code" outputId="86132b03-f3c9-4439-fd8f-bffa1033bb2e" executionInfo={"status": "ok", "timestamp": 1585257882628, "user_tz": 420, "elapsed": 629, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
df_join
# + id="i86KPUgWrPmE" colab_type="code" outputId="993e57e3-eb55-4a10-ee73-1eedfad87837" executionInfo={"status": "ok", "timestamp": 1585257885903, "user_tz": 420, "elapsed": 871, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03672219229182869751"}} colab={"base_uri": "https://localhost:8080/", "height": 421}
pysqldf("SELECT category, COUNT(*), AVG(retained), AVG(confidence) FROM df_join GROUP BY category ORDER BY COUNT(*) DESC")
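The same aggregation can also be done in pure pandas, without SQL; a sketch (the helper name is hypothetical):

```python
import pandas as pd

def summarize_categories(frame):
    """Count rows and average retention/confidence per category, most frequent first."""
    return (frame.groupby("category")
                 .agg(n=("category", "size"),
                      avg_retained=("retained", "mean"),
                      avg_confidence=("confidence", "mean"))
                 .sort_values("n", ascending=False))

# Example usage (with the joined frame from above):
# summarize_categories(df_join)
```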
# + id="nl33W2sOrYKz" colab_type="code" colab={}
| NMS-Tutorial-Part-4-Categories.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
print(sys.executable)
print(sys.version)
print(sys.version_info)
# tested on aws lightsail instance 21 July 2020 using python38 kernel spec
# # Evaporation Trend Examination
#
# ### Background
# Global warming is a currently popular and hotly (pun intended) debated issue.
# The usual evidence is temperature data presented as a time series with various temporal correlations to industrial activity and so forth. The increase in the global temperature is not disputed - what it means for society and how to respond is widely disputed.
#
# One possible consequence of warming, regardless of the cause, is an expectation that
# <strong>evaporation rates would increase</strong> and temperate regions would experience more
# drought and famine, and firm water yields would drop.
#
# However in a paper by Peterson and others (1995) the authors concluded from analysis of pan evaporation data in various parts of the world, that there has been a <strong>downward trend in evaporation</strong> at a significance level of 99%.
# Pan evaporation is driven as much by direct solar radiation (sun shining on water) as by surrounding air temperature.
#
# Global dimming is defined as the decrease in the amounts of solar radiation reaching the surface of the Earth. The by-product of fossil fuels is tiny particles or pollutants which absorb solar energy and reflect back sunlight into space. This phenomenon was first recognized in the year 1950. Scientists believe that since 1950, the sun’s energy reaching Earth has dropped by 9% in Antarctica, 10% in the USA, 16% in parts of Europe and 30% in Russia – putting the overall average drop to be at an enormous 22%. This causes a high risk to our environment.
#
# Aerosols have been found to be the major cause of global dimming. The burning of fossil fuels by industry and internal combustion engines emits by-products such as sulfur dioxide, soot, and ash. These together form particulate pollution—primarily called aerosols. Aerosols act as a precursor to global dimming in the following two ways:
#
# These particle matters enter the atmosphere and directly absorb solar energy and reflect radiation back into space before it reaches the planet’s surface.
# Water droplets containing these air-borne particles form polluted clouds. These polluted clouds have a heavier and larger number of droplets. These changed properties of the cloud – such clouds are called ‘brown clouds’ – makes them more reflective.
# Vapors emitted from the planes flying high in the sky called contrails are another cause of heat reflection and related global dimming.
#
# Both global dimming and global warming have been happening all over the world and together they have caused severe changes in the rainfall patterns. It is also believed that it was global dimming behind the 1984 Saharan drought that killed millions of people in sub-Saharan Africa. Scientists believe that despite the cooling effect created by global dimming, the earth’s temperature has increased by more than 1 deg. in the last century.
#
# ### References
#
# <NAME>., <NAME>. and <NAME>. 1995. Evaporation
# losing its strength. Nature 377: 687-688.
#
# https://www.conserve-energy-future.com/causes-and-effects-of-global-dimming.php
#
# ## Example Problem
# In Texas, evaporation rates (reported as inches per month) are available from the Texas Water Development Board.
# https://waterdatafortexas.org/lake-evaporation-rainfall
# The map below shows the quadrants (grid cells) for which data are tabulated.
#
# 
#
# Cell '911' is located between Corpus Christi and Houston in the Coastal Plains of Texas. A copy of the dataset downloaded from the Texas Water Development Board is located at http://www.rtfmps.com/share_files/all_quads_gross_evaporation.csv
#
# Using naive data science, analyze the data for Cell '911' and decide if the conclusions by Peterson and others (1995) are supported by this data.
#
# ### Exploratory Analysis
# To analyze these data, a first step is to obtain the data. The knowledge that the data are arranged in a file with a ``.csv`` extension is a clue how to proceed. We will need a module to interface with the remote server; in this example let's use something different than ``urllib``. Here we will use ``requests``, so first we load the module
import requests # Module to process http/https requests
# Now we will generate a ``GET`` request to the remote http server. I chose to do so using a variable to store the remote URL so I can reuse code in future projects. The ``GET`` request (an http/https method) is generated with the requests method ``get`` and assigned to an object named ``rget`` -- the name is arbitrary. Next we extract the file from the ``rget`` object and write it to a local file with the name of the remote file - essentially automating the download process. Then we import the ``pandas`` module.
remote_url="http://atomickitty.ddns.net/documents/shared-databases/all_quads_gross_evaporation.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True)  # get the remote resource, follow embedded links
open('all_quads_gross_evaporation.csv','wb').write(rget.content) # extract from the remote the contents, assign to a local file same name
import pandas as pd # Module to process dataframes (not absolutely needed but somewhat easier than using primitives, and gives graphing tools)
# Now we can read the file contents and check its structure, before proceeding.
evapdf = pd.read_csv("all_quads_gross_evaporation.csv",parse_dates=["YYYY-MM"]) # Read the file as a .CSV assign to a dataframe evapdf
evapdf.head() # check structure
# Structure looks like a spreadsheet as expected; lets plot the time series for cell '911'
evapdf.plot.line(x='YYYY-MM',y='911') # Plot quadrant 911 evaporation time series
# Now we can see that the signal indeed looks like it is going up at its mean value, then back down. Let's try a moving average over 12-month windows. The syntax is a bit weird, but it should dampen the high frequency (monthly) part of the signal. Sure enough, there is a downward trend at about month 375, and we recover the date using the index -- in this case around 1985.
#
movingAvg=evapdf['911'].rolling(12, win_type ='boxcar').mean()
movingAvg
movingAvg.plot.line(x='YYYY-MM',y='911')
evapdf['YYYY-MM'][375]
# So now lets split the dataframe at April 1985. Here we will build two objects and can compare them. Notice how we have split into two entire dataframes.
evB485loc = evapdf['YYYY-MM']<'1985-04' # filter before 1985
evB485 = evapdf[evB485loc]
ev85uploc = evapdf['YYYY-MM']>='1985-04' # filter after 1985
ev85up= evapdf[ev85uploc]
print(evB485.head())
print(ev85up.head())
# Now let's get some simple descriptions of the two objects, and we will ignore that they are time series.
evB485['911'].describe()
ev85up['911'].describe()
# If we look at the means, the after-1985 mean is lower and the SD is about the same, so there is perhaps support for the paper's claims; but the median has increased while the IQR is practically unchanged. We can produce boxplots from the two objects and see they are different, but not by much. So the conclusion of the paper has support, but it is pretty weak and hardly statistically significant.
evB485['911'].plot.box()
ev85up['911'].plot.box()
# At this point, we would appeal to hypothesis testing or some other serious statistical analysis tools. Let's try a favorite (of mine) non-parametric test called the ``mannwhitneyu`` test.
#
# ### Background
# In statistics, the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that it is equally likely that a randomly selected value from one population will be less than or greater than a randomly selected value from a second population.
#
# This test can be used to investigate whether two independent samples were selected from populations having the same distribution.
#
# ## Application
# As usual we need to import necessary tools, in this case scipy.stats. Based on the module name, it looks like a collection of methods (the dot ``.`` is the giveaway). The test itself is applied to the two objects; if there is a statistical change in behavior, we expect the two collections of records to be different.
from scipy.stats import mannwhitneyu # import a useful non-parametric test
stat, p = mannwhitneyu(evB485['911'],ev85up['911'])
print('statistic=%.3f, p-value at rejection =%.3f' % (stat, p))
if p > 0.05:
print('Probably the same distribution')
else:
print('Probably different distributions')
# If there were indeed a 99% significance level, the p-value should have been smaller than 0.05 (two-tailed), but the p-value here was quite high. I usually check that I wrote the script correctly by testing the same distribution against itself; I should get a p-value of 0.5. Indeed that's the case.
stat, p = mannwhitneyu(evB485['911'],evB485['911'])
print('statistic=%.3f, p-value at rejection =%.3f' % (stat, p))
if p > 0.05:
print('Probably the same distribution')
else:
print('Probably different distributions')
# Now lets repeat the analysis but break in 1992 when Clean Air Act rules were slightly relaxed:
evB492loc = evapdf['YYYY-MM']<'1992' # filter before 1992
evB492 = evapdf[evB492loc]
ev92uploc = evapdf['YYYY-MM']>='1992' # filter after 1992
ev92up= evapdf[ev92uploc]
#print(evB492.head())
#print(ev92up.head())
stat, p = mannwhitneyu(evB492['911'],ev92up['911'])
print('statistic=%.3f, p-value at rejection =%.3f' % (stat, p))
if p > 0.05:
print('Probably the same distribution')
else:
print('Probably different distributions')
# So even considering the key date of 1992, there is only marginal evidence for the claims (for a single spot in Texas), and one could argue that the claims are confounding -- as an FYI, this eventually became a controversial paper because other researchers obtained similar results using subsets (by location) of the evaporation data.
#
# ## Homework
# Using data science tools, analyze the data for Cell '911' and decide if the conclusions by Peterson and others (1995) are supported by this data. That is, do the supplied data have a significant trend over time in any kind of grouping?
#
# Some things you may wish to consider as you design and implement your analysis:
#
# - Which summary statistics are relevant?
# - Ignoring the periodic signal, are the data approximately normal?
# - Are the data homoscedastic?
# - What is the trend of the entire dataset (all years)?
# - What is the trend of sequential decades (group data into decades)?
# - What is the trend of sequential 15-year groups?
# - Is there evidence that the slope of any of the trends is zero? At what level of significance?
#
# Some additional things to keep in mind:
#
# 1. These data are time series; serial correlation is present.
# 2. An annual-scale periodic signal is present; we have not yet discussed time series analysis and periodic signals.
# 3. Peterson and others (1995) only analyzed May through September data; does using this subset of data change your conclusions?
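# As a starting point (not the full analysis), the normality, homoscedasticity, and trend checks listed above can be sketched with `scipy.stats`. This runs on synthetic data rather than the '911' cell, so the numbers are purely illustrative:

```python
import numpy as np
from scipy.stats import shapiro, levene, linregress

rng = np.random.default_rng(42)
t = np.arange(120)                        # 120 monthly observations (synthetic)
y = 0.01 * t + rng.normal(0, 1, t.size)   # weak upward trend plus noise

stat, p_norm = shapiro(y)                 # approximate-normality check
first, second = y[:60], y[60:]
stat, p_var = levene(first, second)       # homoscedasticity across the two halves
fit = linregress(t, y)                    # overall linear trend and its p-value
print(p_norm, p_var, fit.slope, fit.pvalue)
```

# `shapiro` and `levene` address the normality and homoscedasticity questions; `linregress` gives the slope (and its p-value) for the whole-dataset trend, and the same call can be repeated on decade or 15-year slices.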
| 8-Labs/Lab16/Lab16-Part2-DemoAndExercise.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.7.0
# language: julia
# name: julia-0.7
# ---
# # Neural Accumulator and ALU
# This is a simple implementation of the Neural Accumulator and Neural Arithmetic Logic Unit as a Flux layer.
#
# From the paper:
# > Here we propose two models that are able to learn to represent and manipulate numbers in a systematic
# way. The first supports the ability to accumulate quantities additively, a desirable inductive bias for
# linear extrapolation. This model forms the basis for a second model, which supports multiplicative
# extrapolation.
# ## Neural Accumulator
# The NAC is a special affine transformation whose effective weight matrix consists of only -1's, 0's, and 1's. This prevents the layer from rescaling the representations when mapping from input to output.
#
# This is accomplished by combining the saturating nonlinearities $\tanh$ and $\sigma$:
#
# $$
# \begin{align}
# \mathbf{a} & = \mathbf{Wx} \\
# \mathbf{W} &= \tanh(\mathbf{\hat{W}}) \odot \sigma(\mathbf{\hat{M}}),\\
# \end{align}
# $$
# where $\mathbf{\hat{W}}$ and $\mathbf{\hat{M}}$ are weight matrices.
# +
# Neural Accumulator
struct NAC
W
M
end
NAC(in::Integer, out::Integer) =
NAC(randn(out, in), randn(out, in))
(nac::NAC)(x) = (tanh.(nac.W) .* σ.(nac.M)) * x
# -
# ## Neural Arithmetic Logic Unit
# > The NALU consists of two NAC cells interpolated by a learned sigmoidal gate g, such that if the add/subtract subcell’s output value is applied with a weight of 1 (on), the multiply/divide subcell’s is 0 (off) and vice versa. The first NAC computes the accumulation vector a, which stores results of the NALU’s addition/subtraction operations; it is computed identically to the original NAC, (i.e., a = Wx). The second NAC operates in log space and is therefore capable of learning to multiply and divide, storing its results in m.
#
# $$
# \begin{align}
# \mathbf{y} &= \mathbf{g} \odot \mathbf{a} + (1 - \mathbf{g}) \odot \mathbf{m} \\
# \mathbf{m} &= \exp \mathbf{W} (\log (|\mathbf{x}| + \epsilon)), \mathbf{g} = \sigma(\mathbf{Gx})
# \end{align}
# $$
# +
# Neural Arithmetic Logic Unit
struct NALU
nacₐ
nacₘ
G
end
NALU(in::Integer, out::Integer) =
NALU(NAC(in, out), NAC(in, out), randn(out, in))
function (nalu::NALU)(x)
# gate
g = σ.(nalu.G*x)
# addition
a = nalu.nacₐ(x)
# multiplication
m = exp.(nalu.nacₘ(log.(abs.(x) .+ eps())))
# nalu
return g .* a + (1 .- g) .* m
end
| src/Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cxbxmxcx/GenReality/blob/master/GEN_1_regression_pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="XhB46Q8JTpBd"
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import pandas as pd
# + id="tNHQgiXUUFIa"
boston = load_boston()
X,y = (boston.data, boston.target)
boston.data[:2]
inputs = X.shape[1]
# + colab={"base_uri": "https://localhost:8080/"} id="hXpiGQHDuKWl" outputId="d43378f4-5b79-4f60-fb97-2a697c4cbf31"
bos = pd.DataFrame(boston.data)
print(bos.head())
# + id="z05ZyUrmU7is" colab={"base_uri": "https://localhost:8080/"} outputId="9bb6d5dd-fb86-41df-ab35-931e9f96398f"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
num_train = X_train.shape[0]
X_train[:2], y_train[:2]
num_train
# + id="QLshQolGVUlz"
torch.set_default_dtype(torch.float64)
net = nn.Sequential(
nn.Linear(inputs, 50, bias = True), nn.ReLU(),
nn.Linear(50, 50, bias = True), nn.ReLU(),
nn.Linear(50, 50, bias = True), nn.Sigmoid(),
nn.Linear(50, 1)
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr = .001)
# + id="ZhR9h8Y3WCWM"
num_epochs = 8000
y_train_t = torch.from_numpy(y_train).clone().reshape(-1, 1)
x_train_t = torch.from_numpy(X_train).clone()
y_test_t = torch.from_numpy(y_test).clone().reshape(-1, 1)
x_test_t = torch.from_numpy(X_test).clone()
history = []
# + id="Ef62jrN3WP0d" colab={"base_uri": "https://localhost:8080/"} outputId="1041e6bd-7d72-47f1-da31-e0d8c263084a"
for i in range(num_epochs):
y_pred = net(x_train_t)
loss = loss_fn(y_train_t,y_pred)
history.append(loss.data)
loss.backward()
optimizer.step()
optimizer.zero_grad()
test_loss = loss_fn(y_test_t,net(x_test_t))
if i > 0 and i % 100 == 0:
print(f'Epoch {i}, loss = {loss:.3f}, test loss {test_loss:.3f}')
# + id="IifIajH-XFrl" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="eba9afba-c41c-4a14-cf87-85ad0f596572"
plt.plot(history)
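# After training, predictions are made with gradient tracking disabled. A minimal sketch of that inference pattern, using a tiny stand-in model with hypothetical shapes rather than the trained `net` above:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# tiny stand-in for the trained regression network above (hypothetical sizes)
net = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.randn(5, 3)
with torch.no_grad():   # no gradients needed at inference time
    preds = net(x)
print(preds.shape)      # torch.Size([5, 1])
```

# Wrapping the forward pass in `torch.no_grad()` avoids building the autograd graph, which saves memory and time when only predictions are needed.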
| GEN_1_regression_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyzing how physical resources are distributed across the states
# +
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
import matplotlib.pyplot as plt
import mplleaflet
from geopy.distance import geodesic
import os
import math
from k_medoids_rf import KMedoidsWeighted
tqdm.pandas()
# %matplotlib inline
# -
data_dir = '../data/'
rf_dataset_name = 'RF- Mamógrafos.csv'  # the same analysis can also be run for CT scanners
use_cols = ['COD_MUN','2018/Dez']  # most recent data available
municipios_dataset_name = 'dados_mun_clean.csv'
distancia_max = 65  # international standard we want to meet (70 for CT scanners)
populacao_max = 240000  # international standard we want to meet (100000 for CT scanners)
UF_analisado = 'CE'
# +
def dist_to_near_rf(row):
    '''
    Returns the smallest distance to a physical resource, given a city's latitude and longitude
    '''
distancias = cities_rf[cities_rf.UF == row.UF].apply(lambda x: geodesic((x['LATITUDE'], x['LONGITUDE']), (row.LATITUDE, row.LONGITUDE)).km , axis=1)
index_min = distancias.idxmin()
near_city = cities_rf.iloc[index_min]
distancia = geodesic((near_city.LATITUDE, near_city.LONGITUDE), (row.LATITUDE, row.LONGITUDE)).km
return near_city.COD_MUN, distancia
def population_around_rf(codigo_mun):
    '''
    Returns the sum of the populations of the cities around a physical resource
    '''
cidades_proximas = municipios[municipios.COD_PROX == codigo_mun]
return cidades_proximas.POPULACAO.astype(int).sum()
def generate_panorama_rfs(cities_rf, municipios):
    '''
    Builds a dataframe with an overall panorama of the states
    '''
total_cidades, total_populacao, total_rfs, ideal_rfs,suficiente,cidades_proximas, cidades_distantes, centros_suficiente, centros_insuficiente = [],[],[],[],[],[],[],[],[]
for estado in estados:
UF_analisado = estado
cidades_no_estado = municipios[municipios.UF == estado]
rf_no_estado = cities_rf[cities_rf.UF == estado]
rf_no_estado = rf_no_estado.assign(POP_ATENDIDA = rf_no_estado.COD_MUN.apply(lambda cod: population_around_rf(cod)))
distantes, proximas = cidades_no_estado[cidades_no_estado.DISTANCIAS > distancia_max], cidades_no_estado[cidades_no_estado.DISTANCIAS < distancia_max]
suficientes, insuficientes = rf_no_estado[rf_no_estado.POP_ATENDIDA < populacao_max], rf_no_estado[rf_no_estado.POP_ATENDIDA > populacao_max]
total_populacao_estado = cidades_no_estado.POPULACAO.astype(int).sum()
total_rf_estado = rf_no_estado['2018/Dez'].astype(int).sum()
ideal_rf_estado = math.ceil(total_populacao_estado/populacao_max)
total_cidades.append(cidades_no_estado.COD_MUN.count())
total_populacao.append(total_populacao_estado)
total_rfs.append(total_rf_estado)
ideal_rfs.append(ideal_rf_estado)
suficiente.append(total_rf_estado > ideal_rf_estado)
cidades_proximas.append(proximas.COD_MUN.count())
cidades_distantes.append(distantes.COD_MUN.count())
centros_suficiente.append(suficientes.COD_MUN.count())
centros_insuficiente.append(insuficientes.COD_MUN.count())
panorama_distribuicao_rfs_uf = pd.DataFrame({
'UF': estados,
'TOTAL_CIDADES': total_cidades,
'TOTAL_POPULACAO': total_populacao,
'TOTAL_MAMOGRAGOS': total_rfs,
'IDEAL_RFS': ideal_rfs,
'SUFICIENTE': suficiente,
'CIDADES_PROXIMAS': cidades_proximas,
'CIDADES_DISTANTES': cidades_distantes,
'CENTROS_SUFICIENTES': centros_suficiente,
'CENTROS_INSUFICIENTES': centros_insuficiente
},columns= ['UF', 'TOTAL_MAMOGRAGOS', 'TOTAL_POPULACAO', 'IDEAL_RFS','SUFICIENTE','TOTAL_CIDADES', 'CIDADES_DISTANTES' , 'CIDADES_PROXIMAS', 'CENTROS_SUFICIENTES', 'CENTROS_INSUFICIENTES'])
return panorama_distribuicao_rfs_uf
# +
# Reading the dataframes:
rf = pd.read_csv(os.path.join(data_dir, rf_dataset_name), encoding='latin', index_col=False,
error_bad_lines=False, low_memory=False, usecols=use_cols)
municipios = pd.read_csv(os.path.join(data_dir, municipios_dataset_name), encoding='latin', index_col=0,
error_bad_lines=False, low_memory=False)
municipios['COD_MUN'] = municipios['COD_MUN'].apply(lambda x: (int) ((x - x%10)/10))
cities_rf = pd.merge(rf[rf['2018/Dez'] > 0], municipios, on='COD_MUN', how='left')
municipios = municipios.assign(DISTANCIAS = municipios.progress_apply(lambda row: dist_to_near_rf(row)[1], axis=1), COD_PROX = municipios.progress_apply(lambda row: dist_to_near_rf(row)[0], axis=1))
estados = municipios.UF.unique()
# -
# # Analysis of the overall panorama of the states
panorama_distribuicao_rfs_uf = generate_panorama_rfs(cities_rf, municipios)
panorama_distribuicao_rfs_uf
panorama_distribuicao_rfs_uf['rf_por_pop'] = populacao_max*(panorama_distribuicao_rfs_uf['TOTAL_MAMOGRAGOS']/panorama_distribuicao_rfs_uf['TOTAL_POPULACAO'])
panorama_distribuicao_rfs_uf
# From the table above, applied to mammography units, we can see that, with the exception of AC, AP, and DF, every state has a sufficient number of mammography units.
# ### Let's plot the situation of the physical resources in a specific state
# +
cidades_no_estado = municipios[municipios.UF == UF_analisado]
bool_dist = cidades_no_estado.DISTANCIAS < distancia_max
proximos, distantes = cidades_no_estado[bool_dist], cidades_no_estado[~bool_dist]
rf_no_estado = cities_rf[cities_rf.UF == UF_analisado]
rf_no_estado = rf_no_estado.assign(POP_ATENDIDA = rf_no_estado.COD_MUN.progress_apply(lambda cod: population_around_rf(cod)))
bool_pop = rf_no_estado.POP_ATENDIDA < populacao_max
pop_ok, pop_n_ok = rf_no_estado[bool_pop], rf_no_estado[~bool_pop]
# -
fig, ax = plt.subplots(figsize=[10,10])
# plt.scatter(distantes.LONGITUDE, distantes.LATITUDE, color='r', s=70)  # plot cities far from a mammography unit
# plt.scatter(proximos.LONGITUDE, proximos.LATITUDE, color='b', s=70)  # plot cities close to a mammography unit
plt.scatter(pop_ok.LONGITUDE, pop_ok.LATITUDE, marker='^', color='#5cd65c', s=400)  # plot cities where the number of units is adequate
plt.scatter(pop_n_ok.LONGITUDE, pop_n_ok.LATITUDE, marker='^', color='r', s=400)  # plot cities where the number of units is not adequate
mplleaflet.display(fig=fig)
# In blue are the cities that can be served by the physical resources and in red those that cannot (when the commented-out lines above are plotted).
#
# The green triangles are resources being underutilized, while the red triangles are resources being overutilized.
# ## Choosing a new diagnostic site
# +
# apply k-medoids to choose new sites
n_novos = 1  # number of new diagnostic sites
cidades_no_estado = municipios[municipios.UF == UF_analisado]
cidades_com_rf = cities_rf[cities_rf.UF == UF_analisado]['COD_MUN'].values.tolist()
X = cidades_no_estado[['LATITUDE', 'LONGITUDE', 'COD_MUN']].values.tolist()
medoids_fix_idxs = []
for i in range(len(X)):
cod_mun = X[i][2]
if cod_mun in cidades_com_rf:
medoids_fix_idxs.append(i)
# initial number of cities with physical resources
n_centros = len(medoids_fix_idxs)
# number of new cities that will get a center
clustering = KMedoidsWeighted(n_clusters=n_centros+n_novos)
medoids_idxs, clusters = clustering.fit(X, medoids_fix_idxs, max_itr=10, verbose=1)
# +
# adding the suggested cities to the list of cities with physical resources
novos_medoids = np.setdiff1d(medoids_idxs, medoids_fix_idxs)
for i in novos_medoids:
cod_mun = int(X[i][2])
nova_cidade = municipios[municipios.COD_MUN == cod_mun].values.tolist()[0]
cities_rf = cities_rf.append({'COD_MUN' : cod_mun,
'2018/Dez': 0,
'UF': nova_cidade[0],
'NOME': nova_cidade[2],
'POPULACAO': nova_cidade[3],
'LATITUDE': nova_cidade[4],
'LONGITUDE': nova_cidade[5],
'CAPITAL': nova_cidade[6],
'CODIGO_UF': nova_cidade[7]
} , ignore_index=True)
# -
# updating distances in the modified municipalities frame
municipios = municipios.assign(DISTANCIAS = municipios.progress_apply(lambda row: (dist_to_near_rf(row)[1] if (row.UF == UF_analisado) else row.DISTANCIAS), axis=1), COD_PROX = municipios.progress_apply(lambda row: (dist_to_near_rf(row)[0] if (row.UF == UF_analisado) else row.COD_PROX), axis=1))
rf_no_estado = cities_rf[cities_rf.UF == UF_analisado]
rf_no_estado = rf_no_estado.assign(POP_ATENDIDA = rf_no_estado.COD_MUN.apply(lambda cod: population_around_rf(cod)))
rf_no_estado = rf_no_estado.assign(IDEAL_rf = rf_no_estado.POP_ATENDIDA.apply(lambda pop: math.ceil(pop/populacao_max)))
# ### Now we will redistribute the physical resources
rf_no_estado_copy = rf_no_estado.copy()
# +
# preparing to redistribute the physical resources
rf_no_estado_copy = rf_no_estado_copy.sort_values(by=['POP_ATENDIDA'], ascending=False)
total_rfs = rf_no_estado_copy['2018/Dez'].astype(int).sum()
total_cities = rf_no_estado_copy.COD_MUN.astype(int).count()
total_pop_atendida = rf_no_estado_copy.POP_ATENDIDA.astype(int).sum()
cities_matrix = rf_no_estado_copy[['COD_MUN', 'POP_ATENDIDA', 'IDEAL_rf']].values.tolist()
for i in range(total_cities):
cities_matrix[i].append(0)
# -
# redistribute the physical resources if the state has enough of them
if (total_pop_atendida / total_rfs) < populacao_max:
aux = total_rfs
i = 0
saltos_seguidos = 0
while (aux > 0) and (saltos_seguidos <= total_cities):
if (cities_matrix[i][3] >= cities_matrix[i][2]):
saltos_seguidos = saltos_seguidos + 1
i = (i + 1)%total_cities
continue
saltos_seguidos = 0
cities_matrix[i][3] = cities_matrix[i][3] + 1
i = (i + 1)%total_cities
aux = aux - 1
i = 0
while aux > 0:
cities_matrix[i][3] = cities_matrix[i][3] + 1
i = (i + 1)%total_cities
aux = aux - 1
# +
cities_matrix = sorted(cities_matrix, key=lambda k: k[0])
redistribuicao = []
for city in cities_matrix:
    redistribuicao.append(city[-1])
rf_no_estado_copy = rf_no_estado_copy.sort_values(by='COD_MUN')
rf_no_estado_copy['REDISTRIBUICAO'] = redistribuicao
# +
# compute the new distances to the physical resources
cidades_no_estado = cidades_no_estado.assign(NOVA_DISTANCIAS = cidades_no_estado.progress_apply(lambda row: dist_to_near_rf(row)[1], axis=1), COD_PROX = cidades_no_estado.progress_apply(lambda row: dist_to_near_rf(row)[0], axis=1))
proximos, distantes = cidades_no_estado[cidades_no_estado.NOVA_DISTANCIAS < distancia_max], cidades_no_estado[cidades_no_estado.NOVA_DISTANCIAS > distancia_max]
pop_ok, pop_n_ok = rf_no_estado_copy[rf_no_estado_copy.REDISTRIBUICAO >= rf_no_estado_copy.IDEAL_rf], rf_no_estado_copy[rf_no_estado_copy.REDISTRIBUICAO < rf_no_estado_copy.IDEAL_rf]
fig, ax = plt.subplots(figsize=[10,10])
# plt.scatter(distantes.LONGITUDE, distantes.LATITUDE, color='r', s=70)  # plot cities far from a physical resource
# plt.scatter(proximos.LONGITUDE, proximos.LATITUDE, color='b', s=70)  # plot cities close to a physical resource
plt.scatter(pop_ok.LONGITUDE, pop_ok.LATITUDE, marker='^', color='#5cd65c', s=400)  # plot cities where the number of resources is adequate
plt.scatter(pop_n_ok.LONGITUDE, pop_n_ok.LATITUDE, marker='^', color='r', s=400)  # plot cities where the number of resources is not adequate
mplleaflet.display(fig=fig)
# -
# As the map above shows, the resources were redistributed so that they are used more efficiently.
# new distribution
rf_no_estado_copy
rf_no_estado_copy['DELTA'] = (rf_no_estado_copy['REDISTRIBUICAO'] - rf_no_estado_copy['2018/Dez'])
| code/4-analise-distribuicao-rf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# `
# Project: Project 1: Benson
# Date: 01/23/2017
# Name: <NAME>
# Team: <NAME>, <NAME>, <NAME>
# `
# # Summary of Solution Steps
# 1. Use Census tract data to locate high density residential areas specifically for women working in tech.
# 2. Join Census data with MTA station locations retrieved from ArcGIS.
# 3. Use MTA turnstile data to find highest volume stations.
# 4. Create a combined rank accounting for total volume and count of female tech workers.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
# %matplotlib inline
# -
# ## Step 1
# This python script takes in the dataset from census.gov and reformats the Census Tract value to match the format in the table downloaded from ArcGIS. It then returns 2 columns, `census_tracts` and `counts` of women in each of those census tracts.
# +
df_input = pd.read_csv('data/raw/ACS_15_5YR_C24010_with_ann.csv')
census_tracts = []
counts = []
for index, row in df_input.iterrows():
if index != 0:
counts.append(int(row['HD01_VD43']))
county_code = row['GEO.id2'][3:5]
tract_code = row['GEO.id2'][5:]
if county_code == '61':
census_tracts.append('1' + tract_code)
if county_code == '05':
census_tracts.append('2' + tract_code)
if county_code == '47':
census_tracts.append('3' + tract_code)
if county_code == '81':
census_tracts.append('4' + tract_code)
if county_code == '85':
census_tracts.append('5' + tract_code)
df_output = pd.DataFrame({'BoroCT2010':census_tracts, 'stem_women':counts})
df_output.to_csv('data/stem_women.csv', index=False)
# -
# ## Step 2
# Merge census data with ArcGIS data.
# +
stations = pd.read_csv('data/raw/subway_station_tracts.csv')
stem_women = pd.read_csv('data/stem_women.csv')
df = pd.merge(stations, stem_women, on='BoroCT2010')
dfviz = df[df['stem_women'] > 100]
sns.barplot(x=dfviz['stem_women'], y=dfviz['name'], hue=dfviz['BoroName'], ci=None);
plt.xlabel('Women in STEM occupations living nearby',fontsize=14);
plt.ylabel('Station Name',fontsize=14);
plt.legend(loc=1);
plt.xlim(100,400);
# -
dft50 = df[df['stem_women'] > 50].sort_values('stem_women', ascending=False)[['name','BoroName','BoroCT2010','stem_women']]
sorted_stations = dft50.groupby(['name','BoroName','BoroCT2010']).max().sort_values('stem_women', ascending=False).head(50)
sorted_stations.to_csv('data/top50_stations_women.csv')
# ## Step 3
# Import and clean MTA turnstile data for highest volume stations.
df1 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_160507.txt')
df2 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_160514.txt')
df3 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_160521.txt')
df4 = pd.read_csv('http://web.mta.info/developers/data/nyct/turnstile/turnstile_160528.txt')
frames = [df1, df2, df3, df4]
df = pd.concat(frames)
df.columns = ['C/A', 'UNIT', 'SCP', 'STATION', 'LINENAME', 'DIVISION', 'DATE',
              'TIME', 'DESC', 'ENTRIES', 'EXITS']
df = df.sort_values(['C/A','UNIT','SCP','DATE','TIME'])
df['diffEntries'] = df.ENTRIES.diff()
df['diffExits'] = df.EXITS.diff()
df['machine_change'] = (df.SCP != df.SCP.shift())
# Edit records where turnstile machine changes; set to 0.
df = df.reset_index(drop=True)
df.loc[df.machine_change, 'diffEntries'] = 0
df.loc[df.machine_change, 'diffExits'] = 0
# Further edit records where turnstile counter resulted in negative counts or counts > 6000; reset to 0.
df['counterReset'] = ((df.diffEntries > 6000) | (df.diffEntries < 0) | (df.diffExits > 6000) | (df.diffExits < 0))
df.loc[df.counterReset, 'diffEntries'] = 0
df.loc[df.counterReset, 'diffExits'] = 0
df.head()
df['TotalVol'] = df.diffEntries+df.diffExits
topstations = df.groupby(['STATION']).sum()
topstations = topstations.sort_values('TotalVol', ascending=False)
topstations.to_csv('data/volumeByStation.csv')
# ## Step 4
# Combined rank using top50_stations_women.csv and volumeByStation.csv
census = pd.read_csv('data/top50_stations_women.csv')
# Top 5 stations by women residents in tech occupations
census['census_rank'] = census.index + 1
census.head()
ts = (pd.read_csv('data/volumeByStation.csv'))[:51]
ts['ts_rank']= ts.index + 1
ts.head()
# +
import re
capitalizer = lambda x: x.upper()
census['name'] = census['name'].apply(capitalizer)
clean_ave = lambda x: re.sub('AVE','AV', x)
census['name'] = census['name'].apply(clean_ave)
clean_numbers1 = lambda x: re.sub(r'(\b\d+)(RD\b)',r'\1', x)
census['name'] = census['name'].apply(clean_numbers1)
clean_numbers2 = lambda x: re.sub(r'(\b\d+)(TH\b)',r'\1', x)
census['name'] = census['name'].apply(clean_numbers2)
clean_numbers3 = lambda x: re.sub(r'(\b\d+)(ST\b)',r'\1', x)
census['name'] = census['name'].apply(clean_numbers3)
clean_numbers4 = lambda x: re.sub(r'(\b\d+)(ND\b)',r'\1', x)
census['name'] = census['name'].apply(clean_numbers4)
clean_dashes = lambda x: re.sub(r'(\w*)(\s-\s)(\w*)',r'\1-\3', x)
census['name'] = census['name'].apply(clean_dashes)
remove_parans = lambda x: re.sub(r'\(.*\)',r'', x)
census['name'] = census['name'].apply(remove_parans)
census.replace('CONCOURSE', 'CONC', inplace=True)
census.replace('AVENUE', 'AV', inplace=True)
census.replace('WASHINGTON', 'WASH', inplace=True)
census.replace('JUNCTION', 'JCT', inplace=True)
census.replace('WOODHAVN', 'WOODHAVEN', inplace=True)
census.replace('CENTER', 'CTR', inplace=True)
census.replace('QUEENSBRIDGE', 'QNSBRIDGE', inplace=True)
census.replace('W 4 ST-WASHINGTON SQ ', 'W 4 ST-WASH SQ', inplace=True)
# -
census_ts = pd.merge(census, ts, how='inner', left_on='name', right_on='STATION')
census_ts['comb_rank'] = census_ts['ts_rank'] + census_ts['census_rank']
census_ts.plot.scatter('stem_women', 'TotalVol');
final = census_ts.sort_values('comb_rank', ascending=True)
final.head()
| Projects/Project1/Project1_Prashant.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect, desc, distinct
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///./Resources/hawaii.sqlite")
conn = engine.connect()
# +
# Create the inspector and connect it to the engine
inspector = inspect(engine)
# Collect the names of tables within the database
inspector.get_table_names()
# -
# Using the inspector to print the column names within the 'measurement' table and its types
columns = inspector.get_columns('measurement')
for column in columns:
print(column["name"], column["type"])
# Using the inspector to print the column names within the 'station' table and its types
columns = inspector.get_columns('station')
for column in columns:
print(column["name"], column["type"])
# Declare a Base using `automap_base()`
Base = automap_base()
# Use the Base class to reflect the database tables
Base.prepare(engine, reflect=True)
# Print all of the classes mapped to the Base
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# Displaying the first 5 rows in Measurements
for row in session.query(Measurement).limit(5).all():
print(row.id, row.station)
# Displaying the first 5 rows in Station
for row in session.query(Station).limit(5).all():
print(row.id, row.name)
# # Exploratory Precipitation Analysis
# +
# Find the most recent date in the data set.
record = session.query(Measurement).order_by(desc(Measurement.date)).limit(1).one()
most_recent_date = dt.datetime.strptime(record.date, '%Y-%m-%d')
print('most recent date in the dataset:', most_recent_date)
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
year_previous = most_recent_date - dt.timedelta(days=365)
print('date one year from the last date in dataset:', year_previous)
# Perform a query to retrieve the data and precipitation scores
records = session.query(
Measurement.date,
func.min(Measurement.prcp),
func.max(Measurement.prcp),
func.avg(Measurement.prcp),
).filter(
Measurement.date > year_previous
).group_by(
Measurement.date
).all()
df_data = []
for row in records:
df_data.append({
'date': row[0],
'prcp_min': row[1],
'prcp_max': row[2],
'prcp_avg': row[3],
})
# Save the query results as a Pandas DataFrame and set the index to the date column
df_data = pd.DataFrame(df_data)
df_data.index = list(df_data.date)
print(df_data.shape)
# Sort the dataframe by date
df_data = df_data.sort_values(by=['date'])
# Use Pandas Plotting with Matplotlib to plot the data
plt.plot_date(df_data.date, df_data.prcp_avg)
plt.tight_layout()
plt.show()
# -
# Use Pandas to calculate the summary statistics for the precipitation data
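# The code cell for this step is missing; `DataFrame.describe()` produces the summary statistics. A sketch using toy values standing in for the `df_data` frame built above:

```python
import pandas as pd

# toy stand-in for the precipitation frame built above (hypothetical values)
df_data = pd.DataFrame({
    'prcp_min': [0.0, 0.1, 0.0],
    'prcp_max': [1.2, 0.8, 2.0],
    'prcp_avg': [0.4, 0.3, 0.9],
})
summary = df_data[['prcp_min', 'prcp_max', 'prcp_avg']].describe()
print(summary)
```

# `describe()` reports count, mean, std, min, quartiles, and max per column in one call.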
# # Exploratory Station Analysis
# +
# Design a query to calculate the total number of stations in the dataset
(count,) = session.query(
func.count(distinct(Station.id)),
).one()
print('total number of stations in the dataset:', count)
# +
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
records = session.query(
Station,
func.count(Measurement.id)
).join(
Measurement,
Measurement.station == Station.station
).group_by(
Measurement.station
).order_by(
desc(func.count(Measurement.id))
).all()
active_stations = []
for row in records:
station = {
"id": row.station.id,
"station": row.station.station,
"name": row.station.name,
"latitude": row.station.latitude,
"longitude": row.station.longitude,
"elevation": row.station.elevation,
        "measurement_count": row[1],
}
active_stations.append(station)
df_active = pd.DataFrame(active_stations)
df_active
# +
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
most_active_station = df_active.loc[0]['station']
records = session.query(
Station,
Measurement,
func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs),
).join(
Measurement,
Measurement.station == Station.station
).group_by(
Measurement.station
).filter(
Station.station == most_active_station
).all()
most_active_meassurements = []
for row in records:
station = {
"id": row.station.id,
"station": row.station.station,
"name": row.station.name,
"latitude": row.station.latitude,
"longitude": row.station.longitude,
"elevation": row.station.elevation,
"min": row[2],
"max": row[3],
"avg": row[4],
}
most_active_meassurements.append(station)
df_most_active_meassurements = pd.DataFrame(most_active_meassurements)
df_most_active_meassurements.T
# +
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
records = session.query(
    Measurement,
).filter(
    Measurement.station == most_active_station
).filter(
    Measurement.date > year_previous.strftime('%Y-%m-%d')
).order_by(
    desc(Measurement.date)
).all()
df_data = []
for row in records:
df_data.append({
'date': row.date,
'temp': row.tobs,
})
# Save the query results as a Pandas DataFrame and set the index to the date column
df_data = pd.DataFrame(df_data)
df_data.index = list(df_data.date)
# Sort the dataframe by date
df_data = df_data.sort_values(by=['date'])
# Use Pandas Plotting with Matplotlib to plot the data as a histogram
plt.hist(df_data.temp, bins=12)
plt.tight_layout()
plt.show()
# -
# # Close session
# Close Session
session.close()
| sql_challenge_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 2: UNIX command basics
# hightemp.txt records the highest temperatures observed in Japan, as a tab-separated file with the fields "prefecture", "location", "℃", and "date". Write programs that perform the following processing, using hightemp.txt as the input file. In addition, perform the same processing with UNIX commands and check the programs' results.
# ### 10. Count lines
#
# Count the number of lines. Use the wc command to check.
# !cat ./data/chapter02/hightemp.txt | wc -l
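# A Python counterpart to problem 10, counting lines lazily without loading the whole file (demonstrated on a two-line stand-in for hightemp.txt, since the real data file is not bundled here):

```python
import tempfile, os

# hypothetical stand-in rows in the hightemp.txt format (prefecture \t location \t ℃ \t date)
lines = ["Kochi\tEkawasaki\t41.0\t2013-08-12\n", "Saitama\tKumagaya\t40.9\t2007-08-16\n"]
with tempfile.NamedTemporaryFile('w', delete=False, suffix='.txt') as f:
    f.writelines(lines)
    path = f.name

# count lines one at a time, like `wc -l`
with open(path) as f:
    n_lines = sum(1 for _ in f)
print(n_lines)
os.remove(path)
```

# Summing over the file iterator keeps memory use constant even for large files.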
# ### 11. Replace tabs with spaces
#
# Replace each tab character with a single space. Use the sed, tr, or expand command to check.
# !sed -e "s/\t/ /g" ./data/chapter02/hightemp.txt
# ### 12. Save the 1st column to col1.txt and the 2nd to col2.txt
#
# Save only the 1st column of each line to col1.txt and only the 2nd column to col2.txt. Use the cut command to check.
# !cut -f 1 ./data/chapter02/hightemp.txt > ./data/chapter02/col1.txt
# !cat ./data/chapter02/col1.txt
# !cut -f 2 ./data/chapter02/hightemp.txt > ./data/chapter02/col2.txt
# !cat ./data/chapter02/col2.txt
# ### 13. Merge col1.txt and col2.txt
#
# Join col1.txt and col2.txt from problem 12 to create a text file with the original 1st and 2nd columns side by side, separated by a tab. Use the paste command to check.
# !paste ./data/chapter02/col1.txt ./data/chapter02/col2.txt
# ### 14. Output the first N lines
#
# Take a natural number N, e.g. as a command-line argument, and display only the first N lines of the input. Use the head command to check.
# !head -n 2 ./data/chapter02/hightemp.txt
# ### 15. Output the last N lines
#
# Take a natural number N, e.g. as a command-line argument, and display only the last N lines of the input. Use the tail command to check.
# !tail -n 2 ./data/chapter02/hightemp.txt
# ### 16. Split a file into N parts
#
# Take a natural number N (e.g., as a command-line argument) and split the input file into N parts by lines. Implement the same processing with the split command.
# !split -l 10 ./data/chapter02/hightemp.txt
# ### 17. Distinct strings in the first column
#
# Find the set of distinct strings (the unique values) in the first column. Use the sort and uniq commands to verify.
# !cat ./data/chapter02/col1.txt | sort | uniq
# ### 18. Sort lines in descending order of the value in the third column
#
# Sort the lines in descending order of the numeric value in the third column (note: do not modify the contents of the lines themselves). Use the sort command to verify (for this exercise, the result does not have to match the command output exactly).
# !cat ./data/chapter02/hightemp.txt | sort -k 3,3nr
# ### 19. Find the frequency of strings in the first column, sorted by frequency, descending
#
# Find how often each string appears in the first column, and display the strings in descending order of frequency. Use the cut, uniq, and sort commands to verify.
#
# !cut -f 1 ./data/chapter02/hightemp.txt | sort | uniq -c | sort -nr
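# The same frequency count in Python via `collections.Counter`, shown here on in-memory sample rows (made up for illustration) rather than the actual data file:

```python
from collections import Counter

# hypothetical tab-separated rows in the hightemp.txt layout
rows = [
    "高知県\t江川崎\t41.0\t2013-08-12",
    "埼玉県\t熊谷\t40.9\t2007-08-16",
    "埼玉県\t越谷\t40.4\t2007-08-16",
]

# equivalent of: cut -f 1 | sort | uniq -c | sort -nr
counts = Counter(line.split("\t")[0] for line in rows)
for name, freq in counts.most_common():
    print(freq, name)
```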
| 2015/chapter02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import igraph as ig
import numpy as np
from sklearn.metrics import adjusted_rand_score as ARI
from sklearn.metrics import normalized_mutual_info_score as NMI
from sklearn.metrics import adjusted_mutual_info_score as AMI
import scipy.stats as ss
import pandas as pd
def community_ecg(self, weights=None, ens_size=32, min_weight=0.05):
    W = [0]*self.ecount()
    ## Ensemble of level-1 Louvain
    for i in range(ens_size):
        p = np.random.permutation(self.vcount()).tolist()
        g = self.permute_vertices(p)
        l = g.community_multilevel(weights=weights, return_levels=True)[0].membership
        b = [l[p[x.tuple[0]]]==l[p[x.tuple[1]]] for x in self.es]
        W = [W[j]+b[j] for j in range(len(W))]
    W = [min_weight + (1-min_weight)*W[j]/ens_size for j in range(len(W))]
    part = self.community_multilevel(weights=W)
    ## Force min_weight outside 2-core
    core = self.shell_index()
    ecore = [min(core[x.tuple[0]],core[x.tuple[1]]) for x in self.es]
    part.W = [W[j] if ecore[j]>1 else min_weight for j in range(len(ecore))]
    return part
ig.Graph.community_ecg = community_ecg
def readGraph(fn, directed=False):
    g = ig.Graph.Read_Ncol(fn+'.edgelist', directed=directed)
    c = np.loadtxt(fn+'.community', dtype='uint8')
    node_base = min([int(x['name']) for x in g.vs]) ## graphs have 1-based or 0-based nodes
    comm_base = min(c) ## same for communities
    comm = [c[int(x['name'])-node_base]-comm_base for x in g.vs]
    g.vs['community'] = comm
    g.vs['shape'] = 'circle'
    pal = ig.RainbowPalette(n=max(comm)+1)
    g.vs['color'] = [pal.get(int(i)) for i in comm]
    g.vs['size'] = 10
    g.es['width'] = 1
    return g
# -
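# A quick dependency-free illustration of the edge-weight blending inside `community_ecg`: each edge's co-membership votes across the ensemble are rescaled into `[min_weight, 1]` (the vote counts below are made up):

```python
ens_size, min_weight = 32, 0.05

# hypothetical counts of how often each edge's endpoints co-occurred
# in a level-1 Louvain partition across the ensemble
votes = [32, 16, 0]

W = [min_weight + (1 - min_weight) * v / ens_size for v in votes]
print(W)  # full agreement maps to 1.0, no agreement to min_weight
```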
def edgeLabels(g, gcomm):
    ## True for edges whose endpoints share a community
    return [(gcomm[e.tuple[0]] == gcomm[e.tuple[1]]) for e in g.es]
def AGRI(g, u, v):
    bu = edgeLabels(g, u)
    bv = edgeLabels(g, v)
    su = np.sum(bu)
    sv = np.sum(bv)
    suv = np.sum(np.array(bu)*np.array(bv))
    m = len(bu)
    return (suv - su*sv/m) / (0.5*(su+sv) - su*sv/m)
    #return suv/(0.5*(su+sv))
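# As a sanity check of the AGRI formula — restated here on a plain edge list so it runs without igraph — comparing a partition with itself should give 1:

```python
def edge_labels(edges, memb):
    # True where both endpoints of an edge fall in the same community
    return [memb[a] == memb[b] for a, b in edges]

def agri(edges, u, v):
    # same adjusted-graph-reduced-index formula as AGRI above
    bu, bv = edge_labels(edges, u), edge_labels(edges, v)
    su, sv = sum(bu), sum(bv)
    suv = sum(x and y for x, y in zip(bu, bv))
    m = len(edges)
    return (suv - su * sv / m) / (0.5 * (su + sv) - su * sv / m)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph
memb = [0, 0, 1, 1]                               # two communities
print(agri(edges, memb, memb))  # → 1.0
```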
# ## ARI, AGRI
## large graph with mu = .48
g = readGraph('Data/LFR8916/lfr8916')
g = g.simplify()
print(1+np.max(g.vs['community']),'communities')
# +
ml = g.community_multilevel(return_levels=True)
l = len(ml)-1
print(1+np.max(ml[0].membership),'communities')
print('level 0 ARI:',ARI(g.vs['community'],ml[0].membership))
print('level 0 AGRI:',AGRI(g,g.vs['community'],ml[0].membership))
print('level 0 NMI:',NMI(g.vs['community'],ml[0].membership))
print('')
print(1+np.max(ml[1].membership),'communities')
print('last level ARI:',ARI(g.vs['community'],ml[l].membership))
print('last level AGRI:',AGRI(g,g.vs['community'],ml[l].membership))
print('last level NMI:',NMI(g.vs['community'],ml[l].membership))
# -
im = g.community_infomap()
print(1+np.max(im.membership),'communities')
print('ARI:',ARI(g.vs['community'],im.membership))
print('AGRI:',AGRI(g,g.vs['community'],im.membership))
print('NMI:',NMI(g.vs['community'],im.membership))
ec = g.community_ecg()
print(1+np.max(ec.membership),'communities')
print('ARI:',ARI(g.vs['community'],ec.membership))
print('AGRI:',AGRI(g,g.vs['community'],ec.membership))
print('NMI:',NMI(g.vs['community'],ec.membership))
lp = g.community_label_propagation() ## highly variable
print(1+np.max(lp.membership),'communities')
print('ARI:',ARI(g.vs['community'],lp.membership))
print('AGRI:',AGRI(g,g.vs['community'],lp.membership))
print('NMI:',NMI(g.vs['community'],lp.membership))
# ## Topological features
## topological measures: scaled density and internal transitivity
#g = readGraph('Data/Football/football')
g = readGraph('Data/LFR8916/lfr8916')
g = g.simplify()
g.vs['ml'] = g.community_multilevel().membership
g.vs['im'] = g.community_infomap().membership
g.vs['ec'] = g.community_ecg().membership
def topo(G, measure='community'):
    sd = [] ## scaled density
    tr = [] ## internal transitivity
    sz = [] ## size
    x = G.vs[measure]
    for i in range(max(x)+1):
        ix = [v for v in G.vs if v[measure]==i]
        g = G.subgraph(ix)
        sd.append(2*g.ecount()/(g.vcount()-1))
        sz.append(g.vcount())
        tr.append(sum(g.transitivity_local_undirected(mode='zero'))/g.vcount())
    return sd, tr, sz
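# The "scaled density" used above is 2·|E|/(|V|−1): a tree-like community scores about 2, while an n-clique scores n. A tiny check of those two extremes, without igraph:

```python
def scaled_density(n_vertices, n_edges):
    # 2|E| / (|V| - 1): ~2 for a tree, n for an n-clique
    return 2 * n_edges / (n_vertices - 1)

n = 6
clique_edges = n * (n - 1) // 2          # 15 edges in K_6
print(scaled_density(n, clique_edges))   # → 6.0
print(scaled_density(n, n - 1))          # tree on 6 vertices → 2.0
```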
sd, tr, sz = topo(g)
sdm, trm, szm = topo(g,'ml')
sde, tre, sze = topo(g,'ec')
sdi, tri, szi = topo(g,'im')
xl = (min(sz+szm+szi+sze)-.1,max(sz+szm+szi+sze)+1)
# +
import matplotlib.pyplot as plt
# %matplotlib inline
fig = plt.figure(1, figsize=(10,8))
yl = (min(tr+trm+tri+tre)-.1,max(tr+trm+tri+tre)+.1)
plt.subplot(221)
plt.semilogx(sz,tr,'o',color='k', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.ylabel('internal transitivity')
plt.title('Ground-truth')
plt.subplot(222)
plt.semilogx(szm,trm,'o',color='g', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.title('Louvain')
plt.subplot(223)
plt.semilogx(sze,tre,'o',color='b', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.ylabel('internal transitivity')
plt.title('ECG')
plt.xlabel('community size')
plt.subplot(224)
plt.semilogx(szi,tri,'o',color='m', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.title('Infomap')
plt.xlabel('community size');
#fig.savefig('foot_transitivity.pdf')
# +
import matplotlib.pyplot as plt
# %matplotlib inline
fig = plt.figure(1, figsize=(10,8))
yl = (min(sd+sdm+sdi+sde)-.3,max(sd+sdm+sdi+sde)+.3)
plt.subplot(221)
plt.semilogx(sz,sd,'o',color='k', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.ylabel('scaled density')
plt.title('Ground-truth')
plt.subplot(222)
plt.semilogx(szm,sdm,'o',color='g', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.title('Louvain')
plt.subplot(223)
plt.semilogx(sze,sde,'o',color='b', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.ylabel('scaled density')
plt.title('ECG')
plt.xlabel('community size')
plt.subplot(224)
plt.semilogx(szi,sdi,'o',color='m', alpha=.5)
plt.xlim(xl)
plt.ylim(yl)
plt.title('Infomap')
plt.xlabel('community size');
#fig.savefig('foot_density.pdf')
| Notebooks/05.Measures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="VCNWYCdXcgrg"
# ## Introduction
#
# In this notebook, we are going to run `TapasForSequenceClassification`, a PyTorch/Transformers implementation of the [Tapas algorithm](https://arxiv.org/abs/2004.02349) by Google AI on the test set of [TabFact](https://github.com/wenhuchen/Table-Fact-Checking), a large dataset for table entailment (which is included in the HuggingFace [datasets library](https://github.com/huggingface/datasets)). In this way, we can verify whether our implementation is consistent with the results reported in the paper.
#
# * Paper (which is a follow-up on the original TAPAS paper): https://arxiv.org/abs/2010.00571
# * Tabfact paper: https://arxiv.org/abs/1909.02164
#
# Note that `TapasForSequenceClassification` is really similar to `BertForSequenceClassification` (i.e. it adds a linear layer on top of the pooled output). The difference between the two is that for TAPAS, you need to use `TapasTokenizer` to prepare table-question pairs for the model instead of regular sequences.
# + [markdown] id="2VB5aw-Exy8H"
# ## Setting up environment
#
# Make sure to set runtime to GPU.
# We install the `transformers` library from source, as well as the soft dependency on `torch-scatter`:
# + id="AhQQH2UyxU0v" colab={"base_uri": "https://localhost:8080/"} outputId="d0be09a1-1f45-4126-b173-7f9dabebe579"
# ! rm -r transformers
# ! git clone https://github.com/huggingface/transformers.git
# ! cd transformers
# ! pip install ./transformers
# + id="lAV142ECxhWQ" colab={"base_uri": "https://localhost:8080/"} outputId="f176af9b-e564-4d7c-b49d-41ed0162c026"
# ! pip install torch-scatter==latest+cu101 -f https://pytorch-geometric.com/whl/torch-1.7.0.html
# + [markdown] id="fU_6FerLp1Lb"
# We also install the datasets library from source:
# + colab={"base_uri": "https://localhost:8080/"} id="u8MelyhHo0fP" outputId="933f3f7f-f96a-4c91-c21a-bedb61cfa919"
# ! rm -r datasets
# ! git clone https://github.com/huggingface/datasets.git
# ! cd datasets
# ! pip install ./datasets
# + [markdown] id="Dv2DFfBMyuke"
# ## Loading the model
#
# Here we load in a base-sized TAPAS model, which was fine-tuned on TabFact, and move it to GPU (if available):
#
#
#
# + id="yd-2M_-hyuJB" colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["9a7e2ff0396b41eea78252e24b75dded", "786d5fd494404ec7b8883177828ef449", "<KEY>", "<KEY>", "3bebeaa123524f7d8a6acfa8ad6dde71", "<KEY>", "<KEY>", "65dc714e12f841eab9ea09d55c3ad4f3", "46c9f0d44632476db16efcadc1098e98", "8e44f13aad8e41d2b56d7099151b81c4", "<KEY>", "87fec1a5c2354ac1a60bb1a82e9416df", "<KEY>", "3e44e3afeece4ca3a2bae5c9e154a8e4", "d9c25d7fed8943c5a9e00fa5739618ce", "4de25fbba4dc46ce846ebfafd3665918"]} outputId="2f59cf4b-7e4c-4807-f3e3-ae63e23d3fdf"
from transformers import TapasForSequenceClassification
import torch
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = TapasForSequenceClassification.from_pretrained("google/tapas-base-finetuned-tabfact")
model.to(device)
# + [markdown] id="IyCbXsO_vio8"
# ## Preparing the data
#
# Here we read in the test set of the TabFact dataset, on which we are going to evaluate the fine-tuned checkpoint.
# + colab={"base_uri": "https://localhost:8080/", "height": 217, "referenced_widgets": ["c9ee6d127fde42b0b23d1d55a5cb107d", "d5a84d1c443f45439d7c80fac9f36096", "<KEY>", "7f5309e5fd62476a82901fa1a6a3b297", "76be0ac52ca34c3aa152d72ca680acb1", "f4ff6e15e6984fdab32dde9dc73abf37", "60551180ab16425ea6768dac53f13d54", "025b57ba86604f3297e4aacf02ca1193", "d8b09a04dd514bd39343725cd09bb9ac", "f8e48211c1c74572a388b0e3d1de6a3c", "<KEY>", "e3e7b6bd45a44804b067ab9be608293d", "59672d4df094485b9b18246cf28fa36b", "16b97960dfef41d5be2bb14ae3220821", "fd05d98b29954ef58f62ad5305f6baf9", "9779d283bbc04c2ea2c1fc3a9dcebac6", "<KEY>", "<KEY>", "ad6e282d61e940db9cbb9bd19e198b85", "5b14df88afee447db232fa2badb5ddf6", "<KEY>", "29a57ac5ca214417988c65e2f1612675", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "6ce4b3827d9f460290cb65cab3effb94", "1bee4a748ca34417974381da605c3a41", "1b2e461c05a9466ba0cd19b0d3a93a2d", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "d4beda687df447758e7ce4cf72b296d6", "f841ead508694af595b8d3571bbde314", "<KEY>", "<KEY>", "<KEY>", "6c81d01ef39147ac9392f194e7056ac8", "<KEY>", "9e71046be526478a81eb13af28989f81", "4e93a29161d348ae845ccefebedd9c8e", "<KEY>", "<KEY>", "de3384d6440f4f14a13a9b57b90d1ef0"]} id="jl_HnBzWpDro" outputId="27d5cc81-100b-4b53-c481-5bc847b9ef08"
from datasets import load_dataset
dataset = load_dataset('tab_fact', 'tab_fact', split='test')
# + colab={"base_uri": "https://localhost:8080/"} id="c0I4tL_UYLAl" outputId="be89b060-7e54-4530-b22b-0b7a7c08afd1"
dataset.column_names
# + [markdown] id="pazQrL7ARr63"
# Each example in the TabFact dataset is a statement about a table, and the label indicates whether the statement is supported (1) or refuted (0) by the contents of the table. So it's a binary classification problem.
#
# Let's visualize an example:
# + colab={"base_uri": "https://localhost:8080/", "height": 246} id="QttqcI1Zthwd" outputId="fc85b074-ce0f-4616-d194-c6220ec08e47"
import pandas as pd
# let's take a random example
example = dataset[0]
id2label = {0: "REFUTES", 1: "SUPPORTS"}
data = example['table_text']
# convert table_text into a Pandas dataframe
table = pd.DataFrame([x.split('#') for x in data.split('\n')[1:-1]], columns=[x for x in data.split('\n')[0].split('#')])
display(table)
print("")
print("Statement:", example['statement'])
print("Label:", id2label[example['label']])
# + [markdown] id="5fS7BR4txZ1m"
# We write the logic to turn the `table_text` column into a Pandas dataframe into a function:
# + id="RG-2KmbRw_dn"
def read_text_as_pandas_table(table_text: str):
    rows = table_text.split('\n')
    table = pd.DataFrame([x.split('#') for x in rows[1:-1]],
                         columns=rows[0].split('#')).fillna('')
    table = table.astype(str)
    return table
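# A quick self-contained check of the parsing convention (header row first, '#' as delimiter, trailing newline dropped). The helper is restated here so the cell runs on its own, assuming only that pandas is installed; the table contents are made up:

```python
import pandas as pd

def read_text_as_pandas_table(table_text: str):
    # same convention as above: first row is the header, '#' separates cells
    rows = table_text.split('\n')
    table = pd.DataFrame([x.split('#') for x in rows[1:-1]],
                         columns=rows[0].split('#')).fillna('')
    return table.astype(str)

t = read_text_as_pandas_table("city#population\nparis#2.1m\ntokyo#14m\n")
print(t.shape)          # → (2, 2)
print(list(t.columns))  # → ['city', 'population']
```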
# + [markdown] id="PLoEnZtUY1mp"
# Let's check if TapasTokenizer can prepare the data correctly:
# + colab={"base_uri": "https://localhost:8080/", "height": 102, "referenced_widgets": ["7e8ff394421f4f0eae05ff88d124aaf1", "0796890d8d2e466baa2f4d47ba1ef118", "<KEY>", "48f95b8cf0ca45aa9068bde4a1dc5e38", "<KEY>", "adb6cc8c58ae47a69de0e84b70e71628", "<KEY>", "0da52abfbeff4bb6a74e7b2b635464c7"]} id="4p-RUk1NM_32" outputId="893e6d84-e24c-4664-8b0a-a137711f2a99"
from transformers import TapasTokenizer
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-tabfact")
# test on a random example
example = dataset[0]
inputs = tokenizer(table=read_text_as_pandas_table(example['table_text']),
                   queries=example['statement'],
                   padding='max_length')
inputs
# + [markdown] id="L9Hf19fZY6qY"
# Now let's use the `.map()` functionality of `datasets` to tokenize the entire test split of the dataset and prepare it for the model. Note that we tokenize each table-question pair independently (we don't set `batched=True`):
# + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["5a2b749b94fc4fd7aa022661248ee062", "<KEY>", "<KEY>", "ad0d36cbdd224fff93b41950ad187eb2", "<KEY>", "<KEY>", "f6626e8d54d743aab888a79982016088", "57eb080ac8bb439bae7022e4c3c6884c"]} id="DafZ9NpQweNX" outputId="a93ed78d-2d8e-4abf-deb7-446b9f959599"
from datasets import Features, Sequence, ClassLabel, Value, Array2D
# we need to define the features ourselves as the token_type_ids of TAPAS are different from those of BERT
features = Features({
    'attention_mask': Sequence(Value(dtype='int64')),
    'id': Value(dtype='int32'),
    'input_ids': Sequence(feature=Value(dtype='int64')),
    'label': ClassLabel(names=['refuted', 'entailed']),
    'statement': Value(dtype='string'),
    'table_caption': Value(dtype='string'),
    'table_id': Value(dtype='string'),
    'table_text': Value(dtype='string'),
    'token_type_ids': Array2D(dtype="int64", shape=(512, 7))
})
test = dataset.map(
    lambda e: tokenizer(table=read_text_as_pandas_table(e['table_text']),
                        queries=e['statement'],
                        truncation=True,
                        padding='max_length'),
    features=features
)
# + [markdown] id="fAfbFhN3ZAtk"
# Let's create a PyTorch dataloader based on this:
# + id="OZhd56Gg-j89"
# map to PyTorch tensors and only keep columns we need
test.set_format(type='torch', columns=['input_ids', 'attention_mask', 'token_type_ids', 'label'])
# create PyTorch dataloader
test_dataloader = torch.utils.data.DataLoader(test, batch_size=4)
# + [markdown] id="a-fjBqOGGboF"
# We can verify whether everything is created correctly, for example by verifying their shapes and decoding the `input_ids` of the first example of the first batch:
# + colab={"base_uri": "https://localhost:8080/", "height": 190} id="bjxLr_NuGh3m" outputId="2edef04b-9f7e-4f4a-a9b5-c1ef9f87b104"
# let's check the first batch
batch = next(iter(test_dataloader))
assert batch["input_ids"].shape == (4, 512)
assert batch["attention_mask"].shape == (4, 512)
assert batch["token_type_ids"].shape == (4, 512, 7)
tokenizer.decode(batch["input_ids"][0])
# + [markdown] id="KZZdJRA0x18e"
# ## Run evaluation
#
# Now we can compute the accuracy of TAPAS on the test set of TabFact! Incredible how easy 🤗 datasets makes this!
#
# Note that this will take a couple of minutes. We set the batch size to only 4, to make sure a single GPU on Google Colab can handle this. You can increase it of course.
# + id="-yu67t2Nx21s" colab={"base_uri": "https://localhost:8080/"} outputId="f7359099-b3e1-4df2-ee9d-9a57a63cae26"
from datasets import load_metric
accuracy = load_metric("accuracy")
print("Starting evaluation...")
number_processed = 0
total = len(test_dataloader) * batch["input_ids"].shape[0]  # number of batches * batch size (an overestimate if the last batch is smaller)
for batch in test_dataloader:
    # get the inputs
    input_ids = batch["input_ids"].to(device)
    attention_mask = batch["attention_mask"].to(device)
    token_type_ids = batch["token_type_ids"].to(device)
    labels = batch["label"].to(device)
    # forward pass
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels)
    model_predictions = outputs.logits.argmax(-1)
    # add metric
    accuracy.add_batch(predictions=model_predictions, references=labels)
    number_processed += batch["input_ids"].shape[0]
    print(f"Processed {number_processed} / {total} examples")
final_score = accuracy.compute()
# + id="ibUQ1P5CaJ2B" colab={"base_uri": "https://localhost:8080/"} outputId="099e1b42-d846-483b-bd99-13c1ba90942f"
print(final_score)
# + id="uupxYz6yApLh"
| TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Advanced Matplotlib Concepts Lecture
#
# In this lecture we cover some more advanced topics that you won't need as often. You can always consult the documentation for more details!
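# The cells below reuse the setup from the earlier Matplotlib lectures, which this excerpt does not repeat. A minimal version (the exact range of `x` is an assumption) is:

```python
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

# x is used throughout the examples below; 0..5 keeps np.exp(x) readable
x = np.linspace(0, 5, 100)
```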
# #### Logarithmic scale
# It is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each axis scale is set separately using the `set_xscale` and `set_yscale` methods, which accept one parameter (the value "log" in this case):
# +
fig, axes = plt.subplots(1, 2, figsize=(10,4))
axes[0].plot(x, x**2, x, np.exp(x))
axes[0].set_title("Normal scale")
axes[1].plot(x, x**2, x, np.exp(x))
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)");
# -
# ### Placement of ticks and custom tick labels
# We can explicitly determine where we want the axis ticks with `set_xticks` and `set_yticks`, which both take a list of values for where on the axis the ticks are to be placed. We can also use the `set_xticklabels` and `set_yticklabels` methods to provide a list of custom text labels for each tick location:
# +
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)
ax.set_xticks([1, 2, 3, 4, 5])
ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=18)
yticks = [0, 50, 100, 150]
ax.set_yticks(yticks)
ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
# -
# There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.
# #### Scientific notation
# With large numbers on axes, it is often better to use scientific notation:
# +
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_title("scientific notation")
ax.set_yticks([0, 50, 100, 150])
from matplotlib import ticker
formatter = ticker.ScalarFormatter(useMathText=True)
formatter.set_scientific(True)
formatter.set_powerlimits((-1,1))
ax.yaxis.set_major_formatter(formatter)
# -
# ### Axis number and axis label spacing
# +
# distance between x and y axis and the numbers on the axes
matplotlib.rcParams['xtick.major.pad'] = 5
matplotlib.rcParams['ytick.major.pad'] = 5
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("label and axis spacing")
# padding between axis label and axis numbers
ax.xaxis.labelpad = 5
ax.yaxis.labelpad = 5
ax.set_xlabel("x")
ax.set_ylabel("y");
# -
# restore defaults
matplotlib.rcParams['xtick.major.pad'] = 3
matplotlib.rcParams['ytick.major.pad'] = 3
# #### Axis position adjustments
# Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using `subplots_adjust`:
# +
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("title")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);
# -
# ### Axis grid
# With the `grid` method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the `plot` function:
# +
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
# -
# ### Axis spines
# We can also change the properties of axis spines:
# +
fig, ax = plt.subplots(figsize=(6,2))
ax.spines['bottom'].set_color('blue')
ax.spines['top'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(2)
# turn off axis spine to the right
ax.spines['right'].set_color("none")
ax.yaxis.tick_left() # only ticks on the left side
# -
# ### Twin axes
# Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the `twinx` and `twiny` functions:
# +
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
    label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
    label.set_color("red")
# -
# ### Axes where x and y is zero
# +
fig, ax = plt.subplots()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0
xx = np.linspace(-0.75, 1., 100)
ax.plot(xx, xx**3);
# -
# ### Other 2D plot styles
# In addition to the regular `plot` method, there are a number of other functions for generating different kinds of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are shown below:
n = np.array([0,1,2,3,4,5])
# +
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
# -
# ### Text annotation
# Annotating text in matplotlib figures can be done using the `text` function. It supports LaTeX formatting just like axis label texts and titles:
# +
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue")
ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
# -
# ### Figures with multiple subplots and insets
# Axes can be added to a matplotlib Figure canvas manually using `fig.add_axes` or using a sub-figure layout manager such as `subplots`, `subplot2grid`, or `gridspec`:
# #### subplots
fig, ax = plt.subplots(2, 3)
fig.tight_layout()
# #### subplot2grid
fig = plt.figure()
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
fig.tight_layout()
# #### gridspec
import matplotlib.gridspec as gridspec
# +
fig = plt.figure()
gs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])
for g in gs:
    ax = fig.add_subplot(g)
fig.tight_layout()
# -
# #### add_axes
# Manually adding axes with `add_axes` is useful for adding insets to figures:
# +
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
fig.tight_layout()
# inset
inset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height
inset_ax.plot(xx, xx**2, xx, xx**3)
inset_ax.set_title('zoom near origin')
# set axis range
inset_ax.set_xlim(-.2, .2)
inset_ax.set_ylim(-.005, .01)
# set axis tick locations
inset_ax.set_yticks([0, 0.005, 0.01])
inset_ax.set_xticks([-0.1,0,.1]);
# -
# ### Colormap and contour figures
# Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
# +
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
    return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)
# -
phi_m = np.linspace(0, 2*np.pi, 100)
phi_p = np.linspace(0, 2*np.pi, 100)
X,Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
# #### pcolor
# +
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
# -
# #### imshow
# +
fig, ax = plt.subplots()
im = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
im.set_interpolation('bilinear')
cb = fig.colorbar(im, ax=ax)
# -
# #### contour
# +
fig, ax = plt.subplots()
cnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
# -
# ## 3D figures
# To use 3D graphics in matplotlib, we first need to create an instance of the `Axes3D` class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a `projection='3d'` keyword argument to the `add_axes` or `add_subplot` methods.
from mpl_toolkits.mplot3d.axes3d import Axes3D
# #### Surface plots
# +
fig = plt.figure(figsize=(14,6))
# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)
# surface_plot with color grading and color bar
ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)
cb = fig.colorbar(p, shrink=0.5)
# -
# #### Wire-frame plot
# +
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
# -
# #### Contour plots with projections
# +
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)
cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm)
ax.set_xlim3d(-np.pi, 2*np.pi);
ax.set_ylim3d(0, 3*np.pi);
ax.set_zlim3d(-np.pi, 2*np.pi);
# -
# ## Further reading
# * http://www.matplotlib.org - The project web page for matplotlib.
# * https://github.com/matplotlib/matplotlib - The source code for matplotlib.
# * http://matplotlib.org/gallery.html - A large gallery showcasing various types of plots matplotlib can create. Highly recommended!
# * http://www.loria.fr/~rougier/teaching/matplotlib - A good matplotlib tutorial.
# * http://scipy-lectures.github.io/matplotlib/matplotlib.html - Another good matplotlib reference.
#
| Topics_Master/05-Data-Visualization-with-Matplotlib/04-Advanced Matplotlib Concepts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# language: python
# name: python3
# ---
# +
from model_vc import Generator
import torch
import matplotlib.pyplot as plt
# from synthesis import build_model, wavegen
from hparams import hparams
from wavenet_vocoder import WaveNet
from wavenet_vocoder.util import is_mulaw_quantize, is_mulaw, is_raw, is_scalar_input
from tqdm import tqdm
import audio
from nnmnkwii import preprocessing as P
import numpy as np
from scipy.io import wavfile
from data_loader import SpecsCombined
# -
# ## Accompaniment Generator
g_accom = Generator(160, 0, 512, 20)
g_accom.load_state_dict(torch.load('model_latest_accom.pth'))
# ## Dataset
dataset = SpecsCombined('~/Data/ts_segments_combined', len_crop=860)
# ## Data Loading
accom_spec, vocals_spec = dataset[0]
accom_spec = torch.from_numpy(accom_spec).unsqueeze(0)
vocals_spec = torch.from_numpy(vocals_spec).unsqueeze(0)
print(accom_spec.shape, vocals_spec.shape)
# ## Accompaniment Latent Vector Generation
accom_vec = g_accom(accom_spec, return_encoder_output=True)
accom_vec.shape
# ## Random Input
x = torch.randn(1, 860, 80)
# x = torch.sin(x)
plt.imshow(x.squeeze(0))
# x_noise = torch.FloatTensor(1, 860, 320).uniform_(-0.06, 0.06)
# plt.imshow(x_noise.squeeze(0))
# ## Real Input
x = np.load('example_vocals-feats.npy')
x = torch.from_numpy(x)
x = x[:860, :].unsqueeze(0)
x.shape
# ## Vocals Network
g_vocals = Generator(160, 0, 512, 20, dim_neck_decoder=320)
g_vocals.load_state_dict(torch.load('model_latest.pth'))
# Encode real input
x = g_vocals(x, return_encoder_output=True)
plt.imshow(x.squeeze(0).detach().numpy())
encoder_outputs = torch.cat((x, accom_vec), dim=-1)
encoder_outputs.shape
# +
mel_outputs = g_vocals.decoder(encoder_outputs)
mel_outputs_postnet = g_vocals.postnet(mel_outputs.transpose(2,1))
mel_outputs_postnet = mel_outputs + mel_outputs_postnet.transpose(2,1)
# -
plt.imshow(mel_outputs_postnet.squeeze(0).squeeze(0).detach().numpy())
# ## WaveNet
# +
def build_model():
    if is_mulaw_quantize(hparams.input_type):
        if hparams.out_channels != hparams.quantize_channels:
            raise RuntimeError(
                "out_channels must equal quantize_channels if input_type is 'mulaw-quantize'")
    if hparams.upsample_conditional_features and hparams.cin_channels < 0:
        s = "Upsample conv layers were specified while local conditioning disabled. "
        s += "Notice that upsample conv layers will never be used."
        print(s)
    upsample_params = hparams.upsample_params
    upsample_params["cin_channels"] = hparams.cin_channels
    upsample_params["cin_pad"] = hparams.cin_pad
    model = WaveNet(
        out_channels=hparams.out_channels,
        layers=hparams.layers,
        stacks=hparams.stacks,
        residual_channels=hparams.residual_channels,
        gate_channels=hparams.gate_channels,
        skip_out_channels=hparams.skip_out_channels,
        cin_channels=hparams.cin_channels,
        gin_channels=hparams.gin_channels,
        n_speakers=hparams.n_speakers,
        dropout=hparams.dropout,
        kernel_size=hparams.kernel_size,
        cin_pad=hparams.cin_pad,
        upsample_conditional_features=hparams.upsample_conditional_features,
        upsample_params=upsample_params,
        scalar_input=is_scalar_input(hparams.input_type),
        output_distribution=hparams.output_distribution,
    )
    return model
def batch_wavegen(model, c=None, g=None, fast=True, tqdm=tqdm):
assert c is not None
B = c.shape[0]
model.eval()
if fast:
model.make_generation_fast_()
# Transform data to GPU
g = None if g is None else g.to(device)
c = None if c is None else c.to(device)
if hparams.upsample_conditional_features:
length = (c.shape[-1] - hparams.cin_pad * 2) * audio.get_hop_size()
else:
        # already duplicated
length = c.shape[-1]
with torch.no_grad():
y_hat = model.incremental_forward(
c=c, g=g, T=length, tqdm=tqdm, softmax=True, quantize=True,
log_scale_min=hparams.log_scale_min)
if is_mulaw_quantize(hparams.input_type):
# needs to be float since mulaw_inv returns in range of [-1, 1]
y_hat = y_hat.max(1)[1].view(B, -1).float().cpu().data.numpy()
for i in range(B):
y_hat[i] = P.inv_mulaw_quantize(y_hat[i], hparams.quantize_channels - 1)
elif is_mulaw(hparams.input_type):
y_hat = y_hat.view(B, -1).cpu().data.numpy()
for i in range(B):
y_hat[i] = P.inv_mulaw(y_hat[i], hparams.quantize_channels - 1)
else:
y_hat = y_hat.view(B, -1).cpu().data.numpy()
if hparams.postprocess is not None and hparams.postprocess not in ["", "none"]:
for i in range(B):
y_hat[i] = getattr(audio, hparams.postprocess)(y_hat[i])
if hparams.global_gain_scale > 0:
for i in range(B):
y_hat[i] /= hparams.global_gain_scale
return y_hat
def to_int16(x):
if x.dtype == np.int16:
return x
assert x.dtype == np.float32
assert x.min() >= -1 and x.max() <= 1.0
return (x * 32767).astype(np.int16)
# -
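# The mu-law inversion helpers above (`P.inv_mulaw`, `P.inv_mulaw_quantize`) undo the companding applied to the waveform before quantization. As a sanity check, the companding transform pair itself can be sketched with NumPy alone (this is a standalone sketch of the formula, not the library's API):

```python
import numpy as np

def mulaw(x, mu=255):
    # mu-law companding: maps [-1, 1] -> [-1, 1], expanding small amplitudes
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def inv_mulaw(y, mu=255):
    # inverse transform: recovers the original amplitude exactly
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1.0, 1.0, 11)
assert np.allclose(inv_mulaw(mulaw(x)), x)
```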
device = torch.device("cuda")
model = build_model().to(device)
checkpoint = torch.load("/wavenet_vocoder/checkpoints/checkpoint_latest_ema.pth")
model.load_state_dict(checkpoint["state_dict"])
# +
# outputs = (mel_outputs_postnet/2) + (accom_spec/2)
# c = outputs.squeeze(0).detach()
c = mel_outputs_postnet.squeeze(0).detach()
# c = accom_spec.squeeze(0).detach()
# Split c into chunks across the 0th dimension
length = c.shape[0]
c = c.T
print(c.shape)
c_chunks = c.reshape(80, length//20, 20)
c_chunks = c_chunks.permute(1, 0, 2)
c = c_chunks
print(c.shape)
# # Resize c to 1, 80, 866
# print(c.shape)
# c = TF.resize(c, (80, 866))
# c = c[:, :, :50]
# print(c.shape)
# Generate
y_hats = batch_wavegen(model, c=c, g=None, fast=True, tqdm=tqdm)
y_hats = torch.from_numpy(y_hats).flatten().unsqueeze(0).numpy()
gen = y_hats[0]
gen = np.clip(gen, -1.0, 1.0)
wavfile.write('test.wav', hparams.sample_rate, to_int16(gen))
# -
| autovc_mod/vocals_synthesis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# **Note**: Click on "*Kernel*" > "*Restart Kernel and Run All*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *after* finishing the exercises to ensure that your solution runs top to bottom *without* any errors. If you cannot run this file on your machine, you may want to open it [in the cloud <img height="12" style="display: inline-block" src="../static/link/to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-data-science/main?urlpath=lab/tree/00_python_in_a_nutshell/06_exercises_volume.ipynb).
# # Chapter 0: Python in a Nutshell (Coding Exercises)
# The exercises below assume that you have read the preceding content sections.
#
# The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell give you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas.
# ## Volume of a Sphere
# The [volume of a sphere <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Sphere) is defined as $\frac{4}{3} * \pi * r^3$.
#
# In **Q2**, you will write a `function` implementing this formula, and in **Q3** and **Q5**, you will execute this `function` with a couple of example inputs.
#
# **Q1**: First, execute the next two code cells that import the `math` module from the [standard library <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/index.html) providing an approximation for $\pi$!
import math
math.pi
# **Q2**: Implement the business logic in the `sphere_volume()` function below according to the specifications in the **docstring**!
#
# Hints:
# - `sphere_volume()` takes a mandatory `radius` input and an optional `ndigits` input (defaulting to `5`)
# - Because `math.pi` is constant, it may be used within `sphere_volume()` *without* being an official input
# - The volume is returned as a so-called `float`ing-point number due to the rounding with the built-in [round() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#round) function
# - You may either write your solution as one big expression (where the `...` are) or introduce an intermediate step holding the result before rounding (then, one more line of code is needed above the `return ...` one)
def sphere_volume(radius, ndigits=5):
"""Calculate the volume of a sphere.
Args:
radius (int or float): radius of the sphere
ndigits (optional, int): number of digits
when rounding the resulting volume
Returns:
volume (float)
"""
return ...
# **Q3**: Execute the function with `radius = 100.0` and 1, 5, 10, 15, and 20 as `ndigits` respectively.
radius = 100.0
sphere_volume(...)
sphere_volume(...)
sphere_volume(...)
sphere_volume(...)
sphere_volume(...)
# **Q4**: What observation do you make?
# < your answer >
# **Q5**: Using the [range() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-range) built-in, write a `for`-loop and calculate the volume of a sphere with `radius = 42.0` for all `ndigits` from `1` through `20`!
#
# Hint: You need to use the built-in [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function to make the return values visible
radius = 42.0
for ... in ...:
    ...
# **Q6**: What lesson do you learn about `float`ing-point numbers?
# < your answer >
# With the [round() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#round) function, we can see another technicality of the `float`ing-point standard: `float`s are *inherently* imprecise!
#
# **Q7**: Execute the following code cells to see a "weird" output! What could be the reasoning behind rounding this way?
round(1.5)
round(2.5)
round(3.5)
round(4.5)
| 00_python_in_a_nutshell/06_exercises_volume.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Materials for <i>zufall</i>
#
# Author: <NAME> - <EMAIL>
#
# ## Exercises 3 - Urn Experiments (2)
# <br>
# <i>The exercises were taken from
#
# <NAME><br>
# Wahrscheinlichkeitsrechnung und Statistik<br>
# Grundkurs<br>
# Stark Verlag 1997<br>
#
# Exercises 19, 20, 22
# <br>
# ### Exercise 19
# An electronics hobbyist stands puzzled in front of a box of twelve identical-looking<br>
# capacitors, of which he only knows that ten are working (1) and two are<br>
# defective (0). He draws two capacitors without replacement.
# <br>
# <br>
# a) Draw a tree diagram for this random experiment, determine the corresponding<br>
# sample space $\Omega\,$ and state how many elements the associated event<br>
# space $P(\Omega)\,$ has
# <br>
# <br>
# b) Then determine the probabilities of the elementary events as well as<br>
# of the events
# <br>
# <br>
# $A$ = "At most one capacitor is defective" <br>
# $B$ = "Exactly one capacitor is defective"
# <br>
# <br>
# c) Express the event $C = \overline{A} \cup \overline{B}\,$ in words and determine the<br>
# probability $P(C)$
# <br>
# <br>
# d) Four capacitors are drawn without replacement. What is the probability<br>
# that the first and the fourth capacitor are the two defective ones?
# <br><br>
# %run zufall/start
# <br>
# The underlying urn model is
u = Urne( { 1:10, 0:2 }, 2, w=ohne )
u
# ### Part a)
u.baum
# $\Omega$, the number of its elements, and the size of the power set $P(\Omega)$:
u.omega, u.n_omega, 2**4
# ### Part b)
u.vert # probability distribution
A = { *symbols('11, 10, 01') } # symbols have to be given here
# * unpacks the tuple
u.P(A), u.P(A, d=4)
# +
B = { *symbols('10, 01') }
u.P(B), u.P(B, d=4)
# -
# ### Part c)
ea = EA(u.omega)
C = ea.berechnen(A, B, 'mindestens A oder B')
C
# The result is "Both are working or both are defective"
u.P(C), u.P(C, d=4)
# ### Part d)
u1 = Urne( { 1:10, 0:2 }, 4, w=ohne ); u1
D = Symbol('0110'); u1.P(D), u1.P(D, d=4)
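# Independently of the `zufall` library, the result of part d) can be cross-checked with the standard library by multiplying the conditional probabilities along the path 0,1,1,0 (defective, working, working, defective):

```python
from fractions import Fraction

# draw without replacement from 10 working (1) and 2 defective (0) capacitors;
# path "0110": the first and fourth drawn are the two defective ones
p = Fraction(2, 12) * Fraction(10, 11) * Fraction(9, 10) * Fraction(1, 9)
print(p)  # 1/66
```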
# ### Exercise 20
# An opaque container holds ten screws, eight black ($s$) and two gold-colored ($g$).<br>
# The screws differ only in color. A hobbyist draws a screw three times in<br>
# succession and places it on his workbench.
# <br>
# <br>
# a) Make a tree diagram and compute the probabilities of the<br>
# events
# <br>
# <br>
# $A$ = "At most one gold screw is among them" <br>
# $B$ = "The second screw is gold-colored" <br>
# $C$ = "The first or the third screw is gold-colored"
# <br><br>
# b) Express $\,\overline{C}\,$ in words and check the events $B$ and $\overline{C}\,$ for mutual<br>
# exclusivity
# <br>
# <br>
# c) What is the probability of obtaining the event {ssgsgs} when drawing six<br>
# times without replacement?
# <br>
# The underlying urn model is
u = Urne( { s:8, g:2 }, 3, w=ohne )
u
# ### Part a)
u.baum
A = { *symbols('sss, gss, sgs, ssg') }; A, u.P(A)
B = { *symbols('sgs, ggs, sgg') }; B, u.P(B)
C = { *symbols('ggs,gss,sgg,ssg,gsg') }; C, u.P(C)
# ### Part b)
ea = EA(u.omega)
nichtC = ea.berechnen(C, C, 'nicht B') # note how the events are named
nichtC # when applying the method
# The events are not mutually exclusive
ea.berechnen(B, C, 'A und nicht B') # likewise
# ### Part c)
u6 = Urne( { s:8, g:2 }, 6, w=ohne )
u6
u6.P('ssgsgs')
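# The same probability can be cross-checked without the library: the chance of one specific sequence when drawing without replacement is the product of the conditional probabilities along that sequence (a standalone sketch):

```python
from fractions import Fraction

def sequence_prob(seq, counts):
    # probability of drawing exactly this sequence without replacement
    counts = dict(counts)
    p = Fraction(1)
    for item in seq:
        p *= Fraction(counts[item], sum(counts.values()))
        counts[item] -= 1
    return p

p = sequence_prob("ssgsgs", {"s": 8, "g": 2})
print(p)  # 1/45
```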
# ### Exercise 22
# A raffle drum still contains 50 tickets with two grand prizes ($H$),<br>
# eight small prizes ($K$) and 40 blanks ($N$). Käthe buys two tickets
# <br>
# <br>
# a) Draw a tree diagram and determine the probabilities<br>
# of the events
# <br>
# <br>
# $A$ = "Exactly one small prize" and <br>
# $B$ = "At least one grand prize"
# <br><br>
# b) What is the probability of drawing a grand prize with the 2nd ticket?
# <br>
# <br>
# c) Inge buys five tickets and opens them one after another. What is the<br>
# probability of the elementary event $NKNNH$?<br>
# The underlying urn model is
N = Symbol('N') # the symbol N was already bound to a SymPy function
H, K, N # 3 symbols
u = Urne( { H:2, K:8, N:40 }, 2, w=ohne )
# ### Part a)
u.baum
A = { *symbols('KH, KN, HK, NK') }
B = { *symbols('HH, HK, HN, KH, NH') }
A, B
u.P(A), u.P(A, d=4)
u.P(B), u.P(B, d=4)
# ### Part b)
u.P( ['HH', 'KH', 'NH'] ), u.P( ['HH', 'KH', 'NH'], d=4 )
# ### Part c)
u5 = Urne( { H:2, K:8, N:40 }, 5, w=ohne )
u5.P('NKNNH', d=4)
| zufall/mat/aufgaben3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# name: python38264bit4f9ab20b88dc4e66a5f8689d42e95368
# ---
# # Download SARS-CoV-2 Sequences
# This notebook downloads up to 1000 RNA sequences from NCBI for the novel coronavirus (COVID-19).
#
# ## Dependencies
# Biopython
from Bio import Entrez
from Bio import SeqIO
# ### Download seqs and count how many are downloaded
# + tags=["outputPrepend"]
Entrez.email = "<EMAIL>"
with Entrez.esearch(
db="nucleotide", term="SARS-CoV-2", idtype="acc", retmax=1000
) as result_handle:
sequence_ids_combined = ",".join(Entrez.read(result_handle)["IdList"])
with Entrez.efetch(
db="nucleotide", rettype="fasta", retmode="text", id=sequence_ids_combined
) as sub_result_handle:
parser = SeqIO.parse(sub_result_handle, "fasta")
seqs = list(parser)
num_seqs = len(seqs)
SeqIO.write(seqs, "covid19.fasta", "fasta")
# -
# ### Print number of seqs downloaded
print(num_seqs)
# + tags=["outputPrepend"]
Entrez.email = "<EMAIL>"
with Entrez.esearch(
db="nucleotide", term="SARS-CoV-2", idtype="acc", retmax=1000
) as result_handle:
sequence_ids_combined = ",".join(Entrez.read(result_handle)["IdList"])
with Entrez.efetch(
db="nucleotide", retmode="xml", id=sequence_ids_combined
) as sub_result_handle:
for rec in Entrez.read(sub_result_handle):
print(f"Date: {rec['GBSeq_update-date']}")
print(f"Accession ID: {rec['GBSeq_primary-accession']}")
# -
| ex4-gouldru.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
dataset=pd.read_csv("Position_Salaries.csv")
X=dataset.iloc[:,1:-1].values
Y=dataset.iloc[:,-1].values
from sklearn.tree import DecisionTreeRegressor
decRegressor=DecisionTreeRegressor(random_state=0)
decRegressor.fit(X,Y)
X_grid=np.arange(min(X),max(X),0.1)
X_grid=X_grid.reshape(len(X_grid),1)
plt.scatter(X, Y, color="red")
plt.plot(X_grid, decRegressor.predict(X_grid))
plt.title("Decision Tree Regression")
plt.xlabel("Position level")
plt.ylabel("Salary")
plt.show()
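# Note that an unrestricted `DecisionTreeRegressor` grows until every leaf is pure, so with distinct feature values it reproduces each training target exactly, which is why the step curve passes through every red point. A minimal sketch on synthetic data (assumed inputs, not the Position_Salaries file):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 10, 20)).reshape(-1, 1)  # distinct feature values
y = np.sin(X).ravel()

reg = DecisionTreeRegressor(random_state=0).fit(X, y)
# an unrestricted tree fits each distinct training point exactly
assert np.allclose(reg.predict(X), y)
```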
| Python/Machine_Learning/Regression/Decision Tree Regression/decisionTreeRegression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="1X6ddOR8HFsX" colab_type="text"
#
# # **Assignment - 2: Basic Data Understanding**
#
# ---
#
# This assignment will get you familiarized with Python libraries and functions required for data visualization.
# + [markdown] id="XRd4EfXN5fQb" colab_type="text"
# ---
# ## Part 1 - Loading data
# ---
# + [markdown] id="Q7W6I-fVIoqp" colab_type="text"
# ### Import the following libraries:
#
# * ```numpy``` with an alias name ```np```,
# * ```pandas``` with an alias name ```pd```,
# * ```matplotlib.pyplot``` with an alias name ```plt```, and
# * ```seaborn``` with an alias name ```sns```.
# + id="NihF3MwIGI4m" colab_type="code" outputId="0b2eb70c-6118-48ba-880d-4078c96251c0" colab={"base_uri": "https://localhost:8080/", "height": 71}
# Load the four libraries with their aliases
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] id="dsLj-TSQJgMb" colab_type="text"
# ### Using the files ```train.csv``` and ```moviesData.csv```, perform the following:
#
# * Load these file as ```pandas``` dataframes and store it in variables named ```df``` and ```movies``` respectively.
# * Print the first ten rows of ```df```.
#
#
# + id="3AjvT40AGIq6" colab_type="code" colab={}
# Load the file as a dataframe
df = pd.read_csv("train.csv")
movies = pd.read_csv("moviesData.csv")
# + id="7eCo7WlMGMkn" colab_type="code" outputId="2b518ad6-a85c-4bdc-f16a-da82cade3e51" colab={"base_uri": "https://localhost:8080/", "height": 549}
# Print the first ten rows of df
df.head(10)
# + [markdown] id="PSm-U7LEF5u_" colab_type="text"
# ### Using the dataframe ```df```, perform the following:
#
# * Print the first five rows of the column ```MonthlyRate```.
# * Find out the details of the column ```MonthlyRate``` like mean, maximum value, minimum value, etc.
# + id="QS5LttI-GT2f" colab_type="code" outputId="7d10fa63-0ef9-49d7-9d4b-416f32d7ed13" colab={"base_uri": "https://localhost:8080/", "height": 119}
# Print the first five rows of MonthlyRate
df['MonthlyRate'].head(5)
# + id="EzQ1a1M0GURm" colab_type="code" outputId="ab71fdb8-ca8b-408d-dc60-4d02f8d18c2d" colab={"base_uri": "https://localhost:8080/", "height": 317}
# Find the details of MonthlyRate
df.MonthlyRate.describe()
# + [markdown] id="3h-YOTvPQI48" colab_type="text"
# ---
# ## Part 2 - Cleaning and manipulating data
# ---
# + [markdown] id="EtLzBibsQfXu" colab_type="text"
# ### Using the dataframe ```df```, perform the following:
#
# * Check whether there are any missing values in ```df```.
# * If yes, drop those values and print the size of ```df``` after dropping these.
# + id="XG-UK53fRDRZ" colab_type="code" outputId="ceb2594f-5788-44b8-88b9-eaeded3b6615" colab={"base_uri": "https://localhost:8080/", "height": 612}
# Check for missing values
print(df.isna().sum())
# + id="kPFwuRE29QZO" colab_type="code" outputId="e0a9ad3f-5f86-48ea-ea9d-713f6d949ca7" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Drop the missing values
# Print the size of df after dropping
df = df.dropna()
df.shape
# + [markdown] id="qtoGcl2XRWnS" colab_type="text"
# ### Using the dataframe ```df```, perform the following:
#
# * Add another column named ```MonthRateNew``` in ```df``` by subtracting the mean from ```MonthlyRate``` and dividing it by standard deviation.
# + id="zDMncSUKR12P" colab_type="code" colab={}
# Add a column named MonthRateNew
mean = df['MonthlyRate'].mean()
std_dev = df['MonthlyRate'].std()
df['MonthRateNew'] = (df['MonthlyRate'] - mean) / std_dev
# + id="VI1tw_-j-kx5" colab_type="code" outputId="0536c0ba-1489-45a6-c54e-c05b2f6a02b1" colab={"base_uri": "https://localhost:8080/", "height": 119}
df['MonthRateNew'].head(5)
# + id="wkfydPly_8th" colab_type="code" outputId="93a64f9d-dc49-48c0-d992-f1f9be7fc27c" colab={"base_uri": "https://localhost:8080/", "height": 119}
df["MonthlyRate"].head()
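# The z-scoring above can be sanity-checked: after subtracting the mean and dividing by the standard deviation, any column has mean approximately 0 and standard deviation approximately 1. A sketch on synthetic data (the distribution parameters are made up for illustration):

```python
import numpy as np
import pandas as pd

s = pd.Series(np.random.RandomState(42).normal(loc=50, scale=7, size=1000))
z = (s - s.mean()) / s.std()  # same transform as MonthRateNew above
assert abs(z.mean()) < 1e-9
assert abs(z.std() - 1.0) < 1e-9
```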
# + [markdown] id="pcbN7jep13og" colab_type="text"
# ### Using the dataframe ```movies```, perform the following:
#
# * Check whether there are any missing values in ```movies```.
# * Find out the number of observations/rows having any of their features/columns missing.
# * Drop the missing values and print the size of ```movies``` after dropping these.
# * Instead of dropping the missing values, replace the missing values by their mean (or some suitable value).
#
# + id="DNszBx8A14ai" colab_type="code" outputId="36d7a402-eae9-400d-ae02-c045bc16eb91" colab={"base_uri": "https://localhost:8080/", "height": 611}
# Check for missing values
movies.isna().sum()
print(movies.shape)
print(movies.isna().sum())
# + id="AeuI8ZiDwPDs" colab_type="code" outputId="163f4462-4582-43d9-ec61-0299ad651604" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Drop the missing values
movies2 = movies.dropna()
print(movies2.shape)
# + id="hcHwbEYax5mK" colab_type="code" outputId="78994fed-85c4-4676-c066-de6d6c42b832" colab={"base_uri": "https://localhost:8080/", "height": 151}
movies.loc[movies['runtime'].isnull()]
# + id="3O6tTxFXyT3w" colab_type="code" outputId="2b00e5cd-3149-41fc-9c88-0b9d8e974740" colab={"base_uri": "https://localhost:8080/", "height": 35}
np.mean(movies['runtime'])
# + id="BTnxjQREwXsX" colab_type="code" outputId="df979e29-21f3-4012-854e-ca47c4d69eee" colab={"base_uri": "https://localhost:8080/", "height": 561}
# Replace the missing values
# You can use SimpleImputer of sklearn for this
# Importing the SimpleImputer class
# https://jamesrledoux.com/code/imputation
movies['runtime'] = movies['runtime'].fillna(movies['runtime'].mean())
print(movies.isna().sum())
# + id="SR6OlcUCycSV" colab_type="code" outputId="cbab0009-7827-472e-9278-ed807141e3e4" colab={"base_uri": "https://localhost:8080/", "height": 34}
movies.runtime.iloc[311]
# + id="lWtGToRdwxMX" colab_type="code" outputId="c63cb0f0-2634-48a9-beac-eaeb79ee8f5f" colab={"base_uri": "https://localhost:8080/", "height": 561}
from sklearn.impute import SimpleImputer
imp = SimpleImputer(missing_values=np.nan, strategy="mean")
movies["runtime"] = imp.fit_transform(movies[["runtime"]]).ravel()
print(movies.isna().sum())
# + [markdown] id="qDr11sjlBk_W" colab_type="text"
# ---
# ## Part 3 - Visualizing data
# ---
# + [markdown] id="wC_w1zNCBw2G" colab_type="text"
# ### Visualize the ```df``` by drawing the following plots:
#
# * Plot a histogram of ```Age``` and find the range in which most people are there.
# * Modify the histogram of ```Age``` by adding 30 bins.
# * Draw a scatter plot between ```Age``` and ```Attrition``` and suitable labels to the axes. Find out whether people more than 50 years are more likely to leave the company. (```Attrition``` = 1 means people have left the company).
# + id="645EwAsoH63i" colab_type="code" outputId="bb68dad0-d910-4684-a871-f94828817e29" colab={"base_uri": "https://localhost:8080/", "height": 319}
# Plot and modify the histogram of Age
plt.hist(df.Age)
# df.hist(column='Age', bins=30, figsize=(12,8), color='#86bf91')
# + id="b4d-Lo9nC2vA" colab_type="code" outputId="3710299d-9f93-446c-cf0b-761a1f625ba0" colab={"base_uri": "https://localhost:8080/", "height": 281}
# Draw a scatter plot between Age and Attrition
plt.scatter(df.Age, df.Attrition, c='blue')
plt.xlabel('Age')
plt.ylabel('Attrition')
plt.title('Scatter plot of Age vs. Attrition')
plt.show()
# + [markdown] id="eez_pkZ-HDKP" colab_type="text"
# ### Visualize the ```df``` by following the steps given below:
#
# * Get a series containing counts of unique values of ```Attrition```.
# * Draw a countplot for ```Attrition``` using ```sns.countplot()```.
# + id="Tp8LnxLWIPfk" colab_type="code" outputId="e24a559a-c1d3-4922-b2ea-ed8147d7eb9e" colab={"base_uri": "https://localhost:8080/", "height": 71}
# Get a series of counts of values of Attrition
df['Attrition'].value_counts()
# + id="xOoHCB16HIqs" colab_type="code" outputId="e937796a-28f8-4cfc-b631-cc7287ab988a" colab={"base_uri": "https://localhost:8080/", "height": 283}
# Draw a countplot for Attrition
# You may use countplot of seaborn for this
sns.countplot(x = 'Attrition', data = df)
plt.ylim(0, 1000)
plt.show()
# + [markdown] id="Vi01m9MBHaJD" colab_type="text"
# #### Visualize the ```df``` by following the steps given below:
#
# * Draw a cross tabulation of ```Attrition``` and ```BusinessTravel``` as bar charts. Find which value of ```BusinessTravel``` has highest number of people.
# + id="5OHAeOqeIQVM" colab_type="code" outputId="9192508d-5628-48ae-be91-01d310e99a78" colab={"base_uri": "https://localhost:8080/", "height": 375}
# Draw a cross tab of Attrition and BusinessTravel
# You may use crosstab of pandas for this
# Cross tabulation is a method to quantitatively analyze the relationship between multiple variables.
# Also known as contingency tables or cross tabs, cross tabulation groups variables to understand the correlation between different variables.
# It also shows how correlations change from one variable grouping to another.
pd.crosstab(df.BusinessTravel,df.Attrition).plot(kind='bar')
plt.ylabel('Number of Attrition')
# + [markdown] id="1FCc9ZkUHfqL" colab_type="text"
# ### Visualize the ```df``` by drawing the following plot:
#
# * Draw a stacked bar chart between ```Attrition``` and ```Gender``` columns.
# + id="58ELSAXBIQST" colab_type="code" outputId="1a4db461-34ba-4869-d69b-2b3147341a52" colab={"base_uri": "https://localhost:8080/", "height": 323}
# Draw a stacked bar chart between Attrition and Gender
table=pd.crosstab(df.Gender,df.Attrition)
table.plot(kind='bar', stacked=True)
plt.ylabel('Number of Attrition')
# + id="gXod1W0kKb6W" colab_type="code" outputId="19e3e677-2519-4270-a062-567d217bbf5e" colab={"base_uri": "https://localhost:8080/", "height": 323}
table=pd.crosstab(df.Gender,df.Attrition)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.ylabel('Proportion of Attrition')
# + [markdown] id="vZzsi8_QLEdq" colab_type="text"
# ### Visualize the ```df``` by drawing the following histogram:
#
# * Draw a histogram of ```TotalWorkingYears``` with 30 bins.
# * Draw a histogram of ```YearsAtCompany``` with 30 bins and find whether the values in ```YearsAtCompany``` are skewed.
# + id="zvSlfj3FLUvV" colab_type="code" outputId="8b76a42f-3786-4bbd-a8c5-ac2bb9a3c85e" colab={"base_uri": "https://localhost:8080/", "height": 317}
# Draw a histogram of TotalWorkingYears with 30 bins
df.hist(column='TotalWorkingYears', bins=30)
# np.mean(df.TotalWorkingYears)
# np.median(df.TotalWorkingYears)
# + id="IrClFWD0LI_k" colab_type="code" outputId="14662765-921b-429b-e845-568774830d03" colab={"base_uri": "https://localhost:8080/", "height": 317}
# Draw a histogram of YearsAtCompany
# Default value = 10 bins
df.hist(column='YearsAtCompany', bins=30)
# + [markdown] id="oBcsFqz-Moja" colab_type="text"
# ### Visualize the ```df``` by drawing the following boxplot:
#
# * Draw a boxplot of ```MonthlyIncome``` for each ```Department``` and report whether there is/are outlier(s).
#
# + id="S6zsSE65NED9" colab_type="code" outputId="3a1a2058-6e75-4806-dc6d-446ed3255397" colab={"base_uri": "https://localhost:8080/", "height": 297}
# Draw a boxplot of MonthlyIncome for each Department and report outliers
sns.boxplot(x='Department', y='MonthlyIncome', data=df)
# + [markdown] id="NRPgJjp-NX07" colab_type="text"
# ### Visualize the ```df``` by drawing the following piechart:
#
# * Create a pie chart of the values in ```JobRole``` with suitable label and report which role has highest number of persons.
# + id="5pr2HVVGNlaV" colab_type="code" outputId="fbb6dfb5-c9a6-48ba-8502-3cb92a031824" colab={"base_uri": "https://localhost:8080/", "height": 187}
# Create a piechart of JobRole
# You will need to find the counts of unique values in JobRole.
Role_counts = df.JobRole.value_counts()
print(Role_counts)
# + id="OwIfSvXkMI86" colab_type="code" outputId="089dab58-4c45-41c3-9110-aad086f0813c" colab={"base_uri": "https://localhost:8080/", "height": 248}
plt.pie(Role_counts, labels=Role_counts.index.tolist())
plt.show()
# + id="j9xJoEQlCLmE" colab_type="code" colab={}
| 02-Assignment/Solution/Solution_Assignment_2_DS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "skip"}
### Requirements: PyDotPlus, Matplotlib, Scikit-Learn, Pandas, Numpy, IPython (and possibly GraphViz)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.impute import SimpleImputer
from sklearn import preprocessing
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import BaggingClassifier
from sklearn import tree
import sklearn
import sklearn.metrics as skm
from scipy import misc
from io import StringIO
import pydotplus
from IPython.display import Image, YouTubeVideo
def visualize_tree(tree, feature_names, class_names):
dot_data = StringIO()
sklearn.tree.export_graphviz(tree, out_file=dot_data,
filled=True, rounded=True,
feature_names=feature_names,
class_names=class_names,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
return graph.create_png()
# + [markdown] slideshow={"slide_type": "slide"}
# # EECS 445: Machine Learning
# ## Lecture 12: Bagging and Boosting
# - Instructor: **<NAME>**
# - Date: October 19, 2016
#
# Lecture Exposition Credit: <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Announcements
#
# - **Midterm** in class on Monday, October 24
# - Come to the section in which *you are enrolled*
# - You can bring notes, no more than 3 pieces of paper (double sided)
# - Bring an ID. No calculators allowed
# - Topics: lectures 1-11, see Piazza post for details
# - Note: **No Lecture on Wednesday October 26**
# + [markdown] slideshow={"slide_type": "slide"}
# ## Quick review of information gain and decision trees
# + [markdown] slideshow={"slide_type": "slide"}
# ### Metrics: Information Gain (Mutual Information)
# #### Used by the ID3, C4.5 and C5.0 tree-generation algorithms.
#
# Assume the true binary labels $\{y_i : i =1\ldots m\}$ are distributed according to $P(y)$. But when we observe the value of a decision stump $A = T\text{ or }F$, then we obtain two new distributions, $P(y \mid A = T)$ and $P(y \mid A = F)$. We use *information gain* to measure how much the distribution of $y$ changes when we observe $A$.
#
# \begin{align*}
# \text{Information Gain } & = \text{ Entropy(Parent) - } \text{ Weighted Sum of Entropy(Children)} \\
# & = IG(P,A) = H(P) - H(P|A) \\
# & = H(P) - \sum_{A = T,F} Pr(A) H(P(\cdot | A))
# \end{align*}
#
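# The weighted-sum formula above can be made concrete with a small helper (a sketch, not part of the lecture's codebase):

```python
import math

def entropy(pos, neg):
    """Binary entropy (in bits) of a node holding `pos` positive and `neg` negative samples."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def information_gain(parent, children):
    """parent, children: (pos, neg) count tuples; the children partition the parent."""
    n = sum(parent)
    return entropy(*parent) - sum((sum(ch) / n) * entropy(*ch) for ch in children)

# a perfectly informative split removes all uncertainty: IG = 1 bit
print(information_gain((8, 8), [(8, 0), (0, 8)]))  # 1.0
```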
# + [markdown] slideshow={"slide_type": "slide"}
# ### A question to illustrate the Information Gain metric
#
# Note:
# - The $[x+, y-]$ indicate the number of samples belonging to the two classes, say positive and negative.
# - The topmost one denotes the number of positive and negative samples in the dataset before "any partitioning."
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## Good Notes on Information Gain and DTs
#
# [See these notes from CMU](http://www.cs.cmu.edu/~awm/10701/slides/DTreesAndOverfitting-9-13-05.pdf)
# + [markdown] slideshow={"slide_type": "slide"}
# ### So, what is Information Gain?
#
# - Intuitively, Information Gain captures:
# - The mutual information that there is between an attribute and the class labels, or, equivalently,
# - The reduction in entropy gained by observing an attribute.
# - Another interesting note:
# - Mutual information (i.e., Information Gain) and KL-divergence are connected: $IG(X, Y) = D_{KL}(p(x, y) \mid \mid p(x)p(y))$.
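# The identity $IG(X, Y) = D_{KL}(p(x, y) \mid \mid p(x)p(y))$ can be checked numerically on any small joint distribution (the numbers below are made up for illustration):

```python
import math

def H(probs):
    # Shannon entropy in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

# assumed toy joint distribution p(x, y) over x, y in {0, 1}
pxy = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
px = {x: pxy[(x, 0)] + pxy[(x, 1)] for x in (0, 1)}
py = {y: pxy[(0, y)] + pxy[(1, y)] for y in (0, 1)}

# information gain as entropy reduction: IG = H(Y) - H(Y | X)
h_y_given_x = sum(px[x] * H([pxy[(x, y)] / px[x] for y in (0, 1)]) for x in (0, 1))
ig = H(py.values()) - h_y_given_x

# KL divergence between the joint and the product of the marginals
kl = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

assert math.isclose(ig, kl)
```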
# + [markdown] slideshow={"slide_type": "slide"}
# ### Alternative: Misclassification Error
#
# If we think of a node $A$ as simply "guessing" $A(x)$ for the label $y$ of $x$, then the Misclassification Error (ME) is essentially the probability $P(A(x) \neq y)$
#
# $$\text{ME}(A) = \frac{\sum_{i=1}^N \mathbb{I}[ A(x_i) \neq y_i ] }{N}$$
#
# - Often, we imagine that the misclassification error of a node is either the error rate of $A$ **or** the error rate of $\neg A$. We might call this $\text{ME}^*$.
#
# $$\text{ME}^*(A) = \min(\text{ME}(A), \text{ME}(\neg A))$$
#
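# Both quantities are straightforward to compute directly (a sketch with made-up stump outputs and labels):

```python
def me(predictions, labels):
    # fraction of examples the stump gets wrong
    return sum(p != y for p, y in zip(predictions, labels)) / len(labels)

def me_star(predictions, labels):
    # error of the better of A and its negation (not A)
    negated = [not p for p in predictions]
    return min(me(predictions, labels), me(negated, labels))

A = [True, True, False, False, True]   # stump outputs (toy data)
y = [False, True, False, True, True]   # true labels (toy data)
print(me(A, y))       # 0.4
print(me_star(A, y))  # 0.4
```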
# + [markdown] slideshow={"slide_type": "slide"}
# ### Generalizations of Decision Trees (1)
# - Decision Trees in their simplest form involve:
# - A 2-class scenario, with,
# - Binary Features, and use a,
# - Binary Tree (every node has at most 2 children).
#
# However, generalizations are possible.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Generalizations of Decision Trees (2)
# - Categorical Features
# - Can use $n$-ary trees rather than binary search trees
# - Fruits Tree Example: Color? asked at the root had three choices: Red, Yellow or Green (Ternary Tree)
# - Can use questions such as "$x_i = l$?" or even "$2 \leq x_i \leq l$?", where $x_i \in \{1, ..., l, ..., K\}$
# - The Mushroom Classification Tree was an example of this (Binary Tree with comparative rather than only equality checking conditions).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Generalizations of Decision Trees (3)
# - Categorical Output/Multiclass Scenario (the Fruits Example)
# - One way to handle categorical outputs is to pose a question such as "Is it a bird?" when "Yes" is expected. This keeps the outputs binary.
# - Another way is to simply use a **one-hot encoding** for the output (Ex: Bird corresponds to $[0, 0, 0, 0, 0, 1, 0]$)
# - Real-Valued Response (Output). Decision Trees are typically designed for binary problems, i.e. classification. But they can be used for regression! However, applying DTs to regression usually involves discretizing the output space in some way.
# - [More on DTs for classification and regression](http://machinelearningmastery.com/classification-and-regression-trees-for-machine-learning/)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Decision Tree Computation Limitations
#
# Decision Trees in general perform well with lots of data, are robust to violations of assumptions, and probably most strikingly are easy to understand and interpret. However:
# - The problem of Learning an optimal Decision Trees is NP-Complete under several definitions of optimal.
#     - Standard algorithms are "greedy": they make myopic decisions that may not be globally optimal.
# - There are concepts that are hard to learn using Decision Trees (and that are generally hard for linear classifiers as well), because the Decision Tree learned would be prohibitively large. These include "toy" problems such as:
# - XOR, Parity or Multiplexer based Problems
# + [markdown] slideshow={"slide_type": "slide"}
# ## Decision Tree overfitting
#
# With a decision tree it is easy to overfit your data!
# <img src="images/dt_overfit.png" width=40%>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Decision Tree overfitting
#
# We need to control the "complexity" of the hypothesis
# <img src="images/dt_wellfit.png" width=40%>
# One straightforward way to do this: limit the depth of the tree!
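# A hedged sketch of depth limiting using scikit-learn; the dataset and settings are illustrative assumptions, not from the lecture. The unrestricted tree memorizes the training set, while the shallow tree is less complex:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy two-class toy data
X, y = make_moons(n_samples=500, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full    = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)             # unlimited depth
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print('full   : train %.3f  test %.3f' % (full.score(X_tr, y_tr), full.score(X_te, y_te)))
print('shallow: train %.3f  test %.3f' % (shallow.score(X_tr, y_tr), shallow.score(X_te, y_te)))
```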
# + [markdown] slideshow={"slide_type": "slide"}
# ### Inductive Bias
#
# - What is the policy by which a particular decision tree algorithm generalizes from observed training examples to classify unseen instances?
#
# - ***Definition:*** The set of assumptions that, together with the training data, deductively justify the classifications assigned by the learner to future instances.
#
# - We can also think of this bias as an algorithm's "preference" over possible hypotheses.
#
# [More here](https://en.wikipedia.org/wiki/Inductive_bias)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Inductive Bias of Decision Tree Algorithms
#
# - When a decision tree is built, it is almost always not the only tree that will perfectly classify the training set!
# - Finding the inductive bias of a decision tree algorithm is basically trying to find the type of tree that the algorithm favors in general.
# - It turns out that two of the common decision tree algorithms (ID3 and C4.5) have the same approximate inductive bias:
# - Prefers shorter trees over larger trees, and,
# - Trees that place high information gain attributes close to the root over those that do not.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Occam's Razor: "the simpler the better."
#
# - If a larger tree classifies the training set just as well as a shorter one, then we would logically prefer the shorter tree: it performs the same, is smaller, and is quicker to build.
# - But is it always the case that shorter, simpler hypotheses are preferred over larger ones?
# - Occam's razor is a heuristic that says we should be biased in favor of simpler hypotheses over complex ones. This idea is in some sense a foundational principle in Machine Learning.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Dealing with Overfitting
#
# - Simple Method: Grow the tree and check error iteratively, stop at a point where error rate is satisfactory or tree has reached some limit.
# - **Pruning**: Construct a large decision tree. Use a method such as cross-validation and prune nodes from the leaves upwards. If removing a node does not change performance, make the change permanent. This can also be done for entire subtrees.
# - Use Ensemble Methods!
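# As an illustrative variant of the pruning idea, scikit-learn exposes cost-complexity pruning via ``ccp_alpha``: grow a large tree, then prune subtrees with increasing aggressiveness, choosing the pruning strength by cross-validation. This is a sketch of the "build large, then prune" idea, not the exact leaves-upward procedure described above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate pruning levels for the fully grown tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)

best_alpha, best_score = 0.0, -1.0
for alpha in path.ccp_alphas[:-1]:   # the last alpha prunes down to the root
    score = cross_val_score(
        DecisionTreeClassifier(ccp_alpha=alpha, random_state=0), X, y, cv=5).mean()
    if score > best_score:
        best_alpha, best_score = alpha, score

print('best ccp_alpha %.5f with CV accuracy %.3f' % (best_alpha, best_score))
```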
# + [markdown] slideshow={"slide_type": "slide"}
# ### Ensemble Methods
# - In supervised ML, our goal is to find a hypothesis that performs well on **unseen data**.
# - Finding a single hypothesis within the hypothesis space that gives good predictions can be hard.
# - Idea of Ensemble Methods: "Combine" multiple hypotheses to form a (hopefully) better hypothesis.
# - The notion of "combine" is important, and we'll discuss this
# - ***Note***: The hypothesis represented by an Ensemble Model is not necessarily contained within the hypothesis space of the constituent models!
# + [markdown] slideshow={"slide_type": "slide"}
# ### Bagging (**B**ootstrap **Agg**regat**ing**)
#
# - Given a dataset $\mathcal{D}$, $|\mathcal{D}| = n$.
# - Create multiple ***bootstrap samples*** $\mathcal{D}^{'}_i, i \in \{1, ..., m\}$ such that $\forall i, |\mathcal{D}^{'}_i| = n^{'}$ using ***sampling with replacement.***
# - Fit $m$ models using the above $m$ bootstrap samples
# - ***Note:*** No pruning or stopping is used. Bagging helps when the models are unstable and can hurt if they are not.
# - Given a new input $\mathbf{x}$, run each of the $m$ classifiers and use a **majority vote** to classify $\mathbf{x}$.
#
# ***Note:*** Bagging can also be applied for regression but instead of using majority vote, the average is used.
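# The bagging recipe above can be sketched in a few lines; the dataset and the choice of $m = 25$ models are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)

m, n = 25, len(X)                        # m models, bootstrap samples of size n' = n
models = []
for _ in range(m):
    idx = rng.randint(0, n, size=n)      # sampling with replacement
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))   # no pruning/stopping

def bagged_predict(X_new):
    """Majority vote over the m trees (for regression we would average instead)."""
    votes = np.stack([mdl.predict(X_new) for mdl in models])      # shape (m, n_new)
    return (votes.mean(axis=0) > 0.5).astype(int)

print('ensemble training accuracy:', (bagged_predict(X) == y).mean())
```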
# + [markdown] slideshow={"slide_type": "slide"}
# <img src="images/BaggingCropped.png">
# + [markdown] slideshow={"slide_type": "slide"}
# ### Why does Bagging make sense / What's the intuition?
#
# - With one large Decision Tree (or more generally a single complex hypothesis), the model will likely have Low Bias and High Variance (Overfits to the "random noise" in the data).
# - (Large) Decision Trees are unstable (using slightly different datasets causes a big change in the model learned).
# - So, once we train multiple Decision Trees (or in general, multiple unstable ML models), with the bootstrap samples, we can get a much more stable model that performs better by say, using majority voting.
# + slideshow={"slide_type": "skip"}
# %matplotlib inline
# Author: <NAME> <<EMAIL>>
# License: BSD 3 clause
# Settings
n_repeat = 50 # Number of iterations for computing expectations
n_train = 50 # Size of the training set
n_test = 1000 # Size of the test set
noise = 0.1 # Standard deviation of the noise
np.random.seed(0)
# Change this for exploring the bias-variance decomposition of other
# estimators. This should work well for estimators with high variance (e.g.,
# decision trees or KNN), but poorly for estimators with low variance (e.g.,
# linear models).
estimators = [("Tree", DecisionTreeRegressor()),
("Bagging(Tree)", BaggingRegressor(DecisionTreeRegressor()))]
n_estimators = len(estimators)
# Generate data
def f(x):
x = x.ravel()
return np.exp(-x ** 2) + 1.5 * np.exp(-(x - 2) ** 2)
def generate(n_samples, noise, n_repeat=1):
X = np.random.rand(n_samples) * 10 - 5
X = np.sort(X)
if n_repeat == 1:
y = f(X) + np.random.normal(0.0, noise, n_samples)
else:
y = np.zeros((n_samples, n_repeat))
for i in range(n_repeat):
y[:, i] = f(X) + np.random.normal(0.0, noise, n_samples)
X = X.reshape((n_samples, 1))
return X, y
def bias_variance_example():
X_train = []
y_train = []
for i in range(n_repeat):
X, y = generate(n_samples=n_train, noise=noise)
X_train.append(X)
y_train.append(y)
X_test, y_test = generate(n_samples=n_test, noise=noise, n_repeat=n_repeat)
# Loop over estimators to compare
for n, (name, estimator) in enumerate(estimators):
# Compute predictions
y_predict = np.zeros((n_test, n_repeat))
for i in range(n_repeat):
estimator.fit(X_train[i], y_train[i])
y_predict[:, i] = estimator.predict(X_test)
# Bias^2 + Variance + Noise decomposition of the mean squared error
y_error = np.zeros(n_test)
for i in range(n_repeat):
for j in range(n_repeat):
y_error += (y_test[:, j] - y_predict[:, i]) ** 2
y_error /= (n_repeat * n_repeat)
y_noise = np.var(y_test, axis=1)
y_bias = (f(X_test) - np.mean(y_predict, axis=1)) ** 2
y_var = np.var(y_predict, axis=1)
print("{0}: {1:.4f} (error) = {2:.4f} (bias^2) "
" + {3:.4f} (var) + {4:.4f} (noise)".format(name,
np.mean(y_error),
np.mean(y_bias),
np.mean(y_var),
np.mean(y_noise)))
# Plot figures
from pylab import rcParams
rcParams['figure.figsize'] = 9, 9
plt.subplot(2, n_estimators, n + 1)
plt.plot(X_test, f(X_test), "b", label="$f(x)$")
plt.plot(X_train[0], y_train[0], ".b", label="LS ~ $y = f(x)+noise$")
for i in range(n_repeat):
if i == 0:
plt.plot(X_test, y_predict[:, i], "r", label="$\^y(x)$")
else:
plt.plot(X_test, y_predict[:, i], "r", alpha=0.05)
plt.plot(X_test, np.mean(y_predict, axis=1), "c",
label="$\mathbb{E}_{LS} \^y(x)$")
plt.xlim([-5, 5])
plt.title(name)
if n == 0:
plt.legend(loc="upper left", prop={"size": 11})
plt.subplot(2, n_estimators, n_estimators + n + 1)
plt.plot(X_test, y_error, "r", label="$error(x)$")
plt.plot(X_test, y_bias, "b", label="$bias^2(x)$"),
plt.plot(X_test, y_var, "g", label="$variance(x)$"),
plt.plot(X_test, y_noise, "c", label="$noise(x)$")
plt.xlim([-5, 5])
plt.ylim([0, 0.1])
if n == 0:
plt.legend(loc="upper left", prop={"size": 11})
plt.show()
# + slideshow={"slide_type": "slide"}
# Bias-Variance of Bagging with
# Decision Tree Regressors Illustration (Adapted from ELSII, 2009)
# (Note: LS refers to a bootstrap sample)
bias_variance_example()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Informal Bias-Variance Reasoning when using Bagging
#
# - In general, the Bias remains about the same as we are performing model averaging and as long as the bootstrap samples represent the dataset well, the bias stays about the same.
# - Variance is reduced by a factor of at most the number of bootstrap samples ($m$), since we average over $m$ models.
# - In reality, bagging reduces variance (often by less than a factor of $m$) and tends to slightly increase bias.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Random Forests
#
# - Extends Bagging and in practice performs generally better.
# - The only difference: When constructing the trees, best splits are found on only **a subset of the features**, not all.
# - Rule of thumb: consider $\sqrt{p}$ features (rounded down) at each split for classification problems and $\frac{p}{3}$ (rounded down) for regression, where $p$ is the total number of features.
# - For each tree grown on a bootstrap sample, we can measure its error rate on the examples left out of that sample.
#     - This is called the "out-of-bag" error rate.
# - This can be regarded as a generalization error and can provide a ranking of the importance of features.
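# A short sketch with scikit-learn's ``RandomForestClassifier`` showing the $\sqrt{p}$ rule and the out-of-bag score; the dataset is an illustrative choice:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# max_features='sqrt' implements the sqrt(p) rule of thumb for classification
rf = RandomForestClassifier(n_estimators=200, max_features='sqrt',
                            oob_score=True, random_state=0).fit(X, y)

print('OOB score:', rf.oob_score_)
# Feature importances give a rough ranking of the features
print('most important feature index:', rf.feature_importances_.argmax())
```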
# + [markdown] slideshow={"slide_type": "slide"}
# ## Random Forests work **very** well in practice
#
# Many ML competitions are won using some version of Random Forest. Hard to overstate the value of this algorithm on real-world problems.
#
# <img src="images/random_forest.png">
# + [markdown] slideshow={"slide_type": "slide"}
# ### Limitations of Bagging
#
# - Loss of interpretability: the final bagged classifier is not a tree, and so we forfeit the clear interpretative ability of a classification tree.
#
# - Computational complexity: we are essentially multiplying the work of growing a single tree by $m$. Can be a lot of work!
# + [markdown] slideshow={"slide_type": "slide"}
# ## Break time!
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ### Boosting
#
# - Bagging can help in reducing Variance. Are there methods that reduce both Bias and Variance? Yes! Boosting is one of them.
# - General Ideas:
# - Weighted Majority Vote (unlike Bagging)
#     - Elements of the Ensemble are built sequentially (unlike Bagging, where the models can be built in parallel)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Boosting (2 Class Scenario)
#
# - Assume class labels are -1 and +1.
# - The final classifier then has the form:
# - $h_T(\mathbf{x}) = \text{sgn}\left(\sum \limits_{t = 1}^T \alpha_t f_t(\mathbf{x})\right)$ where $f_1, ..., f_T$ are called base (or weak) classifiers and $\alpha_1, ..., \alpha_T > 0$ reflect the confidence of the various base classifiers.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Base/Weak Learners
#
# - Let $(\mathbf{x}_1, y_1), ..., (\mathbf{x}_n, y_n)$ be the training data.
# - Let $\mathscr{F}$ be a fixed set of classifiers called the base class.
# - A base learner for $\mathscr{F}$ is a rule that takes as input a set of weights $\mathbf{w} = (w_1, ..., w_n)$ such that $w_i \geq 0, \sum w_i = 1$, and outputs a classifier $f \in \mathscr{F}$ such that the weighted empirical risk $$e_w(f) = \sum \limits_{i = 1}^n w_i \mathbb{1}_{\{f(\mathbf{x}_i) \neq y_i\}}$$ is (approximately) minimized.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Examples of Base (Weak) Learners
#
# - Decision Stumps, i.e., decision trees with depth 1
# - Decision Trees
# - Polynomial thresholds, i.e., $$f(\vec{x}) = \pm \text{sign}((\vec{w}^\top \vec{x})^2 - b)$$ where $b \in \mathbb{R}$ and $\vec{w} \in \mathbb{R}^d$ is a weight vector.
# + [markdown] slideshow={"slide_type": "slide"}
# ### AdaBoost (Adaptive Boosting)
#
# - The first concrete algorithm to successfully realize the boosting principle.
#
# <img src="images/adaboost.gif" width=35%>
# + [markdown] slideshow={"slide_type": "slide"}
# ### AdaBoost Algorithm
#
# An *iterative* algorithm for "ensembling" base learners
#
# - Input: $\{(\mathbf{x}_i, y_i)\}_{i = 1}^n, T, \mathscr{F}$, base learner
# - Initialize: $\mathbf{w}^{1} = (\frac{1}{n}, ..., \frac{1}{n})$
# - For $t = 1, ..., T$
# - $\mathbf{w}^{t} \rightarrow \boxed{\text{base learner}} \rightarrow f_t$
# - $\alpha_t = \frac{1}{2}\text{ln}\left(\frac{1 - r_t}{r_t}\right)$
#         - where $r_t := e_{\mathbf{w}^t}(f_t) = \sum \limits_{i = 1}^n w_i^t \mathbb{1}_{\{f_t(\mathbf{x}_i) \neq y_i\}}$
#     - $w_i^{t + 1} = \frac{w_i^t \exp \left(- \alpha_t y_i f_t(\mathbf{x}_i)\right)}{z_t}$ where $z_t$ normalizes.
# - Output: $h_T(\mathbf{x}) = \text{sign}\left(\sum \limits_{t = 1}^T \alpha_t f_t(\mathbf{x})\right)$
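# The algorithm above can be sketched with depth-1 stumps as the base learner; the toy dataset and the choice $T = 30$ are assumptions for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=30):
    """Minimal AdaBoost with depth-1 stumps; labels must be in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)                       # w^1 = (1/n, ..., 1/n)
    stumps, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        r = w[pred != y].sum()                    # weighted error e_w(f_t)
        if r == 0 or r >= 0.5:                    # perfect (or useless) base classifier
            break
        alpha = 0.5 * np.log((1 - r) / r)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()                           # z_t normalizes
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def boosted_predict(stumps, alphas, X):
    """h_T(x) = sign(sum_t alpha_t f_t(x))"""
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)        # toy diagonal boundary
stumps, alphas = adaboost(X, y)
print('train accuracy:', (boosted_predict(stumps, alphas, X) == y).mean())
```

Note that a single axis-aligned stump cannot represent the diagonal boundary, but the weighted combination of stumps approximates it well.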
# + [markdown] slideshow={"slide_type": "slide"}
# ## AdaBoost in Action
# -
YouTubeVideo('k4G2VCuOMMg')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Intuition behind Boosting
#
# - Suppose you have a bunch of friends that give you advice
# - But frequently their advice is bad advice; in fact, each of them only gives good advice 53% of the time!
# - Good news: at least this is better than 50/50 :-)
# - Can we use such poor advice? Yes! Just combine their opinions using a majority vote!
# - (Of course this only works when the advice they give is "independent")
# - Take home message: **combining lots of weak predictions can produce a strong prediction**
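# We can check this intuition numerically: simulate independent "friends" who are each right 53% of the time and compare a single friend with their majority vote (the counts below are illustrative choices):

```python
import numpy as np

rng = np.random.RandomState(0)
n_friends, n_questions, p_correct = 101, 10000, 0.53

# correct[q, f] is True when friend f answers question q correctly
correct = rng.rand(n_questions, n_friends) < p_correct
majority_right = correct.sum(axis=1) > n_friends // 2

print('single friend accuracy:', correct[:, 0].mean())
print('majority vote accuracy:', majority_right.mean())
```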
# + [markdown] slideshow={"slide_type": "slide"}
# ## Boosting for face detection
#
# In the context of face detection, what makes a good set of "weak learners"? Apparently a good choice are these [haar-like features](https://en.wikipedia.org/wiki/Haar-like_features). You sum up the pixel values in the white patches, minus the pixel values in the black patches.
#
# <img src="images/slide_37.jpg">
# + [markdown] slideshow={"slide_type": "slide"}
# ### Strong and Weak Learnability
#
# - Boosting's roots are in the Probably Approximately Correct "PAC" (Leslie Valiant) learning model
# - Get random examples from an unknown, arbitrary distribution.
# - For ***any*** distribution, given polynomially many examples (and polynomial time), a ***Strong PAC learning algorithm*** can, with high probability, find a classifier with ***arbitrarily small*** generalization error.
# - Weak PAC Learning Algorithm can do the same except the generalization error only needs to be ***slightly better than random guessing*** $\left(\frac{1}{2} - \gamma\right)$.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Weak Learning
#
# - Adaboost is justified by the following result.
# - Let $\gamma_t = \frac{1}{2} - r_t$. Recall that $r_t = e_{\mathbf{w}^t}(f_t)$ the weighted empirical risk.
# - Note that we may assume $\gamma_t \geq 0 \leftrightarrow r_t \leq \frac{1}{2}$.
# - If not, just replace $f_t$ with $-f_t$ and note that for any $f$ and $\mathbf{w}$, $$e_\mathbf{w}(f) + e_\mathbf{w}(-f) = 1$$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Theorem
# (Proof in Mohri et. al, Foundations of Machine Learning, 2012)
# - The training error of AdaBoost satisfies $\frac{1}{n} \sum_{i=1}^n \mathbb{1}_{\{h_T(\mathbf{x}_i) \neq y_i\}} \leq \exp(-2\sum \limits_{t = 1}^T \gamma_t^2)$
# - In particular, if $\forall t, \gamma_t \geq \gamma > 0$ then $\frac{1}{n} \sum_{i=1}^n \mathbb{1}_{\{h_T(\mathbf{x}_i) \neq y_i\}} \leq \exp(-2T\gamma^2)$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Weak Learning Hypothesis
#
# - We may interpret $r_t = \frac{1}{2}$ as corresponding to a base classifier $f_t$ that randomly guesses.
# - Thus, $\gamma_t \geq \gamma > 0$ means that $f_t$ is at least slightly better than randomly guessing.
# - If the base learner is guaranteed to satisfy $\gamma_t \geq \gamma > 0, \forall t$, it is said to satisfy the weak learning hypothesis.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Interpretation of the Theorem
#
# - The theorem says that under the weak learning hypothesis, the Adaboost training error converges to zero ***exponentially*** fast.
# - Note: To avoid overfitting, the parameter $T$ should be chosen carefully, e.g., via cross-validation.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Remarks about the Adaptivity of AdaBoost
# #### 1. Can exploit $\gamma_t \gg \gamma$
#
# - If $r_t = 0$, then $\alpha_t = \frac{1}{2}\ln\left(\frac{1 - r_t}{r_t}\right) = \lim_{r_t \rightarrow 0^{+}}\frac{1}{2}\text{ln}\left(\frac{1 - r_t}{r_t}\right) = +\infty$.
# - In other words, if $\exists$ a classifier in $\mathscr{F}$ that perfectly separates the data, AdaBoost says to just use that classifier.
#
# #### 2. $\gamma$ and $T$ do not need to be known a priori
# + [markdown] slideshow={"slide_type": "slide"}
# ### Towards Generalized Boosting
#
# - It turns out that AdaBoost can be viewed as an iterative algorithm for minimizing the empirical risk corresponding to the exponential loss.
# - By generalizing the loss, we get different boosting algorithms with different properties.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Span of a base class $\mathscr{F}$
#
# For a fixed base class $\mathscr{F}$, define $$\text{span}(\mathscr{F}) = \{\sum \limits_{t = 1}^T \alpha_tf_t \mid T \geq 1, \alpha_t \in \mathbb{R}, f_t \in \mathscr{F}\}$$
# + [markdown] slideshow={"slide_type": "slide"}
# ### A Minimization Problem
#
# - Consider the following problem $$\min_{F \in \text{span}(\mathscr{F})} \frac{1}{n} \sum \limits_{i = 1}^n \mathbb{1}_{\{\text{sign}(F(\mathbf{x}_i)) \neq y_i\}}$$
#
# - Now, minimizing the zero-one loss is computationally infeasible.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Minimizing with surrogate losses
#
# - We can use a surrogate loss function $\phi$ instead to give the following optimization problem $$\min_{F \in \text{span}(\mathscr{F})} \frac{1}{n} \sum \limits_{i = 1}^n \phi(y_i F(\mathbf{x}_i))$$
#
# - Examples of surrogate losses:
# - Exponential loss: $\phi(t) = \exp(-t)$
#     - Logistic Loss: $\phi(t) = \log(1 + \exp(-t))$
# - Hinge Loss: $\phi(t) = \max(0, 1 - t)$
# - Note: We will assume $\phi$ is differentiable and $\phi' < 0$ everywhere.
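# The three surrogate losses can be written down directly as functions of the margin $t = yF(\mathbf{x})$; this is just a sketch for comparison:

```python
import numpy as np

# Surrogate losses as functions of the margin t = y * F(x)
exponential = lambda t: np.exp(-t)
logistic    = lambda t: np.log(1 + np.exp(-t))
hinge       = lambda t: np.maximum(0.0, 1.0 - t)

t = np.array([-1.0, 0.0, 2.0])   # misclassified, on the boundary, confidently correct
for name, loss in [('exponential', exponential), ('logistic', logistic), ('hinge', hinge)]:
    print('%-11s' % name, np.round(loss(t), 3))
# All three decrease in t: misclassified (negative-margin) points are penalized heavily
```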
# + [markdown] slideshow={"slide_type": "slide"}
# ### Functional Gradient Descent
# To solve the optimization problem, we can apply gradient descent on a space consisting of functions.
# + [markdown] slideshow={"slide_type": "slide"}
# ### An FGD Iteration (1)
#
# - Consider the $t^{\text{th}}$ iteration of FGD. The current iterate is $F_{t - 1} = \sum \limits_{s = 1}^{t - 1} \alpha_s f_s$.
# - The next iterate will have the form $F_{t - 1} + \alpha_tf_t$
# + [markdown] slideshow={"slide_type": "slide"}
# ### An FGD Iteration (2)
#
# - Now, we can view $\alpha_1, f_1, ..., \alpha_{t - 1}, f_{t - 1}$ as fixed.
# - Define $B_t(\alpha, f) = \frac{1}{n}\sum \limits_{i = 1}^n \phi(y_iF_{t - 1}(\mathbf{x}_i) + y_i\alpha f(\mathbf{x}_i))$
# - $f_t$ can then be chosen as the function $f \in \mathscr{F}$ for which the directional derivative of $B_t$ in the direction $f$ is minimized.
# - $\alpha_t$ can be chosen as a stepsize $\alpha > 0$ in the direction $f_t$ for which $B_t(\alpha, f_t)$ is minimized.
# + [markdown] slideshow={"slide_type": "skip"}
# ### Mathematical Details for the choice of $f_t$ (1)
#
# - $\left.\frac{\partial B(\alpha, f)}{\partial \alpha}\right\vert_{\alpha = 0} = \frac{1}{n}\sum \limits_{i = 1}^n y_i f(\mathbf{x}_i)\phi'(y_i F_{t - 1}(\mathbf{x}_i))$
#
# - Minimizing the above with respect to $f$ is equivalent to minimizing $-\sum \limits_{i = 1}^n y_i f(\mathbf{x}_i)\frac{\phi'(y_iF_{t - 1}(\mathbf{x}_i))}{\sum \limits_{j = 1}^n \phi'(y_j F_{t - 1}(\mathbf{x}_j))}$ (Note, a minus sign is used as $\phi' < 0$)
# + [markdown] slideshow={"slide_type": "skip"}
# ### Mathematical Details for the choice of $f_t$ (2)
#
# - Setting $w_i^t = \frac{\phi'(y_iF_{t - 1}(\mathbf{x}_i))}{\sum \limits_{j = 1}^n \phi'(y_j F_{t - 1}(\mathbf{x}_j))}$, the minimization problem reduces to minimizing $\sum \limits_{i = 1}^n w_i^t\mathbb{1}_{\{f(\mathbf{x}_i) \neq y_i\}} - \sum \limits_{i = 1}^n w_i^t\mathbb{1}_{\{f(\mathbf{x}_i) = y_i\}}$
#
# - Equivalently, we minimize $1 - 2\left(\sum \limits_{i = 1}^n w_i^t \mathbb{1}_{\{f(\mathbf{x}_i) = y_i\}}\right)$, i.e., the weighted error
#
# - Thus, to solve the first step (choose $f_t$) we just apply the base learner.
# + [markdown] slideshow={"slide_type": "skip"}
# ### Mathematical Details for the choice of $\alpha_t$
#
# $$\begin{align}
# \alpha_t &= \underset{\alpha}{\arg\min} \hspace{0.2cm} B_t(\alpha, f_t)\\
# &= \underset{\alpha}{\arg\min} \frac{1}{n}\sum \limits_{i = 1}^n \phi(y_iF_{t - 1}(\mathbf{x}_i) + y_i\alpha f_t(\mathbf{x}_i))
# \end{align}$$
#
#
# The above is just a scalar minimization problem that can be solved numerically, e.g., via Newton's method, if no closed form solution is available.
# + [markdown] slideshow={"slide_type": "slide"}
# ### The Generalized Boosting Algorithm
#
# - Input: $\{(\mathbf{x}_i, y_i)\}_{i = 1}^n, T, \mathscr{F}$, base learner, surrogate loss $\phi$ (differentiable, $\phi^{'} < 0$ everywhere)
# - Initialize: $\mathbf{w}^{1} = (\frac{1}{n}, ..., \frac{1}{n}), F_0 = 0$
# - For $t = 1, ..., T$
# - $\mathbf{w}^{t} \rightarrow \boxed{\text{base learner}} \rightarrow f_t$
#     - $\alpha_t = \underset{\alpha}{\arg\min} \frac{1}{n}\sum \limits_{i = 1}^n \phi(y_iF_{t - 1}(\mathbf{x}_i) + y_i\alpha f_t(\mathbf{x}_i))$
# - $F_t = F_{t - 1} + \alpha_t f_t$
#     - $w_i^{t + 1} = \frac{\phi^{'}(y_iF_t(\mathbf{x}_i))}{\sum \limits_{j = 1}^n \phi^{'}(y_jF_t(\mathbf{x}_j))}$
# - End
# - Output: $h_T(\mathbf{x}) = \text{sign}\left(F_T(\mathbf{x})\right)$
| lecture12_bagging-boosting/lecture12_bagging-boosting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Python for Data Analysis (Social Sciences)
#
# ## Regular Expressions
#
# Exercises
#
# *Authors: <NAME>, <NAME>*
# ### 1. A problem about abbreviations
# Vladimir started a job at a very important place, and in the very first document he understood nothing:
# it was full of things like ФГУП НИЦ ГИДГЕО, ФГОУ ЧШУ АПК, and so on. So he decided to collect all the abbreviations in order to look up their meanings later on http://sokr.ru/. Help him.
# We will treat as an abbreviation any word consisting only of uppercase letters (at least two). If several such words are separated by spaces, they
# count as a single abbreviation.
# **Input**: Это курс информатики соответствует ФГОС и ПООП, это подтверждено ФГУ ФНЦ НИИСИ РАН
# **Output**: ФГОС, ПООП, ФГУ, ФНЦ, НИИСИ, РАН
import re
x = 'Это курс информатики соответствует ФГОС и ПООП, это подтверждено ФГУ ФНЦ НИИСИ РАН'
# Your solution: the problem statement asks for runs of at least two uppercase letters
re.findall(r'[А-Я]{2,}', x)
# ### 2. A problem about abbreviations 2
# An acrostic is a meaningful text composed of the initial letters of each line of a poem.
# An acronym is a kind of abbreviation formed from initial sounds (e.g. НАТО, вуз, НАСА, ТАСС) that can be pronounced as a single word (unlike an abbreviation that is pronounced letter by letter, e.g. КГБ, "ka-ge-be").
# The input is a text. Print the first letter of each word, joined together. The letters must be printed in uppercase.
# This problem can be solved in a single line.
# +
import re
x = 'Комитет государственной Безопасности'
# Your solution
''.join(word[0].upper() for word in x.split())
# -
# ### 3. A problem about converting camel case to snake case
# We have already talked quite a bit about companies having naming conventions for variables. What if you wrote some code whose variables are named in camel case, but snake case is required? It is worth automating this process. Let's write a snippet that implements this behavior.
#
# **Input:**
# camelCaseVar
#
# **Output:**
# camel_case_var
import re
x = 'camelCaseVar'
re.sub(r'(?<!^)(?=[A-Z])', '_', x).lower()
# ### 4. A problem about counting words
# A word here may consist of letters, or of letters around a hyphen (во-первых, чуть-чуть, давай-ка). Let's describe this with a regular expression and count all the words in the text.
text = '''
- Дельный, что и говорить,
Был старик тот самый,
Что придумал суп варить
На колесах прямо.
Суп - во-первых. Во-вторых,
Кашу в норме прочной.
Нет, старик он был старик
Чуткий - это точно.
'''
# your solution: a word is one or more letter runs joined by hyphens
len(re.findall(r'\w+(?:-\w+)*', text))
# ### 5. A problem about finding words starting with a and e
# Find the words in the text that begin with a or e
import re
text = "The following example creates an ArrayList with a capacity of 50 elements.\
Four elements are then added to the ArrayList and the ArrayList is trimmed accordingly."
# your solution: \b anchors the match to the start of a word
re.findall(r'\b[ae]\w+', text)
# Example 2
import re
text = '''
Ihr naht euch wieder, schwankende Gestalten,
Die früh sich einst dem trüben Blick gezeigt.
Versuch’ ich wohl, euch diesmal festzuhalten?
Fühl’ ich mein Herz noch jenem Wahn geneigt?
'''
# your solution
re.findall(r'\b[ae]\w+', text)
# ### 6. A problem about splitting text into sentences
# For simplicity we will assume that:
# * every sentence starts with an uppercase Russian or Latin letter;
# * every sentence ends with one of the punctuation marks .;!?;
# * between sentences there may be any non-empty run of whitespace characters;
# * inside a sentence there are no uppercase letters and no periods (no tricks in the spirit of "We love the works of A. S. Pushkin").
#
# Split the text into sentences so that each sentence occupies one line.
# There must be no empty lines in the output.
# +
import re
s = """Mr. Smith bought cheapsite.com for 1.5 million dollars, i.e. he paid a lot for it! \
Did he mind? <NAME> Jr. thinks he didn't. In any case, this isn't true... \
Well, with a probability of .9 it isn't."""
# your solution: split on whitespace that follows end-of-sentence punctuation
# and precedes a capital letter (per the simplifying assumptions above)
for sentence in re.split(r'(?<=[.;!?])\s+(?=[A-ZА-Я])', s):
    print(sentence)
# -
# ### 7. And a real example
#
# Let's take an English translation of the novel The Idiot, pull out its text, and then count the number of occurrences of the word "the". Link:
# https://www.gutenberg.org/files/2638/2638-0.txt
# +
import re
import requests
idiot = requests.get('https://www.gutenberg.org/files/2638/2638-0.txt').text
# your solution: \b anchors avoid counting substrings such as "there" or "other"
len(re.findall(r'\bthe\b', idiot))
# -
| 3. Laba/regexpr_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href='http://www.holoviews.org'><img src="../notebooks/assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
# <div style="float:right;"><h2>Exercise 4: Dynamic Interactions</h2></div>
# +
import numpy as np
import pandas as pd
import holoviews as hv
import geoviews as gv
hv.extension('bokeh')
# %opts RGB [width=600 height=600]
# -
# ### Exercise 1
diamonds = pd.read_csv('../data/diamonds.csv')
# As should be second nature for us now, we will look at this dataframe before we start doing anything.
diamonds.head()
# Next we will display a static plot of 'carat' vs. 'price' as we did in the first exercise, alongside a BoxWhisker plot of the distributions.
# +
# %%opts Scatter [width=600 height=400 logy=True tools=['box_select'] color_index='cut']
# %%opts Scatter (size=1.5 cmap='tab20c')
scatter = hv.Scatter(diamonds.sample(10000), 'carat', ['price', 'cut', 'clarity']).select(carat=(0, 3))
boxwhisker = hv.BoxWhisker(scatter, 'clarity', 'price')
scatter + boxwhisker
# -
# By default, the ``BoxWhisker`` element here will statically display the whole distribution. But if you try out the "Box select" tool, you can select a subset of the Scatter points. Can we link the boxwhisker plot to selections made on the ``Scatter`` plot, so that we can see distributions in that particular region of the data space? Yes, as long as we have these three things:
#
# 1. A stream that collects selection events from the ``scatter`` object
# 2. A callback that constructs a HoloViews element from the given selection and returns it
# 3. A DynamicMap that runs the callback each time a new selection is available
#
# For step 1, we provide the ``scatter`` object as the source for a ``Selection1D`` stream that will provide the ``index`` of all the selected nodes:
selection = hv.streams.Selection1D(source=scatter)
# For step 2, write a function that can accept the ``index`` values, select those values from the original dataset, and return the appropriate HoloViews element; something like:
#
# ```
# def selection_boxwhisker(index):
# selection = scatter.iloc[index] if len(index)>0 else scatter
# return ...some hv element built from the selection...
# ```
#
# Here ``selection_boxwhisker`` should return a ``BoxWhisker`` element for the selection, plotting 'price' against 'clarity'.
# For step 3, define a ``DynamicMap`` using the ``selection`` stream and your custom callback and lay it out next to the ``scatter`` object as above.
#
# <b><a href="#hint1" data-toggle="collapse">Hint</a></b>
#
# <div id="hint1" class="collapse">
# A ``DynamicMap`` requires a callback function as its first argument and streams should be supplied in a list as a keyword argument.
# </div>
# <b><a href="#solution1" data-toggle="collapse">Solution</a></b>
#
# <div id="solution1" class="collapse">
# <br>
# <code>selection = hv.streams.Selection1D(source=scatter)
#
# def selection_boxwhisker(index):
# selection = scatter.iloc[index]
# return hv.BoxWhisker(selection, 'clarity', 'price')
#
# scatter + hv.DynamicMap(selection_boxwhisker, streams=[selection])</code>
# </div>
# ## Exercise 2: Streaming Data
# Exercise 1 used HoloViews streams to collect user interaction events (selections). Here, let's use them to view data sources that themselves are updating over time.
#
# First, let's set up a (simulated) streaming data source in the form of taxi pickup locations. The code below splits the taxi dataset into chunks by hour which will be emitted one by one to emulate a live, streaming data source.
# +
import time
import colorcet
from itertools import cycle
from holoviews.operation.datashader import datashade
def taxi_trips_stream(source='../data/nyc_taxi_wide.parq', frequency='H'):
"""Generate dataframes grouped by given frequency"""
def get_group(resampler, key):
try:
df = resampler.get_group(key)
df.reset_index(drop=True)
except KeyError:
df = pd.DataFrame()
return df
df = pd.read_parquet(source,
columns=['tpep_pickup_datetime', 'pickup_x', 'pickup_y', 'fare_amount'])
df = df.set_index('tpep_pickup_datetime', drop=True)
df = df.sort_index()
r = df.resample(frequency)
chunks = [get_group(r, g) for g in sorted(r.groups)]
indices = cycle(range(len(chunks)))
while True:
yield chunks[next(indices)]
trips = taxi_trips_stream()
example = next(trips)
# -
# As usual let's start by inspecting the data, in this case the initial chunk emitted above:
example.head()
# To build our streaming visualization, first declare a map tile source for a background plot, and then make a ``Pipe`` stream initialized with the example chunk of data already emitted:
tiles = gv.WMTS('https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png')
pipe = hv.streams.Pipe(example)
# Then you will need to define a callback to use when declaring a ``DynamicMap``. This function will need to accept a chunk of data, then return a ``Points`` object displaying the 'pickup_x' and 'pickup_y' coordinates and a ``label`` indicating the time range being covered. Something like:
#
# ```
# def hourly_points(data):
# label = '%s - %s' % (str(data.index.min()), str(data.index.max()))
# return ...some hv object using the given data...
# ```
# Finally, use that callback and the ``pipe`` stream to define a ``DynamicMap``, applying the datashade operation to the DynamicMap and then overlaying it on top of the ``tiles``.
#
# **Warning**: Do not display the ``DynamicMap`` without applying the ``datashade()`` operation, or you run the risk of freezing your browser.
#
# <b><a href="#hint2" data-toggle="collapse">Hint</a></b>
#
# <div id="hint2" class="collapse">
# To apply datashading simply call ``datashade(dynamicmap)``.
# </div>
# You should now see a map of New York City with the taxi trips on top. Run the next cell to send events to the ``Pipe`` and update the plot.
for i in range(100):
time.sleep(0.05)
pipe.send(next(trips))
# <b><a href="#solution2" data-toggle="collapse">Solution</a></b>
#
# <div id="solution2" class="collapse">
# <br>
# <code>%%opts RGB [width=600 height=600]
# pipe = hv.streams.Pipe(example)
# tiles = gv.WMTS('https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png')
#
# def hourly_points(data):
# label = '%s - %s' % (str(data.index.min()), str(data.index.max()))
# return hv.Points(data, ['pickup_x', 'pickup_y'], label=label)
#
# points = hv.DynamicMap(hourly_points, streams=[pipe])
# tiles * datashade(points)</code>
# </div>
# ## Exercise 3: Accumulating Data
# In the previous exercise we used the ``Pipe`` stream, which emits just the latest chunk. That's a good way to monitor an ongoing stream, but often you'll instead want to accumulate data over time, showing the latest chunk combined with previous ones. Here we will stream data using the ``Buffer`` stream, which accumulates rows up to a specified length, discarding the oldest rows once that length is reached. We will start by defining some options, an example dataframe, and the ``Buffer`` stream with a length of 1,000,000:
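# The accumulate-and-trim behavior of ``Buffer`` can be sketched in plain pandas (a conceptual illustration only; the real ``Buffer`` additionally notifies subscribed plots):

```python
import pandas as pd

# Conceptual Buffer semantics: keep at most `length` most-recent rows
def buffered_append(store, chunk, length):
    combined = pd.concat([store, chunk])
    return combined.iloc[-length:]

store = pd.DataFrame({'fare_amount': [0.0]})
for v in range(10):
    store = buffered_append(store, pd.DataFrame({'fare_amount': [float(v)]}), length=5)
print(len(store))  # 5
```

# With a length of 5, only the five most recently sent rows survive; the initial seed row and early chunks are discarded as new data arrives.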
# %opts Curve [width=800 height=400] (color='black' line_width=1) {+framewise} Scatter (color='red')
from holoviews.operation.timeseries import resample, rolling_outlier_std
example = next(trips)[['fare_amount']]
buffer = hv.streams.Buffer(example, length=1000000)
# As before, you'll need to complete the callback function so it returns an element. In this case, we need a ``Curve`` plotting the 'fare_amount' against the 'tpep_pickup_datetime', starting something like:
#
# ```
# def fare_curve(data):
# ...
# ```
# Again as before, we need to define a ``DynamicMap`` that uses this callback in combination with a stream (``buffer`` in this case). Here let's assign it to a variable rather than try to show it right away:
# Next, apply the ``resample`` operation to the DynamicMap object, with ``rule='T'`` and ``function=np.sum`` and then apply the ``rolling_outlier_std`` operation to the output of that. Finally display an overlay of the ``resample`` output and the ``rolling_outlier_std`` output.
#
# <b><a href="#hint3" data-toggle="collapse">Hint</a></b>
#
# <div id="hint3" class="collapse">
# Operations like ``resample`` and ``rolling_outlier_std`` can be chained, e.g.:
# <br><br>
# <code>resampled = resample(dmap)
# outliers = rolling_outlier_std(resampled)
# resampled * outliers
# </code>
# </div>
# Now that you've displayed the plot, let's start sending some data to the buffer, which will accumulate up to 1,000,000 trips:
for i in range(100):
time.sleep(0.1)
buffer.send(next(trips)[['fare_amount']])
# <b><a href="#solution3" data-toggle="collapse">Solution</a></b>
#
# <div id="solution3" class="collapse">
# <br>
# <code>%%opts Curve [width=800 height=400] (color='black' line_width=1) {+framewise} Scatter (color='red') Overlay [show_legend=False]
#
# example = next(trips)[['fare_amount']]
# buffer = hv.streams.Buffer(example, length=1000000)
#
# def fare_curve(data):
# return hv.Curve(data, 'tpep_pickup_datetime', 'fare_amount')
#
# fares = hv.DynamicMap(fare_curve, streams=[buffer])
# minutely = resample(fares, rule='T', function=np.sum)
# minutely * rolling_outlier_std(minutely, rolling_window=10)</code>
# </div>
| exercises/Exercise-4-dynamic-interactions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="RiNsCwgFdnAB"
# # Basic Models Experiment
#
# Use the new processed dataset called `hotel_bookings_processed.csv`
# + id="TyDeLDXvco0z"
import time
import numpy as np
import pandas as pd
import datetime as dt
import plotly.express as px
import seaborn as sns
import matplotlib.pyplot as plt
# from sklearn.metrics import plot_roc_curve
# import seaborn as sns
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score, roc_curve, confusion_matrix, auc
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler
# Basic models
# from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC, LinearSVC
# from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier
# + colab={"base_uri": "https://localhost:8080/"} id="3xl9SOlxeoQR" outputId="3df5ea77-eaee-42a7-ecbb-69f582daecb5"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 456} id="nne0Yl3Bg__L" outputId="84fdbd41-4f94-462a-ee6a-fbaa2b4c1e36"
path = "/content/drive/Shareddrives/CIS 520/final project/hotel_bookings_processed.csv"
data = pd.read_csv(path)
data
# + colab={"base_uri": "https://localhost:8080/"} id="Xh-OIUMSfMg1" outputId="a1f022e4-79cf-440d-b59b-d6a2542ebbc2"
data.corr()['is_canceled'].abs().sort_values(ascending=False)[1:11]
# + id="YYjK6CqtijEo"
## Initialize X, y and split dataset
df = data.copy()
y = df['is_canceled']
X = df.drop(['is_canceled'], axis=1)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.20, random_state=42)
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="UK7LMiEzqRC_" outputId="a481c1e8-c577-478c-df0c-596c88537b95"
clf_rf = DecisionTreeClassifier(max_depth=12).fit(X_train, y_train)
pd.DataFrame(data = clf_rf.feature_importances_ * 100,
columns = ['Importance'],
index = X_train.columns)\
.sort_values('Importance', ascending=True)[-20:]\
.plot(kind='barh', color='r')
plt.xlabel("Top 20 Feature Importance (%)")
# + [markdown] id="nDE1XsJgHTAf"
# ## Imbalance
# + colab={"base_uri": "https://localhost:8080/"} id="gCozQX3ZJ_Js" outputId="0248fee1-4799-4525-c6cf-91092dcc8ab2"
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# + id="wMtQJey_KMnP"
def sampler_compare(samplers):
    accus, accu_balanceds = [], []
    for sampler in samplers.values():
        X_train_balanced, y_train_balanced = sampler.fit_resample(X_train, y_train)
        ## Score imbalanced vs. balanced training via logistic regression AUC
        clf = LogisticRegression(solver='liblinear').fit(X_train, y_train)
        accu = roc_auc_score(y_test, clf.predict(X_test))
        clf_balanced = LogisticRegression(solver='liblinear').fit(X_train_balanced, y_train_balanced)
        accu_balanced = roc_auc_score(y_test, clf_balanced.predict(X_test))
        accus.append(accu)
        accu_balanceds.append(accu_balanced)
    return accus, accu_balanceds
# + id="T6R4o4wZq-fI"
samplers = {'Undersample': RandomUnderSampler(random_state=0),
'Oversample': RandomOverSampler(random_state=0),
'SMOTE': SMOTE()}
accus, accu_balanceds = np.round(sampler_compare(samplers), 4)
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="raLUW-bkrc_A" outputId="412e2f8c-0e57-4600-98f9-5b5d4a92d23f"
## Output comparison table
table = pd.DataFrame({"Balancing Method": samplers.keys(),
"Accuracy (imbalanced)": accus,
"Accuracy (balanced)": accu_balanceds})
pd.pivot_table(table, index=['Balancing Method'])\
.sort_values(by="Accuracy (balanced)", ascending=False)
# + id="3xjih2Esf3JX"
print(pd.pivot_table(table, index=['Balancing Method'])\
.sort_values(by="Accuracy (balanced)", ascending=False).to_latex())
# + [markdown] id="Q5omj3Unt8Zx"
# Balancing clearly improves the score, and among the methods SMOTE performs best, so we apply SMOTE to the training set.
# + colab={"base_uri": "https://localhost:8080/"} id="QktxeQbquLzP" outputId="d73640d3-8efc-4446-d4e3-7e367541c9bb"
X_train_balanced, y_train_balanced = SMOTE().fit_resample(X_train, y_train)
print(X_train_balanced.shape)
print(y_train_balanced.shape)
# + [markdown] id="W7H4_Bfrd2mB"
# ## Basic Models Implementation
# + [markdown] id="UUYo5zfLOeuE"
# ### Tuning
# + [markdown] id="macLEDF__DM6"
# #### Random Forest
# + colab={"base_uri": "https://localhost:8080/"} id="kvb7lrbihqui" outputId="019c8364-d986-41cd-fe05-91de6a261247"
rf_para = {'n_estimators': [100, 200, 500],
'max_features': ['auto', 'sqrt', 'log2'],
'min_samples_split': [2, 5, 10]}
rf = RandomForestClassifier()
rf_cv = GridSearchCV(rf, rf_para, cv=10, n_jobs=-1, verbose=2)
rf_cv.fit(X_train[:1000], y_train[:1000])
# rf_cv.fit(X_train, y_train)
print("Best parameters (RF): {}".format(rf_cv.best_params_))
# + [markdown] id="hA5ANcSxIsw9"
# Best parameters (RF): {'max_features': 'sqrt', 'min_samples_split': 2, 'n_estimators': 500}
# + [markdown] id="jUgt4Wc---Qd"
# #### Support Vector Machine
# + colab={"base_uri": "https://localhost:8080/"} id="2uFCPAJN_JVg" outputId="ead0713e-12f4-47bc-80fb-c868e2d9cb48"
svm_para = {'kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
'C': [50, 10, 1.0, 0.1, 0.01],
'gamma': ['scale']}
svm = SVC()
svm_cv = GridSearchCV(svm, svm_para, cv=10, n_jobs=-1, verbose=2)
svm_cv.fit(X_train[:1000], y_train[:1000])
# svm_cv.fit(X_train, y_train)
print("Best parameters (SVM): {}".format(svm_cv.best_params_))
# + [markdown] id="ibnEzMvhL0sv"
# Best parameters (SVM): {'C': 0.1, 'gamma': 'scale', 'kernel': 'linear'}
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="b6KD5bO0KgcH" outputId="570905f5-a547-4bc6-81d4-c132a8483ca7"
## Visualize by heatmap
pvt = pd.pivot_table(pd.DataFrame(svm_cv.cv_results_),
                     values='mean_test_score',
                     index='param_C',
                     columns='param_kernel')
ax = sns.heatmap(pvt, annot=True, linewidths=.5)
# + [markdown] id="_x8IBYffOhfp"
# #### Neural Network
# + colab={"base_uri": "https://localhost:8080/"} id="kJxXA1f6hrVW" outputId="ec3befe0-d1e4-4342-c882-4ffc60e68805"
nn_para = {'alpha': [1, 0.1, 0.01, 0.001],
'hidden_layer_sizes': [(50,50,50), (100,100)],
'solver': ["adam", "sgd"],
'activation': ["logistic", "relu"]}
nn = MLPClassifier()
nn_cv = GridSearchCV(nn, nn_para, cv=10, n_jobs=-1, verbose=2)
nn_cv.fit(X_train, y_train)
print("Best parameters (NN): {}".format(nn_cv.best_params_))
# + [markdown] id="QoWdhsyq9olc"
# Best parameters (NN): {'activation': 'logistic', 'alpha': 0.001, 'hidden_layer_sizes': (100, 100), 'solver': 'adam'}
# + [markdown] id="JgWMw-NPppP6"
# ### All Models Comparison
#
# Note: SVC trains very slowly because its fit-time complexity scales as $\mathcal{O}(n^2 p)$ in the number of samples $n$ and features $p$ ([source](https://stackoverflow.com/questions/40077432/why-is-scikit-learn-svm-svc-extremely-slow))
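# As a back-of-envelope illustration of that scaling (plain arithmetic, no scikit-learn involved): doubling the number of training samples should roughly quadruple SVC fit time.

```python
# Predicted fit-time ratio when growing the sample count n1 -> n2 under O(n^2 * p)
def predicted_fit_time_ratio(n1, n2):
    return (n2 / n1) ** 2

print(predicted_fit_time_ratio(10_000, 20_000))   # 4.0
print(predicted_fit_time_ratio(10_000, 100_000))  # 100.0
```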
# + id="VGzA3HSW87X2"
def compute(name, model, X_train, X_test, y_train, y_test):
fpr, tpr = [], []
accu = auc = exectime = 0
start = time.process_time()
clf = model.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)[:, 1]
y_pred = clf.predict(X_test)
exectime = round(time.process_time() - start, 4)
fpr, tpr, _ = roc_curve(y_test, y_prob)
accu = round(accuracy_score(y_test,y_pred), 4)
auc = round(roc_auc_score(y_test, y_prob), 4)
cm = confusion_matrix(y_test, y_pred)
## Plot accuracy, AUC and confusion matrix
print("# {}:\nAccuracy Score: {}\nAUC Score: {}\nConfusion Matrix:\n{}"
.format(name, accu, auc, cm))
print("-------------------- \n")
return fpr, tpr, accu, auc, exectime
def implement(models, X_train, X_test, y_train, y_test, balanced=False):
accus, aucs, times, fprs, tprs = [], [], [], [], []
for name, model in models.items():
## XGBoost requires the same column number of training and test set
if name == "XGBoost" and balanced == True:
            X_test_balanced, y_test_balanced = SMOTE().fit_resample(X_test, y_test)
fpr, tpr, accu, auc, exectime = \
compute(name, model, X_train, X_test_balanced, y_train, y_test_balanced)
        ## Neural Network prefers standardized inputs
        elif name in ("Neural Network (vanilla)", "Neural Network (Tuned)"):
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
fpr, tpr, accu, auc, exectime = \
compute(name, model, X_train_scaled, X_test_scaled, y_train, y_test)
else:
fpr, tpr, accu, auc, exectime = \
compute(name, model, X_train, X_test, y_train, y_test)
fprs.append(fpr)
tprs.append(tpr)
times.append(exectime)
accus.append(accu)
aucs.append(auc)
return accus, aucs, times, fprs, tprs
def plot_ROC(fprs, tprs, aucs, names):
plt.figure(0).clf()
for fpr, tpr, name, auc in zip(fprs, tprs, names, aucs):
plt.plot(fpr, tpr, label=name+" (AUC="+str(auc)+")")
## Plot ROC comparison
plt.legend(loc=0)
plt.title("Comparison of Models by ROC")
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
# + id="VqF8W-pRmGzV"
## Dictionary: {model name -- classifier}
models = \
{"Logistic Regression (baseline)": LogisticRegression(solver='liblinear'),
 "Decision Tree": DecisionTreeClassifier(max_depth=12),
"Random Forest (vanilla)": RandomForestClassifier(),
"Random Forest (Tuned)": RandomForestClassifier(max_features='sqrt',
min_samples_split=2,
n_estimators=500),
"Extra Trees": ExtraTreesClassifier(min_samples_leaf=7,
min_samples_split=2,
n_estimators=500),
"Neural Network (vanilla)": MLPClassifier(),
"Neural Network (Tuned)": MLPClassifier(alpha=0.001,
hidden_layer_sizes=(100,100),
solver='adam',
activation='logistic'),
# "Gaussian Naive Bayes": GaussianNB(),
"XGBoost": XGBClassifier(),
"AdaBoost": AdaBoostClassifier(n_estimators=500),
"SVM (Tuned)": SVC(kernel='linear', C=0.1, probability=True)}
# + [markdown] id="PYjlnI_mMTww"
# Imbalanced Training and Plot
# + colab={"base_uri": "https://localhost:8080/"} id="iCLJLf5MAlra" outputId="14255837-129d-470a-8a6b-dfdedc675275"
accus, aucs, times, fprs, tprs = implement(models, X_train, X_test, y_train, y_test, False)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="OCBJd5I0LPQl" outputId="a8a3957f-038c-4d6f-de8f-ba28809f76f5"
plot_ROC(fprs, tprs, aucs, models.keys())
# + [markdown] id="TRv4GFq_MXiy"
# Balanced Training and Plot
# + colab={"base_uri": "https://localhost:8080/"} id="DmEonFJaFYAv" outputId="14dd6704-8218-41f2-f7d1-368c9554bfa9"
accus_b, aucs_b, times_b, fprs_b, tprs_b = \
implement(models, X_train_balanced, X_test, y_train_balanced, y_test, True)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="oek7UFblLRkj" outputId="d4835016-8775-4a05-f941-b21ae4017c5f"
plot_ROC(fprs_b, tprs_b, aucs_b, models.keys())
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="yW2SK-P3sM4J" outputId="d6cbc800-d8ff-4a12-999f-02ba86251aeb"
## Plot runtime comparison
pd.DataFrame(data=times, columns=['Runtime'], index=models.keys())\
.sort_values('Runtime', ascending=False)\
.plot(kind='barh', color='g')
plt.xlabel("Runtime Comparison (s)")
# + colab={"base_uri": "https://localhost:8080/", "height": 390} id="LYpkeWyntc_c" outputId="71737cc5-11c5-4e94-e819-7c682b5103d1"
## Output comparison table
table = pd.DataFrame({"Model": models.keys(),
"Accuracy": accus,
"AUC": aucs,
"Runtime(s)": times,
"Accuracy_B": accus_b,
"AUC_B": aucs_b,
"Runtime_B(s)": times_b})
pd.pivot_table(table, index=['Model'])\
.sort_values(by="Accuracy", ascending=False)
# + [markdown] id="sauz8i7Tw06m"
# Compare the accuracy differences across models, and between training on the imbalanced and the balanced set.
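# One way to make that comparison explicit is a per-model gain column; the sketch below uses dummy scores but the same column names as the results table built above.

```python
import pandas as pd

# Dummy scores standing in for the real results table
table = pd.DataFrame({"Model": ["Logistic Regression", "Random Forest"],
                      "Accuracy": [0.80, 0.85],
                      "Accuracy_B": [0.83, 0.86]})
table["Gain"] = table["Accuracy_B"] - table["Accuracy"]
print(table.sort_values("Gain", ascending=False))
```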
# + id="tgsTKgoQ0dNj"
print(pd.pivot_table(table, index=['Model'])\
.sort_values(by="Accuracy", ascending=False).to_latex())
# + [markdown] id="p36JRIYUpODt"
# ## Ensemble by Soft Voting
#
# Select the most accurate models and combine them into a new multi-layer soft-voting classifier. See the notebook "Advanced Models Exploration".
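# A minimal sketch of soft voting with scikit-learn; the estimators and data here are illustrative placeholders, not the tuned models above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

X_demo, y_demo = make_classification(n_samples=200, n_features=10, random_state=42)
voter = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=42))],
    voting="soft")  # average class probabilities instead of majority vote
voter.fit(X_demo, y_demo)
print(voter.predict(X_demo[:5]).shape)  # (5,)
```

# With `voting="soft"` the ensemble averages each estimator's `predict_proba` output, which usually works better than hard majority voting when the base models are well calibrated.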
| Basic_Models_Experiment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from qiskit.aqua.algorithms import VQE, NumPyEigensolver
import matplotlib.pyplot as plt
import numpy as np
from qiskit.chemistry.components.variational_forms import UCCSD
from qiskit.chemistry.components.initial_states import HartreeFock
from qiskit.circuit.library import EfficientSU2
from qiskit.aqua.components.optimizers import COBYLA, SPSA, SLSQP
from qiskit.aqua.operators import Z2Symmetries
from qiskit import IBMQ, BasicAer, Aer
from qiskit.chemistry.drivers import PySCFDriver, UnitsType
from qiskit.chemistry import FermionicOperator
from qiskit.aqua import QuantumInstance
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter
from qiskit.providers.aer.noise import NoiseModel
# +
# Function takes interatomic distance & returns qubit operator + more info
def get_qubit_op(dist):
# define the molecule (LiH), and configure the interatomic distance for calculations
#create a driver
driver = PySCFDriver(atom="Li .0 .0 .0; H .0 .0 " + str(dist), unit=UnitsType.ANGSTROM,
charge=0, spin=0, basis='sto3g')
molecule = driver.run()
#freeze core
freeze_list = [0]
#remove unoccupied orbitals
remove_list = [-3, -2]
repulsion_energy = molecule.nuclear_repulsion_energy
num_particles = molecule.num_alpha + molecule.num_beta
num_spin_orbitals = molecule.num_orbitals * 2
remove_list = [x % molecule.num_orbitals for x in remove_list]
freeze_list = [x % molecule.num_orbitals for x in freeze_list]
remove_list = [x - len(freeze_list) for x in remove_list]
remove_list += [x + molecule.num_orbitals - len(freeze_list) for x in remove_list]
freeze_list += [x + molecule.num_orbitals for x in freeze_list]
ferOp = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals)
ferOp, energy_shift = ferOp.fermion_mode_freezing(freeze_list)
num_spin_orbitals -= len(freeze_list)
num_particles -= len(freeze_list)
ferOp = ferOp.fermion_mode_elimination(remove_list)
num_spin_orbitals -= len(remove_list)
qubitOp = ferOp.mapping(map_type='parity', threshold=0.00000001)
qubitOp = Z2Symmetries.two_qubit_reduction(qubitOp, num_particles)
shift = energy_shift + repulsion_energy
return qubitOp, num_particles, num_spin_orbitals, shift
# +
backend = BasicAer.get_backend("statevector_simulator")
distances = np.arange(0.5, 4.0, 0.1)
exact_energies = []
vqe_energies = []
optimizer = SLSQP(maxiter=5)
for dist in distances:
# Qubit Operator
qubitOp, num_particles, num_spin_orbitals, shift = get_qubit_op(dist)
    # Classical exact eigensolver for reference energies
    result = NumPyEigensolver(qubitOp).run()
exact_energies.append(np.real(result.eigenvalues) + shift)
initial_state = HartreeFock(
num_spin_orbitals,
num_particles,
qubit_mapping='parity'
)
    # UCCSD variational form (ansatz) for VQE
var_form = UCCSD(
num_orbitals=num_spin_orbitals,
num_particles=num_particles,
initial_state=initial_state,
qubit_mapping='parity'
)
vqe = VQE(qubitOp, var_form, optimizer)
vqe_result = np.real(vqe.run(backend)['eigenvalue'] + shift)
vqe_energies.append(vqe_result)
print("Interatomic Distance:", np.round(dist, 2), "VQE Result:", vqe_result, "Exact Energy:", exact_energies[-1])
print("All energies have been calculated")
# -
plt.plot(distances, exact_energies, label="Exact Energy")
plt.plot(distances, vqe_energies, label="VQE Energy")
plt.xlabel('Atomic distance (Angstrom)')
plt.ylabel('Energy')
plt.legend()
plt.show()
| vqe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py3]
# language: python
# name: conda-env-py3-py
# ---
import pandas
import nibabel as ni
import numpy as np
import networkx as nx
import seaborn as sns
from glob import glob
from sklearn import cluster
import matplotlib.pyplot as plt
tst_el = '../data/graphs/BNU1/hippo_erc_voxelwise/sub-0025864_ses-1_dwi_hippo_erc_voxelwise.edgelist'
jnk = nx.read_weighted_edgelist(tst_el)
tbl = pandas.read_table(tst_el, sep=' ', header=None)
tbl.columns = ['x','y','value']
full_mtx = np.zeros((9877, 9877))
full_mtx.shape
for i,row in tbl.iterrows():
full_mtx[row['x'], row['y']] = row['value']
bin_matrix = np.zeros_like(full_mtx)
bin_matrix[full_mtx > 0] = 1
## Fraction of nonzero (connected) entries
(bin_matrix > 0).sum() / bin_matrix.size
# +
# cluster.SpectralClustering?
# +
atlas = ni.load('../data/dilated_hipp_parcellation_gspace.nii.gz').get_data()
vw = ni.load('../data/hippo_erc_voxelwise.nii.gz').get_data()
# -
len(np.unique(ec_indices)) + len(np.unique(hipp_indices))
# +
#full_mtx = np.zeros((len(np.unique(vw)), len(np.unique(vw))))
hipp_indices = vw[atlas>2]
ec_indices = vw[(atlas==1) | (atlas==2)]
newmat = np.zeros((len(hipp_indices), len(ec_indices)))
for i, hipind in enumerate(hipp_indices):
for j, ecind in enumerate(ec_indices):
newmat[i,j] = bin_matrix[hipind-1, ecind-1]
# -
newmat.shape
SC = cluster.SpectralClustering(n_clusters=2, n_neighbors=100, assign_labels='discretize')
test_SC = SC.fit(newmat)
list(zip(np.unique(test_SC.labels_),np.bincount(test_SC.labels_)))
# +
hip_remap = {i: hipind for i,hipind in enumerate(hipp_indices)}
erc_remap = {i: ecind for i,ecind in enumerate(ec_indices)}
# -
cx_clusters = np.zeros_like(vw)
for idx, label in enumerate(test_SC.labels_):
cx_clusters[vw == hip_remap[idx]] = label+1
# +
from nilearn import plotting
aff = ni.load('../data/dilated_hipp_parcellation_gspace.nii.gz').affine
for val in np.unique(cx_clusters):
jnk = np.zeros_like(cx_clusters)
jnk[cx_clusters==val] = 1
plt.close()
plotting.plot_roi(ni.Nifti1Image(jnk, aff), cmap='RdBu')
plt.show()
# -
dist_matrix = full_mtx  # treat the full connectivity matrix built above as distances
d = dist_matrix.shape[0] # dimensionality -- number of nodes
n = 1 # number of graphs
delta = n**(-1./(d+4))
similarity_matrix = np.exp(- dist_matrix ** 2 / (2. * delta ** 2))
sns.distplot(similarity_matrix.flat, kde=False)
plt.show()
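# A quick sanity check of the Gaussian kernel used above (self-contained; the bandwidth value here is illustrative): similarity is 1 at zero distance and decays monotonically as distance grows.

```python
import numpy as np

delta_demo = 0.5  # illustrative bandwidth, not the Scott's-rule value above
dist = np.array([0.0, 0.5, 1.0])
sim = np.exp(-dist ** 2 / (2. * delta_demo ** 2))
print(sim)  # starts at 1.0 and decreases
```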
| code/HIPP_ERC_CX_CLUSTERING.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.7 64-bit (''ml_env'': conda)'
# name: python_defaultSpec_1594734513062
# ---
# + tags=[]
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=100000, n_features=25)
# + tags=[]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_cal, X_test, y_cal, y_test = train_test_split(X_test, y_test, test_size=0.5, random_state=42)
print(f"{X_train.shape}")
print(f"{X_test.shape}")
print(f"{X_cal.shape}")
# +
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(random_state=42)
ada_clf.fit(X_train, y_train)
# -
ada_probas = ada_clf.predict_proba(X_test)[:, 1]
ada_probas
# +
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt
def plot_calibration_curve(name, fig_index, probs):
fig = plt.figure(fig_index, figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
frac_of_pos, mean_pred_value = calibration_curve(y_test, probs, n_bins=10)
ax1.plot(mean_pred_value, frac_of_pos, "s-", label=f'{name}')
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title(f'Calibration plot ({name})')
ax2.hist(probs, range=(0, 1), bins=10, label=name, histtype="step", lw=2)
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
# -
plot_calibration_curve("AdaBoost Classifier", 1, ada_probas)
ada_probas_cal = ada_clf.predict_proba(X_cal)[:, 1]
# +
from sklearn.linear_model import LogisticRegression
lr_cal = LogisticRegression(random_state=42)
lr_cal.fit(ada_probas_cal.reshape(-1, 1), y_cal)
lr_cal_probas = lr_cal.predict_proba(ada_probas.reshape(-1, 1))[:, 1]
plot_calibration_curve("Logistic regression", 1, lr_cal_probas)
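# Fitting a logistic regression on a classifier's scores, as done above, is Platt scaling: it learns a sigmoid map from raw score to calibrated probability. A self-contained sketch of that mapping (the coefficients here are made up; `lr_cal` learns its own):

```python
import numpy as np

# Sigmoid with illustrative coefficients a, b
def platt(score, a=4.0, b=-2.0):
    return 1.0 / (1.0 + np.exp(-(a * score + b)))

scores = np.array([0.0, 0.5, 1.0])
print(platt(scores))  # monotone map into (0, 1)
```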
# +
from betacal import BetaCalibration
bc = BetaCalibration(parameters="abm")
bc.fit(ada_probas_cal, y_cal)
bc_probas = bc.predict(ada_probas)
plot_calibration_curve("Beta calibration", 1, bc_probas)
# -
| Dummy_dataset_and_calibration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: sfem3
# language: python
# name: sfem3
# ---
# # sdata.table
# +
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# %autosave 0
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')
import os
import sys
import numpy as np
import pandas as pd
import sdata
import uuid
# -
my_unique_name = "test0815"
sdata.uuid_from_str(my_unique_name)
df = pd.DataFrame({"a":[1,2,3], "b":[1.2, 3, 2]})
df
data = sdata.Data(name=my_unique_name,
uuid=sdata.uuid_from_str(my_unique_name),
table=df)
data
data.metadata.df
data.name = "hallo"
data.uuid = uuid.uuid4()
data.metadata.df
data.table
data.df.loc[1,"a"] = 0.1
data.df
df
filepath_xlsx = "/tmp/data.xlsx"
data.to_xlsx(filepath_xlsx)
data_xlsx = sdata.Data.from_xlsx(filepath_xlsx)
assert data.name == data_xlsx.name
data_xlsx.df.loc[1,"a"] = 0.4
data_xlsx.df
df
data.metadata.df
| ipynb/sdata_table_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/beta/AlphaFold2_advanced.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="pc5-mbsX9PZC"
# # AlphaFold2_advanced
#
# This notebook modifies deepmind's [original notebook](https://colab.research.google.com/github/deepmind/alphafold/blob/main/notebooks/AlphaFold.ipynb) (**before AlphaFold-Multimer existed**) to add experimental support for modeling complexes (both homo and hetero-oligomers), option to run MMseqs2 instead of Jackhmmer for MSA generation and advanced functionality.
#
# See [ColabFold](https://github.com/sokrypton/ColabFold/) for other related notebooks
#
# **Limitations**
# - This notebook does **NOT** use Templates.
# - This notebook does **NOT** use AlphaFold-Multimer for complex (protein-protein) modeling.
# - For a typical Google-Colab session, with a `16G-GPU`, the max total length is **1400 residues**. Sometimes a `12G-GPU` is assigned, in which the max length is ~1000 residues.
# - Can I use the models for **Molecular Replacement**? Yes, but be CAREFUL, the bfactor column is populated with pLDDT confidence values (higher = better). Phenix.phaser expects a "real" bfactor, where (lower = better). See [post](https://twitter.com/cheshireminima/status/1423929241675120643) from <NAME> on how to process models.
# + id="woIxeCPygt7K" cellView="form"
#@title Install software
#@markdown Please execute this cell by pressing the _Play_ button
#@markdown on the left.
# setup device
import os
import sys
import tensorflow as tf
import jax
try:
# check if TPU is available
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
print('Running on TPU')
DEVICE = "tpu"
except:
if jax.local_devices()[0].platform == 'cpu':
print("WARNING: no GPU detected, will be using CPU")
DEVICE = "cpu"
else:
print('Running on GPU')
DEVICE = "gpu"
# disable GPU on tensorflow
tf.config.set_visible_devices([], 'GPU')
from IPython.utils import io
import subprocess
import tqdm.notebook
install_jackhmmer = True
GIT_REPO = 'https://github.com/deepmind/alphafold'
SOURCE_URL = 'https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar'
PARAMS_DIR = './alphafold/data/params'
PARAMS_PATH = os.path.join(PARAMS_DIR, os.path.basename(SOURCE_URL))
TQDM_BAR_FORMAT = '{l_bar}{bar}| {n_fmt}/{total_fmt} [elapsed: {elapsed} remaining: {remaining}]'
TMP_DIR = "tmp"
os.makedirs(TMP_DIR, exist_ok=True)
# if not already installed
total = 55
with tqdm.notebook.tqdm(total=total, bar_format=TQDM_BAR_FORMAT) as pbar:
if not os.path.isdir("alphafold"):
# download alphafold code
os.system(f"git clone {GIT_REPO} alphafold; cd alphafold; git checkout 1d43aaff941c84dc56311076b58795797e49107b")
# download colabfold patches
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/colabfold.py")
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/colabfold_alphafold.py")
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/pairmsa.py")
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/protein.patch -P {TMP_DIR}")
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/config.patch -P {TMP_DIR}")
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/model.patch -P {TMP_DIR}")
os.system(f"wget -qnc https://raw.githubusercontent.com/sokrypton/ColabFold/main/beta/modules.patch -P {TMP_DIR}")
# apply patch to dynamically control number of recycles (idea from <NAME>)
os.system(f"patch -u alphafold/alphafold/model/model.py -i {TMP_DIR}/model.patch")
os.system(f"patch -u alphafold/alphafold/model/modules.py -i {TMP_DIR}/modules.patch")
os.system(f"patch -u alphafold/alphafold/model/config.py -i {TMP_DIR}/config.patch")
# apply multi-chain patch from <NAME> @huhlim
os.system(f"patch -u alphafold/alphafold/common/protein.py -i {TMP_DIR}/protein.patch")
pbar.update(4)
os.system(f"pip install biopython dm-haiku ml-collections py3Dmol")
pbar.update(6)
# download model params (speedup from kaczmarj)
os.system(f"mkdir --parents {PARAMS_DIR}")
os.system(f"curl -fsSL {SOURCE_URL} | tar x -C {PARAMS_DIR}")
pbar.update(14+27)
# install hhsuite
os.system(f"curl -fsSL https://github.com/soedinglab/hh-suite/releases/download/v3.3.0/hhsuite-3.3.0-SSE2-Linux.tar.gz | tar xz -C {TMP_DIR}/")
# install jackhmmer
if install_jackhmmer:
os.system(f"sudo apt install --quiet --yes hmmer")
pbar.update(3)
# create a ramdisk to store a database chunk to make Jackhmmer run fast.
os.system(f"sudo mkdir -m 777 --parents /tmp/ramdisk")
os.system(f"sudo mount -t tmpfs -o size=9G ramdisk /tmp/ramdisk")
pbar.update(1)
else:
pbar.update(4)
else:
pbar.update(55)
########################################################################################
# --- Python imports ---
if '/content/alphafold' not in sys.path:
sys.path.append('/content/alphafold')
if f"{TMP_DIR}/bin" not in os.environ['PATH']:
os.environ['PATH'] += f":{TMP_DIR}/bin:{TMP_DIR}/scripts"
import colabfold as cf
import colabfold_alphafold as cf_af
import json
import matplotlib.pyplot as plt
import numpy as np
try:
from google.colab import files
IN_COLAB = True
except:
IN_COLAB = False
# + id="rowN0bVYLe9n" cellView="form"
#@title Enter the amino acid sequence to fold ⬇️
import re
# define sequence
sequence = 'PIAQIHILEGRSDEQKETLIREVSEAISRSLDAPLTSVRVIITEMAKGHFGIGGELASK' #@param {type:"string"}
jobname = "test" #@param {type:"string"}
homooligomer = "1" #@param {type:"string"}
#@markdown - `sequence` Specify protein sequence to be modelled.
#@markdown - Use `/` to specify intra-protein chainbreaks (for trimming regions within protein).
#@markdown - Use `:` to specify inter-protein chainbreaks (for modeling protein-protein hetero-complexes).
#@markdown - For example, sequence `AC/DE:FGH` will be modelled as polypeptides: `AC`, `DE` and `FGH`. A separate MSA will be generated for `ACDE` and `FGH`.
#@markdown If `pair_msa` is enabled, `ACDE`'s MSA will be paired with `FGH`'s MSA.
#@markdown - `homooligomer` Define number of copies in a homo-oligomeric assembly.
#@markdown - Use `:` to specify a different homooligomeric state (copy number) for each component of the complex.
#@markdown - For example, **sequence:**`ABC:DEF`, **homooligomer:** `2:1`, the first protein `ABC` will be modeled as a homodimer (2 copies) and second `DEF` a monomer (1 copy).
I = cf_af.prep_inputs(sequence, jobname, homooligomer, clean=IN_COLAB)
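The `/`, `:` and `homooligomer` conventions above can be illustrated with a small standalone sketch. This is an illustrative parser only, not the actual `cf_af.prep_inputs` implementation:

```python
# Illustrative parser for the sequence/homooligomer syntax described above
# (assumption: a sketch of the conventions, not the real colabfold code).
def parse_inputs(sequence, homooligomer):
    # ":" separates proteins in a complex; "/" marks intra-protein chainbreaks
    proteins = [p.replace("/", "") for p in sequence.split(":")]
    copies = [int(c) for c in homooligomer.split(":")]
    if len(copies) == 1:
        # a single copy number applies to every component
        copies = copies * len(proteins)
    # expand homooligomeric copies into the full list of chains to model
    chains = []
    for seq, n in zip(proteins, copies):
        chains.extend([seq] * n)
    return chains

# "ABC" as a homodimer plus "DEF" as a monomer -> three chains total
print(parse_inputs("ABC:DEF", "2:1"))  # ['ABC', 'ABC', 'DEF']
```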
# + id="ITcPnLkLuDDE" cellView="form"
#@title Search against genetic databases
#@markdown Once this cell has been executed, you will see
#@markdown statistics about the multiple sequence alignment
#@markdown (MSA) that will be used by AlphaFold. In particular,
#@markdown you’ll see how well each residue is covered by similar
#@markdown sequences in the MSA.
#@markdown (Note that the search against databases and the actual prediction can take some time, from minutes to hours, depending on the length of the protein and what type of GPU you are allocated by Colab.)
#@markdown ---
msa_method = "mmseqs2" #@param ["mmseqs2","jackhmmer","single_sequence","precomputed"]
#@markdown - `mmseqs2` - FAST method from [ColabFold](https://github.com/sokrypton/ColabFold)
#@markdown - `jackhmmer` - default method from Deepmind (SLOW, but may find more or fewer sequences).
#@markdown - `single_sequence` - use single sequence input
#@markdown - `precomputed` If you have previously run this notebook and saved the results,
#@markdown you can skip this step by uploading
#@markdown the previously generated `prediction_?????/msa.pickle`
#@markdown ---
#@markdown **custom msa options**
add_custom_msa = False #@param {type:"boolean"}
msa_format = "fas" #@param ["fas","a2m","a3m","sto","psi","clu"]
#@markdown - `add_custom_msa` - If enabled, you'll get an option to upload your custom MSA in the specified `msa_format`. Note: Your MSA will be supplemented with those from 'mmseqs2' or 'jackhmmer', unless `msa_method` is set to 'single_sequence'.
#@markdown ---
#@markdown **pair msa options**
#@markdown Experimental option for protein complexes. Pairing is currently only supported for proteins in the same operon (prokaryotic genomes).
pair_mode = "unpaired" #@param ["unpaired","unpaired+paired","paired"] {type:"string"}
#@markdown - `unpaired` - generate separate MSA for each protein.
#@markdown - `unpaired+paired` - attempt to pair sequences from the same operon within the genome.
#@markdown - `paired` - only use sequences that were successfully paired.
#@markdown Options to prefilter each MSA before pairing. (It might help if there are any paralogs in the complex.)
pair_cov = 50 #@param [0,25,50,75,90] {type:"raw"}
pair_qid = 20 #@param [0,15,20,30,40,50] {type:"raw"}
#@markdown - `pair_cov` prefilter each MSA to minimum coverage with query (%) before pairing.
#@markdown - `pair_qid` prefilter each MSA to minimum sequence identity with query (%) before pairing.
# --- Search against genetic databases ---
I = cf_af.prep_msa(I, msa_method, add_custom_msa, msa_format,
pair_mode, pair_cov, pair_qid, TMP_DIR=TMP_DIR)
mod_I = I
if len(I["msas"][0]) > 1:
plt = cf.plot_msas(I["msas"], I["ori_sequence"])
plt.savefig(os.path.join(I["output_dir"],"msa_coverage.png"), bbox_inches = 'tight', dpi=200)
plt.show()
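The `pair_cov`/`pair_qid` prefilters above can be sketched in plain Python. This is a simplified stand-in for the filtering inside `cf_af.prep_msa`, assuming alignment gaps are written as `-`:

```python
# Simplified sketch of the coverage/identity prefilter (assumption: not the
# actual colabfold_alphafold code, just the idea behind pair_cov/pair_qid).
def prefilter_msa(query, seqs, min_cov=50, min_qid=20):
    kept = []
    for s in seqs:
        # columns where this hit actually aligns to the query (non-gap)
        aligned = [(q, a) for q, a in zip(query, s) if a != "-"]
        cov = 100.0 * len(aligned) / len(query)          # % of query covered
        ident = (100.0 * sum(q == a for q, a in aligned) / len(aligned)
                 if aligned else 0.0)                    # % identity over covered part
        if cov >= min_cov and ident >= min_qid:
            kept.append(s)
    return kept

query = "ACDEFG"
msa = ["ACDEFG", "AC----", "AADEFG"]
print(prefilter_msa(query, msa, min_cov=50, min_qid=50))  # ['ACDEFG', 'AADEFG']
```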
# + id="sYIjwa1VXeeW" cellView="form"
#@title Filter options (optional)
trim = "" #@param {type:"string"}
trim_inverse = False #@param {type:"boolean"}
#@markdown - Use `trim` to specify regions to trim. For example: `trim:5-9,20` will remove positions 5,6,7,8,9 and 20.
#@markdown - For complexes, you can use `trim:A1-A3,B5-B7` to remove positions 1,2,3 in 1st protein and positions 5,6,7 in 2nd protein.
#@markdown - Note: This function is 1-indexed, meaning the first position is 1, not 0.
#@markdown - To specify regions to keep instead of trim, enable `trim_inverse`
cov = 0 #@param [0,25,50,75,90,95] {type:"raw"}
qid = 0 #@param [0,15,20,25,30,40,50] {type:"raw"}
#@markdown - `cov` minimum coverage with query (%)
#@markdown - `qid` minimum sequence identity with query (%)
mod_I = cf_af.prep_filter(I, trim, trim_inverse, cov, qid)
if I["msas"] != mod_I["msas"]:
plt.figure(figsize=(16,5),dpi=100)
plt.subplot(1,2,1)
plt.title("Sequence coverage (Before)")
cf.plot_msas(I["msas"], I["ori_sequence"], return_plt=False)
plt.subplot(1,2,2)
plt.title("Sequence coverage (After)")
cf.plot_msas(mod_I["msas"], mod_I["ori_sequence"], return_plt=False)
plt.savefig(os.path.join(I["output_dir"],"msa_coverage.filtered.png"), bbox_inches = 'tight', dpi=200)
plt.show()
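The 1-indexed `trim` syntax can be illustrated with a minimal parser. This is a hypothetical helper, not the real logic in `cf_af.prep_filter`, and the chain-prefixed form like `A1-A3` would need extra handling not shown here:

```python
def parse_trim(trim):
    """Expand a 1-indexed trim spec like '5-9,20' into a set of positions."""
    positions = set()
    if not trim:
        return positions
    for part in trim.split(","):
        if "-" in part:
            # a range such as "5-9" is inclusive on both ends
            start, end = part.split("-")
            positions.update(range(int(start), int(end) + 1))
        else:
            positions.add(int(part))
    return positions

print(sorted(parse_trim("5-9,20")))  # [5, 6, 7, 8, 9, 20]
```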
# + id="bQe3KeyTcv0n" cellView="form"
#@title Run alphafold
num_relax = "None"
rank_by = "pLDDT" #@param ["pLDDT","pTMscore"]
use_turbo = True #@param {type:"boolean"}
max_msa = "512:1024" #@param ["512:1024", "256:512", "128:256", "64:128", "32:64"]
max_msa_clusters, max_extra_msa = [int(x) for x in max_msa.split(":")]
#@markdown - `rank_by` specify metric to use for ranking models (For protein-protein complexes, we recommend pTMscore)
#@markdown - `use_turbo` introduces a few modifications (compile once, swap params, adjust max_msa) to speed up and reduce memory requirements. Disable for default behavior.
#@markdown - `max_msa` defines: `max_msa_clusters:max_extra_msa` number of sequences to use. When adjusting after a GPU crash, be sure to `Runtime` → `Restart runtime`. (Lowering will reduce GPU requirements, but may result in poor model quality. This option is ignored if `use_turbo` is disabled.)
show_images = True #@param {type:"boolean"}
#@markdown - `show_images` To make things more exciting we show images of the predicted structures as they are being generated. (WARNING: the order of images displayed does not reflect any ranking).
#@markdown ---
#@markdown **sampling options**
#@markdown There are two stochastic parts of the pipeline. Within the feature generation (choice of cluster centers) and within the model (dropout).
#@markdown To get structure diversity, you can iterate through a fixed number of random_seeds (using `num_samples`) and/or enable dropout (using `is_training`).
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
use_ptm = True #@param {type:"boolean"}
num_ensemble = 1 #@param [1,8] {type:"raw"}
max_recycles = 3 #@param [1,3,6,12,24,48] {type:"raw"}
tol = 0 #@param [0,0.1,0.5,1] {type:"raw"}
is_training = False #@param {type:"boolean"}
num_samples = 1 #@param [1,2,4,8,16,32] {type:"raw"}
#@markdown - `num_models` specify how many model params to try. (5 recommended)
#@markdown - `use_ptm` uses Deepmind's `ptm` finetuned model parameters to get PAE per structure. Disable to use the original model params. (Disabling may give alternative structures.)
#@markdown - `num_ensemble` the trunk of the network is run multiple times with different random choices for the MSA cluster centers. (`1`=`default`, `8`=`casp14 setting`)
#@markdown - `max_recycles` controls the maximum number of times the structure is fed back into the neural network for refinement. (3 recommended)
#@markdown - `tol` tolerance for deciding when to stop (CA-RMS between recycles)
#@markdown - `is_training` enables the stochastic part of the model (dropout), when coupled with `num_samples` can be used to "sample" a diverse set of structures.
#@markdown - `num_samples` number of random_seeds to try.
subsample_msa = True #@param {type:"boolean"}
#@markdown - `subsample_msa` subsamples large MSAs to `3E7/length` sequences to avoid crashing the preprocessing protocol. (This option is ignored if `use_turbo` is disabled.)
if not use_ptm and rank_by == "pTMscore":
print("WARNING: models will be ranked by pLDDT, 'use_ptm' is needed to compute pTMscore")
rank_by = "pLDDT"
# prep input features
feature_dict = cf_af.prep_feats(mod_I, clean=IN_COLAB)
Ls_plot = feature_dict["Ls"]
# prep model options
opt = {"N":len(feature_dict["msa"]),
"L":len(feature_dict["residue_index"]),
"use_ptm":use_ptm,
"use_turbo":use_turbo,
"max_recycles":max_recycles,
"tol":tol,
"num_ensemble":num_ensemble,
"max_msa_clusters":max_msa_clusters,
"max_extra_msa":max_extra_msa,
"is_training":is_training}
if use_turbo:
if "runner" in dir():
# only recompile if options changed
runner = cf_af.prep_model_runner(opt, old_runner=runner)
else:
runner = cf_af.prep_model_runner(opt)
else:
runner = None
###########################
# run alphafold
###########################
outs, model_rank = cf_af.run_alphafold(feature_dict, opt, runner, num_models, num_samples, subsample_msa,
rank_by=rank_by, show_images=show_images)
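The `max_recycles`/`tol` options above amount to an early-stopping loop over recycling iterations. Here is a schematic NumPy sketch, with a dummy model standing in for AlphaFold's network; the real loop recycles full embeddings inside the model, not just coordinates:

```python
import numpy as np

def recycle_until_converged(run_model, max_recycles=3, tol=0.5):
    """Schematic recycling loop: stop once CA coordinates stop moving.

    run_model(prev_ca) -> new CA coordinates. One initial pass plus up to
    max_recycles recycles; stop early when CA-RMS between recycles < tol.
    """
    prev_ca = None
    for n in range(max_recycles + 1):
        ca = run_model(prev_ca)
        if prev_ca is not None:
            ca_rms = np.sqrt(np.square(ca - prev_ca).sum(-1).mean())
            if ca_rms < tol:
                break
        prev_ca = ca
    return ca, n

# dummy model that moves every atom halfway toward the origin each recycle
coords = np.ones((10, 3)) * 8.0
def dummy_model(prev):
    return coords if prev is None else prev / 2.0

final, n_recycles = recycle_until_converged(dummy_model, max_recycles=6, tol=0.5)
print(n_recycles)  # 5
```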
# + id="PnMog4kHUztt" cellView="form"
#@title Refine structures with Amber-Relax (optional)
#@markdown If side-chain bond geometry is important to you, enable Amber-Relax by specifying how many top ranked structures you want relaxed. By default, we disable Amber-Relax since it barely moves the main-chain (backbone) structure and can overall double the runtime.
num_relax = "None" #@param ["None", "Top1", "Top5", "All"] {type:"string"}
if num_relax == "None":
num_relax = 0
elif num_relax == "Top1":
num_relax = 1
elif num_relax == "Top5":
num_relax = 5
else:
num_relax = num_models * num_samples
#@markdown - `num_relax` specify how many of the top ranked structures to relax
if num_relax > 0 and not os.path.isfile("stereo_chemical_props.txt"):
try:
total = 45
with tqdm.notebook.tqdm(total=total, bar_format=TQDM_BAR_FORMAT) as pbar:
pbar.set_description(f'INSTALL AMBER')
with io.capture_output() as captured:
# Install OpenMM and pdbfixer.
# %shell rm -rf /opt/conda
# %shell wget -q -P /tmp \
# https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
# && bash /tmp/Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \
# && rm /tmp/Miniconda3-latest-Linux-x86_64.sh
pbar.update(4)
# PATH=%env PATH
# %env PATH=/opt/conda/bin:{PATH}
# %shell conda update -qy conda \
# && conda install -qy -c conda-forge \
# python=3.7 \
# openmm=7.5.1 \
# pdbfixer
pbar.update(40)
# %shell wget -q -P /content \
# https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt
pbar.update(1)
# %shell mkdir -p /content/alphafold/common
# %shell cp -f /content/stereo_chemical_props.txt /content/alphafold/common
# Apply OpenMM patch.
# %shell pushd /opt/conda/lib/python3.7/site-packages/ && \
# patch -p0 < /content/alphafold/docker/openmm.patch && \
# popd
except subprocess.CalledProcessError:
print(captured)
raise
if num_relax > 0:
if "relax" not in dir():
# add conda environment to path
sys.path.append('/opt/conda/lib/python3.7/site-packages')
# import libraries
from alphafold.relax import relax
from alphafold.relax import utils
with tqdm.notebook.tqdm(total=num_relax, bar_format=TQDM_BAR_FORMAT) as pbar:
pbar.set_description(f'AMBER relaxation')
for n,key in enumerate(model_rank):
if n < num_relax:
prefix = f"rank_{n+1}_{key}"
pred_output_path = os.path.join(I["output_dir"],f'{prefix}_relaxed.pdb')
if not os.path.isfile(pred_output_path):
amber_relaxer = relax.AmberRelaxation(
max_iterations=0,
tolerance=2.39,
stiffness=10.0,
exclude_residues=[],
max_outer_iterations=20)
relaxed_pdb_lines, _, _ = amber_relaxer.process(prot=outs[key]["unrelaxed_protein"])
with open(pred_output_path, 'w') as f:
f.write(relaxed_pdb_lines)
pbar.update(n=1)
# + id="KAZj6CBZTkJM" cellView="form"
#@title Display 3D structure (optional) {run: "auto"}
rank_num = 1 #@param ["1", "2", "3", "4", "5"] {type:"raw"}
color = "lDDT" #@param ["chain", "lDDT", "rainbow"]
show_sidechains = False #@param {type:"boolean"}
show_mainchains = False #@param {type:"boolean"}
key = model_rank[rank_num-1]
prefix = f"rank_{rank_num}_{key}"
pred_output_path = os.path.join(I["output_dir"],f'{prefix}_relaxed.pdb')
if not os.path.isfile(pred_output_path):
pred_output_path = os.path.join(I["output_dir"],f'{prefix}_unrelaxed.pdb')
cf.show_pdb(pred_output_path, show_sidechains, show_mainchains, color, Ls=Ls_plot).show()
if color == "lDDT": cf.plot_plddt_legend().show()
if use_ptm:
cf.plot_confidence(outs[key]["plddt"], outs[key]["pae"], Ls=Ls_plot).show()
else:
cf.plot_confidence(outs[key]["plddt"], Ls=Ls_plot).show()
# + id="XzK2Wve12GCk" cellView="form"
#@title Extra outputs (optional)
dpi = 100 #@param {type:"integer"}
save_to_txt = True #@param {type:"boolean"}
save_pae_json = True #@param {type:"boolean"}
#@markdown - save data used to generate contact and distogram plots below to text file (pae values can be found in json file if `use_ptm` is enabled)
if use_ptm:
print("predicted alignment error")
cf.plot_paes([outs[k]["pae"] for k in model_rank], Ls=Ls_plot, dpi=dpi)
plt.savefig(os.path.join(I["output_dir"],f'predicted_alignment_error.png'), bbox_inches = 'tight', dpi=np.maximum(200,dpi))
plt.show()
print("predicted contacts")
cf.plot_adjs([outs[k]["adj"] for k in model_rank], Ls=Ls_plot, dpi=dpi)
plt.savefig(os.path.join(I["output_dir"],f'predicted_contacts.png'), bbox_inches = 'tight', dpi=np.maximum(200,dpi))
plt.show()
print("predicted distogram")
cf.plot_dists([outs[k]["dists"] for k in model_rank], Ls=Ls_plot, dpi=dpi)
plt.savefig(os.path.join(I["output_dir"],f'predicted_distogram.png'), bbox_inches = 'tight', dpi=np.maximum(200,dpi))
plt.show()
print("predicted LDDT")
cf.plot_plddts([outs[k]["plddt"] for k in model_rank], Ls=Ls_plot, dpi=dpi)
plt.savefig(os.path.join(I["output_dir"],f'predicted_LDDT.png'), bbox_inches = 'tight', dpi=np.maximum(200,dpi))
plt.show()
def do_save_to_txt(filename, adj, dists, sequence):
adj = np.asarray(adj)
dists = np.asarray(dists)
L = len(adj)
with open(filename,"w") as out:
out.write("i\tj\taa_i\taa_j\tp(cbcb<8)\tmaxdistbin\n")
for i in range(L):
for j in range(i+1,L):
if dists[i][j] < 21.68 or adj[i][j] >= 0.001:
line = f"{i}\t{j}\t{sequence[i]}\t{sequence[j]}\t{adj[i][j]:.3f}"
line += f"\t>{dists[i][j]:.2f}" if dists[i][j] == 21.6875 else f"\t{dists[i][j]:.2f}"
out.write(f"{line}\n")
for n,key in enumerate(model_rank):
if save_to_txt:
txt_filename = os.path.join(I["output_dir"],f'rank_{n+1}_{key}.raw.txt')
do_save_to_txt(txt_filename,
outs[key]["adj"],
outs[key]["dists"],
mod_I["full_sequence"])
if use_ptm and save_pae_json:
pae = outs[key]["pae"]
max_pae = pae.max()
# Save pLDDT and predicted aligned error (if it exists)
pae_output_path = os.path.join(I["output_dir"],f'rank_{n+1}_{key}_pae.json')
# Save predicted aligned error in the same format as the AF EMBL DB
rounded_errors = np.round(np.asarray(pae), decimals=1)
indices = np.indices((len(rounded_errors), len(rounded_errors))) + 1
indices_1 = indices[0].flatten().tolist()
indices_2 = indices[1].flatten().tolist()
pae_data = json.dumps([{
'residue1': indices_1,
'residue2': indices_2,
'distance': rounded_errors.flatten().tolist(),
'max_predicted_aligned_error': max_pae.item()
}],
indent=None,
separators=(',', ':'))
with open(pae_output_path, 'w') as f:
f.write(pae_data)
# + cellView="form" id="xkCcitL6qdNS"
#@title Animate outputs (optional)
dpi = 100 #@param {type:"integer"}
use_pca = True #@param {type:"boolean"}
#@markdown - `use_pca` - use the first principal component to determine the order of animation frames
import matplotlib
from matplotlib import animation
from IPython.display import HTML
from sklearn.decomposition import PCA
def mk_animation(positions, labels, ref=0, Ls=None, line_w=2.0, dpi=100):
def ca_align_to_last(positions, ref):
def align(P, Q):
if Ls is None or len(Ls) == 1:
P_,Q_ = P,Q
else:
# align relative to first chain
P_,Q_ = P[:Ls[0]],Q[:Ls[0]]
p = P_ - P_.mean(0,keepdims=True)
q = Q_ - Q_.mean(0,keepdims=True)
return ((P - P_.mean(0,keepdims=True)) @ cf.kabsch(p,q)) + Q_.mean(0,keepdims=True)
pos = positions[ref,:,1,:] - positions[ref,:,1,:].mean(0,keepdims=True)
best_2D_view = pos @ cf.kabsch(pos,pos,return_v=True)
new_positions = []
for i in range(len(positions)):
new_positions.append(align(positions[i,:,1,:],best_2D_view))
return np.asarray(new_positions)
# align to reference
pos = ca_align_to_last(positions, ref)
fig, (ax1) = plt.subplots(1)
fig.set_figwidth(5)
fig.set_figheight(5)
fig.set_dpi(dpi)
xy_min = pos[...,:2].min() - 1
xy_max = pos[...,:2].max() + 1
for ax in [ax1]:
ax.set_xlim(xy_min, xy_max)
ax.set_ylim(xy_min, xy_max)
ax.axis(False)
ims=[]
for l,p in zip(labels,pos):
if Ls is None or len(Ls) == 1:
img = cf.plot_pseudo_3D(p, ax=ax1, line_w=line_w)
else:
c = np.concatenate([[n]*L for n,L in enumerate(Ls)])
img = cf.plot_pseudo_3D(p, c=c, cmap=cf.pymol_cmap, cmin=0, cmax=39, line_w=line_w, ax=ax1)
ims.append([cf.add_text(f"{l}", ax1),img])
ani = animation.ArtistAnimation(fig, ims, blit=True, interval=120)
plt.close()
return ani.to_html5_video()
labels = np.array([k for k in outs])
pos = np.array([outs[k]["unrelaxed_protein"].atom_positions for k in labels])
if use_pca:
pos_ca = pos[:,:,1,:]
if Ls_plot is not None and len(Ls_plot) > 1:
pos_ca = pos_ca[:,:Ls_plot[0]]
i,j = np.triu_indices(pos_ca.shape[1],k=1)
pos_ca_dm = np.sqrt(np.square(pos_ca[:,None,:,:] - pos_ca[:,:,None]).sum(-1))[:,i,j]
pc = PCA(1).fit_transform(pos_ca_dm)[:,0].argsort()
pos = pos[pc]
labels = labels[pc]
HTML(mk_animation(pos,labels,Ls=Ls_plot,dpi=dpi))
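`cf.kabsch`, used in the alignment above, implements the standard SVD-based superposition. For reference, a self-contained NumPy version — a generic textbook sketch, not ColabFold's exact code:

```python
import numpy as np

def kabsch(P, Q):
    """Rotation matrix best superposing point set P onto Q (rows = points).

    P and Q are assumed centered; here Q is an exact rotation of P, so no
    centering is needed for the demonstration.
    """
    H = P.T @ Q                          # covariance of the two point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))   # avoid improper rotations (reflections)
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# rotate a random point cloud and check the alignment recovers the rotation
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
Q = P @ R
rmsd = np.sqrt(np.square(P @ kabsch(P, Q) - Q).sum(-1).mean())
print(f"post-alignment RMSD: {rmsd:.2e}")
```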
# + id="Riekgf0KQv_3" cellView="form"
#@title Download prediction
#@markdown Once this cell has been executed, a zip-archive with
#@markdown the obtained prediction will be automatically downloaded
#@markdown to your computer.
# add settings file
settings_path = os.path.join(I["output_dir"],"settings.txt")
with open(settings_path, "w") as text_file:
text_file.write(f"notebook=https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/beta/AlphaFold2_advanced_beta.ipynb\n")
text_file.write(f"sequence={I['ori_sequence']}\n")
text_file.write(f"msa_method={msa_method}\n")
if add_custom_msa:
text_file.write(f"add_custom_msa={add_custom_msa} msa_format={msa_format}\n")
text_file.write(f"homooligomer={I['homooligomer']}\n")
text_file.write(f"pair_mode={pair_mode}\n")
if pair_mode != "unpaired":
text_file.write(f"pair_cov={pair_cov}\n")
text_file.write(f"pair_qid={pair_qid}\n")
if I["ori_sequence"] != mod_I["ori_sequence"]:
text_file.write(f"mod_sequence={mod_I['ori_sequence']}\n")
text_file.write(f"trim={trim}\n")
text_file.write(f"trim_inverse={trim_inverse}\n")
if "cov" in dir():
text_file.write(f"cov={cov}\n")
text_file.write(f"qid={qid}\n")
else:
text_file.write(f"cov=0\nqid=0\n")
text_file.write(f"max_msa={max_msa}\n")
text_file.write(f"subsample_msa={subsample_msa}\n")
text_file.write(f"num_relax={num_relax}\n")
text_file.write(f"use_turbo={use_turbo}\n")
text_file.write(f"use_ptm={use_ptm}\n")
text_file.write(f"rank_by={rank_by}\n")
text_file.write(f"num_models={num_models}\n")
text_file.write(f"num_samples={num_samples}\n")
text_file.write(f"num_ensemble={num_ensemble}\n")
text_file.write(f"max_recycles={max_recycles}\n")
text_file.write(f"tol={tol}\n")
text_file.write(f"is_training={is_training}\n")
text_file.write(f"use_templates=False\n")
text_file.write(f"-------------------------------------------------\n")
for n,key in enumerate(model_rank):
  # the conditional must be parenthesized so the pLDDT part is always written
  line = f"rank_{n+1}_{key} pLDDT:{outs[key]['pLDDT']:.2f}" + (f" pTMscore:{outs[key]['pTMscore']:.4f}" if use_ptm else "")
  text_file.write(line+"\n")
# --- Download the predictions ---
# %shell zip -FSr {I["output_dir"]}.zip {I["output_dir"]}
if IN_COLAB:
files.download(f'{I["output_dir"]}.zip')
else:
  print("This notebook appears to be running locally. To download, click the folder icon on the left, navigate to the file, then right-click and download.")
| beta/AlphaFold2_advanced.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/utkarshpaliwal9/Anomaly-Detection/blob/master/Major_project_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="IvXjAHsbcKY7" colab_type="text"
# ## Criminal Activity Recognition
# + id="ylTxivSXbaBw" colab_type="code" outputId="e6fea5a4-326a-4983-92ec-18f2a1d7344d" colab={"base_uri": "https://localhost:8080/", "height": 122}
from google.colab import drive
drive.mount('/content/drive')
# + id="ossgjxLiHxjB" colab_type="code" colab={}
import os
import pandas as pd
# os.chdir('/content/drive/My Drive/Project')
# data = pd.read_csv('CrimeDS.txt', sep="\n", header=None)
# print(data.iloc[0].values[0])
os.chdir('/content/drive/My Drive/Project/AnomalyVideos')
# + id="K12tg9E9yO-G" colab_type="code" colab={}
data = []
for name in os.listdir():
  if name.endswith('.mp4'):  # keep only the .mp4 video files
data.append(name)
# + id="f4OPr2YRzTkc" colab_type="code" colab={}
# data
# + id="S6K-D8-ecIYb" colab_type="code" colab={}
# # !wget 'https://ucb7dca99505e76e54fa172ef606.dl.dropboxusercontent.com/cd/0/get/AwnTshqOOPemhrjTqZXlfp02PCkgNJSJt8jhllVUuRTJn90nr_5blsJzj0JrrIDnpQ2lApMyJKY3KTi_4saPTdvB3611txsv82RztoOYS5GARg/file?_download_id=011006157493155033421613254309845225599327459049660434941660553837&_notify_domain=www.dropbox.com&dl=1'
# + id="lP-IsF-5cIt2" colab_type="code" colab={}
# !7z x '/content/anomaly_part2.zip'
# + id="04lD2v2CJWt-" colab_type="code" colab={}
# import cv2
# videos_info = []
# for i in range(len(data)):
# name_of_video = data.iloc[i].values[0]
# class_of_video = ""
# cap = cv2.VideoCapture(name_of_video)
# length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# for j in range(len(name_of_video)):
# if name_of_video[j].isdigit() or name_of_video[j] == '_':
# break
# else:
# class_of_video += name_of_video[j]
# #print(i, str(i%11))
# #print((i % 11 == 1) or (i % 11 == 3) or (i % 11 == 7))
# if((i % 11 == 1) or (i % 11 == 3) or (i % 11 == 7)):
# videos_info.append(["test", class_of_video, name_of_video, length])
# else:
# videos_info.append(["train", class_of_video, name_of_video, length])
# #print(videos_info, sep=",", end="\n")
# + id="IukbapY9mKCY" colab_type="code" colab={}
import cv2
videos_info = []
for i in range(len(data)):
name_of_video = data[i]
class_of_video = ""
cap = cv2.VideoCapture(name_of_video)
length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
if("Normal" in name_of_video):
class_of_video = "Normal"
else:
class_of_video = "Criminal"
if(i%100 <= 10):
videos_info.append(["test", class_of_video, name_of_video, length])
else:
videos_info.append(["train", class_of_video, name_of_video, length])
# + id="JT-kvlm3a9so" colab_type="code" colab={}
# print(videos_info, sep=",", end="\n")
# new_df = [['train', 'Arrest', 'Arrest002_x264.mp4', 1790], ['test', 'Arrest', 'Arrest003_x264.mp4', 3054],['train', 'Normal', 'Normal_Videos011_x264.mp4', 897], ['test', 'Normal', 'Normal_Videos020_x264.mp4', 485]]
# + id="lfrRbmGdfP-W" colab_type="code" colab={}
# videos_info
# + id="_pl63_GYTtEa" colab_type="code" colab={}
# videos_df = pd.DataFrame(videos_info)
# print(videos_df)
# #include =['object', 'float', 'int']
# #videos_df.describe(include = include)
# + id="viYq7NaCbV8x" colab_type="code" outputId="f439b4a9-c600-435e-868b-14833ac4f49f" colab={"base_uri": "https://localhost:8080/", "height": 935}
videos_df = pd.DataFrame(data= videos_info, columns= ['partition', 'class', 'video_name', 'frames'])
print("Number of videos = " + str(len(videos_df)) + "\n\n")
print(videos_df[:50])
# + id="UlW36haSpoAT" colab_type="code" colab={}
# un = videos_df['partition'].unique()
# print(un)
# partitionwre = (videos_df.groupby(['partition']))
# testw_re = partitionwre.get_group(un[0])
# trainw_re = partitionwre.get_group(un[1])
# print(len(trainw_re))
# print(len(testw_re))
# + id="56N1sP3SgFsR" colab_type="code" colab={}
#@title PLEASE CHANGE THE RUN_NUMBER IN THE CELL BELOW
# + id="zOHQ7a5_aqUD" colab_type="code" outputId="20ff49fd-e79a-4bb2-9bbd-b4bc152cc045" colab={"base_uri": "https://localhost:8080/", "height": 34}
# IMPORTANT
#
#
#
# PLEASE INCREMENT THE RUN_NUMBER BEFORE RUNNING THIS CELL
#
#
#
# IMPORTANT
import pandas as pd
import numpy as np
import cv2
import os
import h5py
from tqdm import tqdm
from keras.preprocessing import image
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.models import Model, load_model, Sequential
from keras.layers import Input, LSTM, Dense, Dropout
from keras.utils import to_categorical
from keras.applications.imagenet_utils import preprocess_input
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, TensorBoard,EarlyStopping
from keras.utils.io_utils import HDF5Matrix
SEQ_LEN = 30
MAX_SEQ_LEN = 3000
BATCH_SIZE = 16
EPOCHS = 100
RUN_NUMBER = "G" #PLEASE INCREMENT THE RUN_NUMBER BEFORE RUNNING THIS CELL
# + id="doq8RcCGCFNx" colab_type="code" colab={}
# + id="M9-tCCZscPW6" colab_type="code" colab={}
os.mkdir("/content/drive/My Drive/Project/"+RUN_NUMBER)
os.chdir("/content/drive/My Drive/Project/"+RUN_NUMBER)
def get_data(path, if_pd=False):
"""Load our data from file."""
names = ['partition', 'class', 'video_name', 'frames']
df = pd.DataFrame(data = videos_info, columns= names)
return df
def get_class_dict(df):
class_name = list(df['class'].unique())
index = np.arange(0, len(class_name))
label_index = dict(zip(class_name, index))
index_label = dict(zip(index, class_name))
return (label_index, index_label)
def clean_data(df):
mask = np.logical_and(df['frames'] >= SEQ_LEN, df['frames'] <= MAX_SEQ_LEN)
df = df[mask]
return df
def split_train_test(df):
partition = (df.groupby(['partition']))
un = df['partition'].unique()
test = partition.get_group(un[0])
train = partition.get_group(un[1])
return (train, test)
def preprocess_image(img):
img = cv2.resize(img, (299,299))
return preprocess_input(img)
def encode_video(row, model, label_index):
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# success,image = vidcap.read()
# imgplot = plt.imshow(image)
# plt.show()
cap = cv2.VideoCapture("/content/drive/My Drive/Project/AnomalyVideos/" + str(row["video_name"].iloc[0]))
images = []
for i in range(SEQ_LEN):
ret, frame = cap.read()
# im = plt.imshow(frame)
# plt.show()
frame = preprocess_image(frame)
images.append(frame)
features = model.predict(np.array(images))
index = label_index[row["class"].iloc[0]]
print(index)
#y_onehot = to_categorical(index, len(label_index.keys()))
return features, index
from keras.utils import np_utils
def encode_dataset(data, model, label_index, phase):
input_f = []
output_y = []
required_classes = ["Criminal" , "Normal"]
for i in tqdm(range(data.shape[0])):
# Check whether the given row , is of a class that is required
if str(data.iloc[[i]]["class"].iloc[0]) in required_classes:
# print("===")
# print(data.iloc[[i]])
# print("===")
# print(model)
# print("===")
# print(label_index)
# print("===")
features,y = encode_video(data.iloc[[i]], model, label_index)
input_f.append(features)
output_y.append(y)
le_labels = np_utils.to_categorical(output_y)
f = h5py.File(phase+'_8'+'.h5', 'w')
f.create_dataset(phase, data=np.array(input_f))
f.create_dataset(phase+"_labels", data=np.array(le_labels))
del input_f[:]
del output_y[:]
# + id="8rQW1WYscPY7" colab_type="code" colab={}
def main():
# Get model with pretrained weights.
base_model = InceptionV3(
weights='imagenet',
include_top=True)
# We'll extract features at the final pool layer.
model = Model(
inputs=base_model.input,
outputs=base_model.get_layer('avg_pool').output)
# Getting the data
df = get_data('') #INSERT PATH IF READING FROM FILE REQUIRED
# print("_____DF_____")
# print(df)
# Clean the data
df_clean = clean_data(df)
# print("_____DF-Clean_____")
# print(df_clean)
# Creating index-label maps and inverse_maps
label_index, index_label = get_class_dict(df_clean)
# print("\n\nlabel_index, index_label _____ ")
# print(label_index, index_label)
# Split the dataset into train and test
train, test = split_train_test(df_clean)
# print("\n\n_____TRAIN_____")
# print(train)
# print('\n\n_______TEST______')
# print(test)
# Encoding the dataset
encode_dataset(train, model, label_index, "train")
encode_dataset(test,model,label_index,"test")
# + id="6CSH8rX1IKYD" colab_type="code" outputId="541db8ba-1599-4f76-b000-90cd5d5b7548" colab={"base_uri": "https://localhost:8080/", "height": 34}
# # !ls
# + id="QusjBRpYdR5r" colab_type="code" outputId="b4b9597c-5a55-4248-d3be-f70b59cf917f" colab={"base_uri": "https://localhost:8080/", "height": 49}
# mask = np.logical_and(videos_df[3] >= SEQ_LEN, videos_df[3] <= MAX_SEQ_LEN)
# videos_df[mask]
# + id="TH7GXfBQoAEN" colab_type="code" colab={}
#p = videos_df.groupby(['partition'])
#p.get_group("test")
# + id="47a0U3JMpIBC" colab_type="code" colab={}
#videos_df
# + id="Bk3pEtMvqED_" colab_type="code" colab={}
# print(clean_data(videos_df))
# + id="NnkJp3ljcPam" colab_type="code" outputId="96d004e5-0425-4314-e53b-8ae60d418ad3" colab={"base_uri": "https://localhost:8080/", "height": 1000}
main()
# + id="lGQuJIzbf1tF" colab_type="code" colab={}
from keras.models import Sequential
from keras.layers import Dense, Activation,Dropout
from keras.layers import LSTM
# + id="S0uJMq0tViq3" colab_type="code" colab={}
def lstm():
"""Build a simple LSTM network. We pass the extracted features from
our CNN to this model predominantly."""
input_shape = (SEQ_LEN, 2048)
# Model.
model = Sequential()
model.add(LSTM(2048,input_shape=input_shape,dropout=0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
  #model.add(Dense(10, activation='softmax'))
checkpoint = ModelCheckpoint(filepath='models\\checkpoint-{epoch:02d}-{val_loss:.2f}.hdf5')
tb_callback = TensorBoard(
log_dir="logs",
histogram_freq=2,
write_graph=True
)
early_stopping = EarlyStopping(monitor = 'val_loss',patience= 10)
callback_list = [checkpoint, tb_callback]
optimizer = Adam(lr=1e-5, decay=1e-6)
metrics = ['accuracy', 'top_k_categorical_accuracy']
model.compile(loss='categorical_crossentropy', optimizer=optimizer,metrics=metrics)
return model, callback_list
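To make the data flow concrete: `encode_video` turns each clip into a `(SEQ_LEN, 2048)` matrix of InceptionV3 `avg_pool` features, and `encode_dataset` stacks these with one-hot labels — exactly the `(batch, timesteps, features)` shape the LSTM above expects. A NumPy-only shape sketch with dummy features in place of the CNN:

```python
import numpy as np

SEQ_LEN = 30
N_VIDEOS, N_CLASSES = 4, 2

# stand-in for InceptionV3 avg_pool features of SEQ_LEN frames per video
features = np.random.rand(N_VIDEOS, SEQ_LEN, 2048).astype(np.float32)

# class indices -> one-hot rows, as np_utils.to_categorical does
labels = np.array([0, 1, 1, 0])
one_hot = np.eye(N_CLASSES)[labels]

print(features.shape)  # (4, 30, 2048) -> LSTM input (batch, timesteps, features)
print(one_hot.shape)   # (4, 2)        -> softmax targets
```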
# + id="kJXchpTpIxdN" colab_type="code" outputId="251a2a1c-bc3d-4605-9ea5-834dc8245a81" colab={"base_uri": "https://localhost:8080/", "height": 1000}
x_train = HDF5Matrix('train_8.h5', 'train')
y_train = HDF5Matrix('train_8.h5', 'train_labels')
x_test = HDF5Matrix('test_8.h5', 'test')
y_test = HDF5Matrix('test_8.h5', 'test_labels')
print(x_train.shape)
print(y_train.shape)
# print(y_train[240])
print(x_test.shape)
print(y_test.shape)
model, callback_list = lstm()
print("MODEL SUMMARY")
print(model.summary())
print()
#model.fit(x_train, y_train)
model.fit(x_train, y_train, batch_size = BATCH_SIZE, epochs = 100,verbose = 2,validation_data = (x_test, y_test),shuffle = 'batch', callbacks=callback_list)
model.save("Criminal_Activity_Detection.h5")
# + id="AlVmvyiX17vL" colab_type="code" outputId="1b290e15-aaac-4df8-ad71-dce0713319c9" colab={"base_uri": "https://localhost:8080/", "height": 286}
predict('/content/drive/My Drive/Project/Normal_Videos_for_Event_Recognition/Normal_Videos_015_x264.mp4')
# + id="csWt1n4gIt2r" colab_type="code" colab={}
def predict(video_name):
# print(video_name[len(video_name)-3:])
  if not video_name.endswith(".mp4"):
print("Wrong format")
return None
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
vidcap = cv2.VideoCapture(video_name)
success,image = vidcap.read()
imgplot = plt.imshow(image)
plt.show()
# count = 0
# while success:
# cv2.imwrite("frame%d.jpg" % count, image) # save frame as JPEG file
# success,image = vidcap.read()
# print('Read a new frame: ', success)
# count += 1
# img=mpimg.imread('your_image.png')
# imgplot = plt.imshow(img)
# plt.show()
return test_video(video_name)
# + id="adXVxAIHCf7O" colab_type="code" outputId="95c5c567-ab63-46d5-d638-6e53092cbd38" colab={"base_uri": "https://localhost:8080/", "height": 901}
# !ls
# + id="P66vQulxoy2-" colab_type="code" colab={}
#THIS FUNCTION IS ONLY FOR DEMONSTRATION AND DOESN'T ACTUALLY EMPLOY THE NETWORK
def test_video(video_name):
arr = video_name.split('/')
video_name = arr[len(arr)-1]
if "Normal" in video_name:
return "Normal"
else:
return "Criminal"
# + id="bAAqAAm2Ohk0" colab_type="code" colab={}
# + id="2fZK9ScOO2G7" colab_type="code" colab={}
| Major_project_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
# +
acc_LR = np.loadtxt('result_LR.dat')
acc_RF = np.loadtxt('result_RF.dat')
acc_ER = np.loadtxt('result_ER.dat')
print(acc_ER.shape,acc_RF.shape,acc_LR.shape)
# +
nx,ny = 1,2
nfig = nx*ny
fig, ax = plt.subplots(ny,nx,figsize=(nx*10,ny*2.8))
for i in range(ny):
ax[i].plot(acc_LR[i],'k^--',label='LR')
ax[i].plot(acc_RF[i],'bv--',label='RF')
ax[i].plot(acc_ER[i],'ro-',label='ER')
ax[i].set_xlabel('data id')
ax[0].set_ylabel('accuracy')
ax[1].set_ylabel('ROC')
ax[0].legend()
plt.tight_layout(h_pad=1, w_pad=1.5)
plt.savefig('fig2.pdf', format='pdf', dpi=100)
# +
acc_LR_mean = acc_LR[0].mean()
acc_RF_mean = acc_RF[0].mean()
acc_ER_mean = acc_ER[0].mean()
roc_LR_mean = acc_LR[1].mean()
roc_RF_mean = acc_RF[1].mean()
roc_ER_mean = acc_ER[1].mean()
print('acc:',acc_LR_mean,acc_LR[0].std(),acc_RF_mean,acc_RF[0].std(),acc_ER_mean,acc_ER[0].std())
print('roc:',roc_LR_mean,acc_LR[1].std(),roc_RF_mean,acc_RF[1].std(),roc_ER_mean,acc_ER[1].std())
# -
| Ref/plot_compare.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="html"
# <!--Script block to left align Markdown Tables-->
# <style>
# table {margin-left: 0 !important;}
# </style>
# -
# Preamble script block to identify host, user, and kernel
import sys
# ! hostname
# ! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
# # Lesson 21 -- Prediction Engines (Data Modeling by Regression - Continued)
#
# A procedure to use additional variables to make predictions. OLS is not confined to a single explanatory variable; we can consider a collection of explanatory variables.
#
# ## Objectives
# - To apply fundamental concepts involved in data modeling and regression;
# - Is a single variable a good estimator of the target process?
# - What kind of 'error' is involved
# ---
#
# ## Computational Thinking Concepts
# The CT concepts include:
#
# - Abstraction => Represent data behavior with a model
# - Pattern Recognition => Compare patterns in (our) data models to make a decision
# ---
#
# # Textbook Resources
#
# [https://inferentialthinking.com/chapters/15/Prediction.html](https://inferentialthinking.com/chapters/15/Prediction.html)
#
# You know, the URL that no one reads, perhaps because there is a "secret" module you need to install with no instructions on how!
#
# <hr>
#
#
# ## Marksmanship Example
#
# As an example consider our work on an FPS game named "Olympic 10-meter Air Pistol" we are developing as a training tool. [https://en.wikipedia.org/wiki/ISSF_10_meter_air_pistol](https://en.wikipedia.org/wiki/ISSF_10_meter_air_pistol)
#
# First some packages
import random
import numpy
import matplotlib.pyplot
# So first we are going to build a function that shows a target, with strikes on the target.
def showmytarget(myx,myy,centerx,centery):
# import matplotlib.pyplot as plt
fig, ax = matplotlib.pyplot.subplots(figsize = (10,10)) # note we must use plt.subplots, not plt.subplot
circle1 = matplotlib.pyplot.Circle((centerx, centery), 1, color='black')
circle2 = matplotlib.pyplot.Circle((centerx, centery), 1, color='orange', fill=False)
circle3 = matplotlib.pyplot.Circle((centerx, centery), 0.5, color='orange', fill=False)
circle4 = matplotlib.pyplot.Circle((centerx, centery), 2, color='black', fill=False)
circle5 = matplotlib.pyplot.Circle((centerx, centery), 3, color='black', fill=False)
circle6 = matplotlib.pyplot.Circle((centerx, centery), 4, color='black', fill=False)
circle7 = matplotlib.pyplot.Circle((centerx, centery), 5, color='black', fill=False)
circle8 = matplotlib.pyplot.Circle((centerx, centery), 6, color='black', fill=False)
ax.set_xlim((-10, 10))
ax.set_ylim((-10, 10))
ax.plot(myx,myy, 'o', color='r') #vector of hits
ax.add_artist(circle1)
ax.add_artist(circle2)
ax.add_artist(circle3)
ax.add_artist(circle4)
ax.add_artist(circle5)
ax.add_artist(circle6)
ax.add_artist(circle7)
ax.add_artist(circle8)
matplotlib.pyplot.show()
return
# ### Accuracy
#
# The concept of accuracy is a measure of how close our estimate is to the "true" or population value.
# If we are estimating the mean value, then the "bullseye" is the population mean $\mu$, our estimate is $\bar x$.
#
# Consider the graphical simulator below. The target is centered at (0,0). We will take a number of shots and evaluate our performance. Let's say that we are kind of old and shaky: sometimes we hit the bullseye, sometimes we don't, but after 40 years of shooting we get good scores on average and tend to hit near the center.
mu = 0.0 # where we tend to hit
sigma = 0.60 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(10001): # 10,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
# ### Aiming Point
#
# Consider the graphical simulator below. The target is centered at (0,0). We will take a number of shots and evaluate our performance. Let's say that we are kind of sloppy and shaky: sometimes we hit the bullseye, sometimes we don't, but after 40 years of shooting we get OK scores on average -- in this case our mean value deviates from zero, say a bit left and low.
mu = -2.0 # where we tend to hit
sigma = 0.6 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(10001): # 10,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
# ### Precision
#
# The concept of precision is a measure of the repeatability of our estimates; in this context the metric is the dispersion, i.e. the variance. Consider the graphical simulator below. The target is centered at (0,0). We will take a number of shots and evaluate our performance. Let's say that we are kind of sloppy but very steady: all our shots land quite close together, and the score really depends on how we set up our sights.
mu = -4.0 # where we tend to hit
sigma = 0.3 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(10001): # 10,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
# If we can adjust our sights to hit a bit high and right (of the red dots) then we anticipate a better score.
#
mu = 4.00 # where we tend to hit
sigma = 0.03 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(10001): # 10,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
# ### Bias
#
# Bias is a systematic "error" or offset - similar to the distance from the bullseye in our examples. If we have a precise rifle that shoots a known distance from the bullseye, that's still a useful tool - we either adjust our aiming point, or the device, to account for this bias. It's akin to the last example, where we demonstrate the contributions to error from a poor point of aim and an unsteady hand.
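The decomposition behind these pictures is mean squared error = bias^2 + variance. A minimal 1-D sketch checking that identity on simulated shots (the variable names here are ours):

```python
import random

random.seed(42)
true_value = 0.0
mu, sigma = -2.0, 0.6          # point of aim and shake, as in the simulations above
shots = [random.gauss(mu, sigma) for _ in range(100000)]

mean_shot = sum(shots) / len(shots)
bias = mean_shot - true_value
variance = sum((s - mean_shot) ** 2 for s in shots) / len(shots)
mse = sum((s - true_value) ** 2 for s in shots) / len(shots)

# The sample identity mse = bias**2 + variance holds up to floating-point error
print(abs(mse - (bias ** 2 + variance)))
```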
# ## Residuals
#
# In the context of our target shooting, the residual is the distance from the target at which our model (the rifle) places the estimate (shot). Let's run the simulations again, first with a bias and unsteady hands.
mu = -4.0 # where we tend to hit
sigma = 0.3 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(1000001): # 1,000,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=200)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
# In these examples we know the target should be at (0,0), so let's stipulate that to our model (rifle).
mu = 0.0 # where we tend to hit
sigma = 0.3 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(11): # 11 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
print('mean dispersion from point of aim =',numpy.std(distxy))
# So even with perfect aim, because of shaky hands our average distance from the target is 0.37, and the dispersion from the point of aim is 0.196.
#
# Now let's improve our situation by putting our device into a mechanical mount that reduces the shake.
mu = 0.0 # where we tend to hit
sigma = 0.01 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(100001): # 100,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append((xvalue**2 + yvalue**2)**0.5)
showmytarget(myx,myy,0,0)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
print('mean dispersion from point of aim =',numpy.std(distxy))
# Now with perfect aim and a rigid mount, our average distance from the target is about 0.0125, and the dispersion is about 0.006.
#
# A technique you will learn in your statistics class called analysis of variance is a practical application of these ideas. The distances (in this case always positive) are the residuals, and the variance has two contributing components; how far from the true value the estimator is (our bullseye distance); and how spread out around the point of aim the estimates are (sample variance).
#
# What adds to the challenge is that the target often moves!
mu = -3.40 # where we tend to hit
sigma = 0.01 # how steady we are when the shot trips
myx = []; myy = []; distxy = []
for i in range(1001): # 1,001 shots
xvalue = random.gauss(mu, sigma)
yvalue = random.gauss(mu, sigma)
myx.append(xvalue)
myy.append(yvalue)
distxy.append(((xvalue+1)**2 + (yvalue+3)**2)**0.5)
showmytarget(myx,myy,-1,-3)
matplotlib.pyplot.hist(distxy,bins=20)
matplotlib.pyplot.show()
print('mean distance from bullseye =',numpy.mean(distxy))
print('mean dispersion from point of aim =',numpy.std(distxy))
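Across all of these simulations, when both coordinates are Gaussian with spread sigma about the point of aim, the distance from that point follows a Rayleigh distribution with mean sigma*sqrt(pi/2), which is why the simulated mean distances track sigma. A quick standard-library check:

```python
import math
import random

random.seed(0)
sigma = 0.3
n = 200000
# distance from the point of aim for two independent Gaussian coordinates
dist = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) for _ in range(n)]

simulated = sum(dist) / n
theoretical = sigma * math.sqrt(math.pi / 2)   # Rayleigh mean
print(simulated, theoretical)
```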
# ### MLS Regression (Continued)
# ### Example
#
# It is desired to relate the abrasion resistance of rubber (y) to the amount of silica filler ($x_1$) and the amount of hardener ($x_2$).
# Fine-particle silica fibers are added to rubber to increase strength and abrasion resistance.
# The hardener chemically bonds the filler to the rubber polymer chains and increases the efficiency of the filler.
# The units of measure are parts per hundred of rubber ($pph_r$).
#
# The data from an experiment are given below
#
# |Y|$x_1$|$x_2$|
# |:---|:---|:---|
# |83|1|-1|
# |113|1|1|
# |92|-1|1|
# |82|-1|-1|
# |100|0|0|
# |96|0|0|
# |98|0|0|
# |95|0|1.5|
# |80|0|-1.5|
# |100|1.5|0|
# |92|-1.5|0|
#
#
# Lets examine some different models and assess their utility to explain these data.
#
# 1. $Y = \beta_0 +\beta_1 x_1 + \beta_2 x_2 $
# 2. $Y = \beta_0 +\beta_1 x_1 + \beta_2 x_2 +\beta_3 x_1^2 + \beta_4 x_2^2 + \beta_5 x_1 x_2$
# +
# The experimental data
y = [83,113,92,82,100,96,98,95,80,100,92]
x1 = [1,1,-1,-1,0,0,0,0,0,1.5,-1.5]
x2 = [-1,1,1,-1,0,0,0,1.5,-1.5,0,0]
x1x1 = []; x2x2 = []; x1x2 = []
for i in range(len(y)):
x1x1.append(x1[i]*x1[i])
x2x2.append(x2[i]*x2[i])
x1x2.append(x1[i]*x2[i])
# This import registers the 3D projection, but is otherwise unused.
# https://matplotlib.org/3.1.1/gallery/mplot3d/scatter3d.html
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1, x2, y, color='r')
ax.set_xlabel('X1 Silica')
ax.set_ylabel('X2 Hardener')
ax.set_zlabel('Y Abrasion Resistance Factor')
plt.show()
# +
import numpy as np
import statsmodels.api as sm
x = [x1, x2]  # include both explanatory variables for model 1
ones = np.ones(len(x[0]))
X = sm.add_constant(np.column_stack((x[0], ones)))
for ele in x[1:]:
X = sm.add_constant(np.column_stack((ele, X)))
results = sm.OLS(y, X).fit()
print(results.summary())
# +
xm1 = []; xm2 = []
howmany = 11
xstep = (1.5 - (-1.5))/howmany
for i in range(howmany+1):
for j in range(howmany+1):
xm1.append(i*xstep-1.5) #note i and j to build a grid
xm2.append(j*xstep-1.5)
xx = [xm1, xm2]  # match the column order used to build X above
ones = np.ones(len(xx[0]))
XX = sm.add_constant(np.column_stack((xx[0], ones)))
for ele in xx[1:]:
XX = sm.add_constant(np.column_stack((ele, XX)))
yy = results.predict(XX)
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xm1, xm2, yy, color='r')
ax.set_xlabel('X1 Silica')
ax.set_ylabel('X2 Hardener')
ax.set_zlabel('Y Abrasion Resistance Factor')
plt.show()
# +
x = [x1,x2,x1x1,x2x2,x1x2]
ones = np.ones(len(x[0]))
X = sm.add_constant(np.column_stack((x[0], ones)))
for ele in x[1:]:
X = sm.add_constant(np.column_stack((ele, X)))
results = sm.OLS(y, X).fit()
print (results.summary())
# +
xm1 = []; xm2 = [];xxm1 = []; xxm2 = []; xxm12 = []
howmany = 11
xstep = (1.5 - (-1.5))/howmany
for i in range(howmany+1):
for j in range(howmany+1):
xm1.append(i*xstep-1.5) #note i and j to build a grid
xm2.append(j*xstep-1.5)
xxm1.append((i*xstep-1.5)**2)
xxm2.append((j*xstep-1.5)**2)
xxm12.append((i*xstep-1.5)*(j*xstep-1.5))
xx = [xm1,xm2,xxm1,xxm2,xxm12]
ones = np.ones(len(xx[0]))
XX = sm.add_constant(np.column_stack((xx[0], ones)))
for ele in xx[1:]:
XX = sm.add_constant(np.column_stack((ele, XX)))
yy = results.predict(XX)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xm1, xm2, yy, color='r')
ax.set_xlabel('X1 Silica')
ax.set_ylabel('X2 Hardener')
ax.set_zlabel('Y Abrasion Resistance Factor')
plt.show()
# -
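As an aside, the design matrices above can also be produced by the statsmodels formula interface, which states the quadratic model directly. A hedged alternative sketch (it re-enters the experimental data as a DataFrame; the column names are ours):

```python
import pandas as pd
import statsmodels.formula.api as smf

# The abrasion-resistance experiment data from the table above
data = pd.DataFrame({
    'y':  [83, 113, 92, 82, 100, 96, 98, 95, 80, 100, 92],
    'x1': [1, 1, -1, -1, 0, 0, 0, 0, 0, 1.5, -1.5],
    'x2': [-1, 1, 1, -1, 0, 0, 0, 1.5, -1.5, 0, 0],
})

# I(...) wraps arithmetic so the formula parser treats it as a single term
model = smf.ols('y ~ x1 + x2 + I(x1**2) + I(x2**2) + x1:x2', data=data).fit()
print(model.params)
```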
# ## Learn More
#
# - https://euanrussano.github.io/20190810linearRegressionNumpy/
# - http://scipy-lectures.org/packages/statistics/auto_examples/plot_regression_3d.html
# - https://www.geeksforgeeks.org/ml-multiple-linear-regression-using-python/
#
# ## Code Fragments for Future Development
### Let's Make a Plotting Package
def makeAbear(xxx,yyy,xmodel,ymodel):
myfigure = matplotlib.pyplot.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
# Built the plot
matplotlib.pyplot.scatter(xxx, yyy, color ='blue')
matplotlib.pyplot.plot(xmodel, ymodel, color ='red')
matplotlib.pyplot.ylabel("Y")
matplotlib.pyplot.xlabel("X")
mytitle = "YYY versus XXX"
matplotlib.pyplot.title(mytitle)
matplotlib.pyplot.show()
return
| 1-Lessons/Lesson22/PsuedoLessonOld/.ipynb_checkpoints/Lesson21-Deploy-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="MqGJCXGs5mYH" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 263} outputId="e7cbb0e6-7c83-4b71-bbe2-d099efe43a66" executionInfo={"status": "ok", "timestamp": 1531741484444, "user_tz": -540, "elapsed": 11655, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "107995332831641667384"}}
# !pip install dynet
# !git clone https://github.com/neubig/nn4nlp-code.git
# + id="CW_B8G_x5vtX" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
from __future__ import print_function
import time
from collections import defaultdict
import random
import math
import sys
import argparse
import dynet as dy
import numpy as np
# + id="yt-MDLts50dE" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# format of files: each line is "word1|tag1 word2|tag2 ..."
train_file = "nn4nlp-code/data/tags/train.txt"
dev_file = "nn4nlp-code/data/tags/dev.txt"
w2i = defaultdict(lambda: len(w2i))
t2i = defaultdict(lambda: len(t2i))
def read(fname):
"""
Read tagged file
"""
with open(fname, "r") as f:
for line in f:
words, tags = [], []
for wt in line.strip().split():
w, t = wt.split('|')
words.append(w2i[w])
tags.append(t2i[t])
yield (words, tags)
# Read the data
train = list(read(train_file))
unk_word = w2i["<unk>"]
w2i = defaultdict(lambda: unk_word, w2i)
unk_tag = t2i["<unk>"]
t2i = defaultdict(lambda: unk_tag, t2i)
nwords = len(w2i)
ntags = len(t2i)
dev = list(read(dev_file))
# + id="xgo1lmyp55r8" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# DyNet Starts
model = dy.Model()
trainer = dy.AdamTrainer(model)
# Model parameters
EMBED_SIZE = 64
HIDDEN_SIZE = 128
# Lookup parameters for word embeddings
LOOKUP = model.add_lookup_parameters((nwords, EMBED_SIZE))
# Word-level BiLSTM
LSTM = dy.BiRNNBuilder(1, EMBED_SIZE, HIDDEN_SIZE, model, dy.LSTMBuilder)
# Word-level softmax
W_sm = model.add_parameters((ntags, HIDDEN_SIZE))
b_sm = model.add_parameters(ntags)
# Calculate the scores for one example
def calc_scores(words):
dy.renew_cg()
# Transduce all batch elements with an LSTM
word_reps = LSTM.transduce([LOOKUP[x] for x in words])
# Softmax scores
W = dy.parameter(W_sm)
b = dy.parameter(b_sm)
scores = [dy.affine_transform([b, W, x]) for x in word_reps]
return scores
# Calculate MLE loss for one example
def calc_loss(scores, tags):
losses = [dy.pickneglogsoftmax(score, tag) for score, tag in zip(scores, tags)]
return dy.esum(losses)
# Calculate number of tags correct for one example
def calc_correct(scores, tags):
correct = [np.argmax(score.npvalue()) == tag for score, tag in zip(scores, tags)]
return sum(correct)
# + id="aqzFXAhR5hr3" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 879} outputId="1f805840-416d-4d51-9ef3-ab478d38830b"
# Perform training
for ITER in range(100):
random.shuffle(train)
start = time.time()
this_sents = this_words = this_loss = this_correct = 0
for sid in range(0, len(train)):
this_sents += 1
if this_sents % int(1000) == 0:
print("train loss/word=%.4f, acc=%.2f%%, word/sec=%.4f" % (
this_loss / this_words, 100 * this_correct / this_words, this_words / (time.time() - start)),
file=sys.stderr)
# train on the example
words, tags = train[sid]
scores = calc_scores(words)
loss_exp = calc_loss(scores, tags)
this_correct += calc_correct(scores, tags)
this_loss += loss_exp.scalar_value()
this_words += len(words)
loss_exp.backward()
trainer.update()
# Perform evaluation
start = time.time()
this_sents = this_words = this_loss = this_correct = 0
for words, tags in dev:
this_sents += 1
scores = calc_scores(words)
loss_exp = calc_loss(scores, tags)
this_correct += calc_correct(scores, tags)
this_loss += loss_exp.scalar_value()
this_words += len(words)
print("dev loss/word=%.4f, acc=%.2f%%, word/sec=%.4f" % (
this_loss / this_words, 100 * this_correct / this_words, this_words / (time.time() - start)), file=sys.stderr)
| 10-structured/bilstm_tagger_dynet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
X = np.zeros((100, 5), dtype='bool')
features = ["bread", "milk", "cheese", "apples", "bananas"]
for i in range(X.shape[0]):
if np.random.random() < 0.3:
# A bread winner
X[i][0] = 1
if np.random.random() < 0.5:
# Who likes milk
X[i][1] = 1
if np.random.random() < 0.2:
# Who likes cheese
X[i][2] = 1
if np.random.random() < 0.25:
# Who likes apples
X[i][3] = 1
if np.random.random() < 0.5:
# Who likes bananas
X[i][4] = 1
else:
# Not a bread winner
if np.random.random() < 0.5:
# Who likes milk
X[i][1] = 1
if np.random.random() < 0.2:
# Who likes cheese
X[i][2] = 1
if np.random.random() < 0.25:
# Who likes apples
X[i][3] = 1
if np.random.random() < 0.5:
# Who likes bananas
X[i][4] = 1
else:
if np.random.random() < 0.8:
# Who likes cheese
X[i][2] = 1
if np.random.random() < 0.6:
# Who likes apples
X[i][3] = 1
if np.random.random() < 0.7:
# Who likes bananas
X[i][4] = 1
if X[i].sum() == 0:
X[i][4] = 1 # Must buy something, so gets bananas
print(X[:5])
np.savetxt("affinity_dataset.txt", X, fmt='%d')
| Chapter01/ch1_affinity_create.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Customer Churn Analysis
# Context
# The leading telecom company has a massive market share but one big problem: several rivals that are constantly trying to steal customers. Because this company has been the market leader for so many years, there are not significant opportunities to grow with new customers. Instead, company executives have decided to focus on their churn: the rate at which they lose customers.
#
# They have two teams especially interested in this data: the marketing team and the customer service team. Each team has its own reason for wanting the analysis. The marketing team wants to find out who the most likely people to churn are and create content that suits their interests. The customer service team would like to proactively reach out to customers who are about to churn, and try to encourage them to stay.
#
# They decide to hire you for two tasks:
# Help them identify the types of customers who churn
# Predict who of their current customers will churn next month
#
# To do this, they offer you a file of 7,000 customers. Each row is a customer. The Churn column will say Yes if the customer churned in the past month. The data also offers demographic data and data on the services that each customer purchases. Finally there is information on the payments those customers make.
# # Deliverables - What is expected
# # Week 1
# A presentation explaining churn for the marketing team - with links to technical aspects of your work. Tell a story to the marketing team to help them understand the customers who churn and what the marketing team can do to prevent it. Highlight the information with helpful visualizations.
#
# 1- How much is churn affecting the business? How big is churn compared to the existing customer base?
#
# 2- Explain churn by the below categories. Are there any factors that combine to be especially impactful?
#
# a- Customer demographics like age and gender
# b- Services used
# c- Billing information
#
# 3- What services are typically purchased by customers who churned? Are any services especially helpful in retaining customers?
#
# 4- Bonus! How long will it take for the company to lose all its customers? Which demographics will they lose first?
#
# # import all libraries need
#Import the libraries:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
import seaborn as sns
from sklearn.linear_model import LogisticRegression
# # Data Prepocessing #
# Read the online file by the URL provides above, and assign it to variable "df"
url= "dataset/datasets_13996_18858_WA_Fn-UseC_-Telco-Customer-Churn.csv"
df=pd.read_csv(url)
df.head()
# summary of the dataframe.
df.info()
# count distinct observations
df.nunique()
# dimensionality of the DataFrame.
df.shape
# Check for missing data
df.isnull().sum(axis=0)
#
df.boxplot()
df.Churn
df.Churn.value_counts(normalize=True)
# check the type of the data
df.dtypes
# (Optional) drop rows with missing TotalCharges: df.dropna(subset=['TotalCharges'], axis=0, inplace=True)
# # Data Analysis #
dum_churn = pd.get_dummies(df['Churn'])
dum_churn
# # How much is churn affecting the business?
## Get the number of customers that churned
dum_churn['Yes'].value_counts().to_frame()
# How many percentage of customers are leaving
display(df.groupby(['Churn']).size())
colors = ["#BDFCC9","#FFDEAD"]
ax = (df['Churn'].value_counts()*100.0 /len(df))\
.plot.pie(autopct='%.1f%%', colors=colors, labels = ['No', 'Yes'],figsize =(4,4), fontsize = 12 )
#ax.yaxis.set_major_formatter(mtick.PercentFormatter())
ax.set_ylabel('Churn',fontsize = 12)
ax.set_title('Percent of Churn', fontsize = 12)
#Show statistics on the current data
df.describe()
# # Customer demographics like age and gender
# # Gender #
gender = df.groupby(['SeniorCitizen','gender','Churn']).size().to_frame()
gender
ax = sns.countplot (x="gender",hue="Churn",data=df)
ax.set_title("Churn distribution by gender")
# # SeniorCitizen #
ax = sns.countplot (x="SeniorCitizen",hue="Churn",data=df)
ax.set_title("Churn distribution by SeniorCitizen")
# # Partner #
ax = sns.countplot (x="Partner",hue="Churn",data=df)
ax.set_title("Churn distribution by Partner")
# # Dependents #
ax = sns.countplot (x="Dependents",hue="Churn",data=df)
ax.set_title("Churn distribution by Dependents")
# # The sevices used
# # PhoneService #
ax = sns.countplot (x="PhoneService",hue="Churn",data=df)
ax.set_title("Churn distribution by PhoneService")
# # InternetService
ax = sns.countplot (x="InternetService",hue="Churn",data=df)
ax.set_title("Churn distribution by InternetService")
# # TechSupport
ax = sns.countplot (x="TechSupport",hue="Churn",data=df)
ax.set_title("Churn distribution by TechSupport")
# # StreamingTV
ax = sns.countplot (x="StreamingTV",hue="Churn",data=df)
ax.set_title("Churn distribution by StreamingTV")
# # StreamingMovies #
ax = sns.countplot (x="StreamingMovies",hue="Churn",data=df)
ax.set_title("Churn distribution by StreamingMovies")
# # OnlineSecurity#
ax = sns.countplot (x="OnlineSecurity",hue="Churn",data=df)
ax.set_title("Churn distribution by OnlineSecurity")
# # OnlineBackup #
ax = sns.countplot (x="OnlineBackup",hue="Churn",data=df)
ax.set_title("Churn distribution by OnlineBackup")
# # MultipleLines #
ax = sns.countplot (x="MultipleLines",hue="Churn",data=df)
ax.set_title("Churn distribution by MultipleLines")
# # DeviceProtection
ax = sns.countplot (x="DeviceProtection",hue="Churn",data=df)
ax.set_title("Churn distribution by DeviceProtection")
# # Billing information
#
# # Contract
ax = sns.countplot (x="Contract",hue="Churn",data=df)
ax.set_title("Churn distribution by Contract")
# # PaperlessBilling
df.groupby(['PaperlessBilling','Churn']).size().to_frame().rename(columns={0: "size"}).reset_index()
df.groupby(['PaymentMethod','Churn']).size().to_frame().rename(columns={0: "size"}).reset_index()
df['PaymentMethod'].describe().to_frame()
# statistics monthlyCharges
df['MonthlyCharges'].describe().to_frame()
df['MonthlyCharges'].plot(kind='hist', figsize=(7,4))
df['tenure'].describe().to_frame()
df['tenure'].plot(kind='hist', figsize=(7,4))
plt.title('Billing information churn by Tenure')
plt.ylabel('Churn')
plt.xlabel('tenure')
df['TotalCharges'].describe().to_frame()
ax = sns.countplot (x="PaperlessBilling",hue="Churn",data=df)
ax.set_title("Churn distribution by PaperlessBilling")
# # PaymentMethod
ax = sns.countplot (x="PaymentMethod",hue="Churn",data=df)
ax.set_title("Churn distribution by PaymentMethod")
# # What services are typically purchased by customers who churned?
# encode Churn numerically in a new column (name is ours), keeping the original Yes/No labels for the plots below
df['ChurnFlag'] = df.Churn.map({'Yes': 1, 'No': 0})
cols =df.columns
cols = list(cols)
display(cols)
service_mean = df.groupby('Churn').mean()
service_mean
# +
value = df[df.Churn=='Yes']
value.head()
# -
service = value.groupby(['PhoneService','gender']).size().to_frame()
service
# percentage of churned customers by phone service usage
sizes = value['PhoneService'].value_counts(sort = True)
colors = ["#BDFCC9","#FFDEAD"]
explode = (0.1,0.1)
labels= ['No','Yes']
# Plot
plt.pie(sizes,colors=colors,labels=labels,explode=explode,autopct='%1.1f%%',startangle=270,)
plt.title('Percentage of PhoneService ')
plt.show()
ax = sns.countplot(x="InternetService",data=value)
ax.set_title("Churn distribution by InternetService")
ax = sns.countplot(x="MultipleLines",data=value)
ax.set_title("Churn distribution by MultipleLines")
# customers who do not use phone service are less likely to churn
ax = sns.countplot(x="InternetService",data=value)
ax.set_title("Churn distribution by InternetService")
ax = sns.countplot(x="OnlineSecurity",data=value)
ax.set_title("Churn distribution by OnlineSecurity")
ax = sns.countplot(x="DeviceProtection",data=value)
ax.set_title("Churn distribution by DeviceProtection")
# +
ax = sns.countplot(x="TechSupport",data=value)
ax.set_title("Churn distribution by TechSupport")
# -
ax = sns.countplot(x="StreamingTV",data=value)
ax.set_title("Churn distribution by StreamingTV")
ax = sns.countplot(x="StreamingMovies",data=value,)
ax.set_title("Churn distribution by StreamingMovies")
phone_service= value.groupby(['PhoneService']).size().to_frame()
phone_service
# # Bonus
df.groupby(['SeniorCitizen','gender','Partner','Dependents']).size().to_frame()
#How long will it take for the company to lose all its customers
Qt = 7043
Churn_rate = 0.2654
day = 0
while Qt >=1:
Qt = Qt -(Qt * (Churn_rate))
day +=1
print(day)
# +
# see wich demographics they will lose first
# +
#sex
### Female
Qt_Female= 3488
Churn_percent1 = 0.2692
day1 = 0
while Qt_Female >=1:
Qt_Female = Qt_Female -(Qt_Female * (Churn_percent1))
day1 +=1
print(day1)
# +
#sex
### Male
Qt_Male= 3555
Churn_percent2 = 0.2641
day1 = 0
while Qt_Male >=1:
Qt_Male = Qt_Male -(Qt_Male * (Churn_percent2))
day1 +=1
print(day1)
# -
# +
# partner
### part no
Qt_part_no= 3641
Churn_percent2 = 0.4916
day1 = 0
while Qt_part_no >=1:
Qt_part_no = Qt_part_no -(Qt_part_no * (Churn_percent2))
day1 +=1
print(day1)
# +
# partner
### part yes
Qt_part_yes= 3402
Churn_percent2 = 0.2692
day1 = 0
while Qt_part_yes >=1:
Qt_part_yes = Qt_part_yes -(Qt_part_yes * (Churn_percent2))
day1 +=1
print(day1)
# +
# Dependents
### dep_yes
Qt_dep_yes= 2110
Churn_percent2 = 0.1744
day1 = 0
while Qt_dep_yes >=1:
Qt_dep_yes = Qt_dep_yes -(Qt_dep_yes * (Churn_percent2))
day1 +=1
print(day1)
# +
### dep_no
Qt_dep_no= 4933
Churn_percent2 = 0.8256
day1 = 0
while Qt_dep_no >=1:
Qt_dep_no = Qt_dep_no -(Qt_dep_no * (Churn_percent2))
day1 +=1
print(day1)
# -
#(they will lose demographics No Dependents first)
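The copy-pasted decay loops above differ only in the starting count and the churn rate, so they can be collapsed into one helper. Note the loop counter is called `day`, but the churn rates quoted are per reporting period (most plausibly per month), so the answer is in periods. A sketch (the function name is ours):

```python
import math

def periods_to_lose(customers, churn_rate):
    """Periods of geometric decay until fewer than one customer remains."""
    periods = 0
    while customers >= 1:
        customers -= customers * churn_rate
        periods += 1
    return periods

# Same answer in closed form: smallest n with n > ln(customers) / -ln(1 - rate)
print(periods_to_lose(7043, 0.2654))
print(math.ceil(math.log(7043) / -math.log(1 - 0.2654)))
```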
# # Profil
service = pd.pivot_table(df,index='Churn',columns =['gender','SeniorCitizen','Partner','Dependents'])
| .ipynb_checkpoints/Vanessa's Bi Project-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from enmspring.graphs_bigtraj import StackMeanModeAgent
rootfolder = '/home/ytcdata/bigtraj_fluctmatch/500ns'
# ### Part 1: Initialize
# + jupyter={"outputs_hidden": true}
host = 'a_tract_21mer'
interval_time = 500
s_agent = StackMeanModeAgent(host, rootfolder, interval_time)
# + jupyter={"outputs_hidden": true}
s_agent.load_mean_mode_laplacian_from_npy()
s_agent.eigen_decompose()
s_agent.initialize_nodes_information()
s_agent.split_node_list_into_two_strand()
s_agent.set_benchmark_array()
s_agent.set_strand_array()
# -
# ### Part 2: Build $\textbf{Q}$
# $
# \textbf{Q} =
# \begin{bmatrix}
# \vert & \vert & \cdots & \vert \\
# e_1 & e_2 & \cdots & e_{N_v} \\
# \vert & \vert & \cdots & \vert
# \end{bmatrix}
# $
Q = s_agent.v
# ### Part 3: Build $\Lambda_i$
# $
# \Lambda_1 =
# \begin{bmatrix}
# \lambda_1 & 0 & \cdots & 0 \\
# 0 & 0 & \cdots & 0 \\
# \vdots & \vdots & \cdots & \vdots \\
# 0 & 0 & \cdots & 0
# \end{bmatrix}
# $
eigv_id = 1
lambda_mat = np.zeros((s_agent.n_node, s_agent.n_node))
lambda_mat[eigv_id-1, eigv_id-1] = s_agent.get_eigenvalue_by_id(eigv_id)
# ### Part 4: $\textbf{K}^{(m,n)}=\sum_{i=m}^{n}\textbf{Q}\Lambda_{i}\textbf{Q}^T$
# +
m = 1
n = s_agent.n_node
norm_array = np.zeros(s_agent.n_node+1)
K_mat = np.zeros((s_agent.n_node, s_agent.n_node))
norm_array[0] = np.linalg.norm(K_mat-s_agent.laplacian_mat)
for eigv_id in range(m, n+1):
lambda_mat = np.zeros((s_agent.n_node, s_agent.n_node))
lambda_mat[eigv_id-1, eigv_id-1] = s_agent.get_eigenvalue_by_id(eigv_id)
K_mat += np.dot(Q, np.dot(lambda_mat, Q.transpose()))
norm_array[eigv_id] = np.linalg.norm(K_mat-s_agent.laplacian_mat)
# +
m = 1
n = s_agent.n_node
target_diag = np.diag(s_agent.laplacian_mat)
norm_array_diag = np.zeros(s_agent.n_node+1)
K_mat = np.zeros((s_agent.n_node, s_agent.n_node))
K_mat_diag = np.diag(K_mat)
norm_array_diag[0] = np.linalg.norm(K_mat_diag-target_diag)
for eigv_id in range(m, n+1):
lambda_mat = np.zeros((s_agent.n_node, s_agent.n_node))
lambda_mat[eigv_id-1, eigv_id-1] = s_agent.get_eigenvalue_by_id(eigv_id)
K_mat += np.dot(Q, np.dot(lambda_mat, Q.transpose()))
K_mat_diag = np.diag(K_mat)
norm_array_diag[eigv_id] = np.linalg.norm(K_mat_diag-target_diag)
# -
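# As a sanity check on the accumulation loop above, a small synthetic symmetric matrix
# (not using `s_agent`) shows that summing all the rank-1 terms $Q \Lambda_i Q^T$ recovers
# the original matrix exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
L_toy = A + A.T                          # synthetic symmetric stand-in for the Laplacian

eigvals, Q_toy = np.linalg.eigh(L_toy)   # columns of Q_toy are eigenvectors

# Accumulate the rank-1 terms, mirroring the loop above
K_toy = np.zeros_like(L_toy)
for i in range(len(eigvals)):
    K_toy += eigvals[i] * np.outer(Q_toy[:, i], Q_toy[:, i])
```

# Equivalently, the full sum is `Q_toy @ np.diag(eigvals) @ Q_toy.T` in one shot.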
# ### Part 5: Plot
# +
fig, ax = plt.subplots(figsize=(8,4))
xlist = list(range(s_agent.n_node+1))
ax.plot(xlist, norm_array, '.-')
ax.plot(xlist, norm_array_diag, '.-')
plt.tight_layout()
plt.show()
# -
# ### Part 6
def upper_tri_masking(A):
m = A.shape[0]
r = np.arange(m)
mask = r[:,None] < r
return A[mask]
upper_tri_masking(s_agent.laplacian_mat)
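# For reference, NumPy ships an equivalent of `upper_tri_masking`: `np.triu_indices`
# selects the same strict upper triangle in the same row-major order. A quick
# illustrative check on a small matrix:

```python
import numpy as np

def upper_tri_masking(A):
    m = A.shape[0]
    r = np.arange(m)
    mask = r[:, None] < r      # True strictly above the diagonal
    return A[mask]

A_demo = np.arange(16).reshape(4, 4)
```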
# +
m = 1
n = s_agent.n_node
target_utrig = upper_tri_masking(s_agent.laplacian_mat)
norm_array_utrig = np.zeros(s_agent.n_node+1)
K_mat = np.zeros((s_agent.n_node, s_agent.n_node))
K_mat_utrig = upper_tri_masking(K_mat)
norm_array_utrig[0] = np.linalg.norm(K_mat_utrig-target_utrig)
for eigv_id in range(m, n+1):
lambda_mat = np.zeros((s_agent.n_node, s_agent.n_node))
lambda_mat[eigv_id-1, eigv_id-1] = s_agent.get_eigenvalue_by_id(eigv_id)
K_mat += np.dot(Q, np.dot(lambda_mat, Q.transpose()))
K_mat_utrig = upper_tri_masking(K_mat)
norm_array_utrig[eigv_id] = np.linalg.norm(K_mat_utrig-target_utrig)
# +
fig, ax = plt.subplots(figsize=(8,4))
xlist = list(range(s_agent.n_node+1))
ax.plot(xlist, norm_array_utrig, '.-')
plt.tight_layout()
plt.show()
| notebooks/sum_rigidity_graph_0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cardstud/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/module2-samplling-confidence-intervals-and-hypothesis-testing/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="11OzdxWTM7UR" colab_type="text"
# ## Assignment - Build a confidence interval
#
# A confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.
#
# 52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.
#
# In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.
#
# But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.
#
# How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, about 95 of the resulting intervals would contain the true population parameter."
#
# For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.
#
# Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.
#
# Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):
#
#
#
# + [markdown] id="4v23V9FaDLPp" colab_type="text"
#
# ### Confidence Intervals:
# 1. Generate and numerically represent a confidence interval
# 2. Graphically (with a plot) represent the confidence interval
# 3. Interpret the confidence interval - what does it tell you about the data and its distribution?
# + id="nztJXZ_sKcb3" colab_type="code" colab={}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# + id="Ckcr4A4FM7cs" colab_type="code" colab={}
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data', header=None)
# + id="I8MD2RPNKO0G" colab_type="code" outputId="6608556b-af41-466c-e632-cf5977a6bbab" colab={"base_uri": "https://localhost:8080/", "height": 204}
df.head()
# + [markdown] id="ffGvETNMKuL3" colab_type="text"
# ### Clean dataset
# + id="uYmDg3KcKO2g" colab_type="code" colab={}
df = df.replace('?', None)  # with value=None, pandas pad-fills: each '?' takes the value directly above it
df[11][0] = 'n'  # the first row of column 11 has nothing above it to fill from, so set it by hand
# + id="q4LDSndmLOR1" colab_type="code" outputId="c7f773c4-81df-48d3-e5ab-6342f726ab00" colab={"base_uri": "https://localhost:8080/", "height": 204}
df.head()
# + id="RS4CDZIyKO7O" colab_type="code" outputId="d9278dea-8ce5-4788-abf1-1cf175f116d3" colab={"base_uri": "https://localhost:8080/", "height": 323}
df.isna().sum()
# + id="1WWgsaZRKO-r" colab_type="code" outputId="7dd94101-28ad-42b2-de0d-b19b29e92965" colab={"base_uri": "https://localhost:8080/", "height": 173}
df.describe(exclude='number')
# + id="1fUMF-HlLahd" colab_type="code" outputId="afdf8603-d113-4932-cad2-aaca08a37391" colab={"base_uri": "https://localhost:8080/", "height": 204}
# Change n/y to binary
df =df.replace('y', 1)
df = df.replace('n', 0)
df.head()
# + id="paxT23KWLamv" colab_type="code" outputId="f03430c6-c846-4b84-9f78-9d46786bf86c" colab={"base_uri": "https://localhost:8080/", "height": 204}
df.columns = ['class', 'infants', 'water_cost', 'budget', 'fee_freeze', 'aid_elsalvador', 'rel_school', 'satellite', 'aid_contras', 'mx_missle', 'immigration', 'cutback', 'education', 'right_to_sue', 'crime', 'duty_free_ex', 'export_south_africa']
df.head()
# + id="O-nXYDE7Lapb" colab_type="code" outputId="1744f6b2-9766-41a8-eea1-7ab57f57f02e" colab={"base_uri": "https://localhost:8080/", "height": 297}
df.describe()
# + [markdown] id="BJZ1hwHyLwwG" colab_type="text"
# ### Subset data into 2 subset for democrats and republicans
# + id="Xhc5JDrILart" colab_type="code" colab={}
df_republican = df[df['class']== 'republican']
# + id="5_u5jEEtMiGM" colab_type="code" outputId="67eee6a4-4842-45c1-a50a-d06ddb4efba7" colab={"base_uri": "https://localhost:8080/", "height": 204}
df_republican.head()
# + id="yxMDdU-pLauf" colab_type="code" outputId="393c6c55-fe95-4de3-b753-c1076022b600" colab={"base_uri": "https://localhost:8080/", "height": 34}
df_republican.shape
# + id="Nm3kQS-zKPKR" colab_type="code" outputId="e2b5c85f-75b3-4e0c-b316-a17d8c5129be" colab={"base_uri": "https://localhost:8080/", "height": 102}
df_republican.columns
# + id="26wD0xTyMmx_" colab_type="code" outputId="ca8b1f4c-5ef5-47c5-9d0c-8bb2ca231d86" colab={"base_uri": "https://localhost:8080/", "height": 297}
df_republican.describe()
# + id="M7f2-HPbKPMz" colab_type="code" colab={}
df_democrat = df[df['class']== 'democrat']
# + id="3V7QzwieMZ7V" colab_type="code" outputId="6d2a3bac-f06f-40e7-f5ea-4cd0d0e6ee17" colab={"base_uri": "https://localhost:8080/", "height": 204}
df_democrat.head()
# + id="2kD3zKmXMEua" colab_type="code" outputId="e6cdfb5b-35af-4f74-b877-f998a60425fc" colab={"base_uri": "https://localhost:8080/", "height": 34}
df_democrat.shape
# + id="be531digMEzn" colab_type="code" outputId="134257bb-5e20-4640-c315-1614e3c4c8d9" colab={"base_uri": "https://localhost:8080/", "height": 297}
df_democrat.describe()
# + [markdown] id="gZRi8Uh4Muu3" colab_type="text"
# ### Generate Confidence intervals
# 1. Generate and numerically represent a confidence interval
# 2. Graphically (with a plot) represent the confidence interval
# 3. Interpret the confidence interval - what does it tell you about the data and its distribution?
# + id="JKqZVvbgurpv" colab_type="code" colab={}
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data.
Using t-distribution and two-tailed test, default 95% confidence.
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)
return (mean, mean - interval, mean + interval)
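# As an illustrative aside (synthetic data, z ≈ 1.96 for a large sample rather than the
# t-quantile used above), a quick simulation matches the frequentist interpretation
# described earlier: roughly 95% of intervals built this way cover the true mean.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean, n, trials = 5.0, 200, 2000
covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, 2.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lo <= true_mean <= hi)
coverage = covered / trials   # close to 0.95
```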
# + [markdown] id="xSnYEYqO1d-J" colab_type="text"
# ### Infants issue
# + id="vymo6kpTQzdg" colab_type="code" colab={}
dem_infants= df_democrat['infants']
# + id="FBDhdh2bU74n" colab_type="code" outputId="cec999f1-6e3a-438e-9b08-849ef28776cc" colab={"base_uri": "https://localhost:8080/", "height": 170}
dem_infants.describe()
# + id="YkIazfc5QEJu" colab_type="code" outputId="509096c1-3b41-4f46-b83d-d6777dbd8646" colab={"base_uri": "https://localhost:8080/", "height": 119}
sample_size = 100
sample = dem_infants.sample(sample_size)
sample.head()
# + id="4o_RCcH6P3EU" colab_type="code" outputId="dc4efdd6-94fe-4ec3-a3ef-9f6324569bf2" colab={"base_uri": "https://localhost:8080/", "height": 34}
sample_mean = sample.mean()
sample_std = np.std(sample, ddof=1)
print(sample_mean, sample_std)
# + id="_tWLgGY0RMiK" colab_type="code" outputId="1d15bdaa-129b-42ad-82a4-80832ca4004b" colab={"base_uri": "https://localhost:8080/", "height": 34}
standard_error = sample_std/np.sqrt(sample_size)
standard_error
# + id="fOzA4aa6RMl0" colab_type="code" outputId="2f935d85-9647-420d-bbc9-6aacb4495f45" colab={"base_uri": "https://localhost:8080/", "height": 34}
t = 1.984  # two-tailed 95% critical value for df = 99, i.e. stats.t.ppf(0.975, 99)
(sample_mean, sample_mean - t*standard_error, sample_mean + t*standard_error)
# + id="UKs5nFt-MV3k" colab_type="code" outputId="755e3d1a-ea05-4174-ff25-c4d3f0349bf5" colab={"base_uri": "https://localhost:8080/", "height": 34}
confidence_interval(sample, confidence=0.95)
# + id="psloB9b9MV6r" colab_type="code" outputId="8b81e3d7-6290-45df-ebf9-194356134c73" colab={"base_uri": "https://localhost:8080/", "height": 34}
confidence_interval(dem_infants,confidence=0.95 )
# + id="lfQZMnePCyMX" colab_type="code" outputId="4ab732e3-c123-49f1-c41f-39c839ca13ce" colab={"base_uri": "https://localhost:8080/", "height": 320}
# So the sample mean, 0.59925, lies within the confidence interval of (0.54, 0.65),
# which is consistent with the histogram below
plt.hist(sample, bins=10)
# + id="FBkWRh8T3dKI" colab_type="code" colab={}
rep_infants= df_republican['infants']
# + id="9gPQg5nu3dOY" colab_type="code" outputId="de183e8f-4bb8-465c-8c2f-991caa6499e9" colab={"base_uri": "https://localhost:8080/", "height": 170}
rep_infants.describe()
# + id="-fQl6Kjl3dRu" colab_type="code" outputId="42de49c0-b439-4467-86a7-029b9f94f914" colab={"base_uri": "https://localhost:8080/", "height": 119}
sample_size1 = 100
sample1 = rep_infants.sample(sample_size1)
sample1.head()
# + id="lOrpT7LC3djr" colab_type="code" outputId="36df68a4-1d69-48cf-c594-2d39af0942fa" colab={"base_uri": "https://localhost:8080/", "height": 34}
sample_mean1 = sample1.mean()
sample_std1 = np.std(sample1, ddof=1)
print(sample_mean1, sample_std1)
# + id="cQZW6Y2b3zKX" colab_type="code" outputId="ec887c9d-0040-45c9-f907-60726e3bfcb1" colab={"base_uri": "https://localhost:8080/", "height": 34}
standard_error1 = sample_std1/np.sqrt(sample_size)
standard_error1
# + id="1xO8lBEl312N" colab_type="code" outputId="e9b6a356-b00f-47f7-d7cd-2abc240ccf4b" colab={"base_uri": "https://localhost:8080/", "height": 34}
t = 1.984  # two-tailed 95% critical value for df = 99, i.e. stats.t.ppf(0.975, 99)
(sample_mean1, sample_mean1 - t*standard_error1, sample_mean1 + t*standard_error1)
# + id="A-kRNyOB315b" colab_type="code" outputId="adef0d58-8a30-4c58-ab5a-71a9ed02f751" colab={"base_uri": "https://localhost:8080/", "height": 34}
confidence_interval(sample1, confidence=0.95)
# + id="gbKoLj19BYn4" colab_type="code" outputId="84509976-dd1c-4b80-a9ef-ed25a5825b7e" colab={"base_uri": "https://localhost:8080/", "height": 320}
# So the sample mean, 0.19, lies within the confidence interval of (0.112, 0.268),
# which is consistent with the histogram below
plt.hist(sample1)
# + [markdown] id="Yflw0iT0C__0" colab_type="text"
#
# ### Chi-squared tests:
# 4. Take a dataset that we have used in the past in class that has **categorical** variables. Pick two of those categorical variables and run a chi-squared tests on that data
# - By hand using Numpy
# - In a single line using Scipy
#
# + id="9P-qdb61DHGk" colab_type="code" outputId="5be0a7b0-b299-4590-9569-9dadac6a3593" colab={"base_uri": "https://localhost:8080/", "height": 221}
# make a crosstab
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv', na_values=" ?")
print(df.shape)
df.head()
# + id="s_5PnwwqDHOx" colab_type="code" outputId="8bab8e66-d5c1-499a-f38f-4db2eb5628de" colab={"base_uri": "https://localhost:8080/", "height": 269}
df['hours-per-week'].hist(bins=20); # focus on this column as in lecture, and compare it against sex
# + id="Nw37mW83DHRg" colab_type="code" outputId="ec154548-f5d1-4de6-d5b0-c2331b3d7ccd" colab={"base_uri": "https://localhost:8080/", "height": 173}
df.describe(exclude='number')
# + id="muorHwn3DHT_" colab_type="code" outputId="f6d56008-1ea6-4f6b-8363-820028dc3845" colab={"base_uri": "https://localhost:8080/", "height": 136}
# to see whether hours worked per week differ by sex, bin hours-per-week into a non-numerical category
cut_points =[0,9,19,29,39,49,500] # cutoff points for hours per week
label_names = ['0-9', '10-19', '20-29', '30-39', '40-49', '50+'] # split into these time buckets
df['hours_per_week_categories'] = pd.cut(df['hours-per-week'], cut_points, labels=label_names)
df['hours_per_week_categories'].value_counts()
# + id="3moQH8CODHWW" colab_type="code" outputId="dc201232-25d2-439b-edc3-f61db573559e" colab={"base_uri": "https://localhost:8080/", "height": 1000}
df['age'].value_counts()
# + id="hBG2qc4uDHYw" colab_type="code" outputId="c2cc39f8-718e-4b88-cfe5-c7a0f50fde9f" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# create the crosstab
df = df.sort_values(by='hours_per_week_categories')
contingency_table = pd.crosstab(df['sex'], df['hours_per_week_categories'], margins=True)  # sex (2 rows), so the slicing below lines up
contingency_table
# + [markdown] id="QgL3pUANGUDx" colab_type="text"
# ## Expected Value Calculation
# \begin{align}
# expected_{i,j} =\frac{(row_{i\ total})(column_{j\ total} ) }{(\text{total observations})}
# \end{align}
# + id="lZ0lD2J_DHbd" colab_type="code" outputId="bf83eb63-f63f-4c77-8621-1ca7b11cd60c" colab={"base_uri": "https://localhost:8080/", "height": 68}
row_sums = contingency_table.iloc[0:2, 6].values  # the first two rows (one per sex), taken from the 'All' margin column
col_sums = contingency_table.iloc[2, 0:6].values  # the 'All' margin row: one total per hours bucket
print(row_sums)
print('__')
print(col_sums)
# + id="01NveWNKG2kQ" colab_type="code" outputId="2c52a809-8726-42fe-a108-c0b21feb4e47" colab={"base_uri": "https://localhost:8080/", "height": 34}
total = contingency_table.loc['All','All']
total
# + id="Z2RVofvhG2nB" colab_type="code" outputId="a9cd25ac-f45e-47e0-b27b-c9bbf15dba2e" colab={"base_uri": "https://localhost:8080/", "height": 68}
# showing how to manually get chi squared, although can do throug scipy
expected = []
for row_sum in row_sums:
expected_row = []
for column in col_sums:
expected_val = column*row_sum/total
expected_row.append(expected_val)
expected.append(expected_row)
expected = np.array(expected)
print(expected.shape)
print(expected)
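# The nested loops above can be collapsed into a single outer product, since
# expected[i, j] = row_total[i] * col_total[j] / total. A sketch with hypothetical
# marginal totals (not the real dataset values):

```python
import numpy as np

# Illustrative marginal totals (hypothetical, not the real census figures)
row_sums_demo = np.array([300, 700])
col_sums_demo = np.array([50, 150, 200, 250, 300, 50])
total_demo = row_sums_demo.sum()   # 1000, which also equals col_sums_demo.sum() here

expected_demo = np.outer(row_sums_demo, col_sums_demo) / total_demo
```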
# + [markdown] id="CuUH-g7THAa4" colab_type="text"
# ## Chi-Squared Statistic with Numpy
#
# \begin{align}
# \chi^2 = \sum \frac{(observed_{i}-expected_{i})^2}{(expected_{i})}
# \end{align}
#
# For the $observed$ values we will just use a version of our contingency table without the margins as a numpy array. In this way, if our observed values array and our expected values array are the same shape, then we can subtract them and divide them directly which makes the calculations a lot cleaner. No for loops!
# + id="3w3A4PP9HCtJ" colab_type="code" outputId="dbd1a5e1-396d-463d-d444-eff5a8296ed0" colab={"base_uri": "https://localhost:8080/", "height": 68}
observed = pd.crosstab(df['sex'], df['hours_per_week_categories']).values
print(observed.shape)
observed
# + id="n1BCyZ0jHCvI" colab_type="code" outputId="fe91fe17-103e-480e-bdb4-eff1e7a758df" colab={"base_uri": "https://localhost:8080/", "height": 34}
chi_square = ((observed - expected)**2/(expected)).sum()
chi_square
# + [markdown] id="hmnyhK-yHKeu" colab_type="text"
# ## Run a $\chi^{2}$ Test using Scipy
# + id="Kae_tcMhHCxW" colab_type="code" outputId="43b7a475-ff78-4ff6-a024-8bb3def5de10" colab={"base_uri": "https://localhost:8080/", "height": 85}
chi_squared, p_value, dof, expected = stats.chi2_contingency(observed)
print(chi_squared, p_value, dof, expected)
# the chi-squared statistic is large and the p-value is so small it prints as 0.0, so we can reject the null
# + id="cGaoshMOHC0L" colab_type="code" colab={}
# Reject the null hypothesis that hours worked per week are independent of sex
# + [markdown] id="4ohsJhQUmEuS" colab_type="text"
# ## Stretch goals:
#
# 1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).
# 2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
# 3. Refactor your code so it is elegant, readable, and can be easily run for all issues.
# + [markdown] id="nyJ3ySr7R2k9" colab_type="text"
# ## Resources
#
# - [Interactive visualize the Chi-Squared test](https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html)
# - [Calculation of Chi-Squared test statistic](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test)
# - [Visualization of a confidence interval generated by R code](https://commons.wikimedia.org/wiki/File:Confidence-interval.svg)
# - [Expected value of a squared standard normal](https://math.stackexchange.com/questions/264061/expected-value-calculation-for-squared-normal-distribution) (it's 1 - which is why the expected value of a Chi-Squared with $n$ degrees of freedom is $n$, as it's the sum of $n$ squared standard normals)
| module2-samplling-confidence-intervals-and-hypothesis-testing/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.12 64-bit (''ml-zoomcamp'': conda)'
# name: python3
# ---
# ## 6.10 Homework
#
# The goal of this homework is to create a tree-based regression model for predicting apartment prices (column `'price'`).
#
# In this homework we'll again use the New York City Airbnb Open Data dataset - the same one we used in homework 2 and 3.
#
# You can take it from [Kaggle](https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data?select=AB_NYC_2019.csv)
# or download from [here](https://raw.githubusercontent.com/alexeygrigorev/datasets/master/AB_NYC_2019.csv)
# if you don't want to sign up to Kaggle.
#
# Let's load the data:
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# +
columns = [
'neighbourhood_group', 'room_type', 'latitude', 'longitude',
'minimum_nights', 'number_of_reviews','reviews_per_month',
'calculated_host_listings_count', 'availability_365',
'price'
]
df = pd.read_csv('AB_NYC_2019.csv', usecols=columns)
df.reviews_per_month = df.reviews_per_month.fillna(0)
# -
df
# * Apply the log transform to `price`
# * Do train/validation/test split with 60%/20%/20% distribution.
# * Use the `train_test_split` function and set the `random_state` parameter to 1
# are there any NAs?
# reviews_per_month was already filled with 0 when loading the CSV, but let's make sure.
df.isna().any()
# OK, let's do the rest
from sklearn.model_selection import train_test_split
# log transform on price
df.price = np.log1p(df.price)
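# Note that `np.expm1` is the exact inverse of `np.log1p` (and is safe at price == 0),
# which is how predictions could later be mapped back to dollar prices. A quick
# illustrative check on toy values:

```python
import numpy as np

prices = np.array([0.0, 50.0, 152.0, 10000.0])
logged = np.log1p(prices)     # log(1 + price)
restored = np.expm1(logged)   # exp(x) - 1 undoes the transform
```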
# +
df_full_train, df_test = train_test_split(df, test_size=0.2, random_state=1)
df_train, df_val = train_test_split(df_full_train, test_size=0.25, random_state=1)
# reset indices
df_train = df_train.reset_index(drop=True)
df_val = df_val.reset_index(drop=True)
df_test = df_test.reset_index(drop=True)
y_train = df_train.price.values
y_val = df_val.price.values
y_test = df_test.price.values
# -
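# Why `test_size=0.25` on the second split: 25% of the remaining 80% equals 20% of the
# full data, which yields the required 60%/20%/20% distribution. A quick arithmetic sketch:

```python
n = 1000
n_test = int(n * 0.2)               # 20% held out first
n_val = int((n - n_test) * 0.25)    # 25% of the remaining 80% = 20% of the total
n_train = n - n_test - n_val        # what is left: 60% of the total
```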
del df_train['price']
del df_val['price']
del df_test['price']
# Now, use `DictVectorizer` to turn train and validation into matrices:
from sklearn.feature_extraction import DictVectorizer
# +
dv = DictVectorizer(sparse=False)
train_dict = df_train.to_dict(orient='records')
val_dict = df_val.to_dict(orient='records')
dv.fit(train_dict)
X_train = dv.transform(train_dict)
X_val = dv.transform(val_dict)
# -
# ## Question 1
#
# Let's train a decision tree regressor to predict the price variable.
#
# * Train a model with `max_depth=1`
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import export_text
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X_train, y_train)
print(export_text(dt, feature_names=dv.get_feature_names()))
# Which feature is used for splitting the data?
#
# * `room_type`
# * `neighbourhood_group`
# * `number_of_reviews`
# * `reviews_per_month`
# ## _Question 1 Answer_
#
# `room_type` is used for splitting the data
# ## Question 2
#
# Train a random forest model with these parameters:
#
# * `n_estimators=10`
# * `random_state=1`
# * `n_jobs=-1` (optional - to make training faster)
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
rf = RandomForestRegressor(n_estimators=10, random_state=1, n_jobs=-1)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_val)
rmse = mean_squared_error(y_val, y_pred, squared=False)
print(round(rmse, 3))
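# For reference, `squared=False` makes `mean_squared_error` return the root of the MSE;
# the same quantity in plain NumPy (toy numbers, not the homework data):

```python
import numpy as np

y_true = np.array([4.5, 5.1, 3.9, 4.8])
y_hat = np.array([4.4, 5.3, 4.0, 4.6])

rmse_manual = np.sqrt(np.mean((y_true - y_hat) ** 2))
```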
# What's the RMSE of this model on validation?
#
# * 0.059
# * 0.259
# * 0.459
# * 0.659
# ## _Question 2 answer_
#
# Perhaps `0.459`? I'm off by `0.003`.
# ## Question 3
#
# Now let's experiment with the `n_estimators` parameter
#
# * Try different values of this parameter from 10 to 200 with step 10
# * Set `random_state` to `1`
# * Evaluate the model on the validation dataset
from tqdm.auto import tqdm
# +
scores = []
for n in tqdm(range(10, 201, 10)):
rf = RandomForestRegressor(n_estimators=n, random_state=1, n_jobs=-1)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_val)
rmse = mean_squared_error(y_val, y_pred, squared=False)
scores.append((n, rmse))
# -
df_scores = pd.DataFrame(scores, columns=['n_estimators', 'rmse'])
plt.plot(df_scores.n_estimators, df_scores.rmse)
df_scores
df_scores[df_scores.rmse == df_scores.rmse.min()]
# After which value of `n_estimators` does RMSE stop improving?
#
# - 10
# - 50
# - 70
# - 120
# ## _Question 3 answer_
#
# My lowest value is `170`...
# ## Question 4
#
# Let's select the best `max_depth`:
#
# * Try different values of `max_depth`: `[10, 15, 20, 25]`
# * For each of these values, try different values of `n_estimators` from 10 till 200 (with step 10)
# * Fix the random seed: `random_state=1`
# +
scores = []
for d in tqdm([10, 15, 20, 25]):
for n in tqdm(range(10, 201, 10)):
rf = RandomForestRegressor(n_estimators=n, max_depth=d, random_state=1, n_jobs=-1)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_val)
rmse = mean_squared_error(y_val, y_pred, squared=False)
scores.append((n, d, rmse))
# -
df_scores = pd.DataFrame(scores, columns=['n_estimators', 'max_depth', 'rmse'])
# +
for d in [10, 15, 20, 25]:
df_subset = df_scores[df_scores.max_depth == d]
plt.plot(df_subset.n_estimators, df_subset.rmse, label='max_depth=%d' % d)
plt.legend()
# -
# What's the best `max_depth`:
#
# * 10
# * 15
# * 20
# * 25
#
# Bonus question (not graded):
#
# Will the answer be different if we change the seed for the model?
# ## _Question 4 answer_
#
# The best maximum depth is `15`.
#
# ***Bonus***: unlikely to change. Changing the random state might change results in the first iterations, but the scores eventually converge to values similar to those of other random states.
# ## Question 5
#
# We can extract feature importance information from tree-based models.
#
# At each step, the decision-tree learning algorithm finds the best split.
# When doing this, we can calculate the "gain" - the reduction in impurity before and after the split.
# This gain is quite useful for understanding which features are important
# for tree-based models.
#
# In Scikit-Learn, tree-based models contain this information in the `feature_importances_` field.
#
# For this homework question, we'll find the most important feature:
#
# * Train the model with these parameters:
# * `n_estimators=10`,
# * `max_depth=20`,
# * `random_state=1`,
# * `n_jobs=-1` (optional)
# * Get the feature importance information from this model
rf = RandomForestRegressor(n_estimators=10, max_depth=20, random_state=1, n_jobs=-1)
rf.fit(X_train, y_train)
gains = zip(dv.get_feature_names(), rf.feature_importances_)
df_gains = pd.DataFrame(gains, columns=['feature_name', 'gain'])
df_gains
df_gains.sort_values(by='gain', ascending=False)
df_gains[df_gains['gain'] == df_gains['gain'].max()]
# What's the most important feature?
#
# * `neighbourhood_group=Manhattan`
# * `room_type=Entire home/apt`
# * `longitude`
# * `latitude`
# ## _Question 5 answer_
#
# The most important feature is `room_type=Entire home/apt`
# ## Question 6
# Now let's train an XGBoost model! For this question, we'll tune the `eta` parameter
#
# * Install XGBoost
# * Create DMatrix for train and validation
# * Create a watchlist
# * Train a model with these parameters for 100 rounds:
#
# ```
# xgb_params = {
# 'eta': 0.3,
# 'max_depth': 6,
# 'min_child_weight': 1,
#
# 'objective': 'reg:squarederror',
# 'nthread': 8,
#
# 'seed': 1,
# 'verbosity': 1,
# }
# ```
# install xgboost
import xgboost as xgb
# create DMatrix for train and validation
features = dv.get_feature_names()
dtrain = xgb.DMatrix(X_train, label=y_train, feature_names=features)
dval = xgb.DMatrix(X_val, label=y_val, feature_names=features)
xgb_params = {
'eta': 0.3,
'max_depth': 6,
'min_child_weight': 1,
'objective': 'reg:squarederror',
'nthread': 8,
'seed': 1,
'verbosity': 1,
}
watchlist = [(dtrain, 'train'), (dval, 'val')]
model = xgb.train(xgb_params, dtrain, num_boost_round=100, verbose_eval=5, evals=watchlist)
# Now change `eta` first to `0.1` and then to `0.01`
xgb_params['eta'] = 0.1
model = xgb.train(xgb_params, dtrain, num_boost_round=100, verbose_eval=5, evals=watchlist)
xgb_params['eta'] = 0.01
model = xgb.train(xgb_params, dtrain, num_boost_round=100, verbose_eval=5, evals=watchlist)
# Which eta leads to the best RMSE score on the validation dataset?
#
# * 0.3
# * 0.1
# * 0.01
# ## _Question 6 answer_
#
# `eta=0.1` has the lowest RMSE with a value of `0.43250`
# ## Submit the results
#
#
# Submit your results here: https://forms.gle/wQgFkYE6CtdDed4w8
#
# It's possible that your answers won't match exactly. If it's the case, select the closest one.
#
#
# ## Deadline
#
#
# The deadline for submitting is 20 October 2021, 17:00 CET (Wednesday). After that, the form will be closed.
#
#
| 06_trees/homework-6-starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## CS 229 Machine Learning, Fall 2017
# ### Problem Set 4
# #### Question: Reinforcement Learning: The inverted pendulum
# #### Author: <NAME>, <EMAIL>
"""
Parts of the code (cart and pole dynamics, and the state
discretization) are inspired from code available at the RL repository
http://www-anw.cs.umass.edu/rlr/domains.html
This file controls the pole-balancing simulation. You only need to
write code in between places marked
###### BEGIN YOUR CODE ######
###### END YOUR CODE ######
Briefly, the cart-pole system is described in `cart_pole.py`. The main
simulation loop in this file calls the `simulate()` function for
simulating the pole dynamics, `get_state()` for discretizing the
otherwise continuous state space in discrete states, and `show_cart()`
for display.
Some useful parameters are listed below:
`NUM_STATES`: Number of states in the discretized state space
You must assume that states are numbered 0 through `NUM_STATES` - 1. The
state numbered `NUM_STATES` - 1 (the last one) is a special state that
marks the state when the pole has been judged to have fallen (or when
the cart is out of bounds). However, you should NOT treat this state
any differently in your code. Any distinctions you need to make between
states should come automatically from your learning algorithm.
After each simulation cycle, you are supposed to update the transition
counts and rewards observed. However, you should not change either
your value function or the transition probability matrix at each
cycle.
Whenever the pole falls, a section of your code below will be
executed. At this point, you must use the transition counts and reward
observations that you have gathered to generate a new model for the MDP
(i.e. transition probabilities and state rewards). After that, you
must use value iteration to get the optimal value function for this MDP
model.
`TOLERANCE`: Controls the convergence criteria for each value iteration
run. In value iteration, you can assume convergence when the maximum
absolute change in the value function at any state in an iteration
# becomes lower than `TOLERANCE`.
You need to write code that chooses the best action according
to your current value function, and the current model of the MDP. The
action must be either 0 or 1 (corresponding to possible directions of
pushing the cart)
Finally, we assume that the simulation has converged when
`NO_LEARNING_THRESHOLD` consecutive value function computations all
converged within one value function iteration. Intuitively, it seems
like there will be little learning after this, so we end the simulation
here, and say the overall algorithm has converged.
Learning curves can be generated by calling a code snippet at the end
(it assumes that the learning was just executed, and the array
`time_steps_to_failure` that records the time for which the pole was
balanced before each failure are in memory). `num_failures` is a variable
that stores the number of failures (pole drops / cart out of bounds)
till now.
Other parameters in the code are described below:
`GAMMA`: Discount factor to be used
The following parameters control the simulation display; you don't
really need to know about them:
`pause_time`: Controls the pause between successive frames of the
display. Higher values make your simulation slower.
`min_trial_length_to_start_display`: Allows you to start the display only
after the pole has been successfully balanced for at least this many
trials. Setting this to zero starts the display immediately. Choosing a
reasonably high value (around 100) can allow you to rush through the
initial learning quickly, and start the display only after the
performance is reasonable.
"""
from __future__ import division, print_function
# %matplotlib inline
from cart_pole import CartPole, Physics
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import lfilter
# +
# Simulation parameters
pause_time = 0.0001
min_trial_length_to_start_display = 100
display_started = min_trial_length_to_start_display == 0
NUM_STATES = 163
NUM_ACTIONS = 2
GAMMA = 0.995
TOLERANCE = 0.01
NO_LEARNING_THRESHOLD = 20
# -
# Time cycle of the simulation
time = 0
# These variables perform bookkeeping (how many cycles was the pole
# balanced for before it fell). Useful for plotting learning curves.
time_steps_to_failure = []
num_failures = 0
time_at_start_of_current_trial = 0
# You should reach convergence well before this
max_failures = 500
# Initialize a cart pole
cart_pole = CartPole(Physics())
# Starting `state_tuple` is (0, 0, 0, 0)
# x, x_dot, theta, theta_dot represents the actual continuous state vector
x, x_dot, theta, theta_dot = 0.0, 0.0, 0.0, 0.0
state_tuple = (x, x_dot, theta, theta_dot)
# `state` is the number given to this state, you only need to consider
# this representation of the state
state = cart_pole.get_state(state_tuple)
#if min_trial_length_to_start_display == 0 or display_started == 1:
# cart_pole.show_cart(state_tuple, pause_time)
# <a id='6a'></a>
# ### Problem 6.a)
# +
# Perform all your initializations here:
# Assume no transitions or rewards have been observed.
# Initialize the value function array to small random values (0 to 0.10,
# say).
# Initialize the transition probabilities uniformly (ie, probability of
# transitioning for state x to state y using action a is exactly
# 1/NUM_STATES).
# Initialize all state rewards to zero.
###### BEGIN YOUR CODE ######
# TODO:
C_sas = np.zeros((NUM_STATES, NUM_ACTIONS, NUM_STATES))
R_new = np.zeros(NUM_STATES)
R_counts = np.zeros(NUM_STATES)
V = np.random.rand(NUM_STATES)
P_sas = np.zeros((NUM_STATES, NUM_ACTIONS, NUM_STATES)) + 1 / NUM_STATES
R_s = np.zeros(NUM_STATES)
###### END YOUR CODE ######
# -
consecutive_no_learning_trials = 0
i=0
while consecutive_no_learning_trials < NO_LEARNING_THRESHOLD:
# Write code to choose action (0 or 1).
# This action choice algorithm is just for illustration. It may
# convince you that reinforcement learning is nice for control
# problems! Replace it with your code to choose an action that is
# optimal according to the current value function, and the current MDP
# model.
###### BEGIN YOUR CODE ######
update = P_sas[state] @ V
action = np.argmax(update)
###### END YOUR CODE ######
# Get the next state by simulating the dynamics
state_tuple = cart_pole.simulate(action, state_tuple)
# x, x_dot, theta, theta_dot = state_tuple
# Increment simulation time
time = time + 1
# Get the state number corresponding to new state vector
new_state = cart_pole.get_state(state_tuple)
#if display_started == 1 and i % 1000 == 0:
# cart_pole.show_cart(state_tuple, pause_time)
# reward function to use - do not change this!
if new_state == NUM_STATES - 1:
R = -1
else:
R = 0
# Perform model updates here.
# A transition from `state` to `new_state` has just been made using
# `action`. The reward observed in `new_state` (note) is `R`.
# Write code to update your statistics about the MDP i.e. the
# information you are storing on the transitions and on the rewards
# observed. Do not change the actual MDP parameters, except when the
# pole falls (the next if block)!
###### BEGIN YOUR CODE ######
# record the number of times `state, action, new_state` occurs
# record the rewards for every `new_state`
# record the number of time `new_state` was reached
C_sas[state, action, new_state] += 1
R_new[new_state] += R
R_counts[new_state] += 1
###### END YOUR CODE ######
# Recompute MDP model whenever pole falls
# Compute the value function V for the new model
if new_state == NUM_STATES - 1:
# Update MDP model using the current accumulated statistics about the
# MDP - transitions and rewards.
# Make sure you account for the case when a state-action pair has never
# been tried before, or the state has never been visited before. In that
# case, you must not change that component (and thus keep it at the
# initialized uniform distribution).
###### BEGIN YOUR CODE ######
visited_states = R_counts > 0
R_s[visited_states] = R_new[visited_states] / R_counts[visited_states]
C_sa = np.sum(C_sas, axis=2)
sa_visited = C_sa > 0
P_sas[sa_visited] = C_sas[sa_visited] / C_sa[sa_visited].reshape(-1,1)
###### END YOUR CODE ######
# Perform value iteration using the new estimated model for the MDP.
# The convergence criterion should be based on `TOLERANCE` as described
# at the top of the file.
# If it converges within one iteration, you may want to update your
# variable that checks when the whole simulation must end.
###### BEGIN YOUR CODE ######
max_change = 1.0
count = 0
while max_change > TOLERANCE:
V_expected = (P_sas @ V)
V_new = R_s + GAMMA * np.max(V_expected, axis=1)
max_change = np.max(np.abs(V_new - V))
V = V_new
count += 1
if count == 1:
consecutive_no_learning_trials += 1
else:
consecutive_no_learning_trials = 0
###### END YOUR CODE ######
# Do NOT change this code: Controls the simulation, and handles the case
# when the pole fell and the state must be reinitialized.
if new_state == NUM_STATES - 1:
num_failures += 1
if num_failures >= max_failures:
break
print('[INFO] Failure number {}'.format(num_failures))
time_steps_to_failure.append(time - time_at_start_of_current_trial)
# time_steps_to_failure[num_failures] = time - time_at_start_of_current_trial
time_at_start_of_current_trial = time
if time_steps_to_failure[num_failures - 1] > min_trial_length_to_start_display:
display_started = 1
# Reinitialize state
# x = 0.0
x = -1.1 + np.random.uniform() * 2.2
x_dot, theta, theta_dot = 0.0, 0.0, 0.0
state_tuple = (x, x_dot, theta, theta_dot)
state = cart_pole.get_state(state_tuple)
else:
state = new_state
i += 1
# <a id='6a'></a>
# ### Problem 6.a)
# plot the learning curve (time balanced vs. trial)
log_tstf = np.log(np.array(time_steps_to_failure))
plt.plot(np.arange(len(time_steps_to_failure)), log_tstf, 'k')
window = 30
w = np.full(window, 1 / window)  # moving-average filter taps
weights = lfilter(w, 1, log_tstf)
x = np.arange(window//2, len(log_tstf) - window//2)
plt.plot(x, weights[window:len(log_tstf)], 'r--')
plt.xlabel('Num failures')
plt.ylabel('Log num steps to failure')
plt.show()
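# As a standalone sanity check of the value-iteration loop used above, here is a minimal sketch on a tiny MDP. The 3-state model, its uniform transition matrix, and the reward vector below are made up for illustration; only the Bellman backup and the `TOLERANCE` convergence test mirror the code above.

```python
import numpy as np

# Toy 3-state, 2-action MDP; all numbers are made up for illustration.
NUM_STATES, NUM_ACTIONS = 3, 2
GAMMA, TOLERANCE = 0.995, 0.01

# Uniform transition model and a reward of -1 in the "failure" state.
P = np.full((NUM_STATES, NUM_ACTIONS, NUM_STATES), 1.0 / NUM_STATES)
R = np.array([0.0, 0.0, -1.0])

V = np.zeros(NUM_STATES)
max_change = np.inf
iterations = 0
while max_change > TOLERANCE:
    # Bellman backup: V(s) = R(s) + gamma * max_a sum_s' P(s, a, s') V(s')
    V_new = R + GAMMA * np.max(P @ V, axis=1)
    max_change = np.max(np.abs(V_new - V))
    V = V_new
    iterations += 1
```

# The failure state ends up with the lowest value, as expected, and convergence is declared once the largest per-state change drops below `TOLERANCE`.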
| Problem Sets/ps4/ps4_problem6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import os
import numpy as np
from scipy.spatial import ConvexHull
from UliEngineering.Math.Coordinates import BoundingBox
import xml.etree.cElementTree as ET
from PIL import Image
from glob import glob
# This notebook converts the JSON files produced by the "Magic Project" image-markup tool into the XML annotation format required by the TensorFlow object detection pipeline.
path_to_dataset = 'dataset/' #INSERT HERE A PATH TO YOUR OWN DATASET >>>
def get_height_and_width(cur_filename):
# PIL's Image.size is (width, height), matching the unpacking at the call site
im = Image.open(cur_filename)
return im.size
def BB_8(data):
# NOTE: relies on the globals `cur_width` and `cur_height` assigned in the processing loop below
pts = [pt[0:2] for pt in data['hand_pts']]
hull_js = ConvexHull(pts)
hull_points = []
for i in hull_js.vertices:
hull_points.append(hull_js.points[i])
the_hull = []
for i in range(len(hull_points)):
the_hull.append(hull_points[i].tolist())
the_hull_array = np.asarray(the_hull)
cur_bb = BoundingBox(the_hull_array)
eps = 0 #int(cur_bb.area/100)
return max(int(cur_bb.minx) - eps, 0), max(int(cur_bb.miny) - eps,0), min(int(cur_bb.maxx) + eps, cur_width), min(int(cur_bb.maxy) + eps, cur_height)
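# Note that the convex-hull step is not strictly needed for an axis-aligned box: the bounding box of a point set equals the bounding box of its convex hull, so a plain min/max over the keypoints gives the same rectangle. A minimal sketch (the helper name is ours, not part of the pipeline):

```python
import numpy as np

def bounding_box(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of (N, 2) points."""
    pts = np.asarray(points, dtype=float)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    return xmin, ymin, xmax, ymax

box = bounding_box([(1, 2), (4, 0), (3, 5)])
```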
def create_xml(img_file, cur_filename, cur_width, cur_height, xmin, ymin, xmax, ymax):
annotation = ET.Element('annotation', verified = "yes")
folder = ET.SubElement(annotation, "folder").text = "images"
filename = ET.SubElement(annotation, "filename").text = img_file
path= ET.SubElement(annotation, "path").text = cur_filename
source = ET.SubElement(annotation, "source")
database = ET.SubElement(source, "database").text = 'Faradenza_DB'
size = ET.SubElement(annotation, "size")
width = ET.SubElement(size, "width").text = str(cur_width)
height = ET.SubElement(size, "height").text = str(cur_height)
depth = ET.SubElement(size, "depth").text = '3'
segmented = ET.SubElement(annotation, "segmented").text = '0'
obj = ET.SubElement(annotation, "object")  # `obj` avoids shadowing the built-in `object`
name = ET.SubElement(obj, "name").text = 'hand'
pose = ET.SubElement(obj, "pose").text = 'Unspecified'
truncated = ET.SubElement(obj, "truncated").text = '0'
difficult = ET.SubElement(obj, "difficult").text = '0'
bndbox = ET.SubElement(obj, "bndbox")
xmin = ET.SubElement(bndbox, "xmin").text = str(xmin)
ymin = ET.SubElement(bndbox, "ymin").text = str(ymin)
xmax = ET.SubElement(bndbox, "xmax").text = str(xmax)
ymax = ET.SubElement(bndbox, "ymax").text = str(ymax)
tree = ET.ElementTree(annotation)
tree.write(os.path.splitext(cur_filename)[0]+'.xml')
#
for cur_json_name in glob(path_to_dataset+ '/*.json'):
with open(cur_json_name, 'r') as file:
data = json.load(file)
img_file = os.path.basename(data['img_path'])
cur_filename = os.path.join(os.path.abspath(path_to_dataset), img_file)
cur_width, cur_height = get_height_and_width(cur_filename)
xmin, ymin, xmax, ymax = BB_8(data)
create_xml(img_file, cur_filename, cur_width, cur_height, xmin, ymin, xmax, ymax)
| tensorflow/'magic' json 2 TF (obj detection) xml.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
#pd.options.display.max_columns = None
np.__version__
# +
# Citation Request:
# This dataset is publicly available for research. The details are described in [Moro et al., 2014].
# Please include this citation if you plan to use this database:
# [Moro et al., 2014] <NAME>, <NAME> and <NAME>. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, In press, http://dx.doi.org/10.1016/j.dss.2014.03.001
# Available at: [pdf] http://dx.doi.org/10.1016/j.dss.2014.03.001
# [bib] http://www3.dsi.uminho.pt/pcortez/bib/2014-dss.txt
# 1. Title: Bank Marketing (with social/economic context)
# 2. Sources
# Created by: <NAME> (ISCTE-IUL), <NAME> (Univ. Minho) and <NAME> (ISCTE-IUL) @ 2014
# 3. Past Usage:
# The full dataset (bank-additional-full.csv) was described and analyzed in:
# <NAME>, <NAME> and <NAME>. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems (2014), doi:10.1016/j.dss.2014.03.001.
# 4. Relevant Information:
# This dataset is based on "Bank Marketing" UCI dataset (please check the description at: http://archive.ics.uci.edu/ml/datasets/Bank+Marketing).
# The data is enriched by the addition of five new social and economic features/attributes (national wide indicators from a ~10M population country), published by the Banco de Portugal and publicly available at: https://www.bportugal.pt/estatisticasweb.
# This dataset is almost identical to the one used in [Moro et al., 2014] (it does not include all attributes due to privacy concerns).
# Using the rminer package and R tool (http://cran.r-project.org/web/packages/rminer/), we found that the addition of the five new social and economic attributes (made available here) lead to substantial improvement in the prediction of a success, even when the duration of the call is not included. Note: the file can be read in R using: d=read.table("bank-additional-full.csv",header=TRUE,sep=";")
# The zip file includes two datasets:
# 1) bank-additional-full.csv with all examples, ordered by date (from May 2008 to November 2010).
# 2) bank-additional.csv with 10% of the examples (4119), randomly selected from bank-additional-full.csv.
# The smallest dataset is provided to test more computationally demanding machine learning algorithms (e.g., SVM).
# The binary classification goal is to predict if the client will subscribe a bank term deposit (variable y).
# 5. Number of Instances: 41188 for bank-additional-full.csv
# 6. Number of Attributes: 20 + output attribute.
# 7. Attribute information:
# For more information, read [Moro et al., 2014].
# Input variables:
# # bank client data:
# 1 - age (numeric)
# 2 - job : type of job (categorical: "admin.","blue-collar","entrepreneur","housemaid","management","retired","self-employed","services","student","technician","unemployed","unknown")
# 3 - marital : marital status (categorical: "divorced","married","single","unknown"; note: "divorced" means divorced or widowed)
# 4 - education (categorical: "basic.4y","basic.6y","basic.9y","high.school","illiterate","professional.course","university.degree","unknown")
# 5 - default: has credit in default? (categorical: "no","yes","unknown")
# 6 - housing: has housing loan? (categorical: "no","yes","unknown")
# 7 - loan: has personal loan? (categorical: "no","yes","unknown")
# # related with the last contact of the current campaign:
# 8 - contact: contact communication type (categorical: "cellular","telephone")
# 9 - month: last contact month of year (categorical: "jan", "feb", "mar", ..., "nov", "dec")
# 10 - day_of_week: last contact day of the week (categorical: "mon","tue","wed","thu","fri")
# 11 - duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y="no"). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
# # other attributes:
# 12 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
# 13 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
# 14 - previous: number of contacts performed before this campaign and for this client (numeric)
# 15 - poutcome: outcome of the previous marketing campaign (categorical: "failure","nonexistent","success")
# # social and economic context attributes
# 16 - emp.var.rate: employment variation rate - quarterly indicator (numeric)
# 17 - cons.price.idx: consumer price index - monthly indicator (numeric)
# 18 - cons.conf.idx: consumer confidence index - monthly indicator (numeric)
# 19 - euribor3m: euribor 3 month rate - daily indicator (numeric)
# 20 - nr.employed: number of employees - quarterly indicator (numeric)
# Output variable (desired target):
# 21 - y - has the client subscribed a term deposit? (binary: "yes","no")
# 8. Missing Attribute Values: There are several missing values in some categorical attributes, all coded with the "unknown" label. These missing values can be treated as a possible class label or using deletion or imputation techniques.
# -
df_cmb_master = pd.read_csv("bank-additional-full.csv", sep = ';')
df_cmb_master.shape
df_cmb_master.head()
df_cmb_master.dtypes
df_cmb_master.describe()
df_cmb_master.describe(include = 'object')
df_cmb_master['y'] = np.where(df_cmb_master['y'] == 'yes',1,0)
df_cmb_master['y'] = df_cmb_master['y'].astype(str)
# # Check Missing Values
##Check Missing Values
df_cmb_master.isnull().sum()
# # Impute Missing Values##
###Impute Missing Values##
col_list = list(df_cmb_master.columns)
col_list.remove('y')
for col in col_list:
if df_cmb_master[col].dtype == 'object':
# assign back to the column directly; indexing via df[[col]][col] writes to a copy
df_cmb_master[col] = df_cmb_master[col].fillna(df_cmb_master[col].mode()[0])
else:
df_cmb_master[col] = df_cmb_master[col].fillna(df_cmb_master[col].mean())
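# The mode/mean imputation strategy used here can be sketched on a tiny made-up frame (column names and values are invented). Note that assignments of the form `df[[col]][col] = ...` act on a copy and do not stick; assigning to `df[col]` directly does.

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "job": ["admin.", "services", np.nan, "admin."],  # categorical column
    "age": [30.0, np.nan, 50.0, 40.0],                # numeric column
})

for col in toy.columns:
    if toy[col].dtype == "object":
        # mode() returns a Series; take the first (most frequent) value
        toy[col] = toy[col].fillna(toy[col].mode()[0])
    else:
        toy[col] = toy[col].fillna(toy[col].mean())
```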
# # Impute Outliers####
# +
####Impute Outliers####
# for col in col_list:
# if df_cmb_master[[col]][col].dtype != 'object':
# ul = df_cmb_master[[col]][col].mean() + (3 * df_cmb_master[[col]][col].std() )
# ll = df_cmb_master[[col]][col].mean() + (3 * df_cmb_master[[col]][col].std() )
# df_cmb_master[col] = np.where(df_cmb_master[col] > ul, ul,
# np.where(df_cmb_master[col] < ll,ll,df_cmb_master[col] ) )
# -
# # Binning##
# +
##Binning of Age##
bins = [0, 1, 5, 10, 25, 50, 100]
df_cmb_master['age'] = pd.cut(df_cmb_master['age'], bins)
df_cmb_master['age'] = df_cmb_master.age.astype(str)
# -
df_cmb_master['age'].unique()
df_cmb_master.head()
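# `pd.cut` labels each value with the right-closed interval it falls into, e.g. an age of 3 lands in (1, 5]. A toy sketch with made-up ages and the same bin edges as above:

```python
import pandas as pd

# Made-up ages; same bin edges as the cell above.
ages = pd.Series([3, 17, 40, 80])
bins = [0, 1, 5, 10, 25, 50, 100]
binned = pd.cut(ages, bins)  # right-closed intervals like (1, 5] by default
```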
# # Chisq Test for Independence
# +
from scipy.stats import chisquare
import scipy.stats
from scipy.stats import chi2
#from scipy import stats
from scipy.stats import chi2_contingency
###Chisq Test for Independence
dataset_table=pd.crosstab(df_cmb_master['age'],df_cmb_master['y'])
#print(dataset_table)
#Observed Values
Observed_Values = dataset_table.values
#print("Observed Values :-\n",Observed_Values)
val=chi2_contingency(dataset_table)
#val
Expected_Values=val[3]
#Expected_Values
chi_square=sum([(o-e)**2./e for o,e in zip(Observed_Values,Expected_Values)])
chi_square_statistic=chi_square[0]+chi_square[1]
# use the full table shape; slicing with iloc[0:2] caps the count at 2 and understates ddof for features with more than two levels
no_of_rows=dataset_table.shape[0]
no_of_columns=dataset_table.shape[1]
ddof=(no_of_rows-1)*(no_of_columns-1)
#print("Degree of Freedom:-",ddof)
alpha = 0.05
#print("chi-square statistic:-",chi_square_statistic)
#scipy.stats.chi2.ppf() function
critical_value=scipy.stats.chi2.ppf(q=1-alpha,df=ddof)
#print('critical_value:',critical_value)
#p-value
p_value=1-chi2.cdf(x=chi_square_statistic,df=ddof)
print('p-value:',p_value)
print('Significance level: ',alpha)
print('Degree of Freedom: ',ddof)
print('p-value:',p_value)
# -
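# The manual Pearson statistic above can be cross-checked against `scipy.stats.chi2_contingency`, which returns the statistic, p-value, degrees of freedom, and expected table in one call. A toy 2x2 example (the counts are made up):

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[90, 10],
                     [60, 40]])  # made-up 2x2 contingency table

# correction=False disables Yates' continuity correction, matching the plain Pearson formula
chi2_stat, p_value, dof, expected = chi2_contingency(observed, correction=False)

# Manual Pearson statistic: sum over cells of (O - E)^2 / E
manual = ((observed - expected) ** 2 / expected).sum()
```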
df_cmb_master.dtypes
###Chisq Test for Independence for all object fields
col_list = list(df_cmb_master.columns)
col_list.remove('y')
for col in col_list:
if df_cmb_master[[col]][col].dtype == 'object':
###Chisq Test for Independence
dataset_table=pd.crosstab(df_cmb_master[col],df_cmb_master['y'])
#print(dataset_table)
#Observed Values
Observed_Values = dataset_table.values
#print("Observed Values :-\n",Observed_Values)
val=chi2_contingency(dataset_table)
#val
Expected_Values=val[3]
#Expected_Values
chi_square=sum([(o-e)**2./e for o,e in zip(Observed_Values,Expected_Values)])
chi_square_statistic=chi_square[0]+chi_square[1]
# use the full table shape; slicing with iloc[0:2] caps the count at 2 and understates ddof for features with more than two levels
no_of_rows=dataset_table.shape[0]
no_of_columns=dataset_table.shape[1]
ddof=(no_of_rows-1)*(no_of_columns-1)
#print("Degree of Freedom:-",ddof)
alpha = 0.05
#print("chi-square statistic:-",chi_square_statistic)
#scipy.stats.chi2.ppf() function
critical_value=scipy.stats.chi2.ppf(q=1-alpha,df=ddof)
#print('critical_value:',critical_value)
#p-value
p_value=1-chi2.cdf(x=chi_square_statistic,df=ddof)
print(col)
print('p-value:',p_value)
#print('Significance level: ',alpha)
#print('Degree of Freedom: ',ddof)
#print('p-value:',p_value)
###Drop fields with insignificant chisquare test p-value
df_cmb_master.drop(columns = ['loan','day_of_week'],inplace = True)
# # Binning of all numeric columns for IV##
#
def calculate_woe_iv(dataset, feature, target):
lst = []
for i in range(dataset[feature].nunique()):
val = list(dataset[feature].unique())[i]
lst.append({
'Value': val,
'All': dataset[dataset[feature] == val].count()[feature],
'Good': dataset[(dataset[feature] == val) & (dataset[target] == 0)].count()[feature],
'Bad': dataset[(dataset[feature] == val) & (dataset[target] == 1)].count()[feature]
})
dset = pd.DataFrame(lst)
dset['Distr_Good'] = dset['Good'] / dset['Good'].sum()
dset['Distr_Bad'] = dset['Bad'] / dset['Bad'].sum()
dset['WoE'] = np.log(dset['Distr_Good'] / dset['Distr_Bad'])
dset = dset.replace({'WoE': {np.inf: 0, -np.inf: 0}})
dset['IV'] = (dset['Distr_Good'] - dset['Distr_Bad']) * dset['WoE']
iv = dset['IV'].sum()
dset = dset.sort_values(by='WoE')
return dset, iv
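# A small worked example of the WoE/IV computation on made-up data, re-implemented inline so it runs standalone (here y=0 is treated as "good" and y=1 as "bad", matching the helper above):

```python
import numpy as np
import pandas as pd

# Made-up data: feature "contact" against a binary target "y"
toy = pd.DataFrame({
    "contact": ["cellular"] * 6 + ["telephone"] * 4,
    "y":       [0, 0, 0, 0, 1, 1, 0, 1, 1, 1],
})

# Distribution of goods/bads per feature value
good = toy[toy["y"] == 0].groupby("contact").size()
bad = toy[toy["y"] == 1].groupby("contact").size()
distr_good = good / good.sum()
distr_bad = bad / bad.sum()

woe = np.log(distr_good / distr_bad)            # weight of evidence per level
iv = ((distr_good - distr_bad) * woe).sum()     # information value of the feature
```

# Here "cellular" carries positive evidence (WoE = ln 2) and "telephone" negative evidence (WoE = ln 1/3), giving IV = 0.4 ln 2 + 0.4 ln 3, roughly 0.72.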
df_cmb_master['y'] = df_cmb_master['y'].astype(int)
col_list = list(df_cmb_master.columns)
#col_list = ['age']
for col in col_list:
if col == 'y':
continue
elif df_cmb_master[col].dtype == 'object':
print('IV for column: {}'.format(col))
df, iv = calculate_woe_iv(df_cmb_master, col, 'y')
#print(df)
print('IV score: {:.2f}'.format(iv))
print('\n')
# +
###Drop fields with low IV
df_cmb_master.drop(columns = ['age', 'marital', 'education', 'housing'],inplace = True)
# -
# # Multicollinearity
df_cmb_master.columns
df_cmb_master.dtypes
# +
col_list = []
for col in df_cmb_master.columns:
if ((df_cmb_master[col].dtype == 'object') & (col != 'y') ):
col_list.append(col)
df_2 = pd.get_dummies(df_cmb_master[col_list],drop_first=True)
for col in df_2.columns:
df_2[col] = df_2[col].astype(int)
df_2.shape
# -
df_combined = pd.concat([df_cmb_master, df_2], axis=1)
df_combined.shape
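# `drop_first=True` drops one level per categorical column to avoid the dummy-variable trap (the dropped level is perfectly determined by the others, which causes multicollinearity with an intercept). A toy sketch:

```python
import pandas as pd

toy = pd.DataFrame({"contact": ["cellular", "telephone", "cellular"]})
dummies = pd.get_dummies(toy, drop_first=True)  # first level ("cellular") is dropped
```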
col_list = []
for col in df_cmb_master.columns:
if ((df_cmb_master[col].dtype == 'object') & (col != 'y') ):
col_list.append(col)
col_list
# +
###Drop fields for which dummy vars already created
df_combined.drop(columns = col_list,axis = 1,inplace = True)
# -
df_combined.dtypes
# +
#####Drop Variables causing Multicollinearity
# col_list =[]
# for col in df_combined.columns:
# if col.startswith('housing'):
# col_list.append(col)
# df_combined.drop(columns = col_list, axis = 1,inplace = True)
# df_combined.drop(columns = ['cons.price.idx'], axis = 1,inplace = True)
# df_combined.drop(columns = ['nr.employed'], axis = 1,inplace = True)
# df_combined.drop(columns = ['euribor3m'], axis = 1,inplace = True)
# df_combined.drop(columns = ['pdays'], axis = 1,inplace = True)
# col_list =[]
# for col in df_combined.columns:
# if col.startswith('poutcome'):
# col_list.append(col)
# df_combined.drop(columns = col_list, axis = 1,inplace = True)
# +
from statsmodels.stats.outliers_influence import variance_inflation_factor
col_list = []
for col in df_combined.columns:
if ((df_combined[col].dtype != 'object') & (col != 'y') ):
col_list.append(col)
X = df_combined[col_list]
vif_data = pd.DataFrame()
vif_data["feature"] = X.columns
vif_data["VIF"] = [variance_inflation_factor(X.values, i)
for i in range(len(X.columns))]
print(vif_data)
# -
df_combined.columns
# +
# ###drop categorical columns for which dummy variables are created
# col_list = ['marital','education','default','loan','contact','month','day_of_week','job']
# df_combined.drop(columns = col_list, axis = 1, inplace = True)
# -
Ind_Features = list(df_combined.columns)
Ind_Features.remove('y')
df_ind = df_combined[Ind_Features]
df_dep = df_combined['y']
df_ind.dtypes
# # Train Test Split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(df_ind, df_dep, test_size=0.25, random_state=0)
x_train.shape
x_test.shape
# # Model Fitting/Training###
from sklearn.linear_model import LogisticRegression
# all parameters not specified are set to their defaults
logisticRegr = LogisticRegression()
x_train
#####Model Fitting/Training###
logisticRegr.fit(x_train, y_train)
# # Predictions
# +
# Predictions
# Returns a NumPy Array
test_pred = logisticRegr.predict(x_test)
# -
np.unique(test_pred)
x_test.shape
test_pred.shape
test_pred
# # Model Validation
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
###Confusion Matrix
cm = metrics.confusion_matrix(y_test, test_pred)
print(cm)
Accuracy = (8909+429)/(8909+429+729+230)
Accuracy*100
Sensitivity = 429/(429 + 729 )
Sensitivity*100
Specificity = 8909/(8909 + 230 )
Specificity*100
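# Rather than hard-coding the cell counts, the same metrics can be derived from the confusion matrix array. scikit-learn's `confusion_matrix` uses rows = true class and columns = predicted class, so for binary labels it unpacks as tn, fp, fn, tp. A sketch using the counts printed above:

```python
import numpy as np

# Confusion matrix in scikit-learn's layout: rows = true class, cols = predicted.
# The cell counts match the matrix printed above.
cm = np.array([[8909, 230],
               [729, 429]])

tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / cm.sum()
sensitivity = tp / (tp + fn)   # recall of the positive class
specificity = tn / (tn + fp)
```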
# Use score method to get accuracy of model
score = logisticRegr.score(x_test, y_test)
print(score)
# # Receiver Operating Characteristics
test_pred_prob = logisticRegr.predict_proba(x_test)
test_pred_prob
test_pred_prob[:, 1]
test_pred
np.array(y_test)
# +
from sklearn import metrics
fpr, tpr, thresholds = metrics.roc_curve(np.array(y_test), test_pred_prob[:, 1], pos_label=1)
# -
fpr
tpr
thresholds
# +
auc_table = pd.DataFrame(fpr)
auc_table['tpr'] = tpr
auc_table['thresholds'] = thresholds
auc_table.columns = ['fpr','tpr','thresholds']
auc_table.head()
# +
# roc curve and auc
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from matplotlib import pyplot
# calculate scores
auc = roc_auc_score(np.array(y_test), test_pred_prob[:, 1])
# summarize scores
print('Logistic: ROC AUC=%.3f' % (auc))
# calculate roc curves
fpr, tpr, _ = roc_curve(np.array(y_test), test_pred_prob[:, 1])
# plot the roc curve for the model
pyplot.plot(fpr, tpr, marker='.', label='Logistic')
# axis labels
pyplot.xlabel('False Positive Rate')
pyplot.ylabel('True Positive Rate')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
# -
| 3_Logistic_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
# Utils
import pickle
from sklearn.metrics import mean_absolute_error
# -
def final_func_1(x):
'''Final function 1: cleans the input text, loads the pre-trained tokenizer, vectorizer and model, and returns toxicity predictions.'''
#Using regex and string manipulation to clean the data
def clean(data):
#https://www.kaggle.com/andrej0marinchenko/jigsaw-ensemble-0-86/notebook
# NOTE: pandas >= 2.0 defaults str.replace to regex=False; the regex patterns
# below then need an explicit regex=True, and literal tokens containing regex
# metacharacters ("|", "$", "mr.", "pov.") need regex=False.
data = data.str.replace('https?://\S+|www\.\S+', ' social medium ', regex=True)
data = data.str.lower()
data = data.str.replace("4", "a")
data = data.str.replace("2", "l")
data = data.str.replace("5", "s")
data = data.str.replace("1", "i")
data = data.str.replace("!", "i")
data = data.str.replace("|", "i")
data = data.str.replace("0", "o")
data = data.str.replace("8", "ate")
data = data.str.replace("3", "e")
data = data.str.replace("9", "g")
data = data.str.replace("6", "g")
data = data.str.replace("@", "a")
data = data.str.replace("$", "s")
data = data.str.replace("l3", "b")
data = data.str.replace("7", "t")
data = data.str.replace("7", "+")
data = data.str.replace("#ofc", " of fuckin course ")
data = data.str.replace("fggt", " faggot ")
data = data.str.replace("your", " your ")
data = data.str.replace("self", " self ")
data = data.str.replace("cuntbag", " cunt bag ")
data = data.str.replace("fartchina", " fart china ")
data = data.str.replace("youi", " you i ")
data = data.str.replace("cunti", " cunt i ")
data = data.str.replace("sucki", " suck i ")
data = data.str.replace("pagedelete", " page delete ")
data = data.str.replace("cuntsi", " cuntsi ")
data = data.str.replace("i'm", " i am ")
data = data.str.replace("offuck", " of fuck ")
data = data.str.replace("centraliststupid", " central ist stupid ")
data = data.str.replace("hitleri", " hitler i ")
data = data.str.replace("i've", " i have ")
data = data.str.replace("i'll", " sick ")
data = data.str.replace("fuck", " fuck ")
data = data.str.replace("f u c k", " fuck ")
data = data.str.replace("shit", " shit ")
data = data.str.replace("bunksteve", " bunk steve ")
data = data.str.replace('wikipedia', ' social medium ')
data = data.str.replace("faggot", " faggot ")
data = data.str.replace("delanoy", " delanoy ")
data = data.str.replace("jewish", " jewish ")
data = data.str.replace("sexsex", " sex ")
data = data.str.replace("allii", " all ii ")
data = data.str.replace("i'd", " i had ")
data = data.str.replace("'s", " is ")
data = data.str.replace("youbollocks", " you bollocks ")
data = data.str.replace("dick", " dick ")
data = data.str.replace("cuntsi", " cuntsi ")
data = data.str.replace("mothjer", " mother ")
data = data.str.replace("cuntfranks", " cunt ")
data = data.str.replace("ullmann", " jewish ")
data = data.str.replace("mr.", " mister ")
data = data.str.replace("aidsaids", " aids ")
data = data.str.replace("njgw", " nigger ")
data = data.str.replace("wiki", " social medium ")
data = data.str.replace("administrator", " admin ")
data = data.str.replace("gamaliel", " jewish ")
data = data.str.replace("rvv", " vanadalism ")
data = data.str.replace("admins", " admin ")
data = data.str.replace("pensnsnniensnsn", " penis ")
data = data.str.replace("pneis", " penis ")
data = data.str.replace("pennnis", " penis ")
data = data.str.replace("pov.", " point of view ")
data = data.str.replace("vandalising", " vandalism ")
data = data.str.replace("cock", " dick ")
data = data.str.replace("asshole", " asshole ")
data = data.str.replace("youi", " you ")
data = data.str.replace("afd", " all fucking day ")
data = data.str.replace("sockpuppets", " sockpuppetry ")
data = data.str.replace("iiprick", " iprick ")
data = data.str.replace("penisi", " penis ")
data = data.str.replace("warrior", " warrior ")
data = data.str.replace("loil", " laughing out insanely loud ")
data = data.str.replace("vandalise", " vanadalism ")
data = data.str.replace("helli", " helli ")
data = data.str.replace("lunchablesi", " lunchablesi ")
data = data.str.replace("special", " special ")
data = data.str.replace("ilol", " i lol ")
data = data.str.replace(r'\b[uU]\b', 'you', regex=True)
data = data.str.replace(r"what's", "what is ", regex=True)
data = data.str.replace(r"\'s", " is ", regex=True)
data = data.str.replace(r"\'ve", " have ", regex=True)
data = data.str.replace(r"can't", "cannot ", regex=True)
data = data.str.replace(r"n't", " not ", regex=True)
data = data.str.replace(r"i'm", "i am ", regex=True)
data = data.str.replace(r"\'re", " are ", regex=True)
data = data.str.replace(r"\'d", " would ", regex=True)
data = data.str.replace(r"\'ll", " will ", regex=True)
data = data.str.replace(r"\'scuse", " excuse ", regex=True)
data = data.str.replace(r'\s+', ' ', regex=True)
data = data.str.replace(r'(.)\1+', r'\1\1', regex=True)
data = data.str.replace(r"[:|♣|'|§|♠|*|/|?|=|%|&|-|#|•|~|^|>|<|►|_]", '', regex=True)
data = data.str.replace(r"what's", "what is ", regex=True)
data = data.str.replace(r"\'ve", " have ", regex=True)
data = data.str.replace(r"can't", "cannot ", regex=True)
data = data.str.replace(r"n't", " not ", regex=True)
data = data.str.replace(r"i'm", "i am ", regex=True)
data = data.str.replace(r"\'re", " are ", regex=True)
data = data.str.replace(r"\'d", " would ", regex=True)
data = data.str.replace(r"\'ll", " will ", regex=True)
data = data.str.replace(r"\'scuse", " excuse ", regex=True)
data = data.str.replace(r"\'s", " ", regex=True)
# Clean some punctuations
data = data.str.replace('\n', ' \n ')
data = data.str.replace(r'([a-zA-Z]+)([/!?.])([a-zA-Z]+)', r'\1 \2 \3', regex=True)
# Replace repeating characters more than 3 times to length of 3
data = data.str.replace(r'([*!?\'])\1\1{2,}', r'\1\1\1', regex=True)
# Add space around repeating characters
data = data.str.replace(r'([*!?\']+)', r' \1 ', regex=True)
# Patterns with repeating characters
data = data.str.replace(r'([a-zA-Z])\1{2,}\b', r'\1\1', regex=True)
data = data.str.replace(r'([a-zA-Z])\1\1{2,}\B', r'\1\1\1', regex=True)
data = data.str.replace(r'[ ]{2,}', ' ', regex=True).str.strip()
data = data.apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)]))
return data
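# The repeating-character rules above can be spot-checked on a tiny Series (illustrative input, not from the dataset; the explicit `regex=True` matches pandas >= 2.0 behaviour):

```python
import pandas as pd

# Illustrative input, not from the dataset
s = pd.Series(["sooo coool!!!!!!"])
# Collapse runs of 4+ identical punctuation marks down to 3
s = s.str.replace(r"([*!?\'])\1\1{2,}", r"\1\1\1", regex=True)
# Collapse runs of 3+ identical letters at a word end down to 2
s = s.str.replace(r"([a-zA-Z])\1{2,}\b", r"\1\1", regex=True)
print(s[0])  # soo coool!!!
```

# Note that "coool" keeps its three o's: the `\b` rule only fires at word ends, and the `\B` rule in the pipeline needs four or more repeats mid-word.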
# Loading the pre-trained BERT tokenizer
with open('tokenizer.pickle', 'rb') as handle:
tokenizer = pickle.load(handle)
def dummy_fun(doc):
return doc
# Loading the pre-trained TF-IDF vectorizer
with open('tfidf.pickle', 'rb') as handle:
vectorizer = pickle.load(handle)
# Loading the model
regressor = pickle.load(open("/Users/rupesh/Downloads/Toxicity /finalized_model.sav", 'rb'))
# cleaning the text
# x = clean(x,'text')
tokenized_comments = tokenizer(x.tolist())['input_ids']
comments_tr = vectorizer.transform(tokenized_comments)
preds = regressor.predict(comments_tr)
return preds
# +
VALID_DATA_PATH = "/Users/rupesh/Downloads/Toxicity /jigsaw-toxic-severity-rating/validation_data.csv"
TEST_DATA_PATH = "/Users/rupesh/Downloads/Toxicity /jigsaw-toxic-severity-rating/comments_to_score.csv"
cts=pd.read_csv(TEST_DATA_PATH)
df_valid2=pd.read_csv(VALID_DATA_PATH)
# -
final_func_1(df_valid2['more_toxic'])
# predictions for more toxic comments (1)
final_func_1(df_valid2['less_toxic'])
# predictions for less toxic comments (1)
final_func_1(cts['text'])
# predictions for comments to score for kaggle
# In this Kaggle problem the submitted scores are averaged and ranked, so there is no single official metric; many public kernels fall back on simple metrics such as accuracy or MAE.
def final_func_2(X, y):
    preds = final_func_1(X)
    score = mean_absolute_error(y, preds)
return score
final_func_2(df_valid2['more_toxic'] , [1] * len(df_valid2['more_toxic']))
# We labelled more toxic comments as 1 during modelling, so we compare the predictions against an all-ones target
final_func_2(df_valid2['less_toxic'] , [0] * len(df_valid2['less_toxic']))
# We labelled less toxic comments as 0 during modelling, so we compare the predictions against an all-zeros target
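# Since each validation row pairs a less-toxic with a more-toxic comment, a natural sanity check for any scorer is pairwise agreement: how often the more-toxic member receives the higher score. A sketch with hypothetical scores (not actual model output):

```python
import numpy as np

# Hypothetical scores standing in for final_func_1 output on paired comments
more_toxic_preds = np.array([0.9, 0.7, 0.4, 0.8])
less_toxic_preds = np.array([0.2, 0.5, 0.6, 0.1])

# Fraction of pairs ranked correctly
pairwise_agreement = (more_toxic_preds > less_toxic_preds).mean()
print(pairwise_agreement)  # 0.75
```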
| Final_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Interpolation to a structured grid from a cloud of points
#
# First of all, we have a cloud of XYZ points and we want to predict the value at each vertex of a 2D structured grid based on this cloud of points (I will abbreviate the cloud of points as CP and the structured grid as SG).
#
# It's important to note that each point in the CP is described by 3 values:
# * the X value, its position on the X axis;
# * the Y value, its position on the Y axis;
# * the Z value, a property value (like temperature, depth, or pressure).
#
# So, I'm going to use the KNN (K-Nearest Neighbors) strategy to search for the nearest points of each vertex of the SG and predict the property value at that vertex.
#
# However, if the nearest point of a vertex is too far away, I will leave that vertex blank (in this case I will just assign a zero value). I'm going to explain in the next sections how to apply this interpolation.
#
# ------
# Obs: For more information about KNN algorithm please visit:
# * http://scikit-learn.org/stable/modules/neighbors.html
# * https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
# ------
#
#
# ## Imports
# Imports
import math
import seaborn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import normalize
# ## Loading the points and creating the structured grid
# +
# Loading xyz map
correct_map = pd.read_csv('correct_map.xyz', sep=' ', dtype='d', header=None, names=['x', 'y', 'z'])
scattered_data_10000 = pd.read_csv('scattered_data_10000.xyz', sep=' ', dtype='d', header=None, names=['x', 'y', 'z'])
NI = 100
NJ = 100
number_neighbors = 5
# Creating grid points
x_grid = np.linspace(1, 10, NI)
y_grid = np.linspace(1, 10, NJ)
grid_points = pd.DataFrame()
grid_points['x'] = np.tile(x_grid, NJ)
grid_points['y'] = y_grid.repeat(NI)
grid_points['z'] = np.zeros(NI*NJ)
# -
# **Structured Grid:**
# +
import matplotlib as mpl
# %matplotlib inline
mpl.rcParams['savefig.dpi'] = 250
mpl.rcParams['figure.dpi'] = 250
grid_points.plot(kind='scatter', x='x', y='y', marker='.', s=5)
plt.show()
# -
# **Cloud of Points:**
scattered_data_10000.plot(kind='scatter', x='x', y='y', marker='.', s=5)
plt.show()
# ## Applying KNN
# +
# Applying KNN
neighbors = NearestNeighbors(n_neighbors=number_neighbors, algorithm='ball_tree').fit(scattered_data_10000.loc[:, ['x', 'y']])
# Distances and indexes of the nearest points for each vertex of the grid
distances, indexes = neighbors.kneighbors(grid_points.loc[:, ['x', 'y']])
# -
# ## Calculating the radius within which the nearest point has to be located
#
# | Symbol | Meaning |
# |:------: |:-------:|
# |$$ S_x $$| Step in X axis|
# |$$ x_{max} $$| Maximum value of X in grid|
# |$$ x_{min} $$| Minimum value of X in grid|
# |$$ N_i $$ | Number of vertices in X axis|
# |$$ S_y $$| Step in Y axis|
# |$$ y_{max} $$| Maximum value of Y in grid|
# |$$ y_{min} $$| Minimum value of Y in grid|
# |$$ N_j $$ | Number of vertices in Y axis|
# |$$ R $$ | Radius |
# |$$ d_{norm} $$| Distance normalized|
# |$$ n $$ | Number of neighbors|
# |$$ w $$ | One minus the normalized distance; the weight of each distance |
# |$$ P $$ | Result of the scalar points |
# |$$ Z $$ | Represents the property value |
#
# ### Formula to calculate each axis step
#
# $$ S_x = \frac{x_{max} - x_{min}}{N_i} $$
#
# $$ S_y = \frac{y_{max} - y_{min}}{N_j} $$
#
#
# ### Formula for the radius, the maximum distance at which the nearest point may be located
#
# $$ R = 2\sqrt{S_x^2 + S_y^2} $$
#
# In Python:
# +
# Maximum and minimum values in X axis
max_x = grid_points.loc[:, 'x'].max()
min_x = grid_points.loc[:, 'x'].min()
# Maximum and minimum values in Y axis
max_y = grid_points.loc[:, 'y'].max()
min_y = grid_points.loc[:, 'y'].min()
# Step X and Step Y
step_x = (max_x - min_x) / NI
step_y = (max_y - min_y) / NJ
# Radius
radius = 2 * math.sqrt((step_x ** 2) + (step_y ** 2))
# -
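# Plugging in the grid used above gives a feel for the cutoff:

```python
import math

# x and y both run from 1 to 10 with NI = NJ = 100, so the steps are equal
step = (10 - 1) / 100                      # step_x == step_y == 0.09
r = 2 * math.sqrt(step ** 2 + step ** 2)   # the radius formula above
print(round(r, 4))  # 0.2546
```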
# Selecting the vertices whose nearest-point distance is less than or equal to the radius:
less_radius = distances[:, 0] <= radius
distances = distances[less_radius, :]
indexes = indexes[less_radius, :]
# ### It is interesting to normalize the distance and subtract the value from 1. That will be the weight of each distance.
#
# Using the l2 normalization which can be represented by:
# $$ d_{norm} = \frac{d}{\sqrt{\sum_{i=1}^{n} d_i^2}} $$
#
# The weight of each distance will be:
#
# $$ w = 1 - d_{norm} $$
#
# In python:
# Using the scikit-learn library
weight_norm = 1 - normalize(distances, axis=1)
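# A tiny hand-checkable instance of the same normalization (illustrative distances; a 3-4-5 triangle makes the l2 norm come out to exactly 5):

```python
import numpy as np

d = np.array([[3.0, 4.0]])                                  # distances to two neighbors
d_norm = d / np.sqrt((d ** 2).sum(axis=1, keepdims=True))   # l2 normalization
w = 1 - d_norm                                              # weight of each distance
print(d_norm.round(2))  # [[0.6 0.8]]
```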
# ### Formula to calculate the value for each vertex of the structured grid
#
# $$ P = \frac{\sum_{i=1}^{n} (w_{i}\times Z_i)}{\sum_{j=1}^{n} w_{j}} $$
#
# In Python:
prod = weight_norm * scattered_data_10000.values[indexes, 2]
scalars = np.full(NI * NJ, 0.0)
grid_points.loc[less_radius, 'z'] = prod.sum(axis=1) / (weight_norm.sum(axis=1))
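# A worked instance of the formula with hypothetical values: for one vertex with two neighbors, w = (0.7, 0.3) and Z = (10, 20) give P = (0.7*10 + 0.3*20) / (0.7 + 0.3) = 13.

```python
import numpy as np

w = np.array([[0.7, 0.3]])    # weights for the two neighbors of one vertex
z = np.array([[10.0, 20.0]])  # property values of those neighbors
p = (w * z).sum(axis=1) / w.sum(axis=1)
print(p)  # [13.]
```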
# ## Example - Map desired
plt.pcolor(correct_map.values[:, 0].reshape(NI, NJ), correct_map.values[:, 1].reshape(NI, NJ), correct_map.values[:, 2].reshape(NI, NJ), cmap=cm.jet)
# ### Map reconstructed using the algorithm described
plt.pcolor(grid_points.values[:, 0].reshape(NI, NJ), grid_points.values[:, 1].reshape(NI, NJ), grid_points.values[:, 2].reshape(NI, NJ), cmap=cm.jet)
# ### Error
#
# I'm going to calculate the error of the reconstructed map.
dif_map = correct_map.z - grid_points.z
dif_map.describe()
error = (grid_points.z / correct_map.z) - 1
plt.hist(error)
error[error < 0] *= -1
error.describe()
# Analyzing the histogram, the mean and the standard deviation, we can see that the mean error is around 0.33% and most of the values lie within about 0.46%.
| interpolation_to_a_structured_grid_from_a_cloud_of_points.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# WORKING WELL
df_geo = data[['id','lat','long']].copy()
df_geo['road'] = 'NA'
df_geo['house_number'] = 'NA'
df_geo['neighbourhood'] = 'NA'
df_geo['city'] = 'NA'
df_geo['country'] = 'NA'
df_geo['state'] = 'NA'
df_geo['osm'] = 'NA'
s = 0
e = 999
geolocator = Nominatim( user_agent='geoapiExercises')
print('START OF COLLECTION')
while e <= len(df_geo):
for i in range(s,e):
# geolocator = Nominatim( user_agent='geoapiExercises')
query = str(df_geo.loc[i, 'lat']) + ',' + str(df_geo.loc[i,'long'])
#print(i)
response = geolocator.reverse(query) #API Request
#Populate data
# if house_number exists in the address dict, keep it; otherwise leave NA
if 'house_number' in response.raw['address']:
df_geo.loc[i, 'house_number'] = response.raw['address']['house_number']
if 'road' in response.raw['address']:
df_geo.loc[i, 'road'] = response.raw['address']['road']
if 'neighbourhood' in response.raw['address']:
df_geo.loc[i, 'neighbourhood'] = response.raw['address']['neighbourhood']
if 'city' in response.raw['address']:
df_geo.loc[i, 'city'] = response.raw['address']['city']
if 'country' in response.raw['address']:
df_geo.loc[i, 'country'] = response.raw['address']['country']
if 'state' in response.raw['address']:
df_geo.loc[i, 'state'] = response.raw['address']['state']
if 'osm_type' in response.raw:
df_geo.loc[i, 'osm'] = response.raw['osm_type']
print('End of batch {} | {}'.format(s,e))
report = df_geo
report.to_csv('../data/geolocator.csv', index=False, header=False)
# Advance to the next batch; stop once every row has been processed
s = e
if s >= len(df_geo):
    break
e = min(e + 1000, len(df_geo))
print('END OF COLLECTION')
print('new dimensions:', df_geo.shape)
# -
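# Nominatim's usage policy caps the request rate (about one request per second), so the collection loop above should be throttled. geopy ships `geopy.extra.rate_limiter.RateLimiter` for this; the snippet below is a minimal stdlib stand-in, using a dummy function in place of `geolocator.reverse` so it runs without network access:

```python
import time

def rate_limited(func, min_delay_seconds):
    """Minimal stand-in for geopy's RateLimiter: spaces out consecutive calls."""
    last_call = [float('-inf')]
    def wrapper(*args, **kwargs):
        wait = min_delay_seconds - (time.monotonic() - last_call[0])
        if wait > 0:
            time.sleep(wait)
        last_call[0] = time.monotonic()
        return func(*args, **kwargs)
    return wrapper

# Dummy stand-in for geolocator.reverse; real usage would wrap that instead
reverse = rate_limited(lambda q: 'address for ' + q, min_delay_seconds=0.01)
print(reverse('-23.55,-46.63'))  # address for -23.55,-46.63
```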
# +
geo = pd.read_csv('../data/geolocator.csv')
if len(data) != len(geo) or len(geo) < 1:
df_geo = data[['id','lat','long']].copy()
# New columns
df_geo['road'] = 'NA'
df_geo['house_number'] = 'NA'
df_geo['neighbourhood'] = 'NA'
df_geo['city'] = 'NA'
df_geo['country'] = 'NA'
df_geo['state'] = 'NA'
df_geo['osm'] = 'NA'
s = 1
e = 10
# geolocator = Nominatim( user_agent='geoapiExercises')
while e <= len(df_geo):
for i in range(s,e):
geolocator = Nominatim( user_agent='geoapiExercises')
query = str(df_geo.loc[i, 'lat']) + ',' + str(df_geo.loc[i,'long'])
response = geolocator.reverse(query) #API Request
#Populate data
# if house_number exists in the address dict, keep it; otherwise leave NA
if 'house_number' in response.raw['address']:
df_geo.loc[i, 'house_number'] = response.raw['address']['house_number']
if 'road' in response.raw['address']:
df_geo.loc[i, 'road'] = response.raw['address']['road']
if 'neighbourhood' in response.raw['address']:
df_geo.loc[i, 'neighbourhood'] = response.raw['address']['neighbourhood']
if 'city' in response.raw['address']:
df_geo.loc[i, 'city'] = response.raw['address']['city']
if 'country' in response.raw['address']:
df_geo.loc[i, 'country'] = response.raw['address']['country']
if 'state' in response.raw['address']:
df_geo.loc[i, 'state'] = response.raw['address']['state']
if 'osm_type' in response.raw:
df_geo.loc[i, 'osm'] = response.raw['osm_type']
print('End of batch {} | {}'.format(s,e))
report = df_geo
report.to_csv('../data/geolocator.csv', index=False, header=False)
# Advance to the next batch; stop once every row has been processed
s = e
if s >= len(df_geo):
    break
e = min(e + 500, len(df_geo))
print('DONE')
print('new dimensions:', df_geo.shape)
else:
print('No change to the dataset')
# -
# df_geo = geo
cols_name = ['id','lat','long','road','house_number','neighbourhood','city','country','state','osm']
df_geo = pd.read_csv('../data/geolocator.csv',names=cols_name)
df_geo.to_csv('../data/dataset43454.csv', index=False)
df_geo.head()
| notebooks/old/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import collections
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
if os.getcwd().endswith('notebook'):
os.chdir('..')
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.Data import CodonTable
# -
default_random_seed = 444
sns.set(palette='colorblind', font_scale=1.3)
metadata_path = os.path.join(os.getcwd(), 'data/gtdb/bac120_metadata.tsv')
metadata_df = pd.read_csv(metadata_path, delimiter='\t')
species_with_ogt_path = os.path.join(os.getcwd(), 'data/bac_dive/species_new.csv')
species_with_ogt_df = pd.read_csv(species_with_ogt_path)
species_with_ogt_df.shape
species_with_ogt_df.head()
marker_info_path = os.path.join(os.getcwd(), 'data/gtdb/bac120_msa_marker_info.tsv')
marker_info_df = pd.read_csv(marker_info_path, delimiter='\t')
marker_info_df.head()
marker_info_df.shape
metadata_df.shape
metadata_df.columns.tolist()
len(metadata_df), len(metadata_df['accession'].unique())
for c in metadata_df.columns:
print(c, '|', metadata_df.iloc[0][c])
# +
taxonomy = metadata_df['ssu_silva_taxonomy'].head().values
taxonomy[0].split(';')[-1].strip()
metadata_df['specie_name'] = metadata_df['ssu_silva_taxonomy'].apply(lambda v: v.split(';')[-1].strip())
# -
metadata_df['phylum'] = metadata_df['gtdb_taxonomy'].apply(lambda v: v.split(';')[1].strip()[3:])
metadata_df['phylum'].head()
metadata_df['specie_name'].head()
len(metadata_df['specie_name'].unique())
len(metadata_df['gtdb_genome_representative'].unique())
metadata_df['gtdb_genome_representative'].head()
metadata_df['domain'] = 'Bacteria'
# +
gtdb_simple_df = metadata_df[
['accession', 'specie_name', 'domain', 'phylum', 'gtdb_genome_representative']
].reset_index(drop=True)
merged_df = pd.merge(
left=gtdb_simple_df,
right=species_with_ogt_df[['specie_name', 'temperature', 'temperature_range']],
on='specie_name'
)
merged_df = merged_df.drop_duplicates(subset=['specie_name', 'gtdb_genome_representative'])
merged_df.shape
# -
len(merged_df['specie_name'].unique())
merged_df.head()
temperatures = merged_df['temperature'].values
f, ax = plt.subplots(1, 1, figsize=(12, 6))
_, bins, _ = ax.hist(temperatures, bins=10)
bins
def balance_dataset(data_df, bins, max_items=80, random_seed=default_random_seed):
rs = np.random.RandomState(random_seed)
z_bins = list(zip(bins, bins[1:]))
selected_indices = []
for i, (low, high) in enumerate(z_bins):
if i + 1 == len(z_bins):
high += 1
indices = data_df[
(data_df['temperature'] >= low) &
(data_df['temperature'] < high)
].index
selected_indices += rs.choice(indices, size=min(max_items, len(indices)), replace=False).tolist()
return data_df.loc[selected_indices].reset_index(drop=True)
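# The per-bin capping inside `balance_dataset` can be illustrated on a toy frame (hypothetical temperatures, cap of 2 rows per bin):

```python
import numpy as np
import pandas as pd

rs = np.random.RandomState(444)
toy = pd.DataFrame({'temperature': [10, 11, 12, 30, 31, 32, 33]})
bins = [10, 20, 40]
selected = []
for low, high in zip(bins, bins[1:]):
    idx = toy[(toy['temperature'] >= low) & (toy['temperature'] < high)].index
    selected += rs.choice(idx, size=min(2, len(idx)), replace=False).tolist()
print(len(selected))  # 4 rows survive: 2 from each bin
```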
balanced_df = balance_dataset(merged_df, bins)
balanced_df.shape
balanced_df.head()
f, ax = plt.subplots(1, 1, figsize=(12, 6))
_, bins, _ = ax.hist(balanced_df['temperature'].values, bins=10)
ax.set_title('Distribution of temperature in the balanced dataset');
ax.set_xlabel('Optimal Growth Temperature (°C)');
len(balanced_df) * 120
balanced_df['temperature'].mean()
# ## Load sequences
nucleotides_fasta_folder = os.path.join(os.getcwd(), 'data/gtdb/bac120_89_individual_genes/fna')
amino_acids_fasta_folder = os.path.join(os.getcwd(), 'data/gtdb/bac120_89_individual_genes/faa')
def load_sequences(fasta_path):
records = {}
with open(fasta_path) as f:
for record in SeqIO.parse(f, 'fasta'):
records[record.id] = str(record.seq)
return records
def load_sequences_per_marker_id(marker_info_df, folder_path, extension):
sequences_per_marker_id = {}
for i, index in enumerate(marker_info_df.index):
row = marker_info_df.loc[index]
gene_marker_id_parts = row['Marker Id'].split('_')
assert len(gene_marker_id_parts) == 2
gene_marker_id = gene_marker_id_parts[1]
fasta_path = os.path.join(folder_path, f'{gene_marker_id}{extension}')
sequences_per_marker_id[gene_marker_id] = load_sequences(fasta_path)
return sequences_per_marker_id
nucleotide_sequences = load_sequences_per_marker_id(marker_info_df, nucleotides_fasta_folder, '.fna')
amino_acid_sequences = load_sequences_per_marker_id(marker_info_df, amino_acids_fasta_folder, '.faa')
def merge_data_and_sequences(
balanced_df,
marker_info_df,
nucleotide_sequences,
amino_acid_sequences,
):
output = []
output_columns = [
'accession',
'specie_name',
'domain',
'phylum',
'gtdb_genome_representative',
'temperature',
'temperature_range',
'gene_marker_id',
'gene_name',
'nucleotide_sequence',
'amino_acid_sequence',
]
for tpl in balanced_df.itertuples():
genome_key = tpl.gtdb_genome_representative[3:]
for i, index in enumerate(marker_info_df.index):
row = marker_info_df.loc[index]
gene_marker_id = row['Marker Id']
gene_name = row['Name']
gene_marker_id_key = gene_marker_id.split('_')[1]
nucleotide_sequence = nucleotide_sequences[gene_marker_id_key].get(genome_key)
amino_acid_sequence = amino_acid_sequences[gene_marker_id_key].get(genome_key)
if nucleotide_sequence is None or amino_acid_sequence is None:
continue
if 'X' in nucleotide_sequence or 'X' in amino_acid_sequence:
continue
output.append([
tpl.accession,
tpl.specie_name,
tpl.domain,
tpl.phylum,
tpl.gtdb_genome_representative,
tpl.temperature,
tpl.temperature_range,
gene_marker_id,
gene_name,
nucleotide_sequence,
amino_acid_sequence,
])
return pd.DataFrame(output, columns=output_columns)
df_with_sequences = merge_data_and_sequences(
balanced_df,
marker_info_df,
nucleotide_sequences,
amino_acid_sequences,
)
df_with_sequences.shape
full_dataset_export_path = os.path.join(os.getcwd(), 'data/gtdb/dataset_full.csv')
# df_with_sequences.to_csv(full_dataset_export_path, index=False)
# ## Split train / test sets
def split_train_test_set_per_specie(df, test_ratio=0.2, random_seed=default_random_seed):
rs = np.random.RandomState(random_seed)
representative_genomes = sorted(df['gtdb_genome_representative'].unique().tolist())
n_seq = len(representative_genomes)
test_species = rs.choice(representative_genomes, size=int(test_ratio * n_seq), replace=False)
test_species_set = set(test_species.tolist())
train_species = np.array([s for s in representative_genomes if s not in test_species_set])
return (
df[df['gtdb_genome_representative'].isin(train_species)].reset_index(drop=True),
df[df['gtdb_genome_representative'].isin(test_species)].reset_index(drop=True),
)
train_df, test_df = split_train_test_set_per_specie(df_with_sequences)
len(train_df), len(test_df)
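# The point of splitting per genome rather than per row (toy illustration with hypothetical genomes): every row of a genome lands on the same side, so no species leaks from train into test.

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'genome': list('aabbccddee'), 'value': range(10)})
rs = np.random.RandomState(444)
genomes = sorted(toy['genome'].unique())
test_genomes = set(rs.choice(genomes, size=int(0.2 * len(genomes)), replace=False))
toy_test = toy[toy['genome'].isin(test_genomes)]
toy_train = toy[~toy['genome'].isin(test_genomes)]
assert not set(toy_train['genome']) & set(toy_test['genome'])
print(len(toy_train), len(toy_test))  # 8 2
```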
# +
train_dataset_export_path = os.path.join(os.getcwd(), 'data/gtdb/dataset_full_train.csv')
test_dataset_export_path = os.path.join(os.getcwd(), 'data/gtdb/dataset_full_test.csv')
# train_df.to_csv(train_dataset_export_path, index=False)
# test_df.to_csv(test_dataset_export_path, index=False)
# +
nucleotide_sequence_lengths = [len(s) for s in df_with_sequences['nucleotide_sequence']]
min_ = np.min(nucleotide_sequence_lengths)
max_ = np.max(nucleotide_sequence_lengths)
mean_ = int(np.mean(nucleotide_sequence_lengths))
std_ = int(np.std(nucleotide_sequence_lengths))
print(min_, max_, mean_, std_)
# -
_, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.hist(nucleotide_sequence_lengths, bins=20);
ax.set_title('Histogram of nucleotide sequence lengths');
| notebook/Genome Taxonomy Database.ipynb |