# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Instructions: click restart and run all above. Figures will show once the entire notebook has finished running
import sys
sys.path.append('..')
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
# %matplotlib notebook
# # Quantifying membrane and cytoplasmic protein from AF-corrected images
# After autofluorescence correction, any remaining signal can be attributed to fluorescent protein. At the cell perimeter, this protein will be a combination of cytoplasmic protein and membrane protein, and this distribution will vary around the circumference of the cell in the case of a polarised protein. This is observable in a straightened image of the cortex (see [here](./appendix_rois_and_straightening.ipynb) for discussion of straightening algorithm), in which each position in the x direction represents a cross-section across the cortex at that position:
# +
from membranequant.funcs import load_image, straighten, rolling_ave_2d
# Load data
path = '../test_datasets/dataset2_par2_neon/01/'
img = load_image(path + '/af_corrected.tif')
roi = np.loadtxt(path + '/ROI.txt')
# Straighten, apply rolling average
straight = straighten(img, roi=roi, thickness=50)
straight_filtered = rolling_ave_2d(straight, window=20, periodic=True)
def fig1():
fig = plt.figure()
gs = fig.add_gridspec(3, 3)
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1:, :])
ymin, ymax = np.min(straight), np.max(straight)
@widgets.interact(position=(0, straight.shape[1]-1, 1))
def update(position=10):
ax1.clear()
ax1.imshow(straight)
ax1.axvline(position, c='r')
ax1.set_yticks([])
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.xaxis.set_label_position('top')
ax2.clear()
ax2.plot(straight[:, position], label='Profile')
ax2.plot(straight_filtered[:, position], label='Local average')
ax2.set_ylabel('Intensity')
ax2.set_xlabel('Position (y)')
ax2.legend(frameon=False, loc='upper left', fontsize='small')
ax2.set_ylim(ymin, ymax)
fig.set_size_inches(5, 5)
fig.tight_layout()
fig1()
# -
# At each position in the straightened image, we get a local profile (or local average profile), the shape of which will
# depend on the concentrations of membrane and cytoplasmic tagged protein in the local vicinity, as well as any processes in the optical path that obscure this signal (diffraction and scattering of light within the sample and microscope). In order to accurately quantify cytoplasmic and membrane protein concentrations, we can build a model that characterises these processes, and describes the expected shape of the cross-cortex profile as a function of local cytoplasmic and membrane concentrations.
# ## A simple 2D model of cytoplasmic and membrane protein
#
# This approach was employed in Gross et al., 2018, using a simple model that treats straightened images as pure optical sections (i.e. no contribution from out-of-focus planes), aiming to account for scattering of light within the plane of the image. Considering a profile across the cortex, membrane protein can be modelled as a single peak, and cytoplasmic protein as a step, both convolved by a Gaussian point spread function. The overall profile can therefore be described as the sum of a Gaussian (membrane component) and an error function (cytoplasmic component), which can be fit to cross-cortex profiles to quantify local cytoplasmic and membrane concentrations. The two components share a single parameter, sigma, which represents the degree of scattering. This parameter can be estimated independently (e.g. by imaging beads), but as the degree of scattering is environment-specific (e.g. depth of the section, optical properties of the sample), it is best estimated by fitting to the data.
#
# This model is demonstrated in the interactive plot below, which plots the expected shape of a cross-sectional profile as a function of local cytoplasmic and membrane concentrations (cyt and mem), as well as the light-scattering factor sigma:
# +
from scipy.special import erf
cyt_amplitude = 1
mem_amplitude = 2
sigma = 2
def model1_fig():
fig, (ax1, ax2, ax3) = plt.subplots(1,3)
@widgets.interact(sigma=(1, 10, 1), cyt=(0, 1, 0.1), mem=(0, 1, 0.1))
def update(sigma=3.0, cyt=0.5, mem=0.5):
thickness = 50
npoints = 1000
cyt_profile = (1 + erf((np.arange(npoints) - npoints / 2) / (sigma * npoints/thickness))) / 2
mem_profile = np.exp(-((np.arange(npoints) - npoints / 2) ** 2) / (2 * (sigma * npoints/thickness) ** 2))
# Cyt profile
ax1.clear()
ax1.set_title('Cytoplasmic reference profile')
ax1.plot(np.linspace(0, thickness, npoints), cyt_profile)
ax1.axvline(thickness / 2, c='k', linestyle='--')
ax1.set_xlabel('Position (y)')
ax1.set_ylabel('Normalised intensity')
ax1.set_ylim(-0.1, 1.1)
# Mem profile
ax2.clear()
ax2.set_title('Membrane reference profile')
ax2.plot(np.linspace(0, thickness, npoints), mem_profile)
ax2.axvline(thickness / 2, c='k', linestyle='--')
ax2.set_xlabel('Position (y)')
ax2.set_ylabel('Normalised intensity')
ax2.set_ylim(-0.1, 1.1)
# Total profile
ax3.clear()
ax3.set_title('Total signal')
ax3.plot(np.linspace(0, thickness, npoints), cyt * cyt_profile + mem * mem_profile)
ax3.axvline(thickness / 2, c='k', linestyle='--')
ax3.set_xlabel('Position (y)')
ax3.set_ylabel('Signal intensity')
ax3.set_ylim(-0.1, 2)
fig.set_size_inches(9,3)
fig.tight_layout()
model1_fig()
# -
# We can see how this model can be used to quantify cytoplasmic and membrane concentrations [here.](./3_simple_model_fitting.ipynb)
#
# (Note: cyt and mem are not in common concentration units, but rather in units of their own respective reference profiles. However, these values will be proportional to the true molar concentration, for a given sigma. Later, I will discuss a method that can be used to convert these into common units.)
# ## Accounting for out of focus light
#
# A major limitation of this model is that it doesn’t account for protein above and below the focal plane, which may contribute to the midplane signal that we see. This can be significant, depending on the microscope, and must be accounted for in order to accurately quantify concentrations.
#
# ### Membrane protein
#
# As an example, I demonstrate this phenomenon below, using a highly simplified model of a uniform membrane protein with circular geometry in the y-z plane (analogous to protein on a curved membrane) and a Gaussian point spread function.
# +
from scipy.integrate import quad
ys = np.linspace(-25, 25, 1000)[:, np.newaxis]
zs = np.linspace(-100, 100, 1000)[np.newaxis, :]
def gaus2d(y, z, sigmay, sigmaz):
return np.exp(- (((y ** 2) / (2 * (sigmay ** 2))) + ((z ** 2) / (2 * (sigmaz ** 2)))))
def boundary(theta, r):
return r - r * np.cos(theta), r * np.sin(theta)
def mem_profile(sigmay, sigmaz, r, width, n):
res = np.zeros([n])
yvals = np.linspace(-width / 2, width / 2, n)
for i, y in enumerate(yvals):
res[i] = quad(lambda theta: gaus2d(y - r + r * np.cos(theta), r * np.sin(theta), sigmay, sigmaz), -np.pi, np.pi)[0]
return yvals, res
def model2_fig():
fig, (ax1, ax2, ax3) = plt.subplots(1,3)
@widgets.interact(sigmay=(1, 10, 1), sigmaz=(1, 100, 1), r=(10, 300, 10))
def update(sigmay=2.0, sigmaz=20.0, r=100):
# Ground truth
ax1.clear()
ax1.set_title('Ground truth')
ax1.plot(*boundary(np.linspace(-np.pi, np.pi, 1000), r=r), c='w')
ax1.axhline(0, c='r', linestyle='--')
ax1.set_xlim(-25, 25)
ax1.set_ylim(-100, 100)
ax1.set_aspect('equal')
ax1.set_facecolor('k')
ax1.set_xlabel('y')
ax1.set_ylabel('z')
# PSF
ax2.clear()
ax2.set_title('PSF')
ax2.imshow(gaus2d(ys, zs, sigmay=sigmay, sigmaz=sigmaz).T, origin='lower', extent=[-25, 25, -100, 100], aspect='equal', cmap='gray')
ax2.set_xlabel('y')
ax2.set_ylabel('z')
# Midplane image
ax3.clear()
ax3.set_title('Membrane reference profile')
profile_x, profile_y = mem_profile(sigmay, sigmaz, r=r, width=50, n=100)
ax3.plot(profile_x, profile_y / max(profile_y))
ax3.set_xlabel('Position (y)')
ax3.set_ylabel('Normalised Intensity')
fig.set_size_inches(9,3)
fig.tight_layout()
model2_fig()
# -
# The PSF has two parameters: sigma in the y direction (this would equal sigma in the x direction, which we don't need to account for, as adjacent positions in the x direction are usually very similar), and sigma in the z direction (which is usually significantly larger).
#
# As we can see, the shape of the profile depends not only on diffraction within the plane (sigma_y), but also on diffraction in the z direction (sigma_z) and geometry of the object above and below the plane (i.e. the radius of curvature in this model). If sigma_z is large and/or r is small, the membrane reference profile becomes significantly asymmetric, with a higher signal intensity within the embryo than outside. In some cases this can resemble a profile similar to what we would obtain in the 2D model with a mix of membrane and cytoplasmic protein, even though all of the protein in our model is in fact membrane bound.
# ### Cytoplasmic protein
#
# We see a similar phenomenon for cytoplasmic protein. (Very laggy plot due to numerical solving, be patient!)
# +
from scipy.integrate import quad
ys = np.linspace(-25, 25, 1000)[:, np.newaxis]
zs = np.linspace(-100, 100, 1000)[np.newaxis, :]
def gaus2d(y, z, sigmay, sigmaz):
return np.exp(- (((y ** 2) / (2 * (sigmay ** 2))) + ((z ** 2) / (2 * (sigmaz ** 2)))))
def cyt_profile(sigmay, sigmaz, r, width, n):
res = np.zeros([n])
yvals = np.linspace(-width / 2, width / 2, n)
for i, y in enumerate(yvals):
for j in np.linspace(0, r, n):
res[i] += j * quad(lambda theta: gaus2d(y - 2 * r + j + j * np.cos(theta), j * np.sin(theta), sigmay, sigmaz), -np.pi, np.pi)[0]
return yvals, res
def model2_fig():
fig, (ax1, ax2, ax3) = plt.subplots(1,3)
@widgets.interact(sigmay=(2, 10, 1), sigmaz=(1, 100, 1), r=(10, 200, 10))
def update(sigmay=2.0, sigmaz=20.0, r=100):
# Ground truth
ax1.clear()
ax1.set_title('Ground truth')
circle = plt.Circle((r, 0), r, color='w')
ax1.add_patch(circle)
ax1.axhline(0, c='r', linestyle='--')
ax1.set_xlim(-25, 25)
ax1.set_ylim(-100, 100)
ax1.set_aspect('equal')
ax1.set_facecolor('k')
ax1.set_xlabel('y')
ax1.set_ylabel('z')
# PSF
ax2.clear()
ax2.set_title('PSF')
ax2.imshow(gaus2d(ys, zs, sigmay=sigmay, sigmaz=sigmaz).T, origin='lower', extent=[-25, 25, -100, 100], aspect='equal', cmap='gray')
ax2.set_xlabel('y')
ax2.set_ylabel('z')
# Midplane image
ax3.clear()
ax3.set_title('Cytoplasmic reference profile')
profile_x, profile_y = cyt_profile(sigmay, sigmaz, r=r, width=50, n=100)
ax3.plot(profile_x, profile_y / max(profile_y))
ax3.set_xlabel('Position (y)')
ax3.set_ylabel('Normalised Intensity')
fig.set_size_inches(9,3)
fig.tight_layout()
model2_fig()
# -
# In some cases, the intensity of this profile continues to increase beyond the cell boundary as we move further into the cell, due to the increase in cell thickness. As before, this depends on sigmaz and r.
# ## Discussion
# If accuracy is not essential, the simple 2D model is easy and intuitive to use, and makes few assumptions. However, as the simulations above demonstrate, we may need to account for 3D effects in order to accurately quantify membrane and cytoplasmic concentrations (depending on the geometry and the 3D PSF in our set-up).
#
# The models above make strong assumptions about geometry and the PSF, and are only intended as toy models to demonstrate the effects that out-of-focus light can have on our observations of cytoplasmic and membrane protein. In reality, the geometry of the sample above and below the plane is unpredictable, and may depend on the method of sample preparation (agarose vs beads, size of beads). True PSFs in confocal systems also tend to deviate significantly from Gaussian in 3D, and will vary depending on the microscope setup, the optical properties of the sample, and the imaging depth. Tools do exist that estimate the point spread function from microscope and sample parameters, but they are not accurate enough to be relied upon for quantitative models.
#
# For these reasons, we cannot build a comprehensive model that accurately describes geometry and z-direction scattering from first principles. However, regardless of the underlying processes, we can at least assume that the resulting reference profiles for membrane and cytoplasmic protein should have a constant normalised shape, for the following reasons:
# - local geometry is roughly uniform (i.e. uniform membrane curvature). There may be small variations between the midcell and pole, but these are likely minor (I look into this [here](./control_profile_spatial_variation.ipynb))
# - at a local level, cytoplasmic and membrane concentrations can usually be considered uniform in the x, y and z directions (except at polarity domain boundaries, where this assumption will break down somewhat)
# - the PSF can be assumed constant throughout the image in the x-y plane, and is likely small enough that we only need to consider local geometry and concentrations (i.e. a profile at the anterior pole will not be influenced by protein at the posterior pole)
# - we will be imaging all embryos with the same microscope set-up
# - we will take all midplane images at roughly the same depth
#
# Thus, regardless of the underlying mechanics, in order to quantify local cytoplasmic and membrane concentrations based on cross-cortex profiles, we just need to know the shape of the two reference profiles for membrane and cytoplasmic protein specific to our set-up, and then fit measured profiles to a two-component model based on these profiles (as above for the 2D model).
#
# The derivation of these two reference profiles is the subject of discussion in [this notebook](./4_custom_model.ipynb)
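#
# The final fitting step described above can be sketched with linear least squares. This is a minimal illustration using hypothetical reference profiles (a logistic step standing in for the cytoplasmic component and a Gaussian for the membrane peak), not the derived profiles from the linked notebook:

```python
import numpy as np

def fit_two_component(profile, cyt_ref, mem_ref):
    # Decompose a cross-cortex profile into cytoplasmic and membrane
    # amplitudes by linear least squares against two reference profiles
    A = np.column_stack([cyt_ref, mem_ref])
    coeffs, *_ = np.linalg.lstsq(A, profile, rcond=None)
    return coeffs

# Hypothetical references and a synthetic profile with known amplitudes
y = np.linspace(-25, 25, 100)
sigma = 3.0
cyt_ref = 1 / (1 + np.exp(-y / sigma))    # step-like cytoplasmic reference
mem_ref = np.exp(-y**2 / (2 * sigma**2))  # peak-like membrane reference
profile = 0.4 * cyt_ref + 1.5 * mem_ref
cyt_amp, mem_amp = fit_two_component(profile, cyt_ref, mem_ref)
```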
# Source: notebooks/2_models_of_membrane_and_cytoplasmic_protein.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 24 16:05:13 2017
@author: zlchen
"""
# Import the required libraries
import pandas
import numpy
import math
import csv
from pandas import read_csv
#-------------------------------- Data preprocessing ----------------------------------#
# Read in the training dataset
Data=pandas.read_csv("MT_Train.csv")
# Convert text-valued variables to numeric codes
for name in ["job","marital","education","default","housing","loan","contact",
             "month","day_of_week","poutcome"]:
    col=pandas.Categorical(Data[name])
    Data[name]=col.codes # encode string columns as integer codes
Data.head() # display the dataset after encoding
# Count the "yes" and "no" values in the outcome column "y"
print(Data['y'].value_counts())
# Set the random seed
numpy.random.seed(190)
# Shuffle the data
Data=Data.reindex(numpy.random.permutation(Data.index))
# Use 80% of the original dataset as the training set
train_max_row=math.floor(Data.shape[0]*0.8)
train=Data.iloc[:int(train_max_row)]
test=Data.iloc[int(train_max_row):]
#------------------------- Random forest model training -------------------------#
from sklearn.ensemble import RandomForestClassifier
# Select the predictor (x) columns
columns=["age","job","marital","education","default","housing","loan","contact",
         "month","day_of_week","duration","campaign","pdays","previous","poutcome",
         "emp.var.rate","cons.price.idx","cons.conf.idx","euribor3m","nr.employed"]
# Build a forest of 190 trees
clf=RandomForestClassifier(n_estimators=190,random_state=1,
                           min_samples_leaf=2)
# Fit the model to the training data
clf.fit(train[columns],train["y"])
# Predict on the held-out test set
predictions=clf.predict(test[columns])
# Print the test-set predictions
print(predictions)
# Reshape the true test-set labels into a column vector
test_y=test["y"].values.reshape(len(test["y"]),1)
from pandas import DataFrame
data1=DataFrame(test_y)
#-------------------------- Compute evaluation metrics ---------------------------------#
TP=0          # count of true positives
T_predict=0   # count of predicted positives
T_real=0      # count of actual positives
correct=0     # count of correctly classified samples
for i in range(len(data1)):  # start at 0 so the first sample is not skipped
    if(predictions[i]=='yes'):
        T_predict+=1  # predicted positive
    if(data1.iloc[i,0]=='yes'):
        T_real+=1  # actual positive
    if((data1.iloc[i,0]=='yes') and (predictions[i]=='yes')):
        TP+=1  # true positive
    if(data1.iloc[i,0]==predictions[i]):
        correct+=1  # correctly classified
# Compute and print precision P, recall R, F1 and accuracy
P=float(TP)/T_predict
R=float(TP)/T_real
F1=2*P*R/(P+R)
accuracy=float(correct)/len(data1)
print(P)
print(R)
print(F1)
print(accuracy)
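# As a sanity check, metrics of the kind computed by the loop above can be wrapped in a small self-contained function and verified on a toy example (hypothetical labels, for illustration only):

```python
def binary_metrics(y_true, y_pred, positive='yes'):
    # Precision, recall, F1 and accuracy for binary label lists
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    pred_pos = sum(1 for p in y_pred if p == positive)
    real_pos = sum(1 for t in y_true if t == positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / real_pos if real_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1, correct / len(y_true)

p, r, f1, acc = binary_metrics(['yes', 'no', 'yes', 'no'],
                               ['yes', 'yes', 'no', 'no'])
```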
#---------------------------- Predict on the unlabelled test set ----------------------------#
# Read in the test set
data_test=pandas.read_csv('MT_Test.csv')
# Convert text-valued variables to numeric codes
for name in ["job","marital","education","default","housing","loan","contact",
             "month","day_of_week","poutcome"]:
    col=pandas.Categorical(data_test[name])
    data_test[name]=col.codes # encode string columns as integer codes
data_test.head() # display the test set after encoding
columns=["age","job","marital","education","default","housing","loan","contact",
         "month","day_of_week","duration","campaign","pdays","previous","poutcome",
         "emp.var.rate","cons.price.idx","cons.conf.idx","euribor3m","nr.employed"]
result=clf.predict(data_test[columns]) # predict on the test set
print(result)
result1=result.reshape(len(result),1) # reshape predictions into a column vector
print(result1)
#------------------------------- Write predictions to a CSV file ------------------------------#
# Arrange the results into the required format
from pandas import DataFrame
data2=DataFrame(result1)
data2.to_csv('result.csv')
df = read_csv('result.csv')
df.columns = ['SampleId', 'y'] # set the column names
df.to_csv('.result.csv')
with open(".result.csv","r") as source:
    rdr= csv.reader( source )
    with open("FinalResult.csv","w") as result:
        wtr= csv.writer( result )
        for r in rdr:
            del r[0]
            wtr.writerow(r)
# Read the final CSV of predictions back in
FinalData=pandas.read_csv("FinalResult.csv")
# Check the counts of "yes" and "no" in the result column "y"
FinalData['y'].value_counts()
# -
# Source: MidTerm.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Logical constraint exercise
# Your customer has ordered six products to be delivered over the next month. You will need to ship multiple truck loads to deliver all of the products. There is a weight limit on your trucks of 25,000 lbs. For cash flow reasons you desire to ship the most profitable combination of products that can fit on your truck.
#
# Product Weight (lbs) Profitability (US)
#
# A 12,583 102,564
# B 9,204 130,043
# C 12,611 127,648
# D 12,131 155,058
# E 12,889 238,846
# F 11,529 197,030
from pulp import *
prod = ['A', 'B', 'C', 'D', 'E', 'F']
weight = {'A': 12583, 'B': 9204, 'C': 12611, 'D': 12131, 'E': 12889, 'F': 11529}
prof = {'A': 102564, 'B': 130043, 'C': 127648, 'D': 155058, 'E': 238846, 'F': 197030}
print(weight)
print(prof)
# Initialized model, defined decision variables and objective
model = LpProblem("Loading_Truck_Problem", LpMaximize)  # underscores avoid PuLP's warning about spaces in the name
x = LpVariable.dicts('ship', prod, cat='Binary')
model += lpSum([prof[i] * x[i] for i in prod])
print(x)
# Define constraints: the truck weight limit, and the logical constraint that at most one of products D, E and F may be shipped
model += lpSum([weight[i] * x[i] for i in prod]) <= 25000
model += x['D'] + x['E'] + x['F'] <= 1
# +
model.solve()
for v in model.variables():
print(v.name, "=", v.varValue)
# The optimised objective function value is printed to the screen
print("The optimised objective function= ", value(model.objective))
# Source: PULP/tutorial/2.7 Logical constraint exercise solve.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div class="alert alert-block alert-info">
# <b><h1>ENGR 1330 Computational Thinking with Data Science </h1></b>
# </div>
#
# Copyright © 2021 <NAME> and <NAME>
#
# Last GitHub Commit Date:
#
# # 24: Ordinary Functions as Predictor-Response Models
# - Line (affine functions)
# - Polynomials
# - Periodic
#
# Data Models for Predictor-Response.
#
# <!---->
#
# ## Objectives
# - To understand the fundamental concepts involved in representing a data collection with some functional form to make predictions;
# - Interpolation
# - Extrapolation
# - Concept of a fitting function
#
# ## Computational Thinking Concepts
# The CT concepts include:
#
# - Decomposition => Assert data are drawn from some process that is functionally explainable
# - Abstraction => Represent data behavior with a function
# - Algorithm Design => Use the function to predict "new" values of observations
#
# ## Explaining Data
#
# Interpolation and extrapolation were discussed earlier in the course; here we revisit them as a prelude to regression tools. Data modeling as used herein has two related but quite different approaches. The first (discussed here) is a predictor-response type model; the second is a magnitude-probability type model (discussed in the subsequent lesson).
#
# In predictor-response, we are seeking a functional form (and parameters) that relate a predictor variable to a response so we can use the **model** to predict anticipated responses to different predictor inputs.
#
# Recall our speed and time example, repeated below.
# Our data
time = [0,1.0,2.0,3.0,4.0,5.0,6.0]
speed = [0,3,7,12,20,30,45.6]
# Our data model
def poly1(b0,b1,x):
# return y = b0 + b1*x
poly1=b0+b1*x
return(poly1)
# Our plotting function
import matplotlib.pyplot as plt
def make2plot(listx1,listy1,listx2,listy2,strlablx,strlably,strtitle):
mydata = plt.figure(figsize = (10,5)) # build a square drawing canvass from figure class
plt.plot(listx1,listy1, c='red', marker='p',linewidth=0) # basic data plot
plt.plot(listx2,listy2, c='blue',linewidth=1) # basic model plot
# plt.plot(listx3,listy3, c='green',linewidth=1) # basic model plot
plt.xlabel(strlablx)
plt.ylabel(strlably)
plt.legend(['Data','Model'])# modify for argument insertion
plt.title(strtitle)
plt.show()
# Our "fitting" process
intercept = -3.0
slope = 7.0
modelSpeed = [] # empty list
for i in range(len(time)):
modelSpeed.append(poly1(intercept,slope,time[i]))
# Plotting results
charttitle="Plot of y=b0+b1*x model and observations \n" + " Model equation: y = " + str(intercept) + " + " + str(slope) + "x"
make2plot(time,speed,time,modelSpeed,'time (sec.)','speed (m/s)',charttitle)
# So the data model is $y=-3.0 + 7x$ and we can assess how "good" the model is by some measure of error (sum of squared error, max error, and various other measures). We could also postulate another data model as
# +
def poly2(b0,b1,b2,x):
# return y = b0 + b1*x
poly2=b0+b1*x+b2*x**2
return(poly2)
# Our "fitting" process
intercept = 0.0
slope = 2.0
curvature = 0.9
modelSpeed = [] # empty list
for i in range(len(time)):
modelSpeed.append(poly2(intercept,slope,curvature,time[i]))
# Plotting results
charttitle="Plot of y=b0+b1*x+b2*x^2 model and observations \n" + " Model equation: y = " + str(intercept) + " + " + str(slope) + " x + " + str(curvature) + " x^2 "
make2plot(time,speed,time,modelSpeed,'time (sec.)','speed (m/s)',charttitle)
# -
# Now this model appears visibly "superior" and we would anticipate that the error measurement would be smaller than the previous model. If we decided our second model was awesome, we now have a prediction tool.
#
# Our model is $y = 0.0 + 2.0 x + 0.9 x^2$ so if we wished to predict the speed at time 11 seconds we would simply evaluate the function using the parameters at $x=11$
print("Speed @ time = 11 sec. is ",poly2(intercept,slope,curvature,11)," meters per second")
# Now we will examine a few common kinds of predictor-response models.
# ### Additive Models
# A function that produces a line is called an affine function. In the example above, we have only a single predictor variable, but we can have multiple predictors. If the multiple predictors are additive the model will look like:
#
# $$y = \beta_0 + \beta_{11}*u +\beta_{12}*v + \beta_{13}*w$$
#
# where $u,v,w$ are predictors
# ### Polynomials and Product Models
#
# In the example above, we have only additive predictor variables, but we can form products of predictors. Product models will look like:
#
# $$y = \beta_0 + \beta_{11}*u +\beta_{12}*v + \beta_{13}*w +\beta_{21}*u^2 +\beta_{22}*uv + \beta_{23}*uw + \beta_{31}*uv +\beta_{32}*v^2 + \beta_{33}*wv +\beta_{41}*uw +\beta_{42}*vw + \beta_{43}*w^2$$
#
# where $u,v,w$ are predictors.
#
# If the product model is simply powers of single predictors the model is called a polynomial model such as:
#
# $$y = \beta_0 + \beta_{11}*u +\beta_{21}*u^2 + \beta_{31}*u^3 +\beta_{41}*u^4 +\dots$$
#
# Often the coefficient list is stored as a matrix, and this is called a design matrix.
#
# #### Polynomial Example
# Consider the data collected during the boost-phase of a ballistic missile. The maximum speed of a solid-fueled missile at burn-out (when the boost-phase ends) is about 7 km/s. Using this knowledge and the early-time telemetry below, fit a data model using the linear system approach and use the model to estimate the speed at boost-phase burn-out. Plot the model and data on the same axis to demonstrate the quality of the fit.
#
# |Elapsed Time (s)|Speed (m/s)|
# |---:|---:|
# |0 |0|
# |1.0 |3|
# |2.0 |7.4|
# |3.0 |16.2|
# |4.0 |23.5|
# |5.0 |32.2|
# |6.0 | 42.2|
# |7.0 | 65.1 |
# |8.0 | 73.5 |
# |9.0 | 99.3 |
# |10.0| 123.4|
#
# #### Design Matrix
# The data model as a linear system is:
#
# $$\begin{gather}
# \mathbf{X} \cdot \mathbf{\beta} = \mathbf{Y}
# \end{gather}$$
#
# For example using the Polynomial Model (order 2 for brevity, but extendable as justified)
#
# $$
# \begin{gather}
# \mathbf{X}=
# \begin{pmatrix}
# 1 & x_1 & x_1^2\\
# ~\\
# 1 & x_2 & x_2^2\\
# ~ \\
# 1 & x_3 & x_3^2\\
# \dots & \dots & \dots \\
# 1 & x_n & x_n^2\\
# \end{pmatrix}
# \end{gather}
# $$
#
# $$
# \begin{gather}
# \mathbf{\beta}=
# \begin{pmatrix}
# \beta_0 \\
# ~\\
# \beta_1 \\
# ~ \\
# \beta_2 \\
# \end{pmatrix}
# \end{gather}
# $$
#
# $$
# \begin{gather}
# \mathbf{Y}=
# \begin{pmatrix}
# y_1 \\
# ~\\
# y_2 \\
# ~ \\
# y_3 \\
# \dots \\
# y_n \\
# \end{pmatrix}
# \end{gather}
# $$
#
# A way to find the unknown $\beta$ values is to solve the linear system below
#
# $$\begin{gather}
# [\mathbf{X^T}\mathbf{X}] \cdot \mathbf{\beta} = [\mathbf{X^T}]\mathbf{Y}
# \end{gather}$$
#
# Once the values for $\beta$ are obtained then we can apply our plotting tools and use the model to extrapolate and interpolate.
# start with the early-time data
time = [0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0]
speed = [0,3,7.4,16.2,23.5,32.2,42.2, 65.1 ,73.5 ,99.3 ,123.4,]
# trial-and-error approach
# our data model
def polynomial(b0,b1,b2,time):
polynomial = b0+b1*time+b2*time**2
return(polynomial)
# our "goodness" measure
def sqerr(a,b):
sqerr = (a-b)**2
return(sqerr)
#########################################
x = [0.0,0.2,1.5] # our "guessed" betas; these are all just guesses - how would you make a way to update the guess?
#########################################
# +
# build our data model to plot
my_model = [0 for i in range(len(time))]
for i in range(len(time)):
my_model[i] = polynomial(x[0],x[1],x[2],time[i])
# evaluate the model fit
err = 0
for i in range(len(time)):
err = err + sqerr(my_model[i],speed[i])
# our plotting tool
import matplotlib.pyplot as plt
def make2plot(listx1,listy1,listx2,listy2,strlablx,strlably,strtitle):
mydata = plt.figure(figsize = (10,5)) # build a square drawing canvass from figure class
plt.plot(listx1,listy1, c='red', marker='v',linewidth=0) # basic data plot
plt.plot(listx2,listy2, c='blue',linewidth=1) # basic model plot
plt.xlabel(strlablx)
plt.ylabel(strlably)
plt.legend(['Observations','Model'])# modify for argument insertion
plt.title(strtitle)
plt.show()
return
# build our plot
plottitle = "Polynomial Model: y = " + str(x[0]) + " + " + str(x[1]) + "*x + " + str(x[2]) + "*x^2 \n" + " SSE = " +str(round(err,3))
make2plot(time,speed,time,my_model,"Time","Speed",plottitle);
# estimate time to burnout
ttb = 79.
print('Estimated time to burn-out is: ',ttb,' seconds; Speed at burn-out is: ',polynomial(x[0],x[1],x[2],ttb),' meters/second')
# +
# solving the linear system to make a model
##############################
import numpy
X = [numpy.ones(len(time)),numpy.array(time),numpy.array(time)**2] # build the design X matrix #
X = numpy.transpose(X) # get into correct shape for linear solver
Y = numpy.array(speed) # build the response Y vector
A = numpy.transpose(X)@X # build the XtX matrix
b = numpy.transpose(X)@Y # build the XtY vector
x = numpy.linalg.solve(A,b) # avoid inversion and just solve the linear system
#print(x)
def polynomial(b0,b1,b2,time):
polynomial = b0+b1*time+b2*time**2
return(polynomial)
my_model = [0 for i in range(len(time))]
for i in range(len(time)):
my_model[i] = polynomial(x[0],x[1],x[2],time[i])
# evaluate the model fit
err = 0
for i in range(len(time)):
err = err + sqerr(my_model[i],speed[i])
import matplotlib.pyplot as plt
def make2plot(listx1,listy1,listx2,listy2,strlablx,strlably,strtitle):
mydata = plt.figure(figsize = (10,5)) # build a square drawing canvass from figure class
plt.plot(listx1,listy1, c='red', marker='v',linewidth=0) # basic data plot
plt.plot(listx2,listy2, c='blue',linewidth=1) # basic model plot
plt.xlabel(strlablx)
plt.ylabel(strlably)
plt.legend(['Observations','Model'])# modify for argument insertion
plt.title(strtitle)
plt.show()
return
plottitle = "Polynomial Model: y = " + str(x[0]) + " + " + str(x[1]) + "*x + " + str(x[2]) + "*x^2 \n" + " SSE = " +str(round(err,3))
make2plot(time,speed,time,my_model,"Time","Speed",plottitle);
ttb = 77.79
print('Estimated time to burn-out is: ',ttb,' seconds; Speed at burn-out is: ',polynomial(x[0],x[1],x[2],ttb),' meters/second')
# -
# ### Power-Law Models
#
# A power-law model is a model of the form:
#
# $y = \beta_{0}*u^{\beta_1}$
#
# One can join multiple power-law models (if they wish) in an additive or even a product model fashion; in such instances some physical understanding of the process is needed, to make reasonably useful models.
#
# +
# example goes here
# -
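# Filling the placeholder with a minimal sketch (synthetic data assumed to follow an exact power law, for illustration): taking logarithms linearizes the model, $log(y) = log(\beta_0) + \beta_1 log(u)$, so a straight-line fit recovers both parameters.

```python
import numpy as np

# Synthetic data assumed to follow y = 2 * u^1.5 exactly
u = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 * u**1.5

# Straight-line fit in log-log space recovers the exponent and prefactor
b1, log_b0 = np.polyfit(np.log(u), np.log(y), 1)
b0 = np.exp(log_b0)
```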
# ### Logarithmic Models
#
# A logarithmic model is a model of the form:
#
# $y = \beta_{0}+\beta_{1}\log(u)$
# +
# example goes here
# -
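# Filling the placeholder with a minimal sketch (synthetic data assumed to follow an exact logarithmic law, for illustration): the model is already linear in $\log(u)$, so no back-transform of the coefficients is needed.

```python
import numpy as np

# Synthetic data assumed to follow y = 1.0 + 0.5*log(u) exactly
u = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 1.0 + 0.5 * np.log(u)

# Ordinary straight-line fit on the transformed predictor log(u)
b1, b0 = np.polyfit(np.log(u), y, 1)
```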
# ### Periodic Models
#
# Models with periodic (repeating behavior) are often a special challenge and are often dealt with using transformations (LaPlace, Fourier, $\dots$)
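# When the period is known, one simple approach is to regress on sine and cosine features, which keeps the model linear in its coefficients. A minimal sketch with assumed synthetic data:

```python
import numpy as np

# Synthetic periodic observations with a known period of 12 (assumed data)
w = 2 * np.pi / 12.0
t = np.arange(24, dtype=float)
y = 2.0 + 3.0 * np.sin(w * t) + 1.0 * np.cos(w * t)

# With the period known, y = b0 + b1*sin(wt) + b2*cos(wt) is linear
# in its coefficients, so ordinary least squares applies
X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```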
# ## References
# <hr>
#
# ## Laboratory 24
#
# **Examine** (click) Laboratory 24 as a webpage at [Laboratory 24.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab24/Lab24.html)
#
# **Download** (right-click, save target as ...) Laboratory 24 as a jupyterlab notebook from [Laboratory 24.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab24/Lab24.ipynb)
#
# <hr><hr>
#
# ## Exercise Set 24
#
# **Examine** (click) Exercise Set 24 as a webpage at [Exercise 24.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab24/Lab24-TH.html)
#
# **Download** (right-click, save target as ...) Exercise Set 24 as a jupyterlab notebook at [Exercise Set 24.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab24/Lab24-TH.ipynb)
#
#
# Source: engr1330jb/_build/jupyter_execute/lessons/lesson24/lesson24.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Preparation
# ## Import Libraries
import numpy as np
import pandas as pd
# ## Import Data
# The dataset contains all available data for more than 800,000 consumer loans issued from 2007 to 2015 by Lending Club: a large US peer-to-peer lending company. There are several different versions of this dataset. We have used a version available on kaggle.com. You can find it here: https://www.kaggle.com/wendykan/lending-club-loan-data/version/1
# We divided the data into two periods because we assume that some data are available at the moment when we need to build the Expected Loss models, and some data comes from applications that arrive afterwards. Later, we investigate whether the applications received after we built the Probability of Default (PD) model have similar characteristics to the applications we used to build it.
loan_data_backup = pd.read_csv('loan_data_2007_2014.csv')
loan_data = loan_data_backup.copy()
# ## Explore Data
loan_data
pd.options.display.max_columns = None
#pd.options.display.max_rows = None
# Sets the pandas option to display all columns (the max_rows option is left commented out).
loan_data
loan_data.head()
loan_data.tail()
loan_data.columns.values
# Displays all column names.
loan_data.info()
# Displays column names, complete (non-missing) cases per column, and datatype per column.
# ## General Preprocessing
# ### Preprocessing few continuous variables
loan_data['emp_length'].unique()
# Displays unique values of a column.
loan_data['emp_length_int'] = loan_data['emp_length'].str.replace(r'\+ years', '', regex = True)
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace('< 1 year', str(0))
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace('n/a', str(0))
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace(' years', '')
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace(' year', '')
# We store the preprocessed ‘employment length’ variable in a new variable called ‘employment length int’,
# We assign the new ‘employment length int’ to be equal to the ‘employment length’ variable with the string ‘+ years’
# replaced with nothing. Next, we replace the whole string ‘less than 1 year’ with the string ‘0’.
# Then, we replace the ‘n/a’ string with the string ‘0’. Then, we replace the string ‘space years’ with nothing.
# Finally, we replace the string ‘space year’ with nothing.
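# As a sketch, the chained replacements above can be checked on a tiny hypothetical sample (values mimicking the 'emp_length' format):

```python
import pandas as pd

# Hypothetical sample mimicking the 'emp_length' strings in the dataset.
s = pd.Series(['10+ years', '< 1 year', 'n/a', '3 years', '1 year'])

clean = (s.str.replace(r'\+ years', '', regex=True)
          .str.replace('< 1 year', '0')
          .str.replace('n/a', '0')
          .str.replace(' years', '')
          .str.replace(' year', ''))
clean = pd.to_numeric(clean)
print(clean.tolist())  # [10, 0, 0, 3, 1]
```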
type(loan_data['emp_length_int'][0])
# Checks the datatype of a single element of a column.
loan_data['emp_length_int'] = pd.to_numeric(loan_data['emp_length_int'])
# Transforms the values to numeric.
type(loan_data['emp_length_int'][0])
# Checks the datatype of a single element of a column.
loan_data['earliest_cr_line']
# Displays a column.
loan_data['earliest_cr_line_date'] = pd.to_datetime(loan_data['earliest_cr_line'], format = '%b-%y')
# Extracts the date and the time from a string variable that is in a given format.
type(loan_data['earliest_cr_line_date'][0])
# Checks the datatype of a single element of a column.
pd.to_datetime('2017-12-01') - loan_data['earliest_cr_line_date']
# Calculates the difference between two dates and times.
# Assume we are now in December 2017
loan_data['mths_since_earliest_cr_line'] = round(pd.to_numeric((pd.to_datetime('2017-12-01') - loan_data['earliest_cr_line_date']) / np.timedelta64(1, 'M')))
# We calculate the difference between two dates in months, turn it to numeric datatype and round it.
# We save the result in a new variable.
loan_data['mths_since_earliest_cr_line'].describe()
# Shows some descriptive statistics for the values of a column.
# Dates from 1969 and before are not being converted well, i.e., they have become 2069 and similar,
# and negative differences are being calculated.
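# The two-digit-year wraparound can be verified directly: strptime's '%y' maps 00-68 to 20xx and 69-99 to 19xx.

```python
import pandas as pd

# '%y' interprets two-digit years: 00-68 -> 20xx, 69-99 -> 19xx.
print(pd.to_datetime('Jan-62', format='%b-%y').year)  # 2062
print(pd.to_datetime('Jan-85', format='%b-%y').year)  # 1985
```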
loan_data.loc[: , ['earliest_cr_line', 'earliest_cr_line_date', 'mths_since_earliest_cr_line']][loan_data['mths_since_earliest_cr_line'] < 0]
# We take three columns from the dataframe. Then, we display them only for the rows where a variable has negative value.
# There are 2303 strange negative values.
loan_data.loc[loan_data['mths_since_earliest_cr_line'] < 0, 'mths_since_earliest_cr_line'] = loan_data['mths_since_earliest_cr_line'].max()
# We set the rows that had negative differences to the maximum value, using .loc to avoid chained-assignment issues.
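# A minimal sketch of this kind of conditional assignment on a toy Series (hypothetical values), using .loc:

```python
import pandas as pd

s = pd.Series([-3, 10, -1, 7])
s.loc[s < 0] = s.max()   # replace negatives with the column maximum
print(s.tolist())  # [10, 10, 10, 7]
```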
min(loan_data['mths_since_earliest_cr_line'])
# Calculates and shows the minimum value of a column.
# ### Homework
loan_data['term']
loan_data['term'].describe()
# Shows some descriptive statistics for the values of a column.
loan_data['term_int'] = loan_data['term'].str.replace(' months', '')
# We replace a string with another string, in this case, with an empty string (i.e. with nothing).
loan_data['term_int']
type(loan_data['term_int'][25])
# Checks the datatype of a single element of a column.
loan_data['term_int'] = pd.to_numeric(loan_data['term'].str.replace(' months', ''))
# We replace a string from a variable with another string, in this case, with an empty string (i.e. with nothing).
# We turn the result to numeric datatype and save it in another variable.
loan_data['term_int']
type(loan_data['term_int'][0])
# Checks the datatype of a single element of a column.
loan_data['issue_d']
# Assume we are now in December 2017
loan_data['issue_d_date'] = pd.to_datetime(loan_data['issue_d'], format = '%b-%y')
# Extracts the date and the time from a string variable that is in a given format.
loan_data['mths_since_issue_d'] = round(pd.to_numeric((pd.to_datetime('2017-12-01') - loan_data['issue_d_date']) / np.timedelta64(1, 'M')))
# We calculate the difference between two dates in months, turn it to numeric datatype and round it.
# We save the result in a new variable.
loan_data['mths_since_issue_d'].describe()
# Shows some descriptive statistics for the values of a column.
# ### Preprocessing few discrete variables
loan_data.info()
# Displays column names, complete (non-missing) cases per column, and datatype per column.
# We are going to preprocess the following discrete variables: grade, sub_grade, home_ownership, verification_status, loan_status, purpose, addr_state, initial_list_status. Most likely, we are not going to use sub_grade, as it overlaps with grade.
pd.get_dummies(loan_data['grade'])
# Create dummy variables from a variable.
pd.get_dummies(loan_data['grade'], prefix = 'grade', prefix_sep = ':')
# Create dummy variables from a variable.
loan_data_dummies = [pd.get_dummies(loan_data['grade'], prefix = 'grade', prefix_sep = ':'),
pd.get_dummies(loan_data['sub_grade'], prefix = 'sub_grade', prefix_sep = ':'),
pd.get_dummies(loan_data['home_ownership'], prefix = 'home_ownership', prefix_sep = ':'),
pd.get_dummies(loan_data['verification_status'], prefix = 'verification_status', prefix_sep = ':'),
pd.get_dummies(loan_data['loan_status'], prefix = 'loan_status', prefix_sep = ':'),
pd.get_dummies(loan_data['purpose'], prefix = 'purpose', prefix_sep = ':'),
pd.get_dummies(loan_data['addr_state'], prefix = 'addr_state', prefix_sep = ':'),
pd.get_dummies(loan_data['initial_list_status'], prefix = 'initial_list_status', prefix_sep = ':')]
# We create dummy variables from all 8 original independent variables, and save them into a list.
# Note that we are using a particular naming convention for all variables: original variable name, colon, category name.
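# A minimal sketch of the naming convention on a toy variable (hypothetical values):

```python
import pandas as pd

s = pd.Series(['A', 'B', 'A'])  # toy stand-in for a variable like 'grade'
dummies = pd.get_dummies(s, prefix='grade', prefix_sep=':')
print(list(dummies.columns))  # ['grade:A', 'grade:B']
print(int(dummies['grade:A'].sum()))  # 2
```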
loan_data_dummies = pd.concat(loan_data_dummies, axis = 1)
# We concatenate the dummy variables and this turns them into a dataframe.
type(loan_data_dummies)
# Returns the type of the variable.
loan_data = pd.concat([loan_data, loan_data_dummies], axis = 1)
# Concatenates two dataframes.
# Here we concatenate the dataframe with original data with the dataframe with dummy variables, along the columns.
loan_data.columns.values
# Displays all column names.
# ### Check for missing values and clean
loan_data.isnull()
# It returns 'False' if a value is not missing and 'True' if a value is missing, for each value in a dataframe.
pd.options.display.max_rows = None
# Sets the pandas option to display all rows.
loan_data.isnull().sum()
pd.options.display.max_rows = 100
# Sets the pandas option to display up to 100 rows.
# 'total_rev_hi_lim' means 'Total revolving high credit/ credit limit', so it makes sense to fill its missing values with the values of 'funded_amnt'.
loan_data['total_rev_hi_lim'].fillna(loan_data['funded_amnt'], inplace=True)
# We fill the missing values with the values of another variable.
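# A minimal sketch of filling one column's gaps from another column (hypothetical frame and column names):

```python
import pandas as pd
import numpy as np

# Hypothetical frame: fill gaps in one column with values from another.
df = pd.DataFrame({'limit': [5000.0, np.nan, np.nan],
                   'funded': [3000.0, 4000.0, 2500.0]})
df['limit'] = df['limit'].fillna(df['funded'])  # aligned on index
print(df['limit'].tolist())  # [5000.0, 4000.0, 2500.0]
```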
loan_data['total_rev_hi_lim'].isnull().sum()
# ### Homework
loan_data['annual_inc'].fillna(loan_data['annual_inc'].mean(), inplace=True)
# We fill the missing values with the mean value of the non-missing values.
loan_data['mths_since_earliest_cr_line'].fillna(0, inplace=True)
loan_data['acc_now_delinq'].fillna(0, inplace=True)
loan_data['total_acc'].fillna(0, inplace=True)
loan_data['pub_rec'].fillna(0, inplace=True)
loan_data['open_acc'].fillna(0, inplace=True)
loan_data['inq_last_6mths'].fillna(0, inplace=True)
loan_data['delinq_2yrs'].fillna(0, inplace=True)
loan_data['emp_length_int'].fillna(0, inplace=True)
# We fill the missing values with zeroes.
# # PD model
# ## Data preparation
# ### Dependent Variable. Good/ Bad (Default) Definition. Default and Non-default Accounts.
loan_data['loan_status'].unique()
# Displays unique values of a column.
loan_data['loan_status'].value_counts()
# Calculates the number of observations for each unique value of a variable.
loan_data['loan_status'].value_counts() / loan_data['loan_status'].count()
# We divide the number of observations for each unique value of a variable by the total number of observations.
# Thus, we get the proportion of observations for each unique value of a variable.
# Good/ Bad Definition
loan_data['good_bad'] = np.where(loan_data['loan_status'].isin(['Charged Off', 'Default',
'Does not meet the credit policy. Status:Charged Off',
'Late (31-120 days)']), 0, 1)
# We create a new variable that has the value of '0' if a condition is met, and the value of '1' if it is not met.
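# A minimal sketch of the good/ bad flag on a few hypothetical statuses:

```python
import numpy as np
import pandas as pd

# Hypothetical statuses; any status in the 'bad' list becomes 0, all others 1.
status = pd.Series(['Fully Paid', 'Charged Off', 'Current', 'Default'])
bad = ['Charged Off', 'Default',
       'Does not meet the credit policy. Status:Charged Off',
       'Late (31-120 days)']
good_bad = np.where(status.isin(bad), 0, 1)
print(good_bad.tolist())  # [1, 0, 1, 0]
```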
loan_data['good_bad']
# ### Splitting Data
from sklearn.model_selection import train_test_split
# Imports the libraries we need.
train_test_split(loan_data.drop('good_bad', axis = 1), loan_data['good_bad'])
# Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes:
# Inputs - Train, Inputs - Test, Targets - Train, Targets - Test.
loan_data_inputs_train, loan_data_inputs_test, loan_data_targets_train, loan_data_targets_test = train_test_split(loan_data.drop('good_bad', axis = 1), loan_data['good_bad'])
# We split two dataframes with inputs and targets, each into a train and test dataframe, and store them in variables.
loan_data_inputs_train.shape
# Displays the size of the dataframe.
loan_data_targets_train.shape
# Displays the size of the dataframe.
loan_data_inputs_test.shape
# Displays the size of the dataframe.
loan_data_targets_test.shape
# Displays the size of the dataframe.
loan_data_inputs_train, loan_data_inputs_test, loan_data_targets_train, loan_data_targets_test = train_test_split(loan_data.drop('good_bad', axis = 1), loan_data['good_bad'], test_size = 0.2, random_state = 42)
# We split two dataframes with inputs and targets, each into a train and test dataframe, and store them in variables.
# This time we set the size of the test dataset to be 20%.
# Respectively, the size of the train dataset becomes 80%.
# We also set a specific random state.
# This would allow us to perform the exact same split multiple times.
# This means, to assign the exact same observations to the train and test datasets.
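# A quick check that a fixed random_state reproduces the split (toy arrays):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(10).reshape(-1, 1)
y = np.arange(10)

# Same random_state -> identical splits on every call.
_, X_test_a, _, y_test_a = train_test_split(X, y, test_size=0.2, random_state=42)
_, X_test_b, _, y_test_b = train_test_split(X, y, test_size=0.2, random_state=42)
print((X_test_a == X_test_b).all(), (y_test_a == y_test_b).all())
```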
loan_data_inputs_train.shape
# Displays the size of the dataframe.
loan_data_targets_train.shape
# Displays the size of the dataframe.
loan_data_inputs_test.shape
# Displays the size of the dataframe.
loan_data_targets_test.shape
# Displays the size of the dataframe.
# ### Data Preparation: An Example
#####
df_inputs_prepr = loan_data_inputs_train
df_targets_prepr = loan_data_targets_train
#####
#df_inputs_prepr = loan_data_inputs_test
#df_targets_prepr = loan_data_targets_test
df_inputs_prepr['grade'].unique()
# Displays unique values of a column.
df1 = pd.concat([df_inputs_prepr['grade'], df_targets_prepr], axis = 1)
# Concatenates two dataframes along the columns.
df1.head()
df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].count()
# Groups the data according to a criterion contained in one column.
# Does not turn the values of the grouping criterion into indexes (as_index = False).
# Aggregates the data in another column, using a selected function.
# In this specific case, we group by the column with index 0 and we aggregate the values of the column with index 1.
# More specifically, we count them.
# In other words, we count the values in the column with index 1 for each value of the column with index 0.
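# A minimal sketch of the count/ mean aggregation on a toy frame:

```python
import pandas as pd

# Toy frame: group by 'grade', count and average the target.
df = pd.DataFrame({'grade': ['A', 'A', 'B'], 'good_bad': [1, 0, 1]})
counts = df.groupby('grade', as_index=False)['good_bad'].count()
means = df.groupby('grade', as_index=False)['good_bad'].mean()
print(counts['good_bad'].tolist())  # [2, 1]
print(means['good_bad'].tolist())   # [0.5, 1.0]
```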
df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].mean()
# Groups the data according to a criterion contained in one column.
# Does not turn the values of the grouping criterion into indexes (as_index = False).
# Aggregates the data in another column, using a selected function.
# Here we calculate the mean of the values in the column with index 1 for each value of the column with index 0.
df1 = pd.concat([df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].count(),
df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].mean()], axis = 1)
# Concatenates two dataframes along the columns.
df1
df1 = df1.iloc[:, [0, 1, 3]]
# Selects only columns with specific indexes.
df1
df1.columns = [df1.columns.values[0], 'n_obs', 'prop_good']
# Changes the names of the columns of a dataframe.
df1
df1['prop_n_obs'] = df1['n_obs'] / df1['n_obs'].sum()
# We divide the values of one column by the values of another column and save the result in a new variable.
df1
df1['n_good'] = df1['prop_good'] * df1['n_obs']
# We multiply the values of one column by the values of another column and save the result in a new variable.
df1['n_bad'] = (1 - df1['prop_good']) * df1['n_obs']
df1
df1['prop_n_good'] = df1['n_good'] / df1['n_good'].sum()
df1['prop_n_bad'] = df1['n_bad'] / df1['n_bad'].sum()
df1
df1['WoE'] = np.log(df1['prop_n_good'] / df1['prop_n_bad'])
# We take the natural logarithm of a variable and save the result in a new variable.
df1
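# Numerically, WoE is the log of a category's share of goods over its share of bads. A quick check with hypothetical proportions:

```python
import numpy as np

# Hypothetical category: 30% of all goods, 10% of all bads.
prop_n_good, prop_n_bad = 0.30, 0.10
woe = np.log(prop_n_good / prop_n_bad)
print(round(woe, 4))  # 1.0986 -> positive WoE: the category is safer than average
```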
df1 = df1.sort_values(['WoE'])
# Sorts a dataframe by the values of a given column.
df1 = df1.reset_index(drop = True)
# We reset the index of a dataframe and overwrite it.
df1
df1['diff_prop_good'] = df1['prop_good'].diff().abs()
# We take the difference between two subsequent values of a column. Then, we take the absolute value of the result.
df1['diff_WoE'] = df1['WoE'].diff().abs()
# We take the difference between two subsequent values of a column. Then, we take the absolute value of the result.
df1
df1['IV'] = (df1['prop_n_good'] - df1['prop_n_bad']) * df1['WoE']
df1['IV'] = df1['IV'].sum()
# We sum all values of a given column.
df1
# ### Preprocessing Discrete Variables: Automating Calculations
# WoE function for discrete unordered variables
def woe_discrete(df, discrete_variable_name, good_bad_variable_df):
    df = pd.concat([df[discrete_variable_name], good_bad_variable_df], axis = 1)
df = pd.concat([df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].count(),
df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].mean()], axis = 1)
df = df.iloc[:, [0, 1, 3]]
df.columns = [df.columns.values[0], 'n_obs', 'prop_good']
df['prop_n_obs'] = df['n_obs'] / df['n_obs'].sum()
df['n_good'] = df['prop_good'] * df['n_obs']
df['n_bad'] = (1 - df['prop_good']) * df['n_obs']
df['prop_n_good'] = df['n_good'] / df['n_good'].sum()
df['prop_n_bad'] = df['n_bad'] / df['n_bad'].sum()
df['WoE'] = np.log(df['prop_n_good'] / df['prop_n_bad'])
df = df.sort_values(['WoE'])
df = df.reset_index(drop = True)
df['diff_prop_good'] = df['prop_good'].diff().abs()
df['diff_WoE'] = df['WoE'].diff().abs()
df['IV'] = (df['prop_n_good'] - df['prop_n_bad']) * df['WoE']
df['IV'] = df['IV'].sum()
return df
# Here we combine all of the operations above in a function.
# The function takes 3 arguments: a dataframe, a string, and a dataframe. The function returns a dataframe as a result.
# 'grade'
df_temp = woe_discrete(df_inputs_prepr, 'grade', df_targets_prepr)
# We execute the function we defined with the necessary arguments: a dataframe, a string, and a dataframe.
# We store the result in a dataframe.
df_temp
# ### Preprocessing Discrete Variables: Visualizing Results
import matplotlib.pyplot as plt
import seaborn as sns
# Imports the libraries we need.
sns.set()
# We set the default style of the graphs to the seaborn style.
# Below we define a function that takes 2 arguments: a dataframe and a number.
# The number parameter has a default value of 0.
# This means that if we call the function and omit the number parameter, it will be executed with it having a value of 0.
# The function displays a graph.
def plot_by_woe(df_WoE, rotation_of_x_axis_labels = 0):
x = np.array(df_WoE.iloc[:, 0].apply(str))
# Turns the values of the column with index 0 to strings, makes an array from these strings, and passes it to variable x.
y = df_WoE['WoE']
# Selects a column with label 'WoE' and passes it to variable y.
plt.figure(figsize=(18, 6))
# Sets the graph size to width 18 x height 6.
plt.plot(x, y, marker = 'o', linestyle = '--', color = 'k')
# Plots the datapoints with coordinates variable x on the x-axis and variable y on the y-axis.
# Sets the marker for each datapoint to a circle, the style line between the points to dashed, and the color to black.
plt.xlabel(df_WoE.columns[0])
# Names the x-axis with the name of the column with index 0.
plt.ylabel('Weight of Evidence')
# Names the y-axis 'Weight of Evidence'.
plt.title(str('Weight of Evidence by ' + df_WoE.columns[0]))
# Names the graph 'Weight of Evidence by ' the name of the column with index 0.
plt.xticks(rotation = rotation_of_x_axis_labels)
# Rotates the labels of the x-axis a predefined number of degrees.
plot_by_woe(df_temp)
# We execute the function we defined with the necessary arguments: a dataframe.
# We omit the number argument, which means the function will use its default value, 0.
# ### Preprocessing Discrete Variables: Creating Dummy Variables, Part 1
# 'home_ownership'
df_temp = woe_discrete(df_inputs_prepr, 'home_ownership', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# +
# There are many categories with very few observations and many categories with very different "good" %.
# Therefore, we create a new discrete variable where we combine some of the categories.
# 'OTHERS' and 'NONE' are riskiest but are very few. 'RENT' is the next riskiest.
# 'ANY' are least risky but are too few. Conceptually, they belong to the same category. Also, their inclusion would not change anything.
# We combine them in one category, 'RENT_OTHER_NONE_ANY'.
# We end up with 3 categories: 'RENT_OTHER_NONE_ANY', 'OWN', 'MORTGAGE'.
df_inputs_prepr['home_ownership:RENT_OTHER_NONE_ANY'] = sum([df_inputs_prepr['home_ownership:RENT'], df_inputs_prepr['home_ownership:OTHER'],
df_inputs_prepr['home_ownership:NONE'],df_inputs_prepr['home_ownership:ANY']])
# 'RENT_OTHER_NONE_ANY' will be the reference category.
# Alternatively:
#loan_data.loc[loan_data['home_ownership'].isin(['RENT', 'OTHER', 'NONE', 'ANY']), 'home_ownership:RENT_OTHER_NONE_ANY'] = 1
#loan_data.loc[~loan_data['home_ownership'].isin(['RENT', 'OTHER', 'NONE', 'ANY']), 'home_ownership:RENT_OTHER_NONE_ANY'] = 0
#loan_data.loc[loan_data['home_ownership'] == 'OWN', 'home_ownership:OWN'] = 1
#loan_data.loc[loan_data['home_ownership'] != 'OWN', 'home_ownership:OWN'] = 0
#loan_data.loc[loan_data['home_ownership'] == 'MORTGAGE', 'home_ownership:MORTGAGE'] = 1
#loan_data.loc[loan_data['home_ownership'] != 'MORTGAGE', 'home_ownership:MORTGAGE'] = 0
# -
# ### Preprocessing Discrete Variables: Creating Dummy Variables, Part 2
# 'addr_state'
df_inputs_prepr['addr_state'].unique()
df_temp = woe_discrete(df_inputs_prepr, 'addr_state', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
if 'addr_state:ND' in df_inputs_prepr.columns.values:
pass
else:
df_inputs_prepr['addr_state:ND'] = 0
plot_by_woe(df_temp.iloc[2: -2, : ])
# We plot the weight of evidence values.
plot_by_woe(df_temp.iloc[6: -6, : ])
# We plot the weight of evidence values.
# +
# We create the following categories:
# 'ND' 'NE' 'IA' 'NV' 'FL' 'HI' 'AL'
# 'NM' 'VA'
# 'NY'
# 'OK' 'TN' 'MO' 'LA' 'MD' 'NC'
# 'CA'
# 'UT' 'KY' 'AZ' 'NJ'
# 'AR' 'MI' 'PA' 'OH' 'MN'
# 'RI' 'MA' 'DE' 'SD' 'IN'
# 'GA' 'WA' 'OR'
# 'WI' 'MT'
# 'TX'
# 'IL' 'CT'
# 'KS' 'SC' 'CO' 'VT' 'AK' 'MS'
# 'WV' 'NH' 'WY' 'DC' 'ME' 'ID'
# 'ND_NE_IA_NV_FL_HI_AL' will be the reference category.
df_inputs_prepr['addr_state:ND_NE_IA_NV_FL_HI_AL'] = sum([df_inputs_prepr['addr_state:ND'], df_inputs_prepr['addr_state:NE'],
df_inputs_prepr['addr_state:IA'], df_inputs_prepr['addr_state:NV'],
df_inputs_prepr['addr_state:FL'], df_inputs_prepr['addr_state:HI'],
df_inputs_prepr['addr_state:AL']])
df_inputs_prepr['addr_state:NM_VA'] = sum([df_inputs_prepr['addr_state:NM'], df_inputs_prepr['addr_state:VA']])
df_inputs_prepr['addr_state:OK_TN_MO_LA_MD_NC'] = sum([df_inputs_prepr['addr_state:OK'], df_inputs_prepr['addr_state:TN'],
df_inputs_prepr['addr_state:MO'], df_inputs_prepr['addr_state:LA'],
df_inputs_prepr['addr_state:MD'], df_inputs_prepr['addr_state:NC']])
df_inputs_prepr['addr_state:UT_KY_AZ_NJ'] = sum([df_inputs_prepr['addr_state:UT'], df_inputs_prepr['addr_state:KY'],
df_inputs_prepr['addr_state:AZ'], df_inputs_prepr['addr_state:NJ']])
df_inputs_prepr['addr_state:AR_MI_PA_OH_MN'] = sum([df_inputs_prepr['addr_state:AR'], df_inputs_prepr['addr_state:MI'],
df_inputs_prepr['addr_state:PA'], df_inputs_prepr['addr_state:OH'],
df_inputs_prepr['addr_state:MN']])
df_inputs_prepr['addr_state:RI_MA_DE_SD_IN'] = sum([df_inputs_prepr['addr_state:RI'], df_inputs_prepr['addr_state:MA'],
df_inputs_prepr['addr_state:DE'], df_inputs_prepr['addr_state:SD'],
df_inputs_prepr['addr_state:IN']])
df_inputs_prepr['addr_state:GA_WA_OR'] = sum([df_inputs_prepr['addr_state:GA'], df_inputs_prepr['addr_state:WA'],
df_inputs_prepr['addr_state:OR']])
df_inputs_prepr['addr_state:WI_MT'] = sum([df_inputs_prepr['addr_state:WI'], df_inputs_prepr['addr_state:MT']])
df_inputs_prepr['addr_state:IL_CT'] = sum([df_inputs_prepr['addr_state:IL'], df_inputs_prepr['addr_state:CT']])
df_inputs_prepr['addr_state:KS_SC_CO_VT_AK_MS'] = sum([df_inputs_prepr['addr_state:KS'], df_inputs_prepr['addr_state:SC'],
df_inputs_prepr['addr_state:CO'], df_inputs_prepr['addr_state:VT'],
df_inputs_prepr['addr_state:AK'], df_inputs_prepr['addr_state:MS']])
df_inputs_prepr['addr_state:WV_NH_WY_DC_ME_ID'] = sum([df_inputs_prepr['addr_state:WV'], df_inputs_prepr['addr_state:NH'],
df_inputs_prepr['addr_state:WY'], df_inputs_prepr['addr_state:DC'],
df_inputs_prepr['addr_state:ME'], df_inputs_prepr['addr_state:ID']])
# -
# ### Preprocessing Discrete Variables: Homework
# 'verification_status'
df_temp = woe_discrete(df_inputs_prepr, 'verification_status', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# 'purpose'
df_temp = woe_discrete(df_inputs_prepr, 'purpose', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# We combine 'educational', 'small_business', 'wedding', 'renewable_energy', 'moving', 'house' in one category: 'educ__sm_b__wedd__ren_en__mov__house'.
# We combine 'other', 'medical', 'vacation' in one category: 'oth__med__vacation'.
# We combine 'major_purchase', 'car', 'home_improvement' in one category: 'major_purch__car__home_impr'.
# We leave 'debt_consolidation' in a separate category.
# We leave 'credit_card' in a separate category.
# 'educ__sm_b__wedd__ren_en__mov__house' will be the reference category.
df_inputs_prepr['purpose:educ__sm_b__wedd__ren_en__mov__house'] = sum([df_inputs_prepr['purpose:educational'], df_inputs_prepr['purpose:small_business'],
df_inputs_prepr['purpose:wedding'], df_inputs_prepr['purpose:renewable_energy'],
df_inputs_prepr['purpose:moving'], df_inputs_prepr['purpose:house']])
df_inputs_prepr['purpose:oth__med__vacation'] = sum([df_inputs_prepr['purpose:other'], df_inputs_prepr['purpose:medical'],
df_inputs_prepr['purpose:vacation']])
df_inputs_prepr['purpose:major_purch__car__home_impr'] = sum([df_inputs_prepr['purpose:major_purchase'], df_inputs_prepr['purpose:car'],
df_inputs_prepr['purpose:home_improvement']])
# 'initial_list_status'
df_temp = woe_discrete(df_inputs_prepr, 'initial_list_status', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# ### Preprocessing Continuous Variables: Automating Calculations and Visualizing Results
# WoE function for ordered discrete and continuous variables
def woe_ordered_continuous(df, discrete_variable_name, good_bad_variable_df):
    df = pd.concat([df[discrete_variable_name], good_bad_variable_df], axis = 1)
df = pd.concat([df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].count(),
df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].mean()], axis = 1)
df = df.iloc[:, [0, 1, 3]]
df.columns = [df.columns.values[0], 'n_obs', 'prop_good']
df['prop_n_obs'] = df['n_obs'] / df['n_obs'].sum()
df['n_good'] = df['prop_good'] * df['n_obs']
df['n_bad'] = (1 - df['prop_good']) * df['n_obs']
df['prop_n_good'] = df['n_good'] / df['n_good'].sum()
df['prop_n_bad'] = df['n_bad'] / df['n_bad'].sum()
df['WoE'] = np.log(df['prop_n_good'] / df['prop_n_bad'])
#df = df.sort_values(['WoE'])
#df = df.reset_index(drop = True)
df['diff_prop_good'] = df['prop_good'].diff().abs()
df['diff_WoE'] = df['WoE'].diff().abs()
df['IV'] = (df['prop_n_good'] - df['prop_n_bad']) * df['WoE']
df['IV'] = df['IV'].sum()
return df
# Here we define a function similar to the one above, ...
# ... with one slight difference: we do not sort the rows by WoE, so the natural order of the categories is preserved.
# The function takes 3 arguments: a dataframe, a string, and a dataframe. The function returns a dataframe as a result.
# ### Preprocessing Continuous Variables: Creating Dummy Variables, Part 1
# term
df_inputs_prepr['term_int'].unique()
# There are only two unique values, 36 and 60.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'term_int', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# Leave as is.
# '60' will be the reference category.
df_inputs_prepr['term:36'] = np.where((df_inputs_prepr['term_int'] == 36), 1, 0)
df_inputs_prepr['term:60'] = np.where((df_inputs_prepr['term_int'] == 60), 1, 0)
# emp_length_int
df_inputs_prepr['emp_length_int'].unique()
# Has only 11 levels: from 0 to 10. Hence, we turn it into a factor with 11 levels.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'emp_length_int', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# We create the following categories: '0', '1', '2 - 4', '5 - 6', '7 - 9', '10'
# '0' will be the reference category
df_inputs_prepr['emp_length:0'] = np.where(df_inputs_prepr['emp_length_int'].isin([0]), 1, 0)
df_inputs_prepr['emp_length:1'] = np.where(df_inputs_prepr['emp_length_int'].isin([1]), 1, 0)
df_inputs_prepr['emp_length:2-4'] = np.where(df_inputs_prepr['emp_length_int'].isin(range(2, 5)), 1, 0)
df_inputs_prepr['emp_length:5-6'] = np.where(df_inputs_prepr['emp_length_int'].isin(range(5, 7)), 1, 0)
df_inputs_prepr['emp_length:7-9'] = np.where(df_inputs_prepr['emp_length_int'].isin(range(7, 10)), 1, 0)
df_inputs_prepr['emp_length:10'] = np.where(df_inputs_prepr['emp_length_int'].isin([10]), 1, 0)
# ### Preprocessing Continuous Variables: Creating Dummy Variables, Part 2
df_inputs_prepr['mths_since_issue_d'].unique()
df_inputs_prepr['mths_since_issue_d_factor'] = pd.cut(df_inputs_prepr['mths_since_issue_d'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_inputs_prepr['mths_since_issue_d_factor']
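# A minimal sketch of fine-classing with 'cut' on a toy Series (4 bins instead of 50):

```python
import pandas as pd

# Fine-classing sketch: split values into 4 equal-width intervals.
s = pd.Series([1, 5, 9, 13, 17])
binned = pd.cut(s, 4)
print(binned.value_counts().sort_index().tolist())  # [2, 1, 1, 1]
```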
# mths_since_issue_d
df_temp = woe_ordered_continuous(df_inputs_prepr, 'mths_since_issue_d_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# We have to rotate the labels because we cannot read them otherwise.
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values, rotating the labels 90 degrees.
plot_by_woe(df_temp.iloc[3: , : ], 90)
# We plot the weight of evidence values.
# We create the following categories:
# < 38, 38 - 39, 40 - 41, 42 - 48, 49 - 52, 53 - 64, 65 - 84, > 84.
df_inputs_prepr['mths_since_issue_d:<38'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(38)), 1, 0)
df_inputs_prepr['mths_since_issue_d:38-39'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(38, 40)), 1, 0)
df_inputs_prepr['mths_since_issue_d:40-41'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(40, 42)), 1, 0)
df_inputs_prepr['mths_since_issue_d:42-48'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(42, 49)), 1, 0)
df_inputs_prepr['mths_since_issue_d:49-52'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(49, 53)), 1, 0)
df_inputs_prepr['mths_since_issue_d:53-64'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(53, 65)), 1, 0)
df_inputs_prepr['mths_since_issue_d:65-84'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(65, 85)), 1, 0)
df_inputs_prepr['mths_since_issue_d:>84'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(85, int(df_inputs_prepr['mths_since_issue_d'].max()) + 1)), 1, 0)
# int_rate
df_inputs_prepr['int_rate_factor'] = pd.cut(df_inputs_prepr['int_rate'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'int_rate_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# +
# '< 9.548', '9.548 - 12.025', '12.025 - 15.74', '15.74 - 20.281', '> 20.281'
# -
df_inputs_prepr['int_rate:<9.548'] = np.where((df_inputs_prepr['int_rate'] <= 9.548), 1, 0)
df_inputs_prepr['int_rate:9.548-12.025'] = np.where((df_inputs_prepr['int_rate'] > 9.548) & (df_inputs_prepr['int_rate'] <= 12.025), 1, 0)
df_inputs_prepr['int_rate:12.025-15.74'] = np.where((df_inputs_prepr['int_rate'] > 12.025) & (df_inputs_prepr['int_rate'] <= 15.74), 1, 0)
df_inputs_prepr['int_rate:15.74-20.281'] = np.where((df_inputs_prepr['int_rate'] > 15.74) & (df_inputs_prepr['int_rate'] <= 20.281), 1, 0)
df_inputs_prepr['int_rate:>20.281'] = np.where((df_inputs_prepr['int_rate'] > 20.281), 1, 0)
# funded_amnt
df_inputs_prepr['funded_amnt_factor'] = pd.cut(df_inputs_prepr['funded_amnt'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'funded_amnt_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# ### Data Preparation: Continuous Variables, Part 1 and 2: Homework
# mths_since_earliest_cr_line
df_inputs_prepr['mths_since_earliest_cr_line_factor'] = pd.cut(df_inputs_prepr['mths_since_earliest_cr_line'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'mths_since_earliest_cr_line_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
plot_by_woe(df_temp.iloc[6: , : ], 90)
# We plot the weight of evidence values.
# We create the following categories:
# < 140, # 141 - 164, # 165 - 247, # 248 - 270, # 271 - 352, # > 352
df_inputs_prepr['mths_since_earliest_cr_line:<140'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(140)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:141-164'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(140, 165)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:165-247'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(165, 248)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:248-270'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(248, 271)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:271-352'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(271, 353)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:>352'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(353, int(df_inputs_prepr['mths_since_earliest_cr_line'].max()) + 1)), 1, 0)
# delinq_2yrs
df_temp = woe_ordered_continuous(df_inputs_prepr, 'delinq_2yrs', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# Categories: 0, 1-3, >=4
df_inputs_prepr['delinq_2yrs:0'] = np.where((df_inputs_prepr['delinq_2yrs'] == 0), 1, 0)
df_inputs_prepr['delinq_2yrs:1-3'] = np.where((df_inputs_prepr['delinq_2yrs'] >= 1) & (df_inputs_prepr['delinq_2yrs'] <= 3), 1, 0)
df_inputs_prepr['delinq_2yrs:>=4'] = np.where((df_inputs_prepr['delinq_2yrs'] >= 4), 1, 0)
# inq_last_6mths
df_temp = woe_ordered_continuous(df_inputs_prepr, 'inq_last_6mths', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# Categories: 0, 1 - 2, 3 - 6, > 6
df_inputs_prepr['inq_last_6mths:0'] = np.where((df_inputs_prepr['inq_last_6mths'] == 0), 1, 0)
df_inputs_prepr['inq_last_6mths:1-2'] = np.where((df_inputs_prepr['inq_last_6mths'] >= 1) & (df_inputs_prepr['inq_last_6mths'] <= 2), 1, 0)
df_inputs_prepr['inq_last_6mths:3-6'] = np.where((df_inputs_prepr['inq_last_6mths'] >= 3) & (df_inputs_prepr['inq_last_6mths'] <= 6), 1, 0)
df_inputs_prepr['inq_last_6mths:>6'] = np.where((df_inputs_prepr['inq_last_6mths'] > 6), 1, 0)
# open_acc
df_temp = woe_ordered_continuous(df_inputs_prepr, 'open_acc', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
plot_by_woe(df_temp.iloc[ : 40, :], 90)
# We plot the weight of evidence values.
# Categories: '0', '1-3', '4-12', '13-17', '18-22', '23-25', '26-30', '>=31'
df_inputs_prepr['open_acc:0'] = np.where((df_inputs_prepr['open_acc'] == 0), 1, 0)
df_inputs_prepr['open_acc:1-3'] = np.where((df_inputs_prepr['open_acc'] >= 1) & (df_inputs_prepr['open_acc'] <= 3), 1, 0)
df_inputs_prepr['open_acc:4-12'] = np.where((df_inputs_prepr['open_acc'] >= 4) & (df_inputs_prepr['open_acc'] <= 12), 1, 0)
df_inputs_prepr['open_acc:13-17'] = np.where((df_inputs_prepr['open_acc'] >= 13) & (df_inputs_prepr['open_acc'] <= 17), 1, 0)
df_inputs_prepr['open_acc:18-22'] = np.where((df_inputs_prepr['open_acc'] >= 18) & (df_inputs_prepr['open_acc'] <= 22), 1, 0)
df_inputs_prepr['open_acc:23-25'] = np.where((df_inputs_prepr['open_acc'] >= 23) & (df_inputs_prepr['open_acc'] <= 25), 1, 0)
df_inputs_prepr['open_acc:26-30'] = np.where((df_inputs_prepr['open_acc'] >= 26) & (df_inputs_prepr['open_acc'] <= 30), 1, 0)
df_inputs_prepr['open_acc:>=31'] = np.where((df_inputs_prepr['open_acc'] >= 31), 1, 0)
# pub_rec
df_temp = woe_ordered_continuous(df_inputs_prepr, 'pub_rec', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# Categories '0-2', '3-4', '>=5'
df_inputs_prepr['pub_rec:0-2'] = np.where((df_inputs_prepr['pub_rec'] >= 0) & (df_inputs_prepr['pub_rec'] <= 2), 1, 0)
df_inputs_prepr['pub_rec:3-4'] = np.where((df_inputs_prepr['pub_rec'] >= 3) & (df_inputs_prepr['pub_rec'] <= 4), 1, 0)
df_inputs_prepr['pub_rec:>=5'] = np.where((df_inputs_prepr['pub_rec'] >= 5), 1, 0)
# total_acc
df_inputs_prepr['total_acc_factor'] = pd.cut(df_inputs_prepr['total_acc'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'total_acc_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# Categories: '<=27', '28-51', '>=52'
df_inputs_prepr['total_acc:<=27'] = np.where((df_inputs_prepr['total_acc'] <= 27), 1, 0)
df_inputs_prepr['total_acc:28-51'] = np.where((df_inputs_prepr['total_acc'] >= 28) & (df_inputs_prepr['total_acc'] <= 51), 1, 0)
df_inputs_prepr['total_acc:>=52'] = np.where((df_inputs_prepr['total_acc'] >= 52), 1, 0)
# acc_now_delinq
df_temp = woe_ordered_continuous(df_inputs_prepr, 'acc_now_delinq', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp)
# We plot the weight of evidence values.
# Categories: '0', '>=1'
df_inputs_prepr['acc_now_delinq:0'] = np.where((df_inputs_prepr['acc_now_delinq'] == 0), 1, 0)
df_inputs_prepr['acc_now_delinq:>=1'] = np.where((df_inputs_prepr['acc_now_delinq'] >= 1), 1, 0)
# total_rev_hi_lim
df_inputs_prepr['total_rev_hi_lim_factor'] = pd.cut(df_inputs_prepr['total_rev_hi_lim'], 2000)
# Here we do fine-classing: using the 'cut' method, we split the variable into 2000 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'total_rev_hi_lim_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp.iloc[: 50, : ], 90)
# We plot the weight of evidence values.
# Categories
# '<=5K', '5K-10K', '10K-20K', '20K-30K', '30K-40K', '40K-55K', '55K-95K', '>95K'
df_inputs_prepr['total_rev_hi_lim:<=5K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] <= 5000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:5K-10K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 5000) & (df_inputs_prepr['total_rev_hi_lim'] <= 10000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:10K-20K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 10000) & (df_inputs_prepr['total_rev_hi_lim'] <= 20000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:20K-30K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 20000) & (df_inputs_prepr['total_rev_hi_lim'] <= 30000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:30K-40K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 30000) & (df_inputs_prepr['total_rev_hi_lim'] <= 40000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:40K-55K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 40000) & (df_inputs_prepr['total_rev_hi_lim'] <= 55000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:55K-95K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 55000) & (df_inputs_prepr['total_rev_hi_lim'] <= 95000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:>95K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 95000), 1, 0)
# installment
df_inputs_prepr['installment_factor'] = pd.cut(df_inputs_prepr['installment'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'installment_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# ### Preprocessing Continuous Variables: Creating Dummy Variables, Part 3
# annual_inc
df_inputs_prepr['annual_inc_factor'] = pd.cut(df_inputs_prepr['annual_inc'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'annual_inc_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
df_inputs_prepr['annual_inc_factor'] = pd.cut(df_inputs_prepr['annual_inc'], 100)
# Here we do fine-classing: using the 'cut' method, we split the variable into 100 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'annual_inc_factor', df_targets_prepr)
# We calculate weight of evidence.
df_temp
# Initial examination shows that there are too few individuals with large incomes and too many with small incomes.
# Hence, we are going to have one category for incomes above 140K, and we are going to apply our approach to determine
# the categories for everyone earning 140K or less.
df_inputs_prepr_temp = df_inputs_prepr.loc[df_inputs_prepr['annual_inc'] <= 140000, : ].copy()
#loan_data_temp = loan_data_temp.reset_index(drop = True)
#df_inputs_prepr_temp
df_inputs_prepr_temp["annual_inc_factor"] = pd.cut(df_inputs_prepr_temp['annual_inc'], 50)
# Here we do fine-classing: using the 'cut' method, we split the variable into 50 categories by its values.
df_temp = woe_ordered_continuous(df_inputs_prepr_temp, 'annual_inc_factor', df_targets_prepr[df_inputs_prepr_temp.index])
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# WoE is monotonically decreasing with income, so we split income into ordered categories of roughly 10K-20K width up to 140K, plus a single '>140K' category.
df_inputs_prepr['annual_inc:<20K'] = np.where((df_inputs_prepr['annual_inc'] <= 20000), 1, 0)
df_inputs_prepr['annual_inc:20K-30K'] = np.where((df_inputs_prepr['annual_inc'] > 20000) & (df_inputs_prepr['annual_inc'] <= 30000), 1, 0)
df_inputs_prepr['annual_inc:30K-40K'] = np.where((df_inputs_prepr['annual_inc'] > 30000) & (df_inputs_prepr['annual_inc'] <= 40000), 1, 0)
df_inputs_prepr['annual_inc:40K-50K'] = np.where((df_inputs_prepr['annual_inc'] > 40000) & (df_inputs_prepr['annual_inc'] <= 50000), 1, 0)
df_inputs_prepr['annual_inc:50K-60K'] = np.where((df_inputs_prepr['annual_inc'] > 50000) & (df_inputs_prepr['annual_inc'] <= 60000), 1, 0)
df_inputs_prepr['annual_inc:60K-70K'] = np.where((df_inputs_prepr['annual_inc'] > 60000) & (df_inputs_prepr['annual_inc'] <= 70000), 1, 0)
df_inputs_prepr['annual_inc:70K-80K'] = np.where((df_inputs_prepr['annual_inc'] > 70000) & (df_inputs_prepr['annual_inc'] <= 80000), 1, 0)
df_inputs_prepr['annual_inc:80K-90K'] = np.where((df_inputs_prepr['annual_inc'] > 80000) & (df_inputs_prepr['annual_inc'] <= 90000), 1, 0)
df_inputs_prepr['annual_inc:90K-100K'] = np.where((df_inputs_prepr['annual_inc'] > 90000) & (df_inputs_prepr['annual_inc'] <= 100000), 1, 0)
df_inputs_prepr['annual_inc:100K-120K'] = np.where((df_inputs_prepr['annual_inc'] > 100000) & (df_inputs_prepr['annual_inc'] <= 120000), 1, 0)
df_inputs_prepr['annual_inc:120K-140K'] = np.where((df_inputs_prepr['annual_inc'] > 120000) & (df_inputs_prepr['annual_inc'] <= 140000), 1, 0)
df_inputs_prepr['annual_inc:>140K'] = np.where((df_inputs_prepr['annual_inc'] > 140000), 1, 0)
# mths_since_last_delinq
# We have to create one category for missing values and do fine and coarse classing for the rest.
df_inputs_prepr_temp = df_inputs_prepr[pd.notnull(df_inputs_prepr['mths_since_last_delinq'])].copy()
df_inputs_prepr_temp['mths_since_last_delinq_factor'] = pd.cut(df_inputs_prepr_temp['mths_since_last_delinq'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr_temp, 'mths_since_last_delinq_factor', df_targets_prepr[df_inputs_prepr_temp.index])
# We calculate weight of evidence.
df_temp
plot_by_woe(df_temp, 90)
# We plot the weight of evidence values.
# Categories: Missing, 0-3, 4-30, 31-56, >=57
df_inputs_prepr['mths_since_last_delinq:Missing'] = np.where((df_inputs_prepr['mths_since_last_delinq'].isnull()), 1, 0)
df_inputs_prepr['mths_since_last_delinq:0-3'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 0) & (df_inputs_prepr['mths_since_last_delinq'] <= 3), 1, 0)
df_inputs_prepr['mths_since_last_delinq:4-30'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 4) & (df_inputs_prepr['mths_since_last_delinq'] <= 30), 1, 0)
df_inputs_prepr['mths_since_last_delinq:31-56'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 31) & (df_inputs_prepr['mths_since_last_delinq'] <= 56), 1, 0)
df_inputs_prepr['mths_since_last_delinq:>=57'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 57), 1, 0)
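# The repetitive `np.where` calls above all follow one pattern: a 0/1 dummy per coarse-classed interval. A hypothetical helper capturing that pattern (a sketch only — the notebook itself writes the dummies out explicitly, and the edge/label arguments here are illustrative):

```python
import numpy as np
import pandas as pd

def add_interval_dummies(df, variable, edges, labels):
    """Hypothetical helper: one 0/1 dummy per (lo, hi] bin, named '<variable>:<label>'."""
    for (lo, hi), label in zip(zip(edges[:-1], edges[1:]), labels):
        df[variable + ':' + label] = np.where(
            (df[variable] > lo) & (df[variable] <= hi), 1, 0)
    return df

# Demonstration on a toy income column with invented bin edges.
demo = pd.DataFrame({'annual_inc': [15000, 45000, 90000, 200000]})
add_interval_dummies(demo, 'annual_inc',
                     edges=[-np.inf, 20000, 50000, 100000, np.inf],
                     labels=['<=20K', '20K-50K', '50K-100K', '>100K'])
```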
# Credit Risk Modeling/Credit Risk Modeling - Preparation - With Comments - 5-16.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
rt_data = pd.read_csv('../data/hue_data.csv')
rt_data
rt_data.groupby('Method').describe()
# +
my_dpi = 94
plt.rcParams.update({'font.size': 13})
plt.figure(figsize=(800/my_dpi, 600/my_dpi), dpi=my_dpi)
ax = sns.boxplot(x="Method", y="ResponseTime", data=rt_data, palette="Set3")
plt.ylabel('Response time (s)')
x = [0.056602, 1.354000, 3.176624,4.532000, 7.039000]
for i, v in enumerate(x):
plt.text(i-0.10, x[i], '%.2f'%v, color='black', size=11)
plt.tight_layout()
plt.xticks(rotation=-10)
plt.savefig('./cloud_vs_local_boxplot.png')
plt.show()
# -
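# The annotation values in `x` above are hard-coded; they can instead be computed from the data so the labels stay in sync with the boxplot. A sketch, assuming the same 'Method'/'ResponseTime' columns as `rt_data` (the toy frame below stands in for the real CSV):

```python
import pandas as pd

# Toy stand-in for rt_data with the same column layout
df = pd.DataFrame({
    'Method': ['local', 'local', 'cloud', 'cloud', 'cloud'],
    'ResponseTime': [0.05, 0.07, 1.2, 1.4, 1.6],
})

# Per-method medians; these could be iterated to place the plt.text annotations
medians = df.groupby('Method')['ResponseTime'].median()
```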
# graph/figure08-responsetime_of_hue_by_method.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extract Hebrew to LXX verb mappings from the CATSS dataset
#
# The CATSS parallel dataset has been processed into JSON files in:
# https://github.com/codykingham/CATSS_parsers
#
# The dataset is not yet free of all parsing errors, but the overwhelming majority of
# lines in the dataset (greater than 99%) have been successfully parsed.
#
# We will pick through this dataset to select some alignments of interest. At the
# moment we are most interested in verb alignments. Thus we will focus on the verbs.
#
# ## Making the connections
#
# Several connections need to be made to successfully retrieve a verb
# alignment to the Hebrew:
#
# * Hebrew word in parallel dataset needs to successfully string-match
# with its BHSA equivalent (requires some normalizing to get them to
# match). This is done within a verse.
# * The verse of the Greek word needs to be successfully cross-referenced
# to its LXX verse reference. We can use the Copenhagen-Alliance's
# [verse mappings](https://github.com/Copenhagen-Alliance/versification-specification/tree/master/versification-mappings/standard-mappings) to
# accomplish this; the parallel dataset has its own way to indicate where the
# versification of Rahlfs differs from the BHS, but at the moment I do not
# trust its reliability.
# * The string of the Greek word within the selected parallel verse needs to be matched
# with the string of the word in the morphology dataset
#
# If these specifications are met, we have a match. The parallel will
# be stored in a dictionary format that will map the BHSA node number
# to all of the word data for a matching Greek verb.
#
# ```
# {
# 3: {
# "utf8": "ἐποίησεν",
# "trans": "E)POI/HSEN",
# "typ": "verb",
# "styp": "VAI",
# "lexeme": "POIE/W",
# "morph_code": "VAI.AAI3S",
# "number": "sg",
# "tense": "aorist",
# "voice": "active",
# "mood": "indc",
# "person": "3"
# },
# }
# ```
# The BHSA node number can easily be used in Text-Fabric to get data on the word:
#
# ```
# T.text(3) == 'בָּרָא'
# F.sp.v(3) == 'verb'
# ```
# +
import sys
import json
import regex
import collections
from tf.app import use
from pathlib import Path
import pandas as pd
sys.path.append('../../scripts/hebrew')
from positions import PositionsTF
# configure output files
BHSA2LXX = Path('../../_private_/verb_data/bhsa2lxx.json')
LXXVERSES = Path('../../_private_/verb_data/lxx_verses.json')
# get data locations for LXX and alignments
github_dir = Path.home().joinpath('github')
catss_repo = github_dir.joinpath('codykingham/CATSS_parsers')
para_dir = catss_repo.joinpath('JSON/parallel')
morph_dir = catss_repo.joinpath('JSON/morphology')
# load the LXX data
para_data = [
json.loads(file.read_text())
for file in sorted(para_dir.glob('*.par.json'))
]
morph_data = [
json.loads(file.read_text())
for file in sorted(morph_dir.glob('*.mlxx.json'))
]
# get versification map for LXX and load it
lxx_verse_map_file = github_dir.joinpath('copenhagen_alliance/versification-specification\
/versification-mappings/standard-mappings/lxx.json')
lxx_verse_map = json.loads(lxx_verse_map_file.read_text())['mappedVerses']
# load Text-Fabric (for Hebrew BHSA data)
# and assign short-form variables for easy access to its methods
bhsa = use('bhsa')
api = bhsa.api
F, E, T, L = api.F, api.E, api.T, api.L
# -
para_data[0][0]
morph_data[0][0][:3]
list(lxx_verse_map.items())[:10]
# ## Map Verse References
#
# The basis of all our connections will be on verse references. We will map the
# verse references to the USX schema:
#
# https://ubsicap.github.io/usx/vocabularies.html
#
# The CATSS parallel and morphology data have already been normalized with USX-style
# references. The only exceptions are cases such as Joshua and Judges, which have
# an appended A or B (joined with an underscore).
#
# The USX verse references have mappings between LXX and MT, provided by the
# Copenhagen Alliance verse mappings:
# https://github.com/Copenhagen-Alliance/versification-specification/tree/master/versification-mappings/standard-mappings
#
# We will need to make some modifications to the versification for Ezra-Nehemiah,
# since the CATSS dataset follows the Greek division of the book into 2Esdras or
# Εσδρας β, and the Copenhagen mapping / USX use the Latin division for these books.
# For more on this, see https://en.wikipedia.org/wiki/Esdras#Naming_conventions
#
# The mappings we need to make are:
# * 2Esdras 1:1-10:44 == Ezra
# * 2Esdras 11:1-23:31 == Nehemiah
#
# NB: we only map relevant books (i.e. those with Hebrew attestations).
#
# We also create a dictionary, `mt2lxx_verse`, which contains 1-to-1 verse mappings
# from an MT verse reference to a LXX reference.
#
# NB: Copenhagen Alliance verse mappings start from 0 in the Psalms, presumably for the
# superscriptions (?). We do not have these verses in any of the datasets so we ignore
# 0 verses.
#
#
# +
# filter out Ezra/Ezdras mappings
lxx_verse_map2 = {
v1:v2 for v1, v2 in lxx_verse_map.items()
if not regex.match('EZR|2ES', v1)
}
# add new mappings
lxx_verse_map2.update({
'2ES 1:1-11': 'EZR 1:1-11',
'2ES 2:1-70': 'EZR 2:1-70',
'2ES 3:1-13': 'EZR 3:1-13',
'2ES 4:1-24': 'EZR 4:1-24',
'2ES 5:1-17': 'EZR 5:1-17',
'2ES 6:1-22': 'EZR 6:1-22',
'2ES 7:1-28': 'EZR 7:1-28',
'2ES 8:1-36': 'EZR 8:1-36',
'2ES 9:1-15': 'EZR 9:1-15',
'2ES 10:1-44': 'EZR 10:1-44',
"2ES 11:1-11": "NEH 1:1-11",
"2ES 12:1-20": "NEH 2:1-20",
"2ES 13:1-38": "NEH 3:1-38",
"2ES 14:1-17": "NEH 4:1-17",
"2ES 15:1-19": "NEH 5:1-19",
"2ES 16:1-19": "NEH 6:1-19",
"2ES 17:1-72": "NEH 7:1-72",
"2ES 18:1-18": "NEH 8:1-18",
"2ES 19:1-37": "NEH 9:1-37",
"2ES 20:1-40": "NEH 10:1-40",
"2ES 21:1-36": "NEH 11:1-36",
"2ES 22:1-47": "NEH 12:1-47",
"2ES 23:1-31": "NEH 13:1-31",
})
mt2lxx_versemap = {v:k for k,v in lxx_verse_map2.items()}
def generate_verses(reference):
"""Split a reference range into individual references"""
reference = reference.replace(':0', ':1') # replace zero verses; don't know why they are there
book = reference.split()[0]
ch_vss = reference.split()[1]
ch = ch_vss.split(':')[0]
try:
vs_start, vs_end = ch_vss.split(':')[1].split('-')
except:
raise Exception(reference)
refs = []
for i in range(int(vs_start), int(vs_end)+1):
refs.append(f'{book} {ch}:{i}')
return refs
# expand versemap to include every verse in between the ranges
# so that a verse can be converted with a simple dict lookup
mt2lxx_verse = {}
for lxx_vss, mt_vss in lxx_verse_map2.items():
if '-' in lxx_vss and '-' in mt_vss:
lxx_refs = generate_verses(lxx_vss)
mt_refs = generate_verses(mt_vss)
mt2lxx_verse.update(zip(mt_refs, lxx_refs))
elif '-' not in lxx_vss and '-' not in mt_vss:
mt2lxx_verse[mt_vss] = lxx_vss
else:
raise Exception('NB: a not 1-to-1 mapping found')
# -
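# The core of `generate_verses` — expanding a 'BOOK CH:V1-V2' range into individual references — can be restated standalone as a quick sanity check (an illustrative duplicate, not a replacement for the function above):

```python
def expand_verse_range(reference):
    """Split a 'BOOK CH:V1-V2' range into individual 'BOOK CH:V' references."""
    book, ch_vss = reference.split()
    ch, vss = ch_vss.split(':')
    start, end = (int(v) for v in vss.split('-'))
    return [f'{book} {ch}:{v}' for v in range(start, end + 1)]
```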
# e.g.
mt2lxx_verse['PSA 115:1']
# ### Map BHSA verse references to USX abbreviations
bhsa2usx = {
'Genesis': 'GEN',
'Exodus': 'EXO',
'Leviticus': 'LEV',
'Numbers': 'NUM',
'Deuteronomy': 'DEU',
'Joshua': 'JOS_B', # NB: going with B col for now
'Judges': 'JDG_A', # NB: going with A col for now
'1_Samuel': '1SA',
'2_Samuel': '2SA',
'1_Kings': '1KI',
'2_Kings': '2KI',
'Isaiah': 'ISA',
'Jeremiah': 'JER',
'Ezekiel': 'EZE',
'Hosea': 'HOS',
'Joel': 'JOL',
'Amos': 'AMO',
'Obadiah': 'OBA',
'Jonah': 'JON',
'Micah': 'MIC',
'Nahum': 'NAM',
'Habakkuk': 'HAB',
'Zephaniah': 'ZEP',
'Haggai': 'HAG',
'Zechariah': 'ZEC',
'Malachi': 'MAL',
'Psalms': 'PSA',
'Job': 'JOB',
'Proverbs': 'PRO',
'Ruth': 'RUT',
'Song_of_songs': 'SNG',
'Ecclesiastes': 'ECC',
'Lamentations': 'LAM',
'Esther': 'EST',
'Daniel': 'DAN',
'Ezra': 'EZR',
'Nehemiah': 'NEH',
'1_Chronicles': '1CH',
'2_Chronicles': '2CH',
}
# ### Map data to verse references
#
# Now we map the parallel and morphology data to verse references.
# These will serve as the primary basis for building the connections
# between the datasets.
# +
verse2para = {}
verse2morph = {}
for dataset, data_dict in [(para_data, verse2para), (morph_data, verse2morph)]:
for book in dataset:
for verse in book:
ref = verse[0]
verse_data = verse[1:]
data_dict[ref] = verse_data
# -
verse2para['GEN 1:1'] # Now we can access the data by verse references
verse2morph['GEN 1:1'][:2]
# ## Loop through BHSA candidates and build the connections
# +
missed_lxx_refs = set()
def get_greek_morphology(word, ref):
"""Look up a word's morphology data based on its reference."""
lxx_ref = mt2lxx_verse.get(ref, ref) # NB the MT to LXX verse mapping
# no parallel in the Greek
# TODO: double check this status
if lxx_ref not in verse2morph:
missed_lxx_refs.add(ref)
return None
for word_data in verse2morph[lxx_ref]:
if word_data['utf8'] == word:
word_data['LXX_verse'] = lxx_ref
return word_data
# -
# e.g.
ref = 'GEN 1:1'
get_greek_morphology(verse2para[ref][0][2][1][0], ref)
# +
match_ref = regex.compile(r'([A-Z0-9_]+) (\d+)?:?(\d+)')
def update_verse(verse_ref, tc_notes):
"""Iterate through text-critical notation and update verse if necessary."""
book, chapter, verse = match_ref.match(verse_ref).groups()
for note in tc_notes:
if note.isnumeric():
verse = note
break
return f'{book} {chapter}:{verse}'
# +
bhsa2lxx = {}
missed = []
bad_mtverses = set()
in_clause = lambda node1, node2: node2 in L.d(L.u(node1,'clause')[0], 'word')
# begin making the connections
for verb in F.pdp.s('verb'):
P = PositionsTF(verb, 'clause', api).get
# gather various candidates to attempt to match with the
# Hebrew parallel data stored in the LXX parallels dataset
possible_nodes = [(verb,)]
# handle attached elements to normalize with LXX
if P(-1, 'lex') == 'W':
bhsa_nodes = (verb-1, verb)
possible_nodes.append(bhsa_nodes)
elif F.vt.v(verb).startswith('inf') and P(-1, 'pdp') == 'prep':
bhsa_nodes = (verb-1, verb)
possible_nodes.append(bhsa_nodes)
if P(-2,'lex') == 'W':
bhsa_nodes = (P(-2),) + bhsa_nodes
possible_nodes.append(bhsa_nodes)
elif F.vt.v(verb).startswith('ptc') and P(-1, 'lex') == 'H':
bhsa_nodes = (verb-1, verb)
possible_nodes.append(bhsa_nodes)
# assemble the strings
possible_strings = set()
for pnn in possible_nodes:
possible_strings.add(''.join(F.g_cons_utf8.v(n) for n in pnn))
book, chapter, verse = T.sectionFromNode(verb)
usx_book = bhsa2usx[book]
if usx_book == 'OBA':
mt_ref = f'{usx_book} {verse}'
else:
mt_ref = f'{usx_book} {chapter}:{verse}'
# attempt link to parallel data line on basis of Hebrew text
link = {}
try:
para_cols = verse2para[mt_ref]
except:
bad_mtverses.add(mt_ref)
continue
for heb_colA, heb_colB, grk_col in para_cols :
if not heb_colA or heb_colA[0] == 'PARSING_ERROR':
continue
# attempt Hebrew connection to identify the right data line
heb_match = None
for word, tc_note in heb_colA:
if word in possible_strings:
heb_match = word
break
if not heb_match:
continue
# we have the right data line
# now attempt to make Greek connection
greek_match = None
if heb_match:
for greek_word, tc_note in grk_col:
# modify reference
lxx_ref = update_verse(mt_ref, tc_note)
morph = get_greek_morphology(greek_word, lxx_ref)
if morph and morph['typ'] == 'verb':
greek_match = morph
break
# we've made a connection!
# save the data and move on
if greek_match:
#verb_data = (verb, mt_ref, F.g_cons_utf8.v(verb))
link[verb] = greek_match
link[verb]['MT_verse'] = mt_ref
# record links and missed links
if link:
bhsa2lxx.update(link)
else:
missed.append((verb, mt_ref))
print('DONE')
percent_linked = round(100 * len(bhsa2lxx) / (len(bhsa2lxx) + len(missed)), 2)
percent_missed = round(100 * len(missed) / (len(bhsa2lxx) + len(missed)), 2)
print(f'\tlinked: {len(bhsa2lxx)} ({percent_linked}%)')
print(f'\tmissed: {len(missed)} ({percent_missed}%)')
# -
bad_mtverses # NB: these need to be fixed in the parallel database
# +
#verse2morph['GEN 3:17']
# -
bhsa2lxx[3]
for verb, ref in missed[:0]:  # widen the slice (e.g. missed[:10]) to inspect missed links
print(verb)
print(F.g_cons_utf8.v(verb))
print(ref)
for line in verse2para[ref]:
print('\t', line)
print()
# Export LXX Data JSON
with open(BHSA2LXX, 'w') as out:
json.dump(bhsa2lxx, out, ensure_ascii=False, indent=2)
# +
# Assemble and export plain text verse data for LXX in a CSV
rows = []
for verse, words in verse2morph.items():
row = {}
row['MT_verse'] = mt2lxx_verse.get(verse, verse)
row['LXX_verse'] = verse
row['text'] = ' '.join(w['utf8'] for w in words)
rows.append(row)
df = pd.DataFrame(rows)
df.head()
# -
with open(LXXVERSES, 'w') as outfile:
json.dump(rows, outfile, indent=2, ensure_ascii=False)
# workflow/notebooks/lxx/LXX_verbs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Instrument synergy
# The purpose of this notebook is to prototype support for multiple instruments operating on the same signal incident at Earth. The instruments may have different wavebands and the data may or may not be phase resolved for each instrument.
# The unexecuted notebook for this tutorial may be found in a GitHub repository, *together* with the necessary files that, due to their size, are not included in the X-PSI repository:
# ``` bash
# git clone https://github.com/ThomasEdwardRiley/xpsi_workshop.git </path/to/clone>
#
# # cd </path/to/clone>/tutorials/v0.5/
# ```
# You can use the default atmosphere extension module ``xpsi/surface_radiation_field/archive/hot/blackbody.pyx``. To run this tutorial, you should therefore be able to simply use the default extensions that are automatically compiled when X-PSI is installed.
# +
# %matplotlib inline
from __future__ import print_function, division
import os
import numpy as np
import math
import time
from matplotlib import pyplot as plt
from matplotlib import rcParams
from matplotlib.ticker import MultipleLocator, AutoLocator, AutoMinorLocator
from matplotlib import gridspec
from matplotlib import cm
import xpsi
from xpsi.global_imports import _c, _G, _dpr, gravradius, _csq, _km, _2pi
# -
class Telescope(object):
pass
NICER = Telescope()
XMM = Telescope() # fabricated toy that we'll just pretend is XMM as a placeholder!
# ## Likelihood
# Let us load a synthetic data set that we generated in advance, and know the fictitious exposure time for.
# +
obs_settings = dict(counts=np.loadtxt('data/NICER_realisation.dat', dtype=np.double),
channels=np.arange(20, 201),
phases=np.linspace(0.0, 1.0, 33),
first=0,last=180,
exposure_time=984307.6661)
NICER.data = xpsi.Data(**obs_settings)
# +
obs_settings = dict(counts=np.loadtxt('data/XMM_realisation.dat', dtype=np.double).reshape(-1,1),
channels=np.arange(20, 201),
phases=np.array([0.0, 1.0]),
first=0,last=180,
exposure_time=1818434.247359)
XMM.data = xpsi.Data(**obs_settings)
# +
rcParams['text.usetex'] = False
rcParams['font.size'] = 14.0
def veneer(x, y, axes, lw=1.0, length=8):
""" Make the plots a little more aesthetically pleasing. """
if x is not None:
if x[1] is not None:
axes.xaxis.set_major_locator(MultipleLocator(x[1]))
if x[0] is not None:
axes.xaxis.set_minor_locator(MultipleLocator(x[0]))
else:
axes.xaxis.set_major_locator(AutoLocator())
axes.xaxis.set_minor_locator(AutoMinorLocator())
if y is not None:
if y[1] is not None:
axes.yaxis.set_major_locator(MultipleLocator(y[1]))
if y[0] is not None:
axes.yaxis.set_minor_locator(MultipleLocator(y[0]))
else:
axes.yaxis.set_major_locator(AutoLocator())
axes.yaxis.set_minor_locator(AutoMinorLocator())
axes.tick_params(which='major', colors='black', length=length, width=lw)
axes.tick_params(which='minor', colors='black', length=int(length/2), width=lw)
plt.setp(axes.spines.values(), linewidth=lw, color='black')
def plot_one_pulse(pulse, x, data, label=r'Counts',
cmap=cm.magma, vmin=None, vmax=None):
""" Plot a pulse resolved over a single rotational cycle. """
fig = plt.figure(figsize = (7,7))
gs = gridspec.GridSpec(1, 2, width_ratios=[50,1])
ax = plt.subplot(gs[0])
ax_cb = plt.subplot(gs[1])
profile = ax.pcolormesh(x,
data.channels,
pulse,
cmap = cmap,
vmin=vmin,
vmax=vmax,
linewidth = 0,
rasterized = True)
profile.set_edgecolor('face')
ax.set_xlim([0.0, 1.0])
ax.set_yscale('log')
ax.set_ylabel(r'Channel')
ax.set_xlabel(r'Phase')
cb = plt.colorbar(profile,
cax = ax_cb)
cb.set_label(label=label, labelpad=25)
cb.solids.set_edgecolor('face')
veneer((0.05, 0.2), (None, None), ax)
plt.subplots_adjust(wspace = 0.025)
# -
# Now for the data:
plot_one_pulse(NICER.data.counts, NICER.data.phases, NICER.data)
# +
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111)
veneer((5, 25), (None,None), ax)
ax.plot(XMM.data.counts, 'k-', drawstyle='steps', label='XMM')
ax.plot(np.sum(NICER.data.counts, axis=1), 'r-', drawstyle='steps', label='NICER')
ax.legend()
ax.set_yscale('log')
ax.set_ylabel('Counts')
_ = ax.set_xlabel('Channel')
# -
# ### Instrument
# We require a model instrument object to transform incident specific flux signals into a form which enters directly in the sampling distribution of the data.
class CustomInstrument(xpsi.Instrument):
""" A model of the NICER telescope response. """
def __call__(self, signal, *args):
""" Overwrite base just to show it is possible.
We loaded only a submatrix of the total instrument response
matrix into memory, so here we can simplify the method in the
base class.
"""
matrix = self.construct_matrix()
self._folded_signal = np.dot(matrix, signal)
return self._folded_signal
@classmethod
def from_response_files(cls, ARF, RMF, max_input, min_input=0,
channel_edges=None, translate_edges=None, scaling=None):
""" Constructor which converts response files into :class:`numpy.ndarray`s.
:param str ARF: Path to ARF which is compatible with
:func:`numpy.loadtxt`.
:param str RMF: Path to RMF which is compatible with
:func:`numpy.loadtxt`.
:param str channel_edges: Optional path to edges which is compatible with
:func:`numpy.loadtxt`.
"""
if min_input != 0:
min_input = int(min_input)
max_input = int(max_input)
try:
ARF = np.loadtxt(ARF, dtype=np.double, skiprows=3)
RMF = np.loadtxt(RMF, dtype=np.double)
if channel_edges:
channel_edges = np.loadtxt(channel_edges, dtype=np.double, skiprows=3)[:,1:]
except:
print('A file could not be loaded.')
raise
if scaling is not None:
ARF[:,3] *= scaling
matrix = np.ascontiguousarray(RMF[min_input:max_input,20:201].T, dtype=np.double)
edges = np.zeros(ARF[min_input:max_input,3].shape[0]+1, dtype=np.double)
edges[0] = ARF[min_input,1]; edges[1:] = ARF[min_input:max_input,2]
for i in range(matrix.shape[0]):
matrix[i,:] *= ARF[min_input:max_input,3]
channels = np.arange(20, 201)
if translate_edges is not None:
edges += translate_edges
return cls(matrix, edges, channels, channel_edges[20:202,-2])
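# The folding performed in `__call__` above is a plain matrix product of the loaded response submatrix with the incident signal. A toy illustration with invented numbers (shapes only mimic the real response; all values are fabricated):

```python
import numpy as np

# Toy response matrix: 3 instrument channels x 4 incident energy intervals;
# entries play the role of effective area (cm^2) per (channel, energy) pair.
response = np.array([[10., 5., 0., 0.],
                     [0., 8., 6., 0.],
                     [0., 0., 4., 9.]])

# Incident signal: photon flux per energy interval at 2 rotational phases.
incident = np.array([[1.0, 0.5],
                     [2.0, 1.0],
                     [0.5, 0.2],
                     [0.1, 0.1]])

# Fold: expected count rate per channel, per phase.
folded = np.dot(response, incident)
```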
# Let's construct an instance.
NICER.instrument = CustomInstrument.from_response_files(ARF = '../../examples/model_data/nicer_v1.01_arf.txt',
RMF = '../../examples/model_data/nicer_v1.01_rmf_matrix.txt',
max_input = 500,
min_input = 0,
channel_edges = '../../examples/model_data/nicer_v1.01_rmf_energymap.txt')
XMM.instrument = CustomInstrument.from_response_files(ARF = '../../examples/model_data/nicer_v1.01_arf.txt',
scaling = 0.5,
RMF = '../../examples/model_data/nicer_v1.01_rmf_matrix.txt',
max_input = 500,
min_input = 0,
channel_edges = '../../examples/model_data/nicer_v1.01_rmf_energymap.txt',
translate_edges = 0.1)
# The NICER ``v1.01`` response matrix:
# +
fig = plt.figure(figsize = (14,7))
ax = fig.add_subplot(111)
veneer((25, 100), (10, 50), ax)
_ = ax.imshow(NICER.instrument.matrix,
cmap = cm.viridis,
rasterized = True)
ax.set_ylabel('Channel $-\;20$')
_ = ax.set_xlabel('Energy interval')
# -
# Summed over channel subset $[20,200]$:
# +
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111)
veneer((0.1, 0.5), (50,250), ax)
ax.plot((NICER.instrument.energy_edges[:-1] + NICER.instrument.energy_edges[1:])/2.0,
np.sum(NICER.instrument.matrix, axis=0), 'k-')
ax.plot((XMM.instrument.energy_edges[:-1] + XMM.instrument.energy_edges[1:])/2.0,
np.sum(XMM.instrument.matrix, axis=0), 'r-')
ax.set_ylabel('Effective area [cm$^{2}$]')
_ = ax.set_xlabel('Energy [keV]')
# -
# ### Signal
# +
from xpsi.likelihoods.default_background_marginalisation import eval_marginal_likelihood
from xpsi.likelihoods.default_background_marginalisation import precomputation
class CustomSignal(xpsi.Signal):
""" A custom calculation of the logarithm of the likelihood.
We extend the :class:`xpsi.Signal.Signal` class to make it callable.
We overwrite the body of the __call__ method. The docstring for the
abstract method is copied.
"""
def __init__(self, workspace_intervals = 1000, epsabs = 0, epsrel = 1.0e-8,
epsilon = 1.0e-3, sigmas = 10.0, support = None, *args, **kwargs):
""" Perform precomputation. """
super(CustomSignal, self).__init__(*args, **kwargs)
try:
self._precomp = precomputation(self._data.counts.astype(np.int32))
except AttributeError:
print('Warning: No data... can synthesise data but cannot evaluate a '
'likelihood function.')
else:
self._workspace_intervals = workspace_intervals
self._epsabs = epsabs
self._epsrel = epsrel
self._epsilon = epsilon
self._sigmas = sigmas
if support is not None:
self._support = support
else:
self._support = -1.0 * np.ones((self._data.counts.shape[0],2))
self._support[:,0] = 0.0
@property
def support(self):
return self._support
@support.setter
def support(self, obj):
self._support = obj
def __call__(self, **kwargs):
self.loglikelihood, self.expected_counts, self.background_signal, self.background_given_support = \
eval_marginal_likelihood(self._data.exposure_time,
self._data.phases,
self._data.counts,
self._signals,
self._phases,
self._shifts,
self._precomp,
self._support,
self._workspace_intervals,
self._epsabs,
self._epsrel,
self._epsilon,
self._sigmas,
kwargs.get('llzero'))
# -
# Note that if an additional overall phase-shift parameter is needed for further instruments whose recorded data are phase-resolved, it could be passed to the subclass above for the signal associated with a given telescope.
NICER.signal = CustomSignal(data = NICER.data,
instrument = NICER.instrument,
background = None,
interstellar = None,
workspace_intervals = 1000,
epsrel = 1.0e-8,
epsilon = 1.0e-3,
sigmas = 10.0)
XMM.signal = CustomSignal(data = XMM.data,
instrument = XMM.instrument,
background = None,
interstellar = None,
support = None,
workspace_intervals = 1000,
epsrel = 1.0e-8,
epsilon = 1.0e-3,
sigmas = 10.0)
# ### Star
spacetime = xpsi.Spacetime.fixed_spin(300.0)
# +
bounds = dict(super_colatitude = (None, None),
super_radius = (None, None),
phase_shift = (-0.5, 0.5),
super_temperature = (None, None))
# a simple circular, simply-connected spot
primary = xpsi.HotRegion(bounds=bounds,
values={}, # no initial values and no derived/fixed
symmetry=True,
omit=False,
cede=False,
concentric=False,
sqrt_num_cells=32,
min_sqrt_num_cells=10,
max_sqrt_num_cells=64,
num_leaves=100,
num_rays=200,
do_fast=False,
prefix='p') # unique prefix needed because >1 instance
# +
class derive(xpsi.Derive):
def __init__(self):
"""
We can pass a reference to the primary here instead
and store it as an attribute if there is risk of
the global variable changing.
This callable can for this simple case also be
achieved merely with a function instead of a magic
method associated with a class.
"""
pass
def __call__(self, boundto, caller = None):
# one way to get the required reference
global primary # unnecessary, but for clarity
return primary['super_temperature'] - 0.2
bounds['super_temperature'] = None # declare fixed/derived variable
secondary = xpsi.HotRegion(bounds=bounds, # can otherwise use same bounds
values={'super_temperature': derive()}, # create a callable value
symmetry=True,
omit=False,
cede=False,
concentric=False,
sqrt_num_cells=32,
min_sqrt_num_cells=10,
max_sqrt_num_cells=100,
num_leaves=100,
num_rays=200,
do_fast=False,
is_antiphased=True,
prefix='s') # unique prefix needed because >1 instance
# +
from xpsi import HotRegions
hot = HotRegions((primary, secondary))
# -
class CustomPhotosphere(xpsi.Photosphere):
""" Implement method for imaging."""
def _global_variables(self):
return np.array([self['p__super_colatitude'],
self['p__phase_shift'] * _2pi,
self['p__super_radius'],
self['p__super_temperature'],
self['s__super_colatitude'],
(self['s__phase_shift'] + 0.5) * _2pi,
self['s__super_radius'],
self.hot.objects[1]['s__super_temperature']])
photosphere = CustomPhotosphere(hot = hot, elsewhere = None,
values=dict(mode_frequency = spacetime['frequency']))
star = xpsi.Star(spacetime = spacetime, photospheres = photosphere)
likelihood = xpsi.Likelihood(star = star, signals = [NICER.signal, XMM.signal],
threads = 1, externally_updated = False)
# The instrument wavebands exhibit a high degree of overlap. The energies at which we compute incident specific flux signals are based on the union of wavebands, distributed logarithmically:
fig = plt.figure(figsize=(8,8))
plt.plot(XMM.signal.energies, 'kx')
ax = plt.gca()
veneer((5,20),(0.2,1.0),ax)
ax.set_xlabel('Index')
_ = ax.set_ylabel('Energy [keV]')
# This is a simple, non-adaptive protocol that avoids computing signals at nearly coincident energies for multiple telescopes, which would incur needless overhead.
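# As a concrete, illustrative sketch of this protocol (not the X-PSI internals): take the union of the instrument wavebands, lay a logarithmic grid over it, and merge energies that fall within a small logarithmic tolerance of their neighbour. The function name and tolerance are assumptions for the sketch.

```python
import numpy as np

def union_energy_grid(wavebands, num=64, log_tol=1e-3):
    """Logarithmic grid over the union of instrument wavebands.

    wavebands : list of (lo, hi) tuples in keV.
    Energies closer than log_tol in log-space to their predecessor
    are dropped, so the incident spectrum is never computed twice
    at nearly coincident energies.
    """
    lo = min(b[0] for b in wavebands)
    hi = max(b[1] for b in wavebands)
    grid = np.logspace(np.log10(lo), np.log10(hi), num)
    keep = np.ones(grid.size, dtype=bool)
    keep[1:] = np.diff(np.log(grid)) > log_tol  # merge near-coincident points
    return grid[keep]

energies = union_energy_grid([(0.2, 12.0), (0.3, 10.0)])
```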
# Let's call the ``likelihood`` object with the true model parameter values that we injected to generate the synthetic data rendered above, omitting background parameters:
# +
p = [1.4,
12.5,
0.2,
math.cos(1.25),
0.0,
1.0,
0.075,
6.2,
0.025,
math.pi - 1.0,
0.2]
t = time.time()
ll = likelihood(p, force=True) # force if you want to clear parameter value caches
print('ll = %.8f; time = %.3f' % (ll, time.time() - t))
# -
NICER.signal.loglikelihood # check NICER ll ~ -26713.602 ?
XMM.signal.loglikelihood # check XMM ll ~ -2608.841 ?
# Let's fabricate some rough prior information as the constrained support of the background parameters for XMM:
support = np.zeros((181, 2))
support[:,0] = XMM.signal.background_signal - 5.0 * np.sqrt(XMM.signal.background_signal)
support[:,1] = XMM.signal.background_signal + 5.0 * np.sqrt(XMM.signal.background_signal)
support /= XMM.data.exposure_time
XMM.signal.support = support
ll = likelihood(p, force=True)
# Let's confirm that the XMM background-marginalised likelihood did indeed change:
XMM.signal.loglikelihood
# The background-marginalised likelihood function has the following form. Subscripts N denote NICER, whilst subscripts X denote XMM.
#
# $$
# \begin{aligned}
# p(d_{\rm X}, d_{\rm N}, \{b_{\rm X}\} \,|\, s)
# \propto
# &
# \underbrace{\mathop{\int}_{\{0\}}^{\{\mathcal{U}_{\rm N}\}}
# p(d_{\rm N} \,|\, s, \{\mathbb{E}[b_{\rm N}]\}, \texttt{NICER})
# d\{\mathbb{E}[b_{\rm N}]\}}_{\rm exp( \texttt{NICER.signal.loglikelihood} )}\\
# &
# \times\underbrace{\mathop{\int}_{\{0\}}^{\{\mathcal{U}_{X}\}}
# p(d_{\rm X} \,|\, s, \{\mathbb{E}[b_{\rm X}]\}, \texttt{XMM})
# p(\{\mathbb{E}[b_{\rm X}]\} \,|\, \{b_{\rm X}\})d\{\mathbb{E}[b_{\rm X}]\}}_{\rm exp( \texttt{XMM.signal.loglikelihood} )}.
# \end{aligned}
# $$
#
# The term $p(\{\mathbb{E}[b_{\rm X}]\} \,|\, \{b_{\rm X}\})$ truncates the integral over XMM channel-by-channel background count rate variables to an interval $[a,b]$ in each channel, where $a,b\in\mathbb{R}^{+}$. This is the joint prior support of the variables $\{\mathbb{E}[b_{\rm X}]\}$ rendered in the spectral plot below. The form of the prior density $p(\{\mathbb{E}[b_{\rm X}]\} \,|\, \{b_{\rm X}\})$ is flat on this interval for each channel. This is a simplifying approximation to the probability of the background data $\{b_{\rm X}\}$ given the variables $\{\mathbb{E}[b_{\rm X}]\}$.
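# To make the integral concrete, here is a toy single-channel analogue (standalone NumPy, not the actual `eval_marginal_likelihood` routine): the Poisson likelihood is marginalised over a background expectation with a flat prior truncated to $[a,b]$, by averaging the likelihood over a grid on the support.

```python
import numpy as np
from math import lgamma

def marginal_loglike(counts, source, a, b, n_grid=4001):
    """log of the Poisson likelihood marginalised over a background
    rate with a flat prior truncated to [a, b]."""
    bg = np.linspace(a, b, n_grid)
    lam = source + bg
    logpmf = counts * np.log(lam) - lam - lgamma(counts + 1)
    # flat prior: the marginal is the prior-average of the likelihood
    return np.log(np.exp(logpmf).mean())

# toy numbers: 30 observed counts, 12 expected from the source
ll_good = marginal_loglike(30, 12.0, a=10.0, b=26.0)  # support brackets bg ~ 18
ll_bad = marginal_loglike(30, 12.0, a=0.0, b=2.0)     # support excludes it
```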
def plot_spectrum():
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111)
veneer((5, 25), (None,None), ax)
ax.fill_between(np.arange(support.shape[0]),
support[:,0]*XMM.data.exposure_time,
support[:,1]*XMM.data.exposure_time,
step = 'pre',
color = 'k',
alpha = 0.5,
label = 'background support')
ax.plot(XMM.signal.background_signal, 'b-', ls='steps', label='MCL background')
ax.plot(XMM.signal.expected_counts, 'k-', ls='steps', label='MCL counts given support', lw=5.0)
ax.plot(XMM.data.counts, 'r-', ls='steps', label='XMM data')
ax.legend()
ax.set_yscale('log')
ax.set_ylabel('Counts')
_ = ax.set_xlabel('Channel')
plot_spectrum()
# The spectrum labelled *MCL counts given support* means the expected signal from the pulsar, plus the background count vector that maximises the conditional likelihood function given that pulsar signal, subject to the background vector existing in the prior support. The spectrum labelled *MCL background*, on the other hand, is the background vector that maximises the conditional likelihood function, but *not* subject to the prior support.
likelihood['p__super_temperature'] = 6.1
likelihood.externally_updated = True
likelihood()
XMM.signal.loglikelihood
plot_spectrum()
# ## Synthesis
# In this notebook thus far we have not generated synthetic data. However, we did condition on synthetic data. Below we outline how that data was generated.
# ### Background
# The background radiation field incident on the model instrument for the purpose of generating synthetic data was a time-invariant powerlaw spectrum, and was transformed into a count-rate in each output channel using the response matrix for synthetic data generation. We would reproduce this background here by writing a custom subclass as follows.
class CustomBackground(xpsi.Background):
""" The background injected to generate synthetic data. """
def __init__(self, bounds=None, value=None):
# first the parameters that are fundamental to this class
doc = """
Powerlaw spectral index.
"""
index = xpsi.Parameter('powerlaw_index',
strict_bounds = (-3.0, -1.01),
bounds = bounds,
doc = doc,
symbol = r'$\Gamma$',
value = value,
permit_prepend = False) # because to be shared by multiple objects
super(CustomBackground, self).__init__(index)
def __call__(self, energy_edges, phases):
""" Evaluate the incident background field. """
G = self['powerlaw_index']
temp = np.zeros((energy_edges.shape[0] - 1, phases.shape[0]))
temp[:,0] = (energy_edges[1:]**(G + 1.0) - energy_edges[:-1]**(G + 1.0)) / (G + 1.0)
for i in range(phases.shape[0]):
temp[:,i] = temp[:,0]
self.background = temp
background = CustomBackground(bounds=(None, None)) # use strict bounds, but do not fix/derive
# We will use this same background signal, albeit with different normalisations, for both telescopes. This is simply to generate finite background contributions.
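# As a quick check on the closed-form bin integral used in ``CustomBackground.__call__`` above, the analytic expression for $\int_{E_i}^{E_{i+1}} E^{\Gamma}\,dE$ should agree with numerical quadrature. The sketch below is standalone, with arbitrary bin edges:

```python
import numpy as np

G = -2.0  # powerlaw index
edges = np.array([0.5, 1.0, 2.0, 5.0])

# closed form, as in CustomBackground.__call__
analytic = (edges[1:]**(G + 1.0) - edges[:-1]**(G + 1.0)) / (G + 1.0)

def quad(lo, hi, n=10001):
    # composite trapezoid rule for E**G on [lo, hi]
    x = np.linspace(lo, hi, n)
    y = x ** G
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

numeric = np.array([quad(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])])
```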
# ### Data format
# We are also in need of a simpler data object.
class SynthesiseData(xpsi.Data):
""" Custom data container to enable synthesis. """
def __init__(self, channels, phases, first, last):
self.channels = channels
self._phases = phases
try:
self._first = int(first)
self._last = int(last)
except TypeError:
raise TypeError('The first and last channels must be integers.')
if self._first >= self._last:
raise ValueError('The first channel number must be lower than '
'the last channel number.')
# Instantiate:
NICER.synth = SynthesiseData(np.arange(20,201), np.linspace(0.0, 1.0, 33), 0, 180)
XMM.synth = SynthesiseData(np.arange(20,201), np.array([0.0,1.0]), 0, 180)
# ### Custom method
from xpsi.tools.synthesise import synthesise_given_total_count_number as _synthesise
# +
def synthesise(self,
require_source_counts,
require_background_counts,
name='synthetic',
directory='./data',
**kwargs):
""" Synthesise data set.
"""
self._expected_counts, synthetic, _, _ = _synthesise(self._data.phases,
require_source_counts,
self._signals,
self._phases,
self._shifts,
require_background_counts,
self._background.registered_background)
try:
if not os.path.isdir(directory):
os.mkdir(directory)
except OSError:
print('Cannot create write directory.')
raise
np.savetxt(os.path.join(directory, name+'_realisation.dat'),
synthetic,
fmt = '%u')
self._write(self.expected_counts,
filename = os.path.join(directory, name+'_expected_hreadable.dat'),
fmt = '%.8e')
self._write(synthetic,
filename = os.path.join(directory, name+'_realisation_hreadable.dat'),
fmt = '%u')
def _write(self, counts, filename, fmt):
""" Write to file in human readable format. """
rows = len(self._data.phases) - 1
rows *= len(self._data.channels)
phases = self._data.phases[:-1]
array = np.zeros((rows, 3))
for i in range(counts.shape[0]):
for j in range(counts.shape[1]):
array[i*len(phases) + j,:] = self._data.channels[i], phases[j], counts[i,j]
np.savetxt(filename, array, fmt=['%u', '%.6f'] + [fmt])
# -
CustomSignal.synthesise = synthesise
CustomSignal._write = _write
# We now need to instantiate, and reconfigure the likelihood object:
# +
NICER.signal = CustomSignal(data = NICER.synth,
instrument = NICER.instrument,
background = background,
interstellar = None,
workspace_intervals = 1000,
epsrel = 1.0e-8,
epsilon = 1.0e-3,
sigmas = 10.0,
prefix='NICER')
XMM.signal = CustomSignal(data = XMM.synth,
instrument = XMM.instrument,
background = background,
interstellar = None,
workspace_intervals = 1000,
epsrel = 1.0e-8,
epsilon = 1.0e-3,
sigmas = 10.0,
prefix='XMM')
for h in hot.objects:
h.set_phases(num_leaves = 100)
likelihood = xpsi.Likelihood(star = star, signals = [NICER.signal, XMM.signal], threads=1)
# -
# ### Synthesise
# We proceed to synthesise. First we set an environment variable to seed the random number generator being called:
# %env GSL_RNG_SEED=0
# Check write path:
# !pwd
likelihood
# +
p = [1.4,
12.5,
0.2,
math.cos(1.25),
0.0,
1.0,
0.075,
6.2,
0.025,
math.pi - 1.0,
0.2,
-2.0]
NICER_kwargs = dict(require_source_counts = 2.0e6,
require_background_counts = 2.0e6,
name = 'new_NICER',
directory = './data')
XMM_kwargs = dict(require_source_counts = 1.0e6,
require_background_counts = 5.0e5,
name = 'new_XMM',
directory = './data')
likelihood.synthesise(p,
force = True,
NICER = NICER_kwargs,
XMM = XMM_kwargs) # SEED=0
# -
# Notice that the normalisations, with units photons/s/cm^2/keV, are different because we require so many background counts. This detail is unimportant for this notebook, wherein we simply want some finite background contributions.
plot_one_pulse(np.loadtxt('data/new_NICER_realisation.dat', dtype=np.double), NICER.data.phases, NICER.data)
# Check we have generated the same count numbers, given the same seed and resolution settings:
diff = XMM.data.counts - np.loadtxt('data/new_XMM_realisation.dat', dtype=np.double).reshape(-1,1)
(diff != 0.0).any()
diff = NICER.data.counts - np.loadtxt('data/new_NICER_realisation.dat', dtype=np.double)
(diff != 0.0).any()
# As discussed in the `Modeling` tutorial, with `xpsi.__version__` of `v0.6`, the same RNG seed does not yield the same ($\pm1$) count numbers, despite the small fractional difference between Poisson random variable expectations, for reasons that are unclear at present.
# +
x = np.loadtxt('data/NICER_expected_hreadable.dat')
y = np.loadtxt('data/new_NICER_expected_hreadable.dat')
xx = np.zeros(NICER.data.counts.shape)
yy = np.zeros(NICER.data.counts.shape)
for i in range(xx.shape[0]):
for j in range(xx.shape[1]):
xx[i,j] = x[i*32 + j,-1]
yy[i,j] = y[i*32 + j,-1]
# -
plot_one_pulse(yy-xx, NICER.data.phases, NICER.data)
_r = (yy-xx)/np.sqrt(yy)
plot_one_pulse(_r, NICER.data.phases, NICER.data,
'Standardised residuals',
cmap=cm.RdBu,
vmin=-np.max(np.fabs(_r)),
vmax=np.max(np.fabs(_r)))
plot_one_pulse(diff, NICER.data.phases, NICER.data)
# end of file: docs/source/multiple_instruments.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import os
import csv
import sys
import numpy as np
from scipy import sparse
from collections import Counter
import xgboost as xgb
try:
import xml.etree.cElementTree as ET
except ImportError:
import xml.etree.ElementTree as ET
from sklearn.cross_validation import cross_val_score
from sklearn.feature_extraction.text import TfidfVectorizer
import util
# +
import csv
import sys
def reorder_submission(file_to_reorder, newfile_name = "experiment_results.csv"):
# READ IN KEYS IN CORRECT ORDER AS LIST
with open('keys.csv','r') as f:
keyreader = csv.reader(f)
keys = [key[0] for key in keyreader]
# READ IN ALL PREDICTIONS, REGARDLESS OF ORDER
with open(file_to_reorder) as f:
oldfile_reader = csv.reader(f)
D = {}
for i,row in enumerate(oldfile_reader):
if i == 0:
continue
_id, pred = row
D[_id] = pred
# WRITE PREDICTIONS IN NEW ORDER
with open(newfile_name,'wb') as f:
writer = csv.writer(f)
writer.writerow(('Id','Prediction'))
for key in keys:
writer.writerow((key,D[key]))
print("".join(["Reordered ", file_to_reorder," and wrote to ", newfile_name]))
# -
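# The reordering logic above is easy to exercise in memory — a self-contained sketch (Python 3 syntax, unlike this Python 2 notebook) with io.StringIO standing in for the files; the ids and predictions are made up:

```python
import csv
import io

keys = ['id3', 'id1', 'id2']  # desired output order, as read from keys.csv

# read all predictions, regardless of order, skipping the header row
unordered = io.StringIO('Id,Prediction\nid1,4\nid2,7\nid3,1\n')
D = {row[0]: row[1] for i, row in enumerate(csv.reader(unordered)) if i > 0}

# write predictions back out in key order
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(('Id', 'Prediction'))
for key in keys:
    writer.writerow((key, D[key]))

reordered = out.getvalue().splitlines()
```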
X_train = sparse.load_npz("X_train.npz")
t_train = np.load("t_train.npy")
train_ids = np.load("train_ids.npy")
print X_train.shape
print train_ids.shape
X_test = sparse.load_npz("X_test.npz")
test_ids = np.load("test_ids.npy")
print X_test.shape
print test_ids.shape
y_train = np.zeros((len(t_train),len(util.malware_classes)))
y_train[np.arange(len(t_train)), t_train] = 1
y_train.shape
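# The fancy-indexing one-hot construction above, isolated as a small helper (standalone NumPy sketch):

```python
import numpy as np

def one_hot(labels, num_classes):
    # row i gets a 1 in column labels[i], zeros elsewhere
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1
    return out

Y = one_hot(np.array([2, 0, 1]), 3)
```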
# +
# X_train_bkup = X_train
# +
# X_test_bkup = X_test
# -
X_train = X_train_bkup
X_test = X_test_bkup
print t_train
param = {'max_depth':2, 'eta':1, 'silent':1, 'objective':'multi:softprob', 'num_class':15 }
dtrain = xgb.DMatrix(X_train, label = t_train)
dtest = xgb.DMatrix(X_test)
num_round = 2
bst = xgb.train(param, dtrain, num_round)
cv = xgb.cv(param, dtrain, 999, nfold=5, early_stopping_rounds=10, verbose_eval=1)
# make prediction
preds = bst.predict(dtest)  # needed below for np.argmax
print "making predictions..."
results = np.argmax(preds, axis=1)
import matplotlib.pyplot as plt
# %matplotlib inline
plt.hist(results, bins=15, normed = True)
plt.show()
# +
mean = 1
min_mean = 1
for num_round in [2,6,10]:
for max_depth in [2, 4, 6]:
for eta in np.arange(0.05, 0.25, 0.05):
for min_child_weight in [1, 2]:
for col_sample in [0.5, 1]:
print("Test params: {}, {}, {}, {}, {}".format(num_round, max_depth, eta, min_child_weight, col_sample))
param = {'max_depth':max_depth, 'eta':eta, 'min_child_weight':min_child_weight, 'colsample_bytree':col_sample, 'objective':'multi:softprob', 'num_class':15 }
dtrain = xgb.DMatrix(X_train, label = t_train)
dtest = xgb.DMatrix(X_test)
# num_round = 2
bst = xgb.train(param, dtrain, num_round)
cv = xgb.cv(param, dtrain, 999, nfold=5, early_stopping_rounds=10)
mean = cv['test-merror-mean'].min()
boost_rounds = cv['test-merror-mean'].argmin()
print("\ttest-merror {} for {} rounds".format(mean, boost_rounds))
if mean < min_mean:
min_mean = mean
best_params = (num_round,max_depth,eta,min_child_weight,col_sample)
# -
print("Best params: {}, {}, {}, {}, {}, min_mean: {}".format(best_params[0], best_params[1], best_params[2], best_params[3], best_params[4], min_mean))
# Selected results from previous runs:
# Test params: 2, 6, 0.2, 1, 1
#   test-merror 0.093334 for 25 rounds   >>>>> 0.811xx
# Test params: 2, 4, 0.15, 1, 0.5
#   test-merror 0.0933316 for 49 rounds  >>>>> 0.82158
# Test params: 2, 4, 0.1, 1, 0.5
#   test-merror 0.093983 for 68 rounds   >>>> 0.825xx
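# The five nested loops of the grid search above can be flattened with itertools.product (Python 3 syntax; the evaluate function below is a stand-in for the xgb.cv scoring, not the real objective):

```python
from itertools import product

grid = {
    'num_round': [2, 6, 10],
    'max_depth': [2, 4, 6],
    'eta': [0.05, 0.10, 0.15, 0.20],
    'min_child_weight': [1, 2],
    'colsample_bytree': [0.5, 1],
}

def evaluate(params):
    # stand-in scoring: in the notebook this is cv['test-merror-mean'].min()
    return abs(params['eta'] - 0.15) + 0.01 * params['max_depth']

names = list(grid)
best = min(
    (dict(zip(names, combo)) for combo in product(*(grid[n] for n in names))),
    key=evaluate,
)
```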
param = {'max_depth':4, 'eta':0.1, 'min_child_weight':1, 'colsample_bytree':0.5, 'objective':'multi:softprob', 'num_class':15 }
dtrain = xgb.DMatrix(X_train, label = t_train)
dtest = xgb.DMatrix(X_test)
num_round = 200
bst = xgb.train(param, dtrain, num_round)
# cv = xgb.cv(param, dtrain, 999, nfold=5, early_stopping_rounds=10, verbose_eval=1)
# make prediction
preds = bst.predict(dtest)
# print preds
print "making predictions..."
results = np.argmax(preds, axis=1)
print t_train
util.write_predictions(results, test_ids, "boost.csv")
reorder_submission("boost.csv", "boost_200_4_01_1_05.csv")
X_train1 = sparse.load_npz("X_train.npz")
t_train = np.load("t_train.npy")
train_ids = np.load("train_ids.npy")
X_test = sparse.load_npz("X_test.npz")
test_ids = np.load("test_ids.npy")
print X_test.shape
dtrain = xgb.DMatrix(X_train1, label = t_train)
dtest = xgb.DMatrix(X_test)
print X_test.shape
# +
#X_train = X_train.todense()
#X_test = X_test.todense()
# -
# +
from sklearn.cross_validation import StratifiedKFold as KFold
import pandas as pd
params = [{'max_depth':4, 'eta': 0.15, 'min_child_weight':1, 'colsample_bytree':0.5, 'objective':'multi:softprob', 'num_class':15 }]
for param in params:
print param
labels = t_train
bst = xgb.train(param, dtrain, 70)
preds = bst.predict(dtest)
labels_test = np.argmax(preds, axis=1)
kf = KFold(t_train, n_folds=4)
X = X_train1
stack_train = np.zeros((test_ids.shape[0],15)) # 15 classes.
for i, (train_fold, validate) in enumerate(kf):
print i
print X_test.shape
X_train_ = X_test[train_fold,:]
X_validate_ = X_test[validate,:]
labels_train_ = labels_test[train_fold]
labels_validate_ = labels_test[validate]
print X_train_.shape
print X_validate_.shape
X_train_ = sparse.vstack((X, X_train_))
print labels.shape
print labels_train_.shape
labels_train_ = np.concatenate((labels, labels_train_))
# clf.fit(X_train,labels_train)
dtrain_ = xgb.DMatrix(X_train_, label = labels_train_)
bst = xgb.train(param, dtrain_, 70)
dtest_ = xgb.DMatrix(X_validate_)
stack_train[validate] = bst.predict(dtest_)
results = np.argmax(stack_train, axis=1)
print results
util.write_predictions(results, test_ids, "boost.csv")
reorder_submission("boost.csv", "semi_boost_70_4_015_1_05.csv")
# -
import matplotlib.pyplot as plt
# %matplotlib inline
plt.hist(results,bins=15, normed = True)
plt.show()
# +
from sklearn.cross_validation import StratifiedKFold as KFold
import pandas as pd
params = [{'max_depth':4, 'eta': 0.15, 'min_child_weight':1, 'colsample_bytree':0.5, 'objective':'multi:softprob', 'num_class':15 }]
for param in params:
X = pd.DataFrame(X_train)
X['label'] = t_train.tolist()
# X = pd.merge(X, pd.t_train)
X_test = pd.DataFrame(X_test)
labels = t_train
bst = xgb.train(param, dtrain, 70)
preds = bst.predict(dtest)
labels_test = np.argmax(preds, axis=1)
X_test['label'] = labels_test.tolist()
# X_test = pd.merge(X_test, labels_test)
kf = KFold(t_train, n_folds=10)
X = X.as_matrix()
X_test = X_test.as_matrix()
stack_train = np.zeros((test_ids.shape[0],15)) # 15 classes.
for i, (train_fold, validate) in enumerate(kf):
print i
X_train_ = X_test[train_fold,:]
X_validate_ = X_test[validate,:]
labels_train_ = labels_test[train_fold]
labels_validate_ = labels_test[validate]
X_train_ = np.concatenate((X, X_train_))
labels_train_ = np.concatenate((labels, labels_train_))
clf.fit(X_train_, labels_train_)  # NB: clf (a probabilistic classifier) must be defined beforehand
stack_train[validate] = clf.predict_proba(X_validate_)
results = np.argmax(stack_train, axis=1)
print results
util.write_predictions(results, test_ids, "boost.csv")
reorder_submission("boost.csv", "semi_boost_70_4_015_1_05.csv")
# -
from sklearn.ensemble import RandomForestClassifier
RF = RandomForestClassifier(n_estimators = 1000, n_jobs = -1, class_weight = "balanced")
RF.fit(X_train, y_train)
scores = cross_val_score(RF, X_train, y_train, cv=5)
print "Features: " + str(RF.n_features_) + ("\tAccuracy: %0.5f (+/- %0.5f)" % (scores.mean(), scores.std() * 2))
RF_best = RF
score_best = scores.mean()
X_train_best = X_train
X_test_best = X_test
from sklearn.feature_selection import SelectFromModel
while X_train.shape[1] > 1000:
model = SelectFromModel(RF, prefit=True, threshold = "0.5*mean")
X_train = model.transform(X_train)
## trick: break if we didn't remove any feature
if X_train.shape[1] == X_test.shape[1]:
break
X_test = model.transform(X_test)
RF = RandomForestClassifier(n_estimators = 1000, n_jobs = -1, class_weight = "balanced")
RF.fit(X_train, y_train)
scores = cross_val_score(RF, X_train, y_train, cv=5)
mean_score = scores.mean()
print "Features: " + str(RF.n_features_) + ("\tAccuracy: %0.5f (+/- %0.5f)" % (mean_score, scores.std() * 2))
if score_best <= mean_score:
del X_train_best
del X_test_best
RF_best = RF
score_best = mean_score
X_train_best = X_train
X_test_best = X_test
for n in [200, 600, 1000, 1400]:
for f in ['sqrt', 'log2', None]:
for c in [None, "balanced"]:
RF = RandomForestClassifier(n_estimators = n, n_jobs = -1, max_features = f, class_weight = c)
RF.fit(X_train_best, y_train)
scores = cross_val_score(RF, X_train_best, y_train, cv=5)
mean_score = scores.mean()
print str(n)
print f
print c
print ("\tAccuracy: %0.5f (+/- %0.5f)" % (mean_score, scores.std() * 2))
preds = RF.predict(X_test)
# +
# TODO make predictions on text data and write them out
print "making predictions..."
results = np.argmax(preds, axis=1)
print "writing predictions..."
util.write_predictions(results, test_ids, "test.csv")
reorder_submission("test.csv", "experiment_rf_results.csv")
# -
X_train = X_train_bkup
X_test = X_test_bkup
from sklearn.ensemble import RandomForestClassifier
RF = RandomForestClassifier(n_estimators = 1000, n_jobs = -1)
RF.fit(X_train, y_train)
scores = cross_val_score(RF, X_train, y_train, cv=5)
print "Features: " + str(RF.n_features_) + ("\tAccuracy: %0.5f (+/- %0.5f)" % (scores.mean(), scores.std() * 2))
RF_best2 = RF
score_best2 = scores.mean()
X_train_best2 = X_train
X_test_best2 = X_test
from sklearn.feature_selection import SelectFromModel
while X_train.shape[1] > 1000:
model = SelectFromModel(RF, prefit=True, threshold = "0.5*mean")
X_train = model.transform(X_train)
## trick: break if we didn't remove any feature
if X_train.shape[1] == X_test.shape[1]:
break
X_test = model.transform(X_test)
RF = RandomForestClassifier(n_estimators = 1000, n_jobs = -1)
RF.fit(X_train, y_train)
scores = cross_val_score(RF, X_train, y_train, cv=5)
mean_score = scores.mean()
print "Features: " + str(RF.n_features_) + ("\tAccuracy: %0.5f (+/- %0.5f)" % (mean_score, scores.std() * 2))
if score_best2 <= mean_score:
del X_train_best2
del X_test_best2
RF_best2 = RF
score_best2 = mean_score
X_train_best2 = X_train
X_test_best2 = X_test
import xgboost as xgb
# end of file: final submission/boost.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.7 64-bit
# name: python37764bit01a1b0f807d94d33ba51b7c0bd27c49d
# ---
# <h3>Challenge 111</h3>
# <p>Create a .csv file that will store the following data. Call it “Books.csv”.</p>
# <table>
# <tr>
# <th>-</th>
# <th>Book</th>
# <th>Author</th>
# <th>Year Released</th>
# </tr>
# <tr>
# <th>0</th>
# <th>To Kill A Mockingbird</th>
# <th><NAME></th>
# <th>1960</th>
# </tr>
# <tr>
# <th>1</th>
# <th>A Brief History Of Time</th>
# <th><NAME></th>
# <th>1968</th>
# </tr>
# <tr>
# <th>2</th>
# <th>The Great Gatsby</th>
# <th><NAME></th>
# <th>1922</th>
# </tr>
# <tr>
# <th>3</th>
# <th>The Man Who Mistook His Wife For A Hat</th>
# <th><NAME></th>
# <th>1985</th>
# </tr>
# <tr>
# <th>4</th>
# <th>Pride And Prejudice</th>
# <th><NAME></th>
# <th>1813</th>
# </tr>
# </table>
#
# +
import csv
books = open("files/Books.csv", "w")
book1 = 'To Kill A Mockingbird,<NAME>,1960' + '\n'
books.write(str(book1))
book2 = 'A Brief History Of Time,<NAME>,1968' + '\n'
books.write(str(book2))
book3 = 'The Great Gatsby,<NAME>,1922' + '\n'
books.write(str(book3))
book4 = 'The Man Who Mistook His Wife For A Hat,<NAME>,1985' + '\n'
books.write(str(book4))
book5 = 'Pride And Prejudice,<NAME>,1813' + '\n'
books.write(str(book5))
books.close()
# -
# <h3>Challenge 112</h3>
# <p>Using the Books.csv file from program 111, ask the user to enter another record and add it to the end of the file. Display each row of the .csv file on a separate line.</p>
# +
import csv
books = open("files/Books.csv", "a")
title = input("Enter the title of the book : ")
author = input("Enter the author name : ")
year = input("Enter the year published : ")
record = title + ',' + author + ',' + year + '\n'
books.write(record)
books.close()
# -
# <h3>Challenge 113</h3>
# <p>Using the Books.csv file, ask the user how many records they want to add to the list and then allow them to add that many. After all the data has been added, ask for an author and display all the books in the list by that author. If there are no books by that author in the list, display a suitable message.</p>
# +
import csv
books = open("files/Books.csv", "a")
no_of_records = int(input("How many records do you want to add?"))
for i in range(no_of_records) :
title = input("Enter the title of the book : ")
author = input("Enter the author name : ")
year = input("Enter the year published : ")
record = title + ',' + author + ',' + year + '\n'
books.write(record)
books.close()
books = open("files/Books.csv", "r")
records = list(csv.reader(books))
count = 0
author = input("Enter an author name : ")
for record in records :
if author in record:
count +=1
print(record[0] + ', ' + record[1] + ', ' + record[2])
if count == 0 :
print("There were no books by {}.".format(author))
books.close()
# -
# <h3>Challenge 114</h3>
# <p>Using the Books.csv file, ask the user to enter a starting year and an end year. Display all books released between those two years.</p>
# +
import csv
books = open("files/Books.csv", "r")
records = list(csv.reader(books))
count = 0
year1 = int(input("Enter a starting year : "))
year2 = int(input("Enter an end year : "))
for record in records :
if year1 <= int(record[2]) <= year2 :
count +=1
print(record[0] + ', ' + record[1] + ', ' + record[2])
if count == 0 :
print("There were no books published between {} and {}.".format(year1, year2))
books.close()
# -
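# The same inclusive year-range filter, exercised on in-memory rows (the book data below is made up) with a chained comparison:

```python
import csv
import io

rows = list(csv.reader(io.StringIO(
    'Book A,Author A,1960\n'
    'Book B,Author B,1813\n'
    'Book C,Author C,1985\n'
)))

year1, year2 = 1900, 1970
# keep rows whose year column falls inside the inclusive range
matches = [r for r in rows if year1 <= int(r[2]) <= year2]
```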
# <h3>Challenge 115</h3>
# <p>Using the Books.csv file, display the data in the file along with the row number of each.</p>
# +
import csv
books = open("files/Books.csv", "r")
records = list(csv.reader(books))
for i, record in enumerate(records) :
print("Row {} : {}".format(i+1, record[0] + ', ' + record[1] + ', ' + record[2]))
books.close()
# -
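# Using enumerate gives the row number directly, and stays correct even when identical rows repeat (records.index always returns the first match) — standalone sketch with made-up rows:

```python
import csv
import io

rows = list(csv.reader(io.StringIO('a,b,1\nc,d,2\na,b,1\n')))
# enumerate pairs each row with its position, so duplicates keep distinct numbers
numbered = ['Row {} : {}'.format(i + 1, ', '.join(r)) for i, r in enumerate(rows)]
```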
# <h3>Challenge 116</h3>
# <p>Import the data from the Books.csv file into a list. Display the list to the user. Ask them to select which row from the list they want to delete and remove it from the list. Ask the user which data they want to change and allow them to change it. Write the data back to the original .csv file, overwriting the existing data with the amended data.</p>
# +
import csv
books = open("files/Books.csv", "r")
## csv.reader(books) is an iterable object of type _csv.reader and every object within it is a list of row elements (delimited by a comma) in the .csv file
records = list(csv.reader(books)) # a two-dimensional list
for i, record in enumerate(records) :
print("Row {} : {}".format(i+1, record[0]+', '+record[1]+', '+record[2]))
books.close()
remove = int(input("Enter the row to remove : "))
del records[remove-1]
print("\nRecord Removed!!")
print("New Records : ")
for i, record in enumerate(records) :
print("Row {} : {}".format(i+1, record[0]+', '+record[1]+', '+record[2]))
choice = input("Want to make any changes? yes/no : ")
if choice == 'yes' :
row = int(input("Enter row in which you want to make changes : "))
print("""\nDo you want change -
1. Title of the book
2. Name of the author
3. Year published
""")
column = int(input("Enter 1, 2 or 3 : "))
if column == 1 :
title = input("Enter new title : ")
records[row-1][column-1] = title
elif column == 2 :
author = input("Enter new author : ")
records[row-1][column-1] = author
elif column == 3 :
year = input("Enter new year-published : ")
records[row-1][column-1] = year
else :
print("Invalid Option")
books = open("files/Books.csv", "w")
for record in records :
books.write(record[0] + ',' + record[1] + ',' + record[2] + '\n')
books.close()
# -
# <h3>Challenge 117</h3>
# <p>Create a simple maths quiz that will ask the user for their name and then generate two random questions. Store their name, the questions they were asked, their answers and their final score in a .csv file. Whenever the program is run it should add to the .csv file and not overwrite anything.</p>
# +
import csv
import random
try :
quiz = open("files/Quiz.csv", "r")
quiz.close()
except FileNotFoundError :
# create the file on first run
quiz = open("files/Quiz.csv", "w")
quiz.close()
quiz = open("files/Quiz.csv", "r")
rows = list(csv.reader(quiz))
quiz.close()
quiz = open("files/Quiz.csv", "a")
if len(rows) == 0 :
quiz.write('name'+','+'question_1'+','+'answer_1'+','+'question_2'+','+'answer_2'+','+'score'+'\n')
user = input("Enter your name : ")
operators = ['+', '-']
score = 0
questions = []
answers = []
for i in range(2) :
num1 = random.randint(1, 100)
num2 = random.randint(1, 100)
operator = random.choice(operators)
if operator == '+' :
questions.append([str(num1) + operator + str(num2) + '=', num1+num2])
else :
questions.append([str(num1) + operator + str(num2) + '=', num1-num2])
for question in questions :
answer = int(input("Question {} : {} ".format(questions.index(question)+1, question[0])))
answers.append(answer)
for i in range(len(questions)) :
if questions[i][1] == answers[i] :
score += 1
row = user+','+questions[0][0]+','+str(answers[0])+','+questions[1][0]+','+str(answers[1])+','+str(score)+'\n'
quiz.write(row)
quiz.close()
python by examples - solutions/14. read_write_csv_file.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from commons import *
from dataset_loader import load_images, prepare_dataset
from IPython.display import display
import cv2
import pickle
import matplotlib.pyplot as plt
from keras.models import load_model
DOTS_SRC = 'hit-images-final2/dot'
TRACKS_SRC = 'hit-images-final2/line'
WORMS_SRC = 'hit-images-final2/worms'
ARTIFACTS_SRC = 'hit-images-final2/artefact'
DOTS_AC = 'cache/dots_100000.h5'
TRACKS_AC = 'cache/tracks_100000.h5'
WORMS_AC = 'cache/worms_100000.h5'
ARTIFACTS_AC = 'cache/artifacts_100000.h5'
dots_set = prepare_dataset(load_images(DOTS_SRC))
worms_set = prepare_dataset(load_images(WORMS_SRC))
tracks_set = prepare_dataset(load_images(TRACKS_SRC))
artifacts_set = prepare_dataset(load_images(ARTIFACTS_SRC))
dots_autoencoder = load_model(DOTS_AC)
worms_autoencoder = load_model(WORMS_AC)
tracks_autoencoder = load_model(TRACKS_AC)
artifacts_autoencoder = load_model(ARTIFACTS_AC)
on = {'dots': calc_similarity(dots_autoencoder, dots_set, tracks_set, worms_set, artifacts_set, binarize_for_compare=True, cutoff_background=True),
'worms': calc_similarity(worms_autoencoder, dots_set, tracks_set, worms_set, artifacts_set, binarize_for_compare=True, cutoff_background=True),
'tracks': calc_similarity(tracks_autoencoder, dots_set, tracks_set, worms_set, artifacts_set, binarize_for_compare=True, cutoff_background=True),
'artifacts': calc_similarity(artifacts_autoencoder, dots_set, tracks_set, worms_set, artifacts_set, binarize_for_compare=True, cutoff_background=True)}
# + pycharm={"name": "#%%\n"}
confusion_matrix(on)
# + pycharm={"name": "#%%\n"}
03_05_threshold_binarize_cutoff.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import requests
import bs4 as bs
import urllib.request
# ## Extracting features of 2020 movies from Wikipedia
link = "https://en.wikipedia.org/wiki/List_of_American_films_of_2020"
source = urllib.request.urlopen(link).read()
soup = bs.BeautifulSoup(source,'lxml')
tables = soup.find_all('table',class_='wikitable sortable')
len(tables)
type(tables[0])
df1 = pd.read_html(str(tables[0]))[0]
df2 = pd.read_html(str(tables[1]))[0]
df3 = pd.read_html(str(tables[2]))[0]
df4 = pd.read_html(str(tables[3]).replace("'1\"\'",'"1"'))[0] # avoided "ValueError: invalid literal for int() with base 10: '1"'
df = pd.concat([df1, df2, df3, df4], ignore_index=True)
df
df_2020 = df[['Title','Cast and crew']].copy()  # copy to avoid SettingWithCopyWarning on later column assignments
df_2020
from tmdbv3api import TMDb
import json
import requests
tmdb = TMDb()
tmdb.api_key = 'c20d7bbd8fe9a9739ace4a9683a20d84'
from tmdbv3api import Movie
tmdb_movie = Movie()
def get_genre(x):
genres = []
result = tmdb_movie.search(x)
if not result:
return np.NaN
else:
movie_id = result[0].id
response = requests.get('https://api.themoviedb.org/3/movie/{}?api_key={}'.format(movie_id,tmdb.api_key))
data_json = response.json()
if data_json['genres']:
genre_str = " "
for i in range(0,len(data_json['genres'])):
genres.append(data_json['genres'][i]['name'])
return genre_str.join(genres)
else:
return np.NaN
df_2020['genres'] = df_2020['Title'].map(lambda x: get_genre(str(x)))
df_2020
def get_director(x):
if " (director)" in x:
return x.split(" (director)")[0]
elif " (directors)" in x:
return x.split(" (directors)")[0]
else:
return x.split(" (director/screenplay)")[0]
df_2020['director_name'] = df_2020['Cast and crew'].map(lambda x: get_director(str(x)))
def get_actor1(x):
return ((x.split("screenplay); ")[-1]).split(", ")[0])
df_2020['actor_1_name'] = df_2020['Cast and crew'].map(lambda x: get_actor1(str(x)))
def get_actor2(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 2:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[1])
df_2020['actor_2_name'] = df_2020['Cast and crew'].map(lambda x: get_actor2(str(x)))
def get_actor3(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 3:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[2])
df_2020['actor_3_name'] = df_2020['Cast and crew'].map(lambda x: get_actor3(str(x)))
df_2020
df_2020 = df_2020.rename(columns={'Title':'movie_title'})
new_df20 = df_2020.loc[:,['director_name','actor_1_name','actor_2_name','actor_3_name','genres','movie_title']]
new_df20
new_df20['comb'] = new_df20['actor_1_name'] + ' ' + new_df20['actor_2_name'] + ' '+ new_df20['actor_3_name'] + ' '+ new_df20['director_name'] +' ' + new_df20['genres']
new_df20.isna().sum()
new_df20 = new_df20.dropna(how='any')
new_df20['movie_title'] = new_df20['movie_title'].str.lower()
new_df20
# ## Extracting features of 2021 movies from Wikipedia
link = "https://en.wikipedia.org/wiki/List_of_American_films_of_2021"
source = urllib.request.urlopen(link).read()
soup = bs.BeautifulSoup(source,'lxml')
tables = soup.find_all('table',class_='wikitable sortable')
len(tables)
type(tables[0])
df1 = pd.read_html(str(tables[0]))[0]
df2 = pd.read_html(str(tables[1]))[0]
df3 = pd.read_html(str(tables[2]))[0]
df4 = pd.read_html(str(tables[3]).replace("'1\"\'",'"1"'))[0] # avoided "ValueError: invalid literal for int() with base 10: '1"'
df = pd.concat([df1, df2, df3, df4], ignore_index=True)
df
df_2021 = df[['Title','Cast and crew']].copy()  # copy to avoid SettingWithCopyWarning on later column assignments
df_2021
from tmdbv3api import TMDb
import json
import requests
tmdb = TMDb()
tmdb.api_key = 'c20d7bbd8fe9a9739ace4a9683a20d84'
from tmdbv3api import Movie
tmdb_movie = Movie()
def get_genre(x):
genres = []
result = tmdb_movie.search(x)
if not result:
return np.NaN
else:
movie_id = result[0].id
response = requests.get('https://api.themoviedb.org/3/movie/{}?api_key={}'.format(movie_id,tmdb.api_key))
data_json = response.json()
if data_json['genres']:
genre_str = " "
for i in range(0,len(data_json['genres'])):
genres.append(data_json['genres'][i]['name'])
return genre_str.join(genres)
else:
return np.NaN
df_2021['genres'] = df_2021['Title'].map(lambda x: get_genre(str(x)))
df_2021
def get_director(x):
if " (director)" in x:
return x.split(" (director)")[0]
elif " (directors)" in x:
return x.split(" (directors)")[0]
else:
return x.split(" (director/screenplay)")[0]
def get_actor1(x):
return ((x.split("screenplay); ")[-1]).split(", ")[0])
df_2021['director_name'] = df_2021['Cast and crew'].map(lambda x: get_director(str(x)))
df_2021['actor_1_name'] = df_2021['Cast and crew'].map(lambda x: get_actor1(str(x)))
def get_actor2(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 2:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[1])
df_2021['actor_2_name'] = df_2021['Cast and crew'].map(lambda x: get_actor2(str(x)))
def get_actor3(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 3:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[2])
df_2021
df_2021['actor_3_name'] = df_2021['Cast and crew'].map(lambda x: get_actor3(str(x)))
df_2021 = df_2021.rename(columns={'Title':'movie_title'})
new_df21 = df_2021.loc[:,['director_name','actor_1_name','actor_2_name','actor_3_name','genres','movie_title']]
new_df21
new_df21['comb'] = new_df21['actor_1_name'] + ' ' + new_df21['actor_2_name'] + ' '+ new_df21['actor_3_name'] + ' '+ new_df21['director_name'] +' ' + new_df21['genres']
new_df21.isna().sum()
new_df21 = new_df21.dropna(how='any')
new_df21.isna().sum()
new_df21['movie_title'] = new_df21['movie_title'].str.lower()
new_df21
my_df = pd.concat([new_df20, new_df21], ignore_index=True)
my_df
old_df = pd.read_csv('../new dataset/final_data.csv')
old_df
final_df = pd.concat([old_df, my_df], ignore_index=True)
final_df
final_df.isna().sum()
final_df = final_df.dropna(how='any')
final_df.isna().sum()
final_df.to_csv('../new dataset/main_data.csv',index=False)
dataset preprocessing/preprocessing4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Wind Statistics
# ### Introduction:
#
# The data have been modified to contain some missing values, identified by NaN.
# Using pandas should make this exercise
# easier, in particular for the bonus question.
#
# You should be able to perform all of these operations without using
# a for loop or other looping construct.
#
#
# 1. The data in 'wind.data' has the following format:
"""
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
"""
# The first three columns are year, month and day. The
# remaining 12 columns are average windspeeds in knots at 12
# locations in Ireland on that day.
#
# More information about the dataset can be found [here](wind.desc).
# ### Step 1. Import the necessary libraries
import pandas as pd
import numpy as np
import datetime
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/Wind_Stats/wind.data)
# ### Step 3. Assign it to a variable called data and replace the first 3 columns by a proper datetime index.
data = pd.read_table('wind.data', sep = r"\s+", parse_dates = [[0,1,2]])
data.head(5)
# ### Step 4. Year 2061? Do we really have data from this year? Create a function to fix it and apply it.
# function that uses datetime
def fix_century(x):
year = x.year - 100 if x.year > 1989 else x.year
return datetime.date(year, x.month, x.day)
data['Yr_Mo_Dy'] = data['Yr_Mo_Dy'].apply(fix_century)
data.head()
# ### Step 5. Set the right dates as the index. Pay attention to the data type, it should be datetime64[ns].
data.index = data.Yr_Mo_Dy
data.head()
# ### Step 6. Compute how many values are missing for each location over the entire record.
# #### They should be ignored in all calculations below.
data.isnull().sum(axis=0)
# ### Step 7. Compute how many non-missing values there are in total.
data.notnull().sum().sum()
# ### Step 8. Calculate the mean windspeeds of the windspeeds over all the locations and all the times.
# #### A single number for the entire dataset.
data = data.drop('Yr_Mo_Dy',axis=1)
data.head()
data.stack().mean()
# ### Step 9. Create a DataFrame called loc_stats and calculate the min, max and mean windspeeds and standard deviations of the windspeeds at each location over all the days
#
# #### A different set of numbers for each location.
loc_stats = data.agg(['min','max','mean','std'])
loc_stats
# ### Step 10. Create a DataFrame called day_stats and calculate the min, max and mean windspeed and standard deviations of the windspeeds across all the locations at each day.
#
# #### A different set of numbers for each day.
# +
# create the dataframe
day_stats = pd.DataFrame()
# this time we determine axis equals to one so it gets each row.
day_stats['min'] = data.min(axis = 1) # min
day_stats['max'] = data.max(axis = 1) # max
day_stats['mean'] = data.mean(axis = 1) # mean
day_stats['std'] = data.std(axis = 1) # standard deviations
day_stats.head()
# -
# ### Step 11. Find the average windspeed in January for each location.
# #### Treat January 1961 and January 1962 both as January.
data.head()
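# One possible answer for Step 11, sketched on a toy frame (assumes the index has been converted to datetime64[ns], as Step 5 requests; the single station column is illustrative):

```python
import pandas as pd

# Toy frame: January rows from two different years plus one February row.
idx = pd.to_datetime(["1961-01-01", "1961-01-02", "1961-02-01", "1962-01-01"])
toy = pd.DataFrame({"RPT": [10.0, 12.0, 20.0, 14.0]}, index=idx)

# Treat January 1961 and January 1962 both as January.
january_means = toy[toy.index.month == 1].mean()
```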
# ### Step 12. Downsample the record to a yearly frequency for each location.
data.groupby(data.index.to_period('A')).mean()
# ### Step 13. Downsample the record to a monthly frequency for each location.
# ### Step 14. Downsample the record to a weekly frequency for each location.
# ### Step 15. Calculate the min, max and mean windspeeds and standard deviations of the windspeeds across all locations for each week (assume that the first week starts on January 2 1961) for the first 52 weeks.
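# Sketches for Steps 13-15 on a toy frame with a datetime64[ns] index (the single station column and two-week date range are illustrative, not the real record):

```python
import numpy as np
import pandas as pd

# Two weeks of daily data starting on January 2 1961 (a Monday).
idx = pd.date_range("1961-01-02", periods=14, freq="D")
toy = pd.DataFrame({"RPT": np.arange(14.0)}, index=idx)

monthly = toy.resample("M").mean()    # Step 13: monthly means
weekly = toy.resample("W").mean()     # Step 14: weekly means
# Step 15: weekly min/max/mean/std, limited to the first 52 weeks
weekly_stats = toy.resample("W").agg(["min", "max", "mean", "std"]).head(52)
```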
06_Stats/Wind_Stats/Exercises.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from statsmodels.tsa.stattools import pacf
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
# +
import sys
sys.path.append("..")
from src.config import *
# -
# ## Import clean data
# Read data
data_path = os.path.join(DATA_CLEAN_PATH, "ml-curated-data.csv")
dfCurated = pd.read_csv(data_path)
dfCurated.head()
# +
target_col = "wage_increase"
features = [c for c in dfCurated.columns if c != target_col]
train = dfCurated.sample(frac=0.7)
test = dfCurated.drop(train.index)
# +
train_x = train.drop(target_col, 1)
train_y = train.drop(features, 1)
test_x = test.drop(target_col, 1)
test_y = test.drop(features, 1)
# -
regr = RandomForestRegressor(max_depth=1, n_estimators=5000, warm_start=True, max_features="sqrt", min_impurity_decrease=0.1)
regr.fit(train_x, np.ravel(train_y))
estimates = regr.predict(train_x)
error = np.asmatrix(train_y.values - estimates.reshape(-1, 1))  # column vector; avoids (n,1)-(n,) broadcasting to (n,n)
sme = (error.T * error / len(error)).tolist()[0][0]
sme
np.sqrt(sme)
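# For reference, the same root-mean-square error can be computed with a plain vectorized expression (toy arrays here, not the training data above):

```python
import numpy as np

# RMSE via a direct vectorized expression on toy arrays.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 5.0])
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
```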
# +
def get_random_params():
    return {
        "n_estimators": random.choice(range(1, 900)),
        "criterion": random.choice(["mse", "mae"]),
        "max_depth": random.choice(list(range(1, 100)) + [None]),
        "random_state": random.choice(range(10, 100)),
        "max_features": random.choice(range(10, 100)) / 100,
        "min_impurity_decrease": random.choice(range(10, 100)) / 100,
    }
param = get_random_params()
param
# -
def get_rsme(df, param, target_col, features):
train = df.sample(frac=0.7)
test = df.drop(train.index)
train_x = train.drop(target_col, 1)
train_y = train.drop(features, 1)
test_x = test.drop(target_col, 1)
test_y = test.drop(features, 1)
    model = RandomForestRegressor(**param)
    model.fit(train_x, np.ravel(train_y))
    estimates = model.predict(train_x)
    # keep the error as a column vector so error.T * error is a 1x1 sum of squares
    error = np.asmatrix(train_y.values - estimates.reshape(-1, 1))
    sme = (error.T * error / len(error)).tolist()[0][0]
return np.sqrt(sme) , error
get_rsme(dfCurated, param, target_col="wage_increase", features=[c for c in dfCurated.columns if c != "wage_increase"])
result = []
for i in range(1000):
param = get_random_params()
rsme , error = get_rsme(dfCurated, param, target_col="wage_increase", features=[c for c in dfCurated.columns if c != "wage_increase"])
param["rsme"] = rsme
param["error"] = error
result.append(param)
result_df = pd.DataFrame(result)
result_df.head()
output_path = os.path.join(DATA_CLEAN_PATH, "param_random_forest_2.csv")
result_df.to_csv(output_path)
result_df.max_depth.unique()
result_df.describe()
result_df.rsme.min()
param = {'criterion': 'mse',
'max_depth': 7,
'max_features': 0.34,
'min_impurity_decrease': 0.44,
'n_estimators': 383,
'random_state': 68}
rsme , error = get_rsme(dfCurated, param, target_col="wage_increase", features=[c for c in dfCurated.columns if c != "wage_increase"])
# +
df_errors = pd.DataFrame({'error': [e for ls in error.tolist() for e in ls]})
df_errors.plot.kde()
plt.title("Error distribution")
plt.xlabel("Error")
plt.grid()
plt.show()
100 * df_errors.describe()
# +
test_results = pd.DataFrame(
{
"y": train_y.wage_increase.values,
"y_estimate": [e for ls in estimates.tolist() for e in ls]
}
)
100 * test_results.describe()
# -
test_results.y.plot.kde(c='r')
test_results.y_estimate.plot.kde(c='b')
plt.title("Kernel Density Estimation")
plt.grid()
plt.show()
plt.plot(test_results.y, test_results.y_estimate, '.b')
plt.plot(test_results.y, test_results.y, '.r')
plt.title("Estimate VS Original")
plt.grid()
plt.show()
eda/random-forest-regressor.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: my-first-appyter
# language: python
# name: my-first-appyter
# ---
# #%%appyter init
from appyter import magic
magic.init(lambda _=globals: _())
# # Compare Sets Appyter
# ##### This appyter creates a Venn diagram to visualize the intersections between 2-6 user-inputted gene sets. The user can either upload a tsv file containing the desired genes or paste the genes into the text boxes.
# +
# Imports
## Venn Diagram
from venn import venn, pseudovenn
## SuperVenn
from supervenn import supervenn
## Data Processing
import csv
## Link to Enrichr
import requests
import json
import time
from IPython.display import display, FileLink, Markdown, HTML
## Fisher Exact Test
import scipy.stats as stats
import math
## UpSet Plot
from upsetplot import from_contents, plot
from matplotlib import pyplot
## Table
import plotly.graph_objects as go
# %matplotlib inline
# -
# %%appyter hide_code
{% do SectionField(name='section0', title='Compare Gene Sets', subtitle='Create a venn diagram to compare your inputted gene sets', img = 'spiral.png') %}
{% do SectionField(name='section1', title='1. Submit Your Gene Lists', subtitle = 'Upload text files containing your gene list -OR- copy and paste your gene list into each text box below (One gene per row). Default genes are provided below, but you can submit your own gene sets and delete the default data if you wish to do so.', img = 'bulb.png') %}
{% do SectionField(name='section2', title='2. Venn Diagram', subtitle = 'Generate a Venn diagram from 2-6 inputted sets.', img = 'venndiagram.png') %}
{% do SectionField(name = 'section3', title = '3. SuperVenn', subtitle = 'Generate a SuperVenn diagram from the inputted sets. This can be useful if you wish to display your comparisons in a tabular format.', img = 'supervenn2.png') %}
{% do SectionField(name='section4', title='4. UpSet Plot', subtitle = 'Generate an UpSet plot from the inputted sets. This can be useful if you have many sets to compare.', img = 'gears.png') %}
{% do SectionField(name='section5', title='5. Fisher\'s Exact Test', subtitle = 'Fisher\'s Exact Test determines whether the overlap of two gene sets is significant.', img = 'brain.png') %}
# +
# %%appyter code_exec
# Inputting Lists and Settings
gs1 = {{ FileField(name = 'gs1', label = 'Gene Set 1 File', default = '', examples = {'Example Gene List 1': url_for('static', filename = 'A_Geneshot_PainGenes_GeneRIF_AssociatedGenes.tsv')}, section = 'section1') }}
gs2 = {{ FileField(name = 'gs2', label = 'Gene Set 2 File', default = '', examples = {'Example Gene List 2': url_for('static', filename = 'B_Geneshot_PainGenes_AutoRIF_AssociatedGenes.tsv')}, section = 'section1') }}
gs3 = {{ FileField(name = 'gs3', label = 'Gene Set 3 File', default = '', examples = {'Example Gene List 3': url_for('static', filename = 'C_Geneshot_PainGenes_GeneRIF_PredictedGenes_AutoRIF-CoOccurrence.tsv')}, section = 'section1') }}
gs4 = {{ FileField(name = 'gs4', label = 'Gene Set 4 File', default = '', examples = {'Example Gene List 4': url_for('static', filename = 'D_Geneshot_PainGenes_GeneRIF_PredictedGenes_GeneRIF-CoOccurrence.tsv')}, section = 'section1') }}
gs5 = {{ FileField(name = 'gs5', label = 'Gene Set 5 File', default = '', examples = {'Example Gene List 5': url_for('static', filename = 'E_Geneshot_PainGenes_GeneRIF_PredictedGenes_Enrichr-CoOccurrence.tsv')}, section = 'section1') }}
gs6 = {{ FileField(name = 'gs6', label = 'Gene Set 6 File', default = '', examples = {'Example Gene List 6': url_for('static', filename = 'F_Geneshot_PainGenes_GeneRIF_PredictedGenes_Tagger-CoOccurrence.tsv')}, section = 'section1') }}
gs1Text = {{ TextField(name = 'gs1Text', label = 'Gene Set 1', default = '''TRPV1
OPRM1
TRPA1
COMT
SCN9A
TNF
IL6
IL1B
CRP
BDNF
NGF
SLC6A4
MEFV
TRPM8
TRPV4
CALCA
NTRK1
TLR4
ASIC3
SCN10A
MMP9
CNR1
IL10
CCL2
TNNT2
NPPB
PTGS2
CYP2D6
P2RX3
TACR1''', section = 'section1') }}
gs2Text = {{ TextField(name = 'gs2Text', label = 'Gene Set 2', default = '''TNF
TRPV1
CRP
FOS
PTGS2
NGF
TRPA1
BDNF
CD34
POMC
IVD
IL10
ACE
CASP3
CCL2
TLR4
GFAP
TRPM8
IL6
CD68
KIT
OPRM1
SCN9A
CYP2D6
COMT
CEACAM5
GDNF
NPY
PTH
TRPV4''', section = 'section1') }}
gs3Text = {{ TextField(name = 'gs3Text', label = 'Gene Set 3', default = '''OPRD1
TRPV1
TRPA1
SCN9A
OPRM1
TRPM8
TACR1
OPRK1
TAC1
SCN3B
KCNS1
TRPV3
TRPV4
CACNA1B
CACNA2D2
SCN11A
NTRK1
PENK
SCN1B
OPRL1
PDYN
TRPV2
HTR3C
HTR3A
COMT
P2RX3
TRPM5
DRD2
NGFR
FAAH
ASIC3
PNOC
HTR3B
TRPM4
CACNA2D3
BDKRB1
ASIC4
HTR2A
KCNC2
CHRM4
TRPM3
HTR3E
CACNG2
CHRNA7
SCN10A''', section = 'section1') }}
gs4Text = {{ TextField(name = 'gs4Text', label = 'Gene Set 4', default = '', section = 'section1') }}
gs5Text = {{ TextField(name = 'gs5Text', label = 'Gene Set 5', default = '', section = 'section1') }}
gs6Text = {{ TextField(name = 'gs6Text', label = 'Gene Set 6', default = '', section = 'section1') }}
venndiagram = {{ BoolField(name = 'venndiagram', label = 'Venn Diagram?', default = 'true', description = 'Select \'Yes\' if you would like to generate a Venn diagram. Otherwise, select \'No\'', section = 'section2') }}
scheme = "{{ ChoiceField(name = 'scheme', label = 'Color Scheme', choices = ['viridis', 'cool', 'plasma', 'inferno', 'magma'], default = 'viridis', description = 'Choose a color scheme for your Venn diagram', section = 'section2') }}"
venn_file_format = {{ MultiCheckboxField(name = 'venn_file_format', label = 'File Format', choices = ['png', 'jpg', 'svg'], default = ['png'], description = 'Select the format(s) to save your Venn diagram', section = 'section2') }}
venn_file_name = {{ StringField(name = 'venn_file_name', label = 'File Name', default = 'venn', description = 'Enter a name/description to save your Venn diagram', section = 'section2') }}
svenn = {{ BoolField(name = 'svenn', label = 'SuperVenn?', default = 'true', description = 'Select \'Yes\' if you would like to generate a SuperVenn diagram. Otherwise, select \'No\'', section = 'section3') }}
annotations = {{ IntField(name = 'annotations', label = 'Minimum Intersection Size to be Displayed', default = 1, min = 1, description = 'If you are comparing many sets, displaying all the intersection sizes can make the figure cluttered. Any intersection size below this value will not be displayed.', section = 'section3') }}
upset = {{ BoolField(name = 'upset', label = 'UpSet Plot?', default = 'true', description = 'Select \'Yes\' if you would like to generate an UpSet plot. Otherwise, select \'No\'', section = 'section4') }}
orient = "{{ ChoiceField(name = 'orient', label = 'Orientation', choices = ['Horizontal', 'Vertical'], default = 'Horizontal', description = 'Choose whether your UpSet plot will be displayed horizontally or vertically', section = 'section4') }}"
color = "{{ ChoiceField(name = 'color', label = 'Color', choices = ['Black', 'Blue', 'Red', 'Green', 'Grey', 'Orange', 'Purple', 'Yellow', 'Pink'], default = 'Black', section = 'section4') }}"
counts = {{ BoolField(name = 'counts', label = 'Show Counts?', default = 'true', description = 'This labels the intersection size bars with the cardinality of the intersection.', section = 'section4') }}
percent = {{ BoolField(name = 'percent', label = 'Show Percentages?', default = 'false', description = 'This labels the intersection size bars with the percentage of the intersection relative to the total dataset.', section = 'section4') }}
figure_file_format = {{ MultiCheckboxField(name = 'figure_file_format', label = 'File Format', choices = ['png', 'jpg', 'svg'], default = ['png'], description = 'Select the format to save your figure', section = 'section4') }}
output_file_name = {{ StringField(name = 'output_file_name', label = 'File Name', default = 'UpSet_plot', description = 'Enter a name/description to save your UpSet Plot', section = 'section4') }}
background = {{ IntField(name = 'background', label = 'Background', default = 20000, description = 'Human genes typically have a background of 20,000', section = 'section5') }}
significance = {{ ChoiceField(name = 'significance', label = 'Significance Level', choices = {'0.01': '0.01', '0.05': '0.05', '0.10': '0.10'}, default = '0.05', description = 'Choose a significance level', section = 'section5')}}
final_venn_file_names = [str(venn_file_name + '.' + file_type) for file_type in venn_file_format]
final_output_file_names = [str(output_file_name + '.' + file_type) for file_type in figure_file_format]
# +
#Color for UpSet plot
color_conversion = {
'Black': 'black',
'Blue': 'lightskyblue',
'Red': 'tomato',
'Green': 'mediumspringgreen',
'Grey': 'lightgrey',
'Orange': 'orange',
'Purple': 'plum',
'Yellow': 'yellow',
'Pink': 'lightpink'
}
color = color_conversion[color]
# +
# Displaying Figures
def figure_title(label, title):
display(HTML(f"<div style='font-size:2rem; padding;1rem 0;'><b>{label}</b>: {title}</div>"))
def figure_legend(label, title, content=""):
display(HTML(f"<div><b>{label}</b>: <i>{title}</i>. {content} </div>"))
# +
# %%appyter code_exec
# Helper functions to convert the file upload or text input into gene lists
def file_to_list(str):
l1 = []
tsv_file = open(str)
read_tsv = csv.reader(tsv_file, delimiter = '\t')
for row in read_tsv:
l1.append(row[0])
tsv_file.close()
return l1
def text_to_list(str):
l1 = str.splitlines()
return l1
# +
# Add the appropriate gene lists to the dictionary, preferring a file upload
# over the corresponding text box for each set
gsdict = {}
gene_set_inputs = [(gs1, gs1Text), (gs2, gs2Text), (gs3, gs3Text),
                   (gs4, gs4Text), (gs5, gs5Text), (gs6, gs6Text)]
for i, (gs_file, gs_text) in enumerate(gene_set_inputs, start=1):
    if gs_file != '':
        gsdict[f"Set {i}"] = set(file_to_list(gs_file))
    elif gs_text != '':
        gsdict[f"Set {i}"] = set(text_to_list(gs_text))
# -
# ## Venn Diagram
# +
# Generate the venn diagram
if venndiagram:
venn(gsdict, cmap = scheme)
for plot_name in final_venn_file_names:
pyplot.savefig(plot_name, bbox_inches = 'tight')
figure_title("Figure 1", "Venn diagram")
pyplot.show()
figure_legend("Figure 1", "Venn diagram", "This Venn diagram compares the inputted gene sets and displays the intersections between them.")
if len(gsdict) == 6:
pseudovenn(gsdict)
# -
# Download Venn Diagrams
for i, file in enumerate(final_venn_file_names):
display(FileLink(file, result_html_prefix=str('Download ' + venn_file_format[i] + ': ')))
# ## SuperVenn Diagram
# SuperVenn
if svenn:
figure_title("Figure 2", "SuperVenn")
supervenn(list(gsdict.values()), list(gsdict.keys()), sets_ordering= 'minimize gaps', widths_minmax_ratio=0.1, min_width_for_annotation=annotations)
figure_legend("Figure 2", "SuperVenn", "The numbers on the right represent the set sizes and the numbers on the top show how many sets the intersection is part of. The overlapping portions of the colored bars correspond to set intersections.")
# ## UpSet Plot
# UpSet Plots
if upset:
df = from_contents(gsdict)
plot(df, orientation = orient.lower(), facecolor = color, show_counts = counts, show_percentages = percent)
for plot_name in final_output_file_names:
pyplot.savefig(plot_name, bbox_inches = 'tight')
figure_title("Figure 3", "UpSet Plot")
pyplot.show()
figure_legend("Figure 3", "UpSet Plot", "This UpSet plot displays the set intersections as a matrix with the cardinalities shown as bars.")
# Download UpSet Plots
for i, file in enumerate(final_output_file_names):
display(FileLink(file, result_html_prefix = str('Download ' + figure_file_format[i] + ': ')))
# ## List of Set Intersections
#Linking to Enrichr
def enrichr_link(gene_list):
ENRICHR_URL = 'http://amp.pharm.mssm.edu/Enrichr/addList'
genes_str = '\n'.join(gene_list)
description = 'Example Gene List'
payload = {
'list': (None, genes_str),
'description': (None, description)
}
response = requests.post(ENRICHR_URL, files=payload)
if not response.ok:
raise Exception('Error analyzing gene list')
time.sleep(0.5)
data = json.loads(response.text)
short_id = data['shortId']
return [str(short_id)]
def get_venn_sections(sets):
num_combinations = 2 ** len(sets)
bit_flags = [2 ** n for n in range(len(sets))]
    flags_zip_sets = list(zip(bit_flags, sets))
combo_sets = []
for bits in range(num_combinations - 1, 0, -1):
include_sets = [s for flag, s in flags_zip_sets if bits & flag]
        exclude_sets = [s for flag, s in flags_zip_sets if not bits & flag]
combo = set.intersection(*include_sets)
combo = set.difference(combo, *exclude_sets)
tag = ''.join([str(int((bits & flag) > 0)) for flag in bit_flags])
combo_sets.append((tag, combo))
return combo_sets
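# A toy check of the bit-flag decomposition idea (self-contained sketch; two small sets whose disjoint sections are known by hand):

```python
# Self-contained sketch of the bit-flag section logic for two toy sets.
def venn_sections(sets):
    bit_flags = [2 ** n for n in range(len(sets))]
    flags_zip_sets = list(zip(bit_flags, sets))
    combos = []
    for bits in range(2 ** len(sets) - 1, 0, -1):
        include = [s for flag, s in flags_zip_sets if bits & flag]
        exclude = [s for flag, s in flags_zip_sets if not bits & flag]
        combo = set.intersection(*include).difference(*exclude)
        tag = ''.join(str(int((bits & flag) > 0)) for flag in bit_flags)
        combos.append((tag, combo))
    return combos

sections = dict(venn_sections([{1, 2}, {2, 3}]))
```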
sets = list(gsdict.values())
def generate_name(combos):
tag_list = []
for pair in combos:
bits = pair[0]
inter = '('
diff = '('
for i in range(len(bits)):
j = i+1
set_name = 'Set ' + str(j)
if bits[i] == '1':
inter += set_name
inter += ' & '
else:
diff += set_name
diff += ' U '
final_inter = inter[:-3]
final_inter += ')'
final_diff = diff[:-3]
final_diff += ')'
if final_diff != ')':
final_name = final_inter + ' - ' + final_diff
else:
final_name = final_inter[1:-1]
tag_list.append(final_name)
return tag_list
# +
# Generates visibility booleans for dropdown menu
def generate_visibility(options):
bools = []
temp = []
for x in range (len(options)):
temp.append(False)
for x in range(len(options)):
visible = temp.copy()
visible[x] = True
bools.append(visible)
return bools
# -
# Creates the options for the dropdown menu
def make_options(tuples, names):
bools = generate_visibility(tuples)
dropdown = []
for x in range (len(tuples)):
option = dict(
args = [{'visible': bools[x]}],
label = names[x],
method = "update"
)
dropdown.append(option)
return dropdown
def create_enrichr_link(l1):
results = enrichr_link(l1)
final_str = str('https://amp.pharm.mssm.edu/Enrichr/enrich?dataset='+ results[0])
return final_str
# +
# Add Enrichr Links
def add_links():
l1 = []
for pair in get_venn_sections(sets):
if len(pair[1]) >= 5:
temp = pair
new_tuple = temp + tuple(create_enrichr_link(pair[1]).split(' '))
l1.append(new_tuple)
return l1
new_venn_sections = add_links()
# -
def set_to_list(l1):
l2 = []
l2.append('Size: ' + str(len(list(l1[1]))))
l2.append('Access your complete Enrichment results here: ' + str(l1[2]))
for elem in l1[1]:
l2.append(elem)
return l2
# +
# Create Figure for Set Intersection Item Dropdown
fig = go.Figure()
for pair in new_venn_sections:
fig.add_trace(
go.Table(
header = dict(
values = ['Intersection Listing'],
line_color = '#001C55',
fill_color = '#001C55',
align = ['left', 'center'],
font=dict(color='white', size=16)
),
cells = dict(
values = [set_to_list(pair)],
line_color = 'white',
fill_color = '#f5f5f5',
align = ['left', 'center'],
font = dict(color = 'darkslategray', size = 14)
)
)
)
## Make Dropdown
fig.update_layout(
updatemenus = [
dict(
buttons=list(make_options(get_venn_sections(sets), generate_name(get_venn_sections(sets)))),
direction = "down",
pad = {"r": 10, "t": 10},
showactive = True,
x = 0,
xanchor = "left",
y = 1.2,
yanchor = "top"
),
]
)
figure_title("Table 1", "List of Set Intersections")
fig.show()
display(HTML(f"<div><b>Explanation of Symbols</b>: <br><i>A - B </i> - subtraction of set B from set A</br> <br><i>A & B </i> - intersection of sets A and B</br> <br><i>A U B </i> - union of sets A and B</br> </div>"))
figure_legend("Table 1", "List of Set Intersections", "This table shows the elements contained in each set intersection. A link to Enrichr for further enrichment analysis is provided. Various intersections can be found using the dropdown menu.")
# -
# ## Fisher's Exact Test Calculations
# +
# Pair the Gene Sets
matching = []
gene_sets = list(gsdict.keys())
for i in range (len(gene_sets)-1):
for j in range (i+1, len(gene_sets)):
matching.append((gene_sets[i], gene_sets[j]))
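# The nested loops above enumerate every unordered pair of gene sets. A minimal sketch showing that `itertools.combinations` produces the same pairing (hypothetical set names standing in for the `gsdict` keys):

```python
import itertools

# hypothetical set names standing in for list(gsdict.keys())
gene_sets = ['A', 'B', 'C']
pairs = list(itertools.combinations(gene_sets, 2))
# pairs -> [('A', 'B'), ('A', 'C'), ('B', 'C')]
```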
# +
# Generates values for Fisher's Exact Test
def generate_values(s1, s2):
inter = len(s1 & s2)
not_A_not_B = background - (len(s1) + len(s2) - inter)
in_A_not_B = len(s1) - inter
not_A_in_B = len(s2) - inter
total = not_A_not_B + not_A_in_B + in_A_not_B + inter
oddsratio, pvalue = stats.fisher_exact([[not_A_not_B, in_A_not_B], [not_A_in_B, inter]])
if pvalue < significance:
sig_result = 'This result is <b>significant</b> at p < ' + str(significance)
else:
sig_result = 'This result is <b>not significant</b> at p < ' + str(significance)
values1 = [['<b>Not in B</b>', '<b>In B</b>', '<b>Marginal Column Totals</b>', '<b>p-value</b>: ' + "{:.3e}".format(pvalue), '<b>Odds Ratio</b>: ' + str(oddsratio), '<b>Result</b>: ' + sig_result], [not_A_not_B, not_A_in_B, not_A_not_B+not_A_in_B], [in_A_not_B, inter, in_A_not_B+inter], [not_A_not_B+in_A_not_B, not_A_in_B+inter, str(total) + ' (Grand Total)']]
return values1
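# As a sanity check of the contingency-table arithmetic, `stats.fisher_exact` can be run on a small hand-built 2x2 table (toy counts, not taken from the real gene sets):

```python
from scipy import stats

# toy 2x2 table in the same layout as generate_values:
# [[not_A_not_B, in_A_not_B], [not_A_in_B, inter]]
table = [[8, 2], [1, 5]]
oddsratio, pvalue = stats.fisher_exact(table)
# the sample odds ratio is (8 * 5) / (2 * 1) = 20.0
```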
# +
# Create figure and adds all tables
fig2 = go.Figure()
for pair in matching:
fig2.add_trace(
go.Table(
header = dict(
values = ['', '<b>Not in A</b>', '<b>In A</b>', '<b>Marginal Row Totals</b>'],
line_color = '#001C55',
fill_color = '#001C55',
align = ['left', 'center'],
font=dict(color='white', size=12)
),
cells = dict(
values = generate_values(gsdict[pair[0]], gsdict[pair[1]]),
line_color = 'white',
fill_color = [['#f5f5f5', '#f5f5f5', '#f5f5f5', 'white', 'white', 'white']*4],
align = ['left', 'center'],
font = dict(color = 'darkslategray', size = 11)
)
)
)
# -
# Generates names for dropdown menu
def generate_names():
names = []
for pair in matching:
s = pair[0] + ' & ' + pair[1]
names.append(s)
return names
# +
# Generates figure with dropdown menu
names = generate_names()
fig2.update_layout(
updatemenus = [
dict(
buttons=list(make_options(matching, names)),
direction = "down",
pad = {"r": 10, "t": 10},
showactive = True,
x = 0,
xanchor = "left",
y = 1.2,
yanchor = "top"
),
]
)
figure_title("Table 2", "Fisher's Exact Test")
fig2.show()
figure_legend("Table 2", "Fisher's Exact Test", "This table shows the results of Fisher's Exact Test. Using the items in the contingency table, the p-value and odds ratio are calculated. The p-value is then compared against the desired significance level. The overlap between various sets can be seen using the dropdown menu.")
# -
# ## Heatmap of Fisher's Exact Test Results
def reverse(tuples):
    # reverse a tuple, e.g. ('A', 'B') -> ('B', 'A')
    return tuple(reversed(tuples))
def check_sig(s1, s2):
inter = len(s1 & s2)
not_A_not_B = background - (len(s1) + len(s2) - inter)
in_A_not_B = len(s1) - inter
not_A_in_B = len(s2) - inter
total = not_A_not_B + not_A_in_B + in_A_not_B + inter
oddsratio, pvalue = stats.fisher_exact([[not_A_not_B, in_A_not_B], [not_A_in_B, inter]])
if pvalue == 0:
return pvalue
else:
num = -math.log(pvalue, 10)
return num
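# The -log10 transform used by `check_sig` maps small p-values to large heatmap scores, for example:

```python
import math

p_values = [0.001, 0.05, 0.5]
scores = [-math.log(p, 10) for p in p_values]
# small p-values map to large scores: 0.001 -> 3.0, 0.05 -> ~1.30, 0.5 -> ~0.30
```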
def heatmap_values(gene_sets):
values = []
x_axis = gene_sets
y_axis = gene_sets
for i in range (len(x_axis)):
row = []
for j in range (len(y_axis)):
t = (x_axis[i], y_axis[j])
if t in matching:
row.append(check_sig(gsdict[x_axis[i]], gsdict[y_axis[j]]))
elif reverse(t) in matching:
row.append(check_sig(gsdict[y_axis[j]], gsdict[x_axis[i]]))
else:
row.append(None)
values.append(row)
return values
fig3 = go.Figure(data = go.Heatmap(
z = heatmap_values(gene_sets),
x = gene_sets,
y = gene_sets,
hoverongaps = False))
figure_title("Figure 4", "Heatmap of Fisher's Exact Test Results")
fig3.show()
figure_legend("Figure 4", "Heatmap of Fisher's Exact Test Results", "This figure displays the results of all of the Fisher's Exact Tests calculated. The -log10(p-value) of each comparison is shown in the heatmap. Each axis displays which sets are being compared, and pairs that cannot be compared are given a value of None.")
|
appyters/CompareSets/CompareSets.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
# +
# #!{sys.executable} -m pip install psycopg2
# +
# #!{sys.executable} -m pip install boto3
# -
import psycopg2
import pandas as pd
import boto3
# `items` is assumed to be defined elsewhere as a tuple of email addresses
query = '''Select email, first_order
from ecomm_analytics.keen_us_user_profiles
where first_order >= '10/1/19'
and email in %s;''' % (items,)
# +
conn = psycopg2.connect(user = 'awsuser',
password = '<PASSWORD>',
host = 'aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com',
dbname = 'dev',
port = 5439)
cursor = conn.cursor()
cursor.execute(query)
results = pd.DataFrame(cursor.fetchall())
#results.columns = #["Order", "Reason"]#["Email", "Orders", "Revenue"]
cursor.close()
conn.close()
# -
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
from IPython.display import display
# +
button = widgets.Button(description="Get Data!")
output = widgets.Output()
def on_button_clicked(b):
output.clear_output()
with output:
query= '''Select * from usage_tracker;'''
conn = psycopg2.connect(user = 'awsuser',
password = '<PASSWORD>',
host = 'aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com',
dbname = 'dev',
port = 5439)
cursor = conn.cursor()
cursor.execute(query)
results = pd.DataFrame(cursor.fetchall())
#results.columns = #["Order", "Reason"]#["Email", "Orders", "Revenue"]
cursor.close()
conn.close()
print (results)
button.on_click(on_button_clicked)
# +
button2 = widgets.Button(description="Press Me")
output2 = widgets.Output()
def on_button2_clicked(b):
output2.clear_output()
print ("Button 2")
button2.on_click(on_button2_clicked)
# -
display(button, output)
display(button2, output2)
# +
import psycopg2
import json
import logging
import boto3
logger = logging.getLogger()
logger.setLevel(logging.INFO)
iam1 = boto3.client('iam')
def get_iam_usage():
user_list = get_user_dao()
#print(user_list)
for user_name in user_list:
#print(user_name[0])
response_1 = iam1.generate_service_last_accessed_details(
Arn='arn:aws:iam::132865025212:user/'+user_name[0]
)
#print(response_1)
response_dict = json.loads(json.dumps(response_1))
response_2 = iam1.get_service_last_accessed_details(
JobId=response_dict['JobId']
)
response_2_dict = json.loads(json.dumps(response_2, default=str))
if response_2_dict['ServicesLastAccessed']:
svcs_last_accessed_list = response_2_dict['ServicesLastAccessed']
#print(svcs_last_accessed_list)
for svcs_last_accessed in svcs_last_accessed_list:
if svcs_last_accessed['TotalAuthenticatedEntities'] > 0:
update_last_accessed_dao(user_name,svcs_last_accessed['ServiceNamespace'],svcs_last_accessed['LastAuthenticated'])
return "Successfully updated last accessed dates!!"
def update_last_accessed_dao(user_name, service, last_accessed):
conn = None
cur = None
conn_string = ('dbname=dev user=awsuser password=<PASSWORD>1 host=aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com port=5439')
conn = psycopg2.connect(conn_string)
cur = conn.cursor()
cur.execute('update usage_tracker set last_use_date = %s where user_name = %s and service = %s',(last_accessed,user_name,service,))
conn.commit()
if cur is not None:
cur.close()
if conn is not None:
conn.close()
def get_user_dao():
conn = None
cur = None
conn_string = ('dbname=dev user=awsuser password=<PASSWORD> host=aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com port=5439')
conn = psycopg2.connect(conn_string)
cur = conn.cursor()
cur.execute('select distinct user_name from usage_tracker')
response = cur.fetchall()
if cur is not None:
cur.close()
if conn is not None:
conn.close()
return response
# +
def get_report():
conn = psycopg2.connect(user = 'awsuser',
password = '<PASSWORD>',
host = 'aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com',
dbname = 'dev',
port = 5439)
query= '''Select * from usage_tracker order by user_name, service;'''
cursor = conn.cursor()
cursor.execute(query)
results = pd.DataFrame(cursor.fetchall())
results.columns = ["User", "Service","Status","Last Accessed"]
cursor.close()
conn.close()
return results
def shrinkwrap_users():
iam2 = boto3.resource('iam')
conn = psycopg2.connect(user='awsuser',
password='<PASSWORD>',
host='aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com',
dbname='dev',
port=5439)
get_everything = "SELECT user_name, service FROM usage_tracker WHERE last_use_date < getdate()-30 AND status='Granted';"
cursor = conn.cursor()
cursor.execute(get_everything)
results = cursor.fetchall()
for expired_permission in results:
user = iam2.User(expired_permission[0])
response = user.detach_policy(PolicyArn=service_policies[expired_permission[1]])
update_status_query = '''UPDATE usage_tracker SET status='Declined' WHERE user_name='%s' AND service='%s';''' % (expired_permission[0], expired_permission[1])
cursor.execute(update_status_query)
conn.commit()
cursor.close()
conn.close()
return results
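# The UPDATE and INSERT statements in this notebook are built with `%` string formatting. With psycopg2 it is generally safer to let the driver bind the values itself; a minimal sketch against the same `usage_tracker` table (`conn` is assumed to be a live connection like the ones opened above):

```python
def set_status(conn, user_name, service, status):
    """Update one usage_tracker row, letting the driver bind the parameters."""
    with conn.cursor() as cur:
        # psycopg2 substitutes the %s placeholders and quotes the values itself
        cur.execute(
            "UPDATE usage_tracker SET status = %s "
            "WHERE user_name = %s AND service = %s",
            (status, user_name, service),
        )
    conn.commit()
```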
# +
button3 = widgets.Button(description="Create User")
rep_button=widgets.Button(description="Get Report")
sw_button=widgets.Button(description="Shrink Wrap")
lamb_button = widgets.Button(description="Sync Records")
text3 = widgets.Text(value='Enter User Name', description='User', disabled=False)
check1 = widgets.Checkbox(value=False, description='S3 service')
check2 = widgets.Checkbox(value=False, description='Redshift service')
check3=widgets.Checkbox(value=False, description='Lambda service')
output2 = widgets.Output()
rep_output = widgets.Output()
sw_output = widgets.Output()
lamb_output = widgets.Output()
iam = boto3.resource('iam')
client = boto3.client('iam')
# def on_text_change(change):
# output2.clear_output()
# with output2:
# print(change['new'])
# text3.on_submit(on_text_change)
def on_button3_clicked(b):
output2.clear_output()
with output2:
print(text3.value)
client.create_user(UserName=text3.value)
user = iam.User(text3.value)
conn = psycopg2.connect(user='awsuser',
password='<PASSWORD>',
host='aws-pdx-hackathon.c8vcduzakdyw.us-west-2.redshift.amazonaws.com',
dbname='dev',
port=5439)
cursor = conn.cursor()
if check1.value:
# user = iam.User(text3.value)
response = user.attach_policy(PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess')
insert_new_user = '''insert into usage_tracker values ('%s','%s','%s',getdate());''' % (text3.value, 's3' , 'Granted')
cursor.execute(insert_new_user)
conn.commit()
print(text3.value+' user has S3 service granted')
else:
insert_new_user = '''insert into usage_tracker values ('%s','%s','%s',getdate());''' % (text3.value, 's3' , 'Declined')
cursor.execute(insert_new_user)
conn.commit()
print(text3.value+' user has S3 service declined')
if check2.value:
# user = iam.User(text3.value)
response = user.attach_policy(PolicyArn='arn:aws:iam::aws:policy/AmazonRedshiftFullAccess')
insert_new_user = '''insert into usage_tracker values ('%s','%s','%s',getdate());''' % (text3.value, 'redshift' , 'Granted')
cursor.execute(insert_new_user)
conn.commit()
print(text3.value+' user has Redshift service granted')
else:
insert_new_user = '''insert into usage_tracker values ('%s','%s','%s',getdate());''' % (text3.value, 'redshift' , 'Declined')
cursor.execute(insert_new_user)
conn.commit()
print(text3.value+' user has Redshift service declined')
if check3.value:
# user = iam.User(text3.value)
response = user.attach_policy(PolicyArn='arn:aws:iam::aws:policy/AWSLambdaFullAccess')
insert_new_user = '''insert into usage_tracker values ('%s','%s','%s',getdate());''' % (text3.value, 'lambda' , 'Granted')
cursor.execute(insert_new_user)
conn.commit()
print(text3.value+' user has Lambda service granted')
else:
insert_new_user = '''insert into usage_tracker values ('%s','%s','%s',getdate());''' % (text3.value, 'lambda' , 'Declined')
cursor.execute(insert_new_user)
conn.commit()
print(text3.value+' user has Lambda service declined')
cursor.close()
conn.close()
def on_rep_button_clicked(b):
rep_output.clear_output()
with rep_output:
print('Getting report...')
print(get_report())
def on_sw_button_clicked(b):
sw_output.clear_output()
with sw_output:
print('Shrink Wrapping...')
print(shrinkwrap_users())
def on_lamb_button_clicked(b):
lamb_output.clear_output()
with lamb_output:
print('Syncing records on demand...')
get_iam_usage()
print('Done!')
# service_policies = {'redshift': 'arn:aws:iam::aws:policy/AmazonRedshiftFullAccess',
# 's3': 'arn:aws:iam::aws:policy/AmazonS3FullAccess',
# 'lambda': 'arn:aws:iam::aws:policy/AWSLambdaFullAccess'}
def changed(b):
output2.clear_output()
with output2:
print(check1.value)
# user = iam.User(text3.value)
# response = user.attach_policy(PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess')
# print(text3.value+' user has S3 service granted')
def changed1(b):
output2.clear_output()
with output2:
user = iam.User(text3.value)
response = user.attach_policy(PolicyArn='arn:aws:iam::aws:policy/AmazonRedshiftFullAccess')
print(text3.value+' user has Redshift service granted')
def changed2(b):
output2.clear_output()
with output2:
user = iam.User(text3.value)
response = user.attach_policy(PolicyArn='arn:aws:iam::aws:policy/AWSLambdaFullAccess')
print(text3.value+' user has Lambda service granted')
button3.on_click(on_button3_clicked)
rep_button.on_click(on_rep_button_clicked)
sw_button.on_click(on_sw_button_clicked)
lamb_button.on_click(on_lamb_button_clicked)
# check1.observe(changed)
# check2.observe(changed1)
# check3.observe(changed2)
# -
display(button3, text3, check1, check2, check3, output2, rep_button, rep_output, sw_button, sw_output, lamb_button, lamb_output)
|
ShrinkWrapDemo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1 - Simple Sentiment Analysis
#
# In this series we'll be building a *machine learning* model to detect sentiment (i.e. detect if a sentence is positive or negative) using PyTorch and TorchText. This will be done on movie reviews using the IMDb dataset.
#
# In this first notebook, we'll start very simple to understand the general concepts whilst not really caring about good results. Further notebooks will build on this knowledge, to actually get good results.
#
# We'll be using a **recurrent neural network** (RNN) which reads a sequence of words and, for each word (sometimes called a _step_), outputs a _hidden state_. The hidden state is then fed back in along with the subsequent word in the sentence, until the final word has been passed into the RNN. This final hidden state is then used to predict the sentiment of the sentence.
#
# 
# ## Preparing Data
#
# One of the main concepts of TorchText is the `Field`. These define how your data should be processed. In our sentiment classification task we have two sources of data: the raw string of the review and the sentiment, either "pos" or "neg".
#
# We use the `TEXT` field to handle the review and the `LABEL` field to handle the sentiment.
#
# The parameters of a `Field` specify how the data should be processed.
#
# Our `TEXT` field has `tokenize='spacy'`, which defines that the "tokenization" (the act of splitting the string into discrete "tokens") should be done using the [spaCy](https://spacy.io) tokenizer. If no `tokenize` argument is passed, the default is simply splitting the string on spaces.
#
# `LABEL` is defined by a `LabelField`, a special subclass of the `Field` class specifically for handling labels. We will explain the `tensor_type` argument later.
#
# For more on `Fields`, go [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py).
#
# We also set the random seeds for reproducibility.
# +
import torch
from torchtext import data
SEED = 1234
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
TEXT = data.Field(tokenize='spacy')
LABEL = data.LabelField(tensor_type=torch.FloatTensor)
# -
# Another handy feature of TorchText is that it has support for common datasets used in NLP.
#
# The following code automatically downloads the IMDb dataset and splits it into the canonical train/test splits as `torchtext.datasets` objects. It uses the `Fields` we have previously defined.
# +
from torchtext import datasets
train, test = datasets.IMDB.splits(TEXT, LABEL)
# -
# We can see how many examples are in each split by checking their length.
print('len(train):', len(train))
print('len(test):', len(test))
# We can check the fields of the data, verifying that they match the `Fields` given earlier.
print('train.fields:', train.fields)
# We can also check an example.
print('vars(train[0]):', vars(train[0]))
# The IMDb dataset only has train/test splits, so we need to create a validation set. We can do this with the `.split()` method.
#
# By default this splits 70/30, however by passing a `split_ratio` argument, we can change the ratio of the split, i.e. a `split_ratio` of 0.8 would mean 80% of the examples make up the training set and 20% make up the validation set.
#
# We also pass our random seed to the `random_state` argument, ensuring that we get the same train/validation split each time.
# +
import random
train, valid = train.split(random_state=random.seed(SEED))
# -
# Again, we'll view how many examples are in each split.
print('len(train):', len(train))
print('len(valid):', len(valid))
print('len(test):', len(test))
# Next, we have to build a _vocabulary_. This is effectively a look-up table where every unique word in your _dictionary_ (every word that occurs in all of your examples) has a corresponding _index_ (an integer).
#
# 
#
# We do this as our machine learning model cannot operate on strings, only numbers. Each _index_ is used to construct a _one-hot_ vector for each word. A one-hot vector is a vector where all of the elements are 0 except one, which is 1, and whose dimensionality is the total number of unique words in your vocabulary.
#
# The number of unique words in our training set is over 100,000, which means that our one-hot vectors will be 100,000 dimensions! This will make training slow and possibly won't fit onto your GPU (if you're using one).
#
# There are two ways to effectively cut down our vocabulary: we can either take only the top $n$ most common words or ignore words that appear fewer than $n$ times. We'll do the former, only keeping the top 25,000 words.
#
# What do we do with words that appear in examples but we have cut from the vocabulary? We replace them with a special _unknown_ or _unk_ token. For example, if the sentence was "This film is great and I love it" but the word "love" was not in the vocabulary, it would become "This film is great and I unk it".
TEXT.build_vocab(train, max_size=25000)
LABEL.build_vocab(train)
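# To make the one-hot picture concrete, a toy sketch with a 5-word vocabulary (the real vocabulary built above has 25,002 entries):

```python
vocab_size = 5   # toy size, standing in for len(TEXT.vocab)
index = 2        # hypothetical index of some word
one_hot = [0.0] * vocab_size
one_hot[index] = 1.0
# one_hot -> [0.0, 0.0, 1.0, 0.0, 0.0]
```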
# Why do we only build the vocabulary on the training set? When testing any machine learning system you do not want to look at the test set in any way. We do not include the validation set as we want it to reflect the test set as much as possible.
print('len(TEXT.vocab):', len(TEXT.vocab))
print('len(LABEL.vocab):', len(LABEL.vocab))
# Why is the vocab size 25002 and not 25000? One of the additional tokens is the _unk_ token and the other is a _pad_ token.
#
# 
#
# When we feed sentences into our model, we feed a _batch_ of them at a time, i.e. more than one at a time, and all sentences in the batch need to be the same size. Thus, to ensure each sentence in the batch is the same size, any shorter than the largest within the batch are padded.
#
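# A minimal sketch of what that padding looks like on raw index sequences (toy indices; in TorchText's default vocabulary `<unk>` is index 0 and `<pad>` is index 1):

```python
batch = [[4, 7, 9], [5, 3]]  # hypothetical token-index sequences
pad_idx = 1                  # TorchText's default <pad> index
max_len = max(len(seq) for seq in batch)
padded = [seq + [pad_idx] * (max_len - len(seq)) for seq in batch]
# padded -> [[4, 7, 9], [5, 3, 1]]
```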
# We can also view the most common words in the vocabulary.
print(TEXT.vocab.freqs.most_common(20))
# We can also see the vocabulary directly using either the `stoi` (**s**tring **to** **i**nt) or `itos` (**i**nt **to** **s**tring) method.
print(TEXT.vocab.itos[:10])
# We can also check the labels, ensuring 0 is for negative and 1 is for positive.
print(LABEL.vocab.stoi)
# The final step of preparing the data is creating the iterators.
#
# `BucketIterator` first sorts the examples using the `sort_key` (here, the length of the sentences) and then partitions them into _buckets_. When the iterator is called it returns a batch of examples from the same bucket. This will return a batch of examples where each example is a similar length, minimizing the amount of padding.
# +
BATCH_SIZE = 64
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train, valid, test),
batch_size=BATCH_SIZE,
sort_key=lambda x: len(x.text),
repeat=False)
# -
# ## Build the Model
#
# The next stage is building the model that we'll eventually train and evaluate.
#
# There is a small amount of boilerplate code when creating models in PyTorch, note how our `RNN` class is a sub-class of `nn.Module` and the use of `super`.
#
# Within the `__init__` we define the _layers_ of the module. Our three layers are an _embedding_ layer, our RNN, and a _linear_ layer.
#
# The embedding layer is used to transform our sparse one-hot vector (sparse as most of the elements are 0) into a dense embedding vector (dense as the dimensionality is a lot smaller). This embedding layer is simply a single fully connected layer. The theory is that words that have similar impact on the sentiment are mapped close together in this dense vector space. For more information about word embeddings, see [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/).
#
# The RNN layer is our RNN which takes in our dense vector and the previous hidden state $h_{t-1}$, which it uses to calculate the next hidden state, $h_t$.
#
# Finally, the linear layer takes the final hidden state and feeds it through a fully connected layer, transforming it to the correct output dimension.
#
# 
#
# The `forward` method is called when we feed examples into our model.
#
# Each batch, `x`, is a tensor of size _**[sentence length, batch size]**_. That is a batch of sentences, each having each word converted into a one-hot vector.
#
# You may notice that this tensor should have another dimension due to the one-hot vectors, however PyTorch conveniently stores a one-hot vector as its index value.
#
# The input batch is then passed through the embedding layer to get `embedded`, where now each one-hot vector is converted to a dense vector. `embedded` is a tensor of size _**[sentence length, batch size, embedding dim]**_.
#
# `embedded` is then fed into the RNN. In some frameworks you must feed the initial hidden state, $h_0$, into the RNN, however in PyTorch, if no initial hidden state is passed as an argument it defaults to a tensor of all zeros.
#
# The RNN returns 2 tensors, `output` of size _**[sentence length, batch size, hidden dim]**_ and `hidden` of size _**[1, batch size, hidden dim]**_. `output` is the concatenation of the hidden state from every time step, whereas `hidden` is simply the final hidden state. We verify this using the `assert` statement. Note the `squeeze` method, which is used to remove a dimension of size 1.
#
# Finally, we feed the last hidden state, `hidden`, through the linear layer to produce a prediction.
# +
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
super().__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim)
self.rnn = nn.RNN(embedding_dim, hidden_dim)
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
#x = [sent len, batch size]
embedded = self.embedding(x)
#embedded = [sent len, batch size, emb dim]
output, hidden = self.rnn(embedded)
#output = [sent len, batch size, hid dim]
#hidden = [1, batch size, hid dim]
assert torch.equal(output[-1,:,:], hidden.squeeze(0))
return self.fc(hidden.squeeze(0))
# -
# We now create an instance of our RNN class.
#
# The input dimension is the dimension of the one-hot vectors, which is equal to the vocabulary size.
#
# The embedding dimension is the size of the dense word vectors, this is usually around the square root of the vocab size.
#
# The hidden dimension is the size of the hidden states, this is usually around 100-500 dimensions, but depends on the vocab size, embedding dimension and the complexity of the task.
#
# The output dimension is usually the number of classes, however in the case of only 2 classes the output value is between 0 and 1 and thus can be 1-dimensional, i.e. a single scalar.
# +
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
# -
# ## Train the Model
# Now we'll set up the training and then train the model.
#
# First, we'll create an optimizer. This is the algorithm we use to update the parameters of the module. Here, we'll use _stochastic gradient descent_ (SGD). The first argument is the parameters that will be updated by the optimizer, the second is the learning rate, i.e. how much we'll change the parameters by when we do an update.
# +
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=1e-3)
# -
# Next, we'll define our loss function. In PyTorch this is commonly called a criterion.
#
# The loss function here is _binary cross entropy with logits_.
#
# The prediction for each sentence is an unbounded real number. As our labels are either 0 or 1, we want to restrict the predictions to between 0 and 1; we do this using the _sigmoid function_, see [here](https://en.wikipedia.org/wiki/Sigmoid_function).
#
# We then calculate the loss of this bounded scalar using binary cross entropy, see [here](https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/).
criterion = nn.BCEWithLogitsLoss()
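# To see the squashing numerically, a quick sketch with hand-picked logits (a plain-Python sigmoid so the arithmetic is visible):

```python
import math

def sigmoid(x):
    # squash an unbounded real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# an unbounded prediction of 2.0 maps to ~0.88, and -1.0 maps to ~0.27
probs = [sigmoid(2.0), sigmoid(-1.0)]
```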
# PyTorch has excellent support for NVIDIA GPUs via CUDA. `torch.cuda.is_available()` returns `True` if PyTorch detects a GPU.
#
# Using `.to`, we can place the model and the criterion on the GPU.
# +
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
criterion = criterion.to(device)
# -
# Our criterion function calculates the loss, however we have to write our function to calculate the accuracy.
#
# This function first feeds the predictions through a sigmoid layer, squashing the values between 0 and 1, we then round them to the nearest integer. This rounds any value greater than 0.5 to 1 (a positive sentiment).
#
# We then calculate how many rounded predictions equal the actual labels and average it across the batch.
# +
import torch.nn.functional as F
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(F.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum()/len(correct)
return acc
# -
# The `train` function iterates over all examples, a batch at a time.
#
# `model.train()` is used to put the model in "training mode", which turns on _dropout_ and _batch normalization_. Although we aren't using them in this model, it's good practice to include it.
#
# For each batch, we first zero the gradients. Each parameter in a model has a `grad` attribute which stores the gradient calculated by the `criterion`. PyTorch does not automatically remove (or zero) the gradients calculated from the last gradient calculation so they must be manually cleared.
#
# We then feed the batch of sentences, `batch.text`, into the model. Note, you do not need to do `model.forward(batch.text)`, simply calling the model works. The `squeeze` is needed as the predictions are initially size _**[batch size, 1]**_, and we need to remove the dimension of size 1.
#
# The loss and accuracy are then calculated using our predictions and the labels, `batch.label`.
#
# We calculate the gradient of each parameter with `loss.backward()`, and then update the parameters using the gradients and optimizer algorithm with `optimizer.step()`.
#
# The loss and accuracy are accumulated across the epoch; the `.item()` method is used to extract a scalar from a tensor which only contains a single value.
#
# Finally, we return the loss and accuracy, averaged across the epoch. The len of an iterator is the number of batches in the iterator.
#
# You may recall when initializing the `LABEL` field, we set `tensor_type=torch.FloatTensor`. This is because TorchText sets tensors to be `LongTensor`s by default, however our criterion expects both inputs to be `FloatTensor`s. As we have manually set the `tensor_type` to be `FloatTensor`s, this conversion is done for us.
#
# Another method would be to do the conversion inside the `train` function by passing `batch.label.float()` instead of `batch.label` to the criterion.
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
# `evaluate` is similar to `train`, with a few modifications as you don't want to update the parameters when evaluating.
#
# `model.eval()` puts the model in "evaluation mode", this turns off _dropout_ and _batch normalization_. Again, we are not using them in this model, but it is good practice to include it.
#
# Inside the `no_grad()`, no gradients are calculated which speeds up computation.
#
# The rest of the function is the same as `train`, with the removal of `optimizer.zero_grad()`, `loss.backward()`, `optimizer.step()`.
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
# We then train the model through multiple epochs, an epoch being a complete pass through all examples in the split.
# +
N_EPOCHS = 5
for epoch in range(N_EPOCHS):
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
print(f'Epoch: {epoch+1:02}, Train Loss: {train_loss:.3f}, Train Acc: {train_acc*100:.2f}%, Val. Loss: {valid_loss:.3f}, Val. Acc: {valid_acc*100:.2f}%')
# -
# You may have noticed the loss is not really decreasing and the accuracy is poor. This is due to several issues with the model which we'll improve in the next notebook.
#
# Finally, the metric you actually care about, the test loss and accuracy.
# +
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f}, Test Acc: {test_acc*100:.2f}%')
# -
# ## Next Steps
#
# In the next notebook, the improvements we will make are:
# - different optimizer
# - use pre-trained word embeddings
# - different RNN architecture
# - bidirectional RNN
# - multi-layer RNN
# - regularization
#
# This will allow us to achieve ~85% accuracy.
|
1 - Simple Sentiment Analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: torch_zy
# language: python
# name: torch_zy
# ---
# +
import torch
import torchvision.datasets as datasets
import os
import foolbox
import torchvision.models as models
import numpy as np
import cv2
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
BATCH_SIZE = 64
datapath = '/home/user/datasets/ImageNet/'
traindir = os.path.join(datapath, 'train')
labeldir = '/home/user/datasets/ImageNet/class_to_idx.txt'
train_dataset = datasets.ImageFolder(
traindir,
transforms.Compose([
# transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
# transforms.Normalize(mean=[0.485, 0.456, 0.406],
# std=[0.229, 0.224, 0.225])
])
)
# train_loader = torch.utils.data.DataLoader(
# train_dataset, batch_size=BATCH_SIZE, shuffle=False,
# num_workers=1, pin_memory=True, sampler=None)
# -
resnet101 = models.resnet101(pretrained=True).eval()
if torch.cuda.is_available():
resnet101 = resnet101.cuda()
else:
print('===============')
mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
fmodel = foolbox.models.PyTorchModel(
resnet101, bounds=(0, 1), num_classes=1000, preprocessing=(mean, std))
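# The `preprocessing=(mean, std)` tuple tells foolbox to normalize inputs channel-wise before each forward pass, while the attack itself works in the raw `[0, 1]` pixel space given by `bounds`. A minimal numpy sketch of that normalization step:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))

def normalize(image):
    # image: float32 array in [0, 1], shape (3, H, W)
    return (image - mean) / std

x = np.full((3, 4, 4), 0.5, dtype=np.float32)
y = normalize(x)
```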
from scipy import ndimage
import tensorflow as tf
from abdm.abdm import ABDM
transform = transforms.Compose([transforms.ToTensor()])
# +
img_id = []     # original image IDs
img_ori = []    # original images
img_adv = []    # adversarial images
img_label = []  # original image labels
stn_image = []  # spatially transformed images
wrong_oriimg = 0
right_advimg = 0
wrong_advimg = 0
right_stnimg = 0
wrong_stnimg = 0
img_ids = [  # renamed from `list` to avoid shadowing the builtin
    1007112, 1007113, 1007114, 1007115, 1007116, 1007117, 1007118, 1007119,
    1007120, 1007121, 1007122, 1007123, 1007126, 1007127, 1007128, 1007129,
    1007132, 1007138, 1007143, 1007145, 1007147, 1007153, 1007167, 1007171,
    1007172, 1007173, 1007174, 1007175, 1007176, 1007177, 1007178, 1007179,
    1007185, 1007187, 1007194, 1007196, 1007197, 1007198, 1007202, 1007204,
    1007207, 1007212, 1007213, 1007215, 1007221, 1007223, 1007226, 1007229,
    1007232, 1007234, 1007236, 1007242, 1007243, 1007245, 1007247, 1007250,
    1007258, 1007261, 1007262, 1007266, 1007271, 1007273, 1007276, 1007278,
    1007280, 1007283, 1007285, 1007289, 1007294, 1007300, 1007307, 1007310,
    1007312, 1007313, 1007315, 1007317, 1007321, 1007323, 1007327, 1007330,
    1007332, 1007336, 1007337, 1007341, 1007342, 1007347, 1007359, 1007360,
    1007363, 1007366, 1007369, 1007370, 1007374, 1007378, 1007380, 1007382,
    1007386, 1007389, 1007392, 1007400]
for num in img_ids:
image, target = train_dataset[num]
image= np.array(image)
print('predicted class', np.argmax(fmodel.predictions(image)),', ground truth class',target)
tempclass1=str(np.argmax(fmodel.predictions(image)))
tempclass2=str(target)
if(tempclass1!=tempclass2):
wrong_oriimg=wrong_oriimg+1
continue
#dp_attack = foolbox.attacks.FGSM(fmodel)
dp_attack = foolbox.attacks.deepfool.DeepFoolAttack(fmodel, distance=foolbox.distances.Linfinity)
#dp_attack = foolbox.attacks.PGD(fmodel, distance=foolbox.distances.Linfinity)
adversarial = dp_attack(image, target)
try:
print('adversarial class', np.argmax(fmodel.predictions(adversarial)))
except:
wrong_advimg=wrong_advimg+1
print('error')
continue
else:
right_advimg=right_advimg+1
print('adversarial class', np.argmax(fmodel.predictions(adversarial)))
#===============abdm start (0.0)=========================================
im=adversarial
im = transform(im).numpy()
im = transform(im).numpy()
image_show=im
#im=im.resize(3,224,224)
print('ori image shape is :',im.shape)
print("===========================================================")
im = im.reshape(1, 224, 224, 3)
im = im.astype('float32')
#print('img-over')
out_size = (224, 224)
batch = np.append(im, im, axis=0)
batch = np.append(batch, im, axis=0)
num_batch = 3
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
x = tf.cast(batch, 'float32')
print('begin---')
with tf.variable_scope('abdm_0'):
n_fc = 6
w_fc1 = tf.Variable(tf.zeros([224 * 224 * 3, n_fc]), name='W_fc1')
initial = np.array([[0.5, 0, 0], [0, 0.5, 0]])
initial = initial.astype('float32')
initial = initial.flatten()
b_fc1 = tf.Variable(initial_value=initial, name='b_fc1')
h_fc1 = tf.matmul(tf.zeros([num_batch, 224 * 224 * 3]), w_fc1) + b_fc1
print(x, h_fc1, out_size)
h_trans = ABDM(x, h_fc1, out_size)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
y = sess.run(h_trans, feed_dict={x: batch})
stnimg_temp=transform(y[0]).numpy()
adv_class=str(np.argmax(fmodel.predictions(stnimg_temp)))
orilabel=str(target)
print('adversarial class', np.argmax(fmodel.predictions(stnimg_temp)))
print('ori class', orilabel)
if(adv_class==orilabel):
# put images and labels into list
img_ori.append(image)
img_adv.append(adversarial)
img_label.append(target)
img_id.append(num)
stn_image.append(stnimg_temp)
print(len(img_id))
right_stnimg=right_stnimg+1
else:
print('can not use this img')
wrong_stnimg=wrong_stnimg+1
continue
ori_right=(100-wrong_oriimg)/100
adv_right=(wrong_oriimg+wrong_advimg)/100
stn_right=right_stnimg/100
stn_right2=right_stnimg/(right_stnimg+wrong_stnimg)
print('clean image Accuracy: %.2f%%' % (ori_right * 100))
print('adv image Accuracy: %.2f%%' % (adv_right * 100))
print('stn image Accuracy: %.2f%%' % (stn_right * 100 ))
print('stn image Accuracy: %.2f%%' % (stn_right2 * 100 ))
# -
|
abdm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ! pip install -U git+https://github.com/IINemo/isanlp.git@discourse
# +
from isanlp import PipelineCommon
from isanlp.processor_remote import ProcessorRemote
from isanlp.ru.processor_mystem import ProcessorMystem
from isanlp.ru.converter_mystem_to_ud import ConverterMystemToUd
SERVER = '' # put the address here
address_syntax = (SERVER, 3344)
address_rst = (SERVER, 3335)
ppl = PipelineCommon([
(ProcessorRemote(address_syntax[0], address_syntax[1], '0'),
['text'],
{'sentences': 'sentences',
'tokens': 'tokens',
'lemma': 'lemma',
'syntax_dep_tree': 'syntax_dep_tree',
'postag': 'ud_postag'}),
(ProcessorMystem(delay_init=False),
['tokens', 'sentences'],
{'postag': 'postag'}),
(ConverterMystemToUd(),
['postag'],
{'morph': 'morph',
'postag': 'postag'}),
(ProcessorRemote(address_rst[0], address_rst[1], 'default'),
['text', 'tokens', 'sentences', 'postag', 'morph', 'lemma', 'syntax_dep_tree'],
{'rst': 'rst'})
])
# -
text = (
"Новости о грядущей эмиссии в США обвалили доллар и подняли цену золота. При этом рост количества "
"долларов пока не зафиксирован. Со швейцарским франком ситуация противоположная: стало известно, ч"
"то в феврале денежная масса Швейцарии увеличилась на 3.5%, однако биржевой курс франка и его покуп"
"ательная способность за неделю выросли."
)
# +
# %%time
result = ppl(text)
# -
result['rst']
vars(result['rst'][0])
def extr_pairs(tree, text):
pp = []
if tree.left:
pp.append([text[tree.left.start:tree.left.end], text[tree.right.start:tree.right.end], tree.relation, tree.nuclearity])
pp += extr_pairs(tree.left, text)
pp += extr_pairs(tree.right, text)
return pp
extr_pairs(result['rst'][0], result['text'])
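# The recursion above can be checked on a toy two-leaf tree. The namedtuple below is a hypothetical minimal stand-in for the real RST node objects (field names mirror the attributes used by `extr_pairs`):

```python
from collections import namedtuple

Node = namedtuple('Node', 'left right start end relation nuclearity')

def extr_pairs_demo(tree, text):
    # same logic as extr_pairs above
    pp = []
    if tree.left:
        pp.append([text[tree.left.start:tree.left.end],
                   text[tree.right.start:tree.right.end],
                   tree.relation, tree.nuclearity])
        pp += extr_pairs_demo(tree.left, text)
        pp += extr_pairs_demo(tree.right, text)
    return pp

toy_text = 'AB'
toy_tree = Node(Node(None, None, 0, 1, 'elementary', '_'),
                Node(None, None, 1, 2, 'elementary', '_'),
                0, 2, 'joint', 'NN')
pairs = extr_pairs_demo(toy_tree, toy_text)
```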
# +
import sys
sys.path.append('../')
sys.path.append('../../')
from _isanlp_rst.src.isanlp_rst.export.to_rs3 import ForestExporter
exporter = ForestExporter(encoding='utf8')
exporter(result['rst'], 'example.rs3')
# -
# Output:
#
# 
|
examples/usage.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### One test polytomous analysis NRT
# ___
#
# Using exercises spreadsheets from Designing and Analyzing Language Tests by Oxford.
#
# The purpose of this notebook is to compute each student's total score and percentage correct, calculate various descriptive statistics, compute $IF*$, $ID*(UL)$ and $r(item-total)$ for an NRT with polytomous items, and interpret the results.
#
# <br>
#
# #### General Setup
# ___
# import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as ss
# styling for plots
plt.style.use('seaborn-whitegrid')
plt.rcParams['figure.figsize'] = (14,5)
# <br>
#
# #### Load the data
# ___
test = pd.read_excel('Data/one_test_polytomous_NRT.xlsx', nrows=30)
test.tail()
results = pd.read_excel('Data/one_test_polytomous_NRT.xlsx')
max_score = results.iloc[-1:]
max_score
# check the dataset info
results.info()
# The dataset contains polytomous test results for 20 students.
# calculate total correct answers and add it to the dataframe
test['Total'] = test.loc[:, test.columns != 'Student'].sum(axis=1)
test.head()
test.tail()
# <br>
#
# #### Descriptive stats
# ___
# calculate pandas summary statistics and convert them to a dataframe
stats = pd.DataFrame(test['Total'].describe())
stats
# +
# rename std to std(sample) and add the population std
stats.loc['std(sample)'] = stats.loc['std']
stats.loc['std(pop)'] = test['Total'].std(ddof=0)
# rename min, max and count to more descriptive labels
stats.loc['high score'] = stats.loc['max']
stats.loc['low score'] = stats.loc['min']
stats.loc['n'] = stats.loc['count']
# add the remaining statistics
stats.loc['mode'] = test['Total'].mode().tolist()
stats.loc['var(sample)'] = test['Total'].var()
stats.loc['var(pop)'] = test['Total'].var(ddof=0)
stats.loc['range'] = stats.loc['high score'] - stats.loc['low score'] + 1
stats.loc['Q'] = (stats.loc['75%'] - stats.loc['25%']) / 2
stats.loc['skewness'] = test['Total'].skew()
n = stats.loc['n']
stats.loc['SES'] = np.sqrt((6* n[0] * (n[0]-1)) / ((n[0]-2) * (n[0]+1) * (n[0]+3)))
stats.loc['skew/SES'] = stats.loc['skewness'] / stats.loc['SES']
stats.loc['kurtosis'] = test['Total'].kurt()
stats.loc['SEK'] = np.sqrt((4*(n[0]**2-1)*stats.loc['SES'][0]**2) /((n[0]-3)*(n[0]+5)))
stats.loc['kurt/SEK'] = stats.loc['kurtosis'] / stats.loc['SEK']
# drop the rows that are no longer needed
stats.drop(['std', 'min', 'max', 'count'], axis=0, inplace=True)
stats
# -
# round all stats to three decimal places and reorder the rows
stats = np.round(stats, 3)
stats = stats.reindex(index = ['mean','mode','25%', '50%', '75%', 'high score', 'low score',
'range', 'std(pop)', 'std(sample)', 'var(pop)', 'var(sample)', 'Q', 'n',
'skewness', 'SES', 'skew/SES','kurtosis', 'SEK', 'kurt/SEK'])
stats.index.name = 'Statistics'
stats
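# The standard errors used above depend only on the sample size $n$. Written as standalone functions (same formulas as in the cell above, illustrative only):

```python
from math import sqrt

def ses(n):
    # standard error of skewness
    return sqrt((6 * n * (n - 1)) / ((n - 2) * (n + 1) * (n + 3)))

def sek(n):
    # standard error of kurtosis, derived from SES
    return sqrt((4 * (n ** 2 - 1) * ses(n) ** 2) / ((n - 3) * (n + 5)))
```

# For n = 30 these give roughly 0.43 and 0.83 respectively, the commonly tabulated values.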
# <br>
#
# #### Plotting.
# ___
#
# +
# histograms and frequency polygon
fig, [ax0, ax1] = plt.subplots(1,2)
fig.suptitle('Distribution of Scores', y=1.02, weight='bold', fontsize=13)
# total scores
ax0.hist(test['Total'], bins=40)
ax0.set(title='Histogram of Total Scores',
xlabel='Scores',
ylabel='Frequency')
ax0.axvline(stats.loc['mean'][0], linestyle='--', c='red', label='Mean')
ax0.axvline(stats.loc['mode'][0], linestyle='--', c='purple', label='Mode')
ax0.axvline(stats.loc['50%'][0], linestyle='--', c='orange', label='Median')
# frequency polygon
ax1.plot(test['Total'],marker='.', linestyle='solid', markersize=20, markerfacecolor='lightyellow')
ax1.set(title='Frequency Polygon for Total Scores',
xlabel='Scores',
ylabel='Frequency')
# display legend
ax0.legend(frameon=True, fancybox=True, shadow=True)
# save the plot
plt.savefig('Data/distribution_of_polytomous_scores_NRT.png', bbox_inches='tight');
# -
# <br>
#
# #### Standard Scores.
# ___
# calculating z and T scores
test['z'] = np.round((test['Total'] - stats.loc['mean'][0])/stats.loc['std(pop)'][0],1)
test['T'] = test['z'] * 10 + 50
test.head()
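# A quick sanity check on the transformation: z-scores always have mean 0 and population SD 1, so T = 10z + 50 always has mean 50 and SD 10, regardless of the raw scores. A self-contained check with made-up totals:

```python
from statistics import mean, pstdev

totals = [12, 15, 18, 21, 24]          # hypothetical total scores
m, sd = mean(totals), pstdev(totals)
z = [(t - m) / sd for t in totals]
T = [10 * zi + 50 for zi in z]
```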
# +
# create stats for z and T
stats_for_scores = pd.DataFrame({'mean': [test['z'].mean(), test['T'].mean()],
'std(pop)': [test['z'].std(ddof=0), test['T'].std(ddof=0)]})
stats_for_scores = stats_for_scores.T
stats_for_scores.columns = ['z', 'T']
# add it to the rest of the stats
stats = stats.join(np.round(stats_for_scores,3))
# -
# <br>
#
# #### Item analysis
# ___
#
# 1. Item facility
# sort scored in descending order
sorted_scores = test.sort_values('Total', ascending=False, kind='stable')
sorted_scores.head()
# +
# calculate overall, upper-group and lower-group IF* for each item
IF = pd.DataFrame({'IF*': sorted_scores.drop(['Student', 'Total', 'z', 'T'], axis=1).mean() / max_score.values[0][1:]}).T
IF_upper = pd.DataFrame({'IF*(upper)': sorted_scores.drop(['Student', 'Total', 'z', 'T'], axis=1)[:10].mean() / max_score.values[0][1:]}).T
IF_lower = pd.DataFrame({'IF*(lower)': sorted_scores.drop(['Student', 'Total', 'z', 'T'], axis=1)[-10:].mean() / max_score.values[0][1:]}).T
# concat them into one dataframe
item_facility = pd.concat([IF, IF_upper, IF_lower])
item_facility.index.name = 'Item facility'
item_facility
# -
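# For a polytomous item, $IF*$ is simply the mean obtained score divided by the maximum possible score, which keeps it on the familiar 0–1 scale. A toy check with hypothetical scores:

```python
item_scores = [2, 3, 1, 4, 4]   # hypothetical scores of five students on one item
item_max = 4                    # maximum possible score on that item

if_star = (sum(item_scores) / len(item_scores)) / item_max
```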
# <br>
#
# 2. Item discrimination
# compute discrimination
IDul = pd.DataFrame({'ID*(UL)': item_facility.loc['IF*(upper)'] - item_facility.loc['IF*(lower)']}).T
r_it = pd.DataFrame({'r(item-total)': np.round(test.drop(['Student', 'Total', 'z', 'T'], axis=1).corrwith(test['Total'],method='pearson'), 3)}).T
# concat the results into one dataframe
discrimination = pd.concat([IDul, r_it])
discrimination.index.name = 'Item discrimination'
discrimination
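# $r(item-total)$ above is just the Pearson correlation between each item's scores and the total scores; `corrwith` hides the arithmetic, which for a single item looks like this (numbers are illustrative):

```python
from statistics import mean, pstdev

item = [1, 2, 2, 3, 4]          # hypothetical scores on one item
total = [11, 14, 16, 19, 25]    # hypothetical total scores

def pearson(a, b):
    # population covariance over the product of population SDs
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (pstdev(a) * pstdev(b))

r = pearson(item, total)
```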
# <br>
#
# #### Interpretation
# ___
#
# 1. Item facility
# highlight the questions for revision based on IF
IF.style.apply(lambda x: ["background: yellow" if .29999999 > v or v > 0.6999999 else "" for v in x], axis = 1)
# create a list of questions for revision
quest_IF = IF.apply(lambda x: [v if .30 > v or v > 0.70 else "" for v in x]).any()
rev_IF = pd.DataFrame({'IF*': list(IF.columns[quest_IF])})
rev_IF
# <br>
#
# 2. Item discrimination
#
# highlight the questions for revision
IDul.style.apply(lambda x: ["background: yellow" if v < 0.3999999999 else "" for v in x], axis = 1)
r_it.style.apply(lambda x: ["background: yellow" if v < 0.2999999999 else "" for v in x], axis = 1)
# create a list of questions for revision
quest_UDul = IDul.apply(lambda x: [v if v < 0.3999999999 else "" for v in x]).any()
rev_IDul = pd.DataFrame({'ID*(UL)': list(IDul.columns[quest_UDul])})
rev_IDul
quest_rit = r_it.apply(lambda x: [v if v < 0.2999999999 else "" for v in x]).any()
rev_rit = pd.DataFrame({'r(item-total)': list(r_it.columns[quest_rit])})
rev_rit
# join all questions flagged for revision into one dataframe
flagged = rev_IDul.join([rev_rit,rev_IF]).T
flagged.index.name = 'Flagged'
flagged = flagged.reindex(index=['IF*', 'ID*(UL)', 'r(item-total)'])
flagged
# <br>
#
# #### Save the results to an excel file
# ___
# +
# write and save all dataframes to the excel file
writer = pd.ExcelWriter('Data/one_test_polytomous_analysis_NRT.xlsx', engine='xlsxwriter')
sorted_scores.to_excel(writer, index = False)
item_facility.to_excel(writer, startrow=len(test)+3, index=True)
discrimination.to_excel(writer, startrow=len(test)+len(item_facility)+5, index=True)
flagged.to_excel(writer, startrow=len(test)+len(item_facility)+len(discrimination) + 7, index=True)
stats.to_excel(writer, startrow=len(test)+len(item_facility)+len(discrimination) + len(flagged) + 10, index=True)
# insert the image into the worksheet
workbook = writer.book
worksheet = writer.sheets['Sheet1']
worksheet.insert_image('H49', 'Data/distribution_of_polytomous_scores_NRT.png')
# styling
column_settings = [{'header': column} for column in test.columns]
(max_row, max_col) = test.shape
worksheet.add_table(0, 0, max_row, max_col - 1, {'columns': column_settings})
writer.save()
# -
# <br>
#
# ___
# #### End.
|
Data analytics/Language tests/One test polytomous analysis NRT.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="hx6IdS7O6LJ7" colab_type="text"
# # Download the Japanese Wikipedia dump file.
# + id="m-Mtjjrx1Bwv" colab_type="code" colab={}
# Download a subset of Japanese Wikipedia (pages 1-106175, roughly 100,000 articles; ~270MB)
# !wget https://dumps.wikimedia.org/jawiki/20190620/jawiki-20190620-pages-articles1.xml-p1p106175.bz2
# Note: a dump file containing all Japanese Wikipedia articles (~3GB) can be downloaded from this link: https://dumps.wikimedia.org/jawiki/latest/jawiki-latest-pages-articles.xml.bz2
# + [markdown] id="ewR9jAL1E0qL" colab_type="text"
# # Install wikiextractor and extract the article data from the Wikipedia dump (about 40 minutes).
# + id="YrA_ujruE6lg" colab_type="code" colab={}
# Install the wikiextractor tool, which extracts article text from the compressed Wikipedia dump file.
# !git clone https://github.com/attardi/wikiextractor.git
# The tool can also be installed as a library via pip (!pip install wikiextractor)
# Write the extracted article data to a local folder, then rename the output to a clearer file name
# !python ./wikiextractor/WikiExtractor.py jawiki-20190620-pages-articles1.xml-p1p106175.bz2 -o japanese_extracted_articles -b 500M --no_templates --filter_disambig_pages
# !mv japanese_extracted_articles/AA/wiki_00 japanese_wikipedia_extracted_articles.txt
# + [markdown] id="5o9wlm-5E8Ya" colab_type="text"
# # Install the MeCab morphological analyzer and import the required libraries.
# + id="RMLRmM0djMPh" colab_type="code" colab={}
# Install the MeCab morphological analyzer
# !pip install mecab-python3
# + id="dEzMCx6qOiKd" colab_type="code" colab={}
# Import the required libraries
import MeCab
from collections import Counter
import codecs
import nltk
import sqlite3
import re
# + [markdown] id="OJtmZRqFKXQ-" colab_type="text"
# # Define a word-frequency function and run it over the text.
# + id="VtWC-xC_7p4x" colab_type="code" colab={}
stopwords = ['する', 'なる', 'ない', 'これ', 'それ', 'id', 'ja', 'wiki',
'wikipedia', 'id', 'doc', 'https', 'org', 'url', 'いう', 'ある',
'curid', 'あれ', 'それら', 'これら', 'それそれ', 'それぞれ',
'title', 'その後', '一部', '前', 'よる', '一つ', 'ひとつ', '他',
'その他', 'ほか', 'そのほか', 'いる']
word_categories = ['名詞', '動詞', '形容詞']
word_categories_to_avoid = ['非自立', '接尾', 'サ変接続', '数']
# Count word frequencies in the given text
def count_word_frequencies(text):
all_nouns_verbs_adjs = []
tagger = MeCab.Tagger()
for line in text:
node = tagger.parseToNode(line)
while(node):
            lemma = node.feature.split(',')[6].lower() # dictionary (lemma) form
            pos = node.feature.split(',')[0].lower() # part of speech
            pos2 = node.feature.split(',')[1].lower() # part-of-speech subcategory
if lemma != '':
if lemma == '*' and node.surface != "":
lemma = node.surface
if (pos in word_categories and
pos2 not in word_categories_to_avoid and
lemma not in stopwords):
all_nouns_verbs_adjs.append(lemma)
node = node.next
if node is None:
break
return Counter(all_nouns_verbs_adjs)
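# Stripped of the MeCab specifics, the counting step above is just a stopword-filtered `Counter` over lemmas:

```python
from collections import Counter

lemmas = ['東京', 'する', '行く', '東京', 'いる']   # example lemma stream
demo_stopwords = {'する', 'いる'}                    # subset of the stopword list above

counts = Counter(l for l in lemmas if l not in demo_stopwords)
```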
# + [markdown] id="kDYmZjUDqbPo" colab_type="text"
# # Create a local database and store the text data in it
# + id="a6xxQxBchHC1" colab_type="code" colab={}
# Table for storing the raw text of each Wikipedia article
create_article_text_table_sql = """
drop table if exists article_text;
create table article_text (
id integer primary key autoincrement not null,
article_id integer not null,
title text not null,
article text not null,
article_url text not null
);
"""
# Table for storing corpus-wide word frequencies across Wikipedia
create_wikipedia_word_frequencies_table_sql = """
drop table if exists wikipedia_word_frequencies;
create table wikipedia_word_frequencies (
word text primary key not null,
frequency integer not null
);
"""
# Table for storing per-article word frequencies
create_article_word_frequencies_table_sql = """
drop table if exists article_word_frequencies;
create table article_word_frequencies (
id integer primary key autoincrement not null,
article_id integer not null,
word text not null,
frequency integer not null
);
"""
# Table for storing per-article tf-idf scores
create_article_word_tfidfs_table_sql = """
drop table if exists article_word_tfidfs;
create table article_word_tfidfs (
id integer primary key autoincrement not null,
article_id integer not null,
word text not null,
tfidf_score integer not null
);
"""
DB_PATH = "japanese_wikipedia_analysis.db"
# + id="myhRzGVhhN2V" colab_type="code" colab={}
def initialize_database(db_path):
db_connection = sqlite3.connect(db_path)
database_initialization = [
create_article_text_table_sql,
create_wikipedia_word_frequencies_table_sql,
create_article_word_frequencies_table_sql,
create_article_word_tfidfs_table_sql
]
for sql_query in database_initialization:
db_connection.executescript(sql_query)
db_connection.commit()
db_connection.close()
initialize_database(DB_PATH)
# + [markdown] id="DaPPpvX-GJOV" colab_type="text"
# # Split the Wikipedia text file into articles and store each article's text and word frequencies.
# + id="ikup8fnQ5LoA" colab_type="code" colab={}
def save_article(db_path, id, url, title, text):
db_connection = sqlite3.connect(db_path)
insert_statement = u"""
INSERT INTO article_text (article_id, article_url, title, article)
VALUES (?, ?, ?, ?)"""
db_connection.executemany(insert_statement, [(id, url, title, text)])
db_connection.commit()
db_connection.close()
def calculate_article_word_frequencies(db_path, article_id, text):
db_connection = sqlite3.connect(db_path)
insert_statement = u"""
INSERT INTO article_word_frequencies (article_id, word, frequency)
VALUES (?, ?, ?)"""
article_word_frequencies = count_word_frequencies([text]).items()
db_connection.executemany(insert_statement,
[(article_id, pair[0], pair[1]) for pair in article_word_frequencies])
db_connection.commit()
db_connection.close()
# Process the text file, splitting it into articles and computing word frequencies
def parse_articles(db_path, file):
article_header = re.compile(r'^<doc id=\"([0-9]+)\" url=\"(.*)\" title=\"(.*)\">$')
article_footer = re.compile(r'^</doc>$')
    # The text between each <doc>...</doc> pair is a single article
with open(file, 'r') as wikipedia_dump:
article_text = ''
article_id = 0
article_url = ''
article_title = ''
for line in wikipedia_dump:
if not line:
continue
header_found = article_header.search(line)
footer_found = article_footer.search(line)
if header_found:
article_id = header_found.group(1)
article_url = header_found.group(2)
article_title = header_found.group(3)
continue
elif footer_found:
save_article(db_path, article_id, article_url, article_title, article_text)
calculate_article_word_frequencies(db_path, article_id, article_text)
article_text = ''
article_id = 0
article_url = ''
article_title = ''
else:
article_text += "\n" + line
parse_articles(DB_PATH, "japanese_wikipedia_extracted_articles.txt")
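# The header regex can be sanity-checked against a line in wikiextractor's output format (the id, url and title below are illustrative):

```python
import re

article_header = re.compile(r'^<doc id=\"([0-9]+)\" url=\"(.*)\" title=\"(.*)\">$')

line = '<doc id="12" url="https://ja.wikipedia.org/wiki?curid=12" title="日本">'
m = article_header.search(line)
```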
# + [markdown] id="7AujdA9HGRrS" colab_type="text"
# # Store the corpus-wide word frequencies
# + id="zNxdsDr_hh4J" colab_type="code" colab={}
def calculate_wikipedia_word_frequencies(db_path):
# Open the database
db_connection = sqlite3.connect(db_path)
insert_statement = u"""
INSERT INTO wikipedia_word_frequencies (word, frequency) VALUES (?, ?)"""
with codecs.open("japanese_wikipedia_extracted_articles.txt", "r",'utf-8') as full_wiki:
db_connection.executemany(insert_statement,
[(pair[0], pair[1]) for pair in count_word_frequencies(full_wiki).items()])
# Commit the changes and close.
db_connection.commit()
db_connection.close()
calculate_wikipedia_word_frequencies(DB_PATH)
# + [markdown] id="VjxArz3LMPq0" colab_type="text"
# # Define the TF-IDF functions
# + id="g2EQbldeMP5K" colab_type="code" colab={}
from math import log
def tf_idf(word, doc_word_frequencies, corpus_word_frequencies, vocabulary_size):
return tf(word, doc_word_frequencies) * idf(word, corpus_word_frequencies, vocabulary_size)
def tf(word, doc_word_frequencies):
return log(1 + doc_word_frequencies[word])
def idf(word, corpus_word_frequencies, vocabulary_size):
if word not in corpus_word_frequencies or corpus_word_frequencies[word] == 0:
return 1
else:
return log(vocabulary_size / corpus_word_frequencies[word])
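# A self-contained check of the scheme above: tf is log-damped, and idf compares the vocabulary size against the corpus frequency (the counts below are illustrative):

```python
from math import log, isclose

def tf_demo(freq):
    # log-damped term frequency, as in tf() above
    return log(1 + freq)

def idf_demo(corpus_freq, vocabulary_size):
    # inverse document frequency, as in idf() above
    return 1 if corpus_freq == 0 else log(vocabulary_size / corpus_freq)

# a word appearing 3 times in the article and 100 times in a corpus
# with 10,000 distinct words
score = tf_demo(3) * idf_demo(100, 10000)
```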
# + [markdown] id="IXS3lOIcGXUC" colab_type="text"
# # Compute and store tf-idf values for each article's words (>1 hour)
# + id="A9GClv92CiEB" colab_type="code" colab={}
def retrieve_articles_wordfreqs_by_id(db_path, article_id):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT word, frequency
FROM article_word_frequencies
WHERE article_id = {seq}""".format(seq=str(article_id))
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
db_connection.close()
return result
def retrieve_wikipedia_wordfreqs(db_path, words_list):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT word, frequency
FROM wikipedia_word_frequencies
WHERE word IN (\"{seq}\")""".format(seq='","'.join(words_list))
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
db_connection.close()
return result
# all_nouns_verbs_adjs is local to count_word_frequencies, so load the
# corpus-wide frequencies back from the database instead
def retrieve_all_wikipedia_wordfreqs(db_path):
    db_connection = sqlite3.connect(db_path)
    cursor = db_connection.execute(
        u"SELECT word, frequency FROM wikipedia_word_frequencies")
    result = dict(cursor.fetchall())
    db_connection.close()
    return result
wikipedia_frequencies = retrieve_all_wikipedia_wordfreqs(DB_PATH)
def retrieve_wikipedia_vocabulary_size(db_path):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT COUNT(DISTINCT(word))
FROM wikipedia_word_frequencies"""
result = 0
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result = row[0]
db_connection.close()
return result
wikipedia_vocabulary_size = retrieve_wikipedia_vocabulary_size(DB_PATH)
def save_article_tfidfs(db_path, article_id):
insert_statement = u"""
INSERT INTO article_word_tfidfs (article_id, word, tfidf_score)
VALUES (?, ?, ?)"""
article_word_frequencies = dict(retrieve_articles_wordfreqs_by_id(db_path, article_id))
article_word_tfidfs_tuples = [(article_id, word, tf_idf(word, article_word_frequencies, wikipedia_frequencies, wikipedia_vocabulary_size)) for word in article_word_frequencies.keys()]
db_connection = sqlite3.connect(db_path)
db_connection.executemany(insert_statement, article_word_tfidfs_tuples)
db_connection.commit()
db_connection.close()
def calculate_articles_tfidfs(db_path):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT DISTINCT(article_id) FROM article_word_frequencies"""
cursor = db_connection.execute(retrieve_statement)
articles = []
for row in cursor:
articles.append(row[0])
db_connection.close()
    for article_id in articles:
        save_article_tfidfs(db_path, article_id)
calculate_articles_tfidfs(DB_PATH)
# + [markdown] id="3EVUZ-VFPlSX" colab_type="text"
# # Define helper functions
# + id="_lYLiz5SPk1E" colab_type="code" colab={}
# Load the word-frequency data for the article with the given article_id from the database
def retrieve_articles_wordfreqs_by_id(db_path, article_id):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT word, frequency
FROM article_word_frequencies
WHERE article_id = {seq}""".format(seq=str(article_id))
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
db_connection.close()
return result
# Load amount_articles randomly chosen articles from the database
def retrieve_random_articles(db_path, amount_articles):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT article_id, title, article
FROM article_text
ORDER BY RANDOM() LIMIT """ + str(amount_articles)
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
return result
# Load the articles with the given IDs from the database
def retrieve_articles_by_ids(db_path, article_ids_list):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT article_id, title, article
FROM article_text
WHERE article_id IN ({seq})""".format(seq=','.join(article_ids_list))
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
return result
# Load the articles with the given titles from the database
def retrieve_articles_by_titles(db_path, article_titles_list):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT article_id, title, article
FROM article_text
WHERE title IN (\"{seq}\")""".format(seq='","'.join(article_titles_list))
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
return result
# Load each word and its tf-idf value for the article with the given title from the database
def retrieve_articles_words_tfidfs_by_title(db_path, article_title):
db_connection = sqlite3.connect(db_path)
retrieve_statement = u"""
SELECT
word,
tfidf_score
FROM article_word_tfidfs
INNER JOIN article_text
ON article_text.article_id = article_word_tfidfs.article_id
WHERE title = \"{seq}\"
ORDER BY tfidf_score DESC;
""".format(seq=article_title)
result = []
cursor = db_connection.execute(retrieve_statement)
for row in cursor:
result.append(row)
    db_connection.close()
    return result
# + id="iUj_RYMbQhG0" colab_type="code" colab={}
# IDs of 600 randomly sampled articles
six_hundred_random_articles = [
'15440', '48605', '97947', '93611', '9345', '15240', '44495', '73378', '29438', '105238', '30927', '53640', '24155', '16388', '62926', '99988',
'105031', '63578', '15975', '105200', '10959', '2916', '25306', '100171', '64296', '87272', '655', '39045', '9882', '99104', '11836', '103448',
'100706', '46769', '5698', '91613', '34683', '2009', '98916', '82199', '96534', '42074', '46525', '86848', '4376', '87836', '61109', '23894',
'46551', '8580', '85456', '63773', '56844', '28672', '76188', '51948', '35791', '94852', '33394', '19173', '44734', '11243', '104952', '98372',
'39161', '97470', '105888', '43787', '79526', '92471', '71389', '76790', '10113', '98822', '29032', '31035', '71037', '70350', '62673', '79612',
'69329', '98759', '29391', '46890', '5270', '4015', '14061', '91990', '39171', '38310', '17703', '26351', '73463', '32801', '85657', '36473',
'56036', '59475', '80541', '75385', '43304', '75902', '65163', '2160', '34027', '101328', '99787', '77979', '33838', '37300', '71870', '28833',
'101072', '60008', '10817', '38461', '56193', '99743', '54179', '68782', '102308', '99242', '58054', '76002', '99845', '11579', '22268', '28195',
'73700', '24341', '52919', '47208', '23030', '6032', '3259', '34742', '85950', '52057', '87398', '87515', '17596', '104078', '8765', '69760',
'28743', '102245', '24170', '27917', '38795', '67501', '80972', '81837', '51431', '28953', '11541', '28066', '67014', '72834', '62063', '55171',
'42553', '72389', '104465', '996', '27759', '18708', '788', '71057', '43', '9946', '6405', '32749', '93255', '41615', '75802', '23958', '80370',
'22475', '56061', '98034', '79627', '40664', '103406', '18015', '79357', '96109', '51472', '1407', '40450', '19255', '42494', '51933', '58464',
'62683', '42788', '53284', '15769', '57347', '78889', '104672', '41921', '96299', '29146', '58826', '60446', '57672', '26751', '47341', '89190',
'59086', '8458', '83688', '15250', '57614', '63120', '88327', '105227', '63947', '56114', '86277', '97687', '67566', '53527', '94202', '30510',
'29298', '1141', '68031', '101086', '32043', '61914', '46464', '21415', '5580', '59604', '59779', '20689', '60200', '24634', '22223', '59525',
'102003', '54280', '16410', '55488', '11316', '72981', '45245', '24471', '33880', '69195', '46738', '92207', '75672', '105012', '71034', '86891',
'105846', '53905', '2819', '57681', '56451', '97783', '79576', '63061', '58991', '102999', '8385', '90767', '65215', '80039', '8165', '9255',
'57294', '24463', '23993', '50346', '26214', '34620', '66393', '80143', '79695', '86538', '40795', '5486', '45192', '2364', '74829', '17724',
'14849', '82345', '90376', '100555', '59575', '75381', '6423', '51596', '92150', '1008', '45999', '6027', '76978', '59333', '25758', '63831',
'61470', '4292', '805', '100886', '30471', '37969', '90659', '27857', '3762', '37457', '75108', '72829', '1251', '66628', '7373', '4979',
'17030', '40239', '38354', '13813', '2264', '93274', '26003', '90258', '66521', '12135', '65007', '59893', '77958', '16544', '63864', '24669',
'92463', '67671', '9046', '33033', '1221', '100188', '8255', '4639', '41076', '48870', '17395', '12516', '20503', '54274', '98195', '87347',
'101120', '13649', '77670', '100414', '1929', '105199', '53789', '57956', '7079', '46059', '20132', '21751', '39519', '91745', '54276', '9867',
'15878', '51559', '37235', '63144', '103037', '28642', '34667', '14090', '67137', '35430', '81894', '29789', '64427', '47238', '8757', '25046',
'71370', '1872', '36144', '869', '70451', '78354', '56752', '92323', '104375', '82298', '72040', '40294', '55279', '22682', '33613', '2433',
'57654', '10278', '76223', '26610', '38805', '90158', '10845', '4586', '105417', '94988', '83845', '79097', '50223', '76484', '24613', '90746',
'1834', '89419', '13679', '33452', '71476', '8535', '93329', '80573', '100066', '795', '46053', '65721', '54796', '51411', '75101', '85756',
'100863', '55421', '59800', '2706', '49940', '10687', '33194', '38376', '32910', '36938', '99280', '24176', '6108', '1530', '61890', '29106',
'30107', '85588', '78859', '82961', '44806', '83704', '33233', '81674', '88561', '33346', '22383', '12974', '13149', '82394', '47593', '7086',
'70752', '79314', '71824', '27348', '56837', '483', '14592', '11369', '100281', '51893', '66472', '3130', '100259', '83466', '67251', '786',
'29289', '77015', '103124', '67900', '105221', '34287', '83598', '55234', '1969', '58163', '55083', '41483', '4952', '42207', '12827', '34554',
'33742', '39553', '56041', '71923', '49543', '59083', '16484', '30947', '34219', '6124', '5067', '4783', '18112', '16137', '50516', '94644',
'26756', '20712', '38371', '44809', '3898', '35419', '37239', '13913', '65177', '16907', '22725', '32854', '97439', '7823', '90311', '20801',
'68840', '20145', '28710', '33826', '50104', '13302', '48102', '72616', '64795', '98879', '102759', '45726', '68458', '63728', '1577', '5372',
'35087', '14509', '88670', '30344', '84740', '15095', '57071', '39983', '41248', '31955', '4637', '104157', '104410', '11229', '24752', '72480',
'57253', '26239', '57305', '47046', '6942', '74589', '19206', '102740', '14086', '105098', '29852', '23707', '77355', '105856', '63242', '45972',
'19158', '68190', '89030', '66555', '71440', '57680', '73001', '27485', '54610', '85610', '7821', '12159', '79770', '76601', '39708', '33783',
'81392', '97379', '48891', '39678', '88120', '63076', '49849']
|
exercises/introduction-to-nlp-wikipedia-preprocessing-japanese.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sklearn
# !python -m pip install scikit-learn
# +
# %load_ext autoreload
# %autoreload 2
# -
import pandas as pd
# !dir .\files\auto-mpg.csv
pd_data = pd.read_csv('./files/auto-mpg.csv', header = None)
pd_data.info()
pd_data.shape
pd_data.columns = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
pd_data
x = pd_data[['weight']]
y = pd_data[['mpg']]
x.shape, y.shape
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x,y)
lr.coef_, lr.intercept_
# y = -0.00767661x + 46.31736442
lr.score(x,y)  # accuracy (R^2 score)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y)
lr.coef_, lr.intercept_
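A train/test split is only informative once the model is refit on the training portion and scored on the held-out portion. A self-contained sketch of that workflow, using synthetic stand-in data (the numbers below are made up; the real data stays in `./files/auto-mpg.csv`):

```python
import numpy as np

# Synthetic stand-ins for the 'weight' and 'mpg' columns
rng = np.random.default_rng(0)
x = rng.uniform(1500, 5000, size=200)
y = 46.3 - 0.0077 * x + rng.normal(0, 1.0, size=200)

# Hold out 50 rows, fit y = a*x + b on the remaining 150 by least squares
idx = rng.permutation(200)
train, test = idx[50:], idx[:50]
A = np.vstack([x[train], np.ones(train.size)]).T
(a, b), *_ = np.linalg.lstsq(A, y[train], rcond=None)

# R^2 computed on the held-out rows only
pred = a * x[test] + b
r2 = 1 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
```

Scoring on held-out rows, rather than on the fitting data as above, gives a less optimistic estimate of how the model generalizes.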
|
auto-mpg data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Reading and cleaning NOAA weather data
#
# This notebook includes initial exploration of the NOAA weather data. Specifically, it was used to:
# 1. Develop code for cleaning NOAA data prior to merge with tripdata.
# + deletable=true editable=true
import pandas as pd
import numpy as np
# + deletable=true editable=true
df = pd.read_csv('./../data/external/903571.csv', na_values=-9999, parse_dates=[2])
# -
df.head()
# + deletable=true editable=true
df.info()
# + [markdown] deletable=true editable=true
# Based on the number of missing values (and rarity of weather events WT02, WT04, and WT06 which are heavy fog, ice pellets/sleet, and glaze or rime), we will only use the following features:
# - PRCP: Precipitation
# - SNOW: Snowfall
# - SNWD: Snow depth
# - TMAX: Max temperature
# - TMIN: Min temperature
# - AWND: Average daily wind speed
# - WSF2: Fastest 2-minute wind speed
# - WSF5: Fastest 5-second wind speed
# - WT01: Fog, ice fog, or freezing fog (may include heavy fog)
# - WT08: Smoke or haze
# + deletable=true editable=true
df = df[['DATE','PRCP','SNOW','SNWD','TMAX','TMIN','AWND','WSF2','WSF5', 'WT01', 'WT08']]
# + deletable=true editable=true
df = df.set_index('DATE')
# + deletable=true editable=true
df.head()
# + deletable=true editable=true
df.info()
# -
# Next, we fill the null weather type values with 0's.
df = df.fillna({'WT01':0, 'WT08':0})
df.info()
# + [markdown] deletable=true editable=true
# To finish cleaning the weather data, all we need to do is fill missing wind speed values. We'll do so by averaging the forward and backfill values (under the intuition that wind speed is essentially continuous, so the mean value is a reasonable estimate).
# + deletable=true editable=true
def get_forward_back_avg(series):
forward = series.ffill()
back = series.bfill()
if np.sum(forward - back) == 0:
print 'No change for {}'.format(series.name)
average = (forward + back)/2.0
return average
# + deletable=true editable=true
for col in df.columns:
df[col] = get_forward_back_avg(df[col])
|
notebooks/2-buj201-exploring-NOAA-data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + gradient={"editing": false, "id": "5df6ba91-28ef-45a3-9901-450ecd17b5a8", "kernelId": ""}
# ! pip install catboost
# ! pip install -U imbalanced-learn
# + [markdown] gradient={"editing": false, "id": "d5d9c45f-de7a-485b-a1ca-d5a9180a408c", "kernelId": ""}
# # Environment Setup
# + gradient={"editing": false, "id": "70002dc5-619d-4ecd-8fb2-5e53f85403e4", "kernelId": ""}
# Essential modules for data manipulation
import pandas as pd
import numpy as np
# Custom modules to assist with common data exploration and preparation tasks
import src.data.sets as datasets
# Custom module to produce performance metrics
import src.models.performance as perf
# Classifiers
from catboost import CatBoostClassifier
# Time related modules
from datetime import datetime
import pytz
# Declare variables to store name of timezone
tz_SYD = pytz.timezone('Australia/Sydney')
# + [markdown] gradient={"editing": false, "id": "6ad8ceb0-235a-4808-b985-0fc1a6add5b6", "kernelId": ""}
# # 3. Modelling
# + [markdown] gradient={"editing": false, "id": "639b4ad3-1ed5-4863-b0e1-c32a36bb89c7", "kernelId": ""}
# ## 3.1 Collect processed data
# + gradient={"editing": false, "id": "c52935bd-8fc9-4c02-b794-4dc84373ef3c", "kernelId": ""}
# Load data set(s) into dataframe(s)
X_train, y_train, X_val, y_val, X_test, y_test = datasets.load_sets()
# + gradient={"editing": false, "id": "bdeb5172-1bad-4164-a87f-0ff2733e0370", "kernelId": ""}
# Print the shape of the loaded datasets to verify that correct data has been loaded
print("Train Dataframe (rows, columns): ", X_train.shape)
print("Validation Dataframe (rows, columns): ", X_val.shape)
print("Test Dataframe (rows, columns): ", X_test.shape)
# + [markdown] gradient={"editing": false, "id": "2549d748-965c-48bd-8b71-075b0cc3b629", "kernelId": ""}
# ## 3.2 CatBoost Model
# + [markdown] gradient={"editing": false, "id": "fa758d3d-c394-40ec-b9b2-876ce097c1d9", "kernelId": ""}
# ### 3.2.1 Training with default hyperparameters
# + gradient={"editing": false, "id": "ef16de86-a383-45f1-beab-3ecea2982bbe", "kernelId": ""}
print(datetime.now(tz_SYD))
# Instantiate CatBoost Classifier with default Hyperparams
clf_cb1 = CatBoostClassifier(task_type='GPU', loss_function='MultiClass', random_state=8)
# Fit CatBoost Classifier
clf_cb1.fit(X_train,y_train)
print(datetime.now(tz_SYD))
# Score CatBoost Classifier
perf.score_models(X_train, y_train, X_val, y_val, X_test, y_test, None, False, "multiclass", clf_cb1)
print(datetime.now(tz_SYD))
# + [markdown] gradient={"editing": false, "id": "5ebf445a-f070-4134-b706-abc4d84a8624", "kernelId": ""}
# ### 3.2.2 Training with Auto-Class-Weights=Balanced
# + gradient={"editing": false, "id": "25ac2be7-e069-4055-91f1-0c3b2a7aed5e", "kernelId": ""}
print(datetime.now(tz_SYD))
# Instantiate CatBoost Classifier with auto_class_weights=Balanced
clf_cb2 = CatBoostClassifier(task_type='GPU', loss_function='MultiClass', auto_class_weights='Balanced', random_state=8)
# Fit CatBoost Classifier
clf_cb2.fit(X_train,y_train)
print(datetime.now(tz_SYD))
# Score CatBoost Classifier
perf.score_models(X_train, y_train, X_val, y_val, X_test, y_test, None, False, "multiclass", clf_cb2)
print(datetime.now(tz_SYD))
# + gradient={"editing": false, "id": "faae3cab-3204-49d9-82fe-eb1f19182c71", "kernelId": "", "source_hidden": false} jupyter={"outputs_hidden": false}
|
notebooks/modelling-CatBoost.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Experimentation with conv layers:
#
import numpy as np
import pandas as pd
import torch.nn as nn
import os
from os.path import isfile, join
from pydicom import dcmread
import matplotlib.pyplot as plt
fpath = 'C:/Users/cathx/repos/covid19/train'
links = []
for folder in os.listdir(fpath):
link = fpath + '/' + folder
for img in os.listdir(link):
imgs = link + '/' + img
for k in os.listdir(imgs):
links.append(imgs + '/' + k)
links[0]
img = dcmread(links[4095])
plt.imshow(img.pixel_array, cmap=plt.cm.gray)
onlyimgs = [f for f in os.listdir(fpath) if isfile(join(fpath, f))]
|
Notebooks/Experiment.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Before your start:
# - Read the README.md file
# - Comment as much as you can and use the resources (README.md file)
# - Happy learning!
# +
# %matplotlib inline
# import numpy and pandas
# -
# # Challenge 1 - Analysis of Variance
#
# In this part of the lesson, we will perform an analysis of variance to determine whether the factors in our model create a significant difference in the group means. We will be examining a dataset of FIFA players. We'll start by loading the data using the code in the cell below.
# +
# Run this code:
fifa = pd.read_csv('fifa.csv')
# -
# Let's examine the dataset by looking at the `head`.
# +
# Your code here:
# -
# Player's values are expressed in millions of euros. We would like this column to be numeric. Therefore, let's create a numeric value column. Do this by stripping all non-numeric characters from each cell. Assign this new data to `ValueNumeric`. There is no need to multiply the value to be expressed in millions.
# +
# Your code here:
# -
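One possible approach to the stripping step (a hedged sketch: the column name `Value` and the `€…M` formatting are assumptions about the dataset, not taken from it):

```python
import pandas as pd

# Hypothetical stand-in for the FIFA 'Value' column
fifa = pd.DataFrame({'Value': ['€110.5M', '€77M', '€118.5M']})

# Strip everything that is not a digit or decimal point, then convert
fifa['ValueNumeric'] = pd.to_numeric(
    fifa['Value'].str.replace(r'[^0-9.]', '', regex=True))
```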
# #### We'd like to determine whether a player's preferred foot and position have an impact on their value.
#
# Using the `statsmodels` library, we are able to produce an ANOVA table without munging our data. Create an ANOVA table with value as a function of position and preferred foot. Recall that pivoting is performed by the `C` function.
#
# Hint: For columns that have a space in their name, it is best to refer to the column using the dataframe (For example: for column `A`, we will use `df['A']`).
# +
# Your code here:
# -
# What is your conclusion from this ANOVA?
# +
# Your conclusions here:
# -
# After looking at a model of both preferred foot and position, we decide to create an ANOVA table for nationality. Create an ANOVA table for numeric value as a function of nationality.
# +
# Your code here:
# -
# What is your conclusion from this ANOVA?
# # Challenge 2 - Linear Regression
#
# Our goal with using linear regression is to create a mathematical model that will enable us to predict the outcome of one variable using one or more additional independent variables.
#
# We'll start by ensuring there are no missing values. Examine all variables for all missing values. If there are missing values in a row, remove the entire row.
# +
# Your code here:
# -
# Using the FIFA dataset, in the cell below, create a linear model predicting value using stamina and sprint speed. create the model using `statsmodels`. Print the model summary.
#
# Hint: remember to add an intercept to the model using the `add_constant` function.
# +
# Your code here:
# -
# Report your findings from the model summary. In particular, report about the model as a whole using the F-test and how much variation is predicted by the model using the r squared.
# +
# Your conclusions here:
# -
# Next, create a second regression model predicting value using potential. Create the model using `statsmodels` and print the model summary. Remember to add a constant term.
# +
# Your code here:
# -
# Report your findings from the model summary. In particular, report about the model as a whole using the F-test and how much variation is predicted by the model using the r squared.
# +
# Your conclusions here:
# -
# Plot a scatter plot of value vs. potential. Do you see a linear relationship?
# +
# Your code here:
# -
|
module-2/lab-correlation-tests-with-scipy/your-code/main.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=["naas"]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=[]
# # Twitter - Get user data
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Twitter/Twitter_Get_user_data.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=[]
# **Tags:** #twitter #ifttt #naas_drivers #snippet #content #dataframe
# + [markdown] papermill={} tags=["naas"]
# **Author:** [<NAME>](https://github.com/dineshh912)
# + [markdown] papermill={} tags=[]
# ## Input
# + [markdown] papermill={} tags=[]
# ### Import library
# + papermill={} tags=[]
import tweepy
import pandas as pd
# + [markdown] papermill={} tags=[]
# ### How to get API keys?
# + [markdown] papermill={} tags=[]
# [Twitter API Documentation](https://developer.twitter.com/en/docs/getting-started)
# + [markdown] papermill={} tags=[]
# ### Variables
# + papermill={} tags=[]
# API Credentials
consumer_key = "<KEY>"
consumer_secret = "<KEY>XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# + papermill={} tags=[]
user_list = ["JupyterNaas", "Spotify", "ProjectJupyter"]
# + [markdown] papermill={} tags=[]
# ## Model
# + [markdown] papermill={} tags=[]
# ### Authentication
# + papermill={} tags=[]
try:
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)
api = tweepy.API(auth)
except BaseException as e:
    print(f"Authentication failed due to - {str(e)}")
# + [markdown] papermill={} tags=[]
# ### The function below retrieves only the user's information.
# + papermill={} tags=[]
def getUserInfo(user_id):
# Define a pandas dataframe to store the date:
user_info_df = pd.DataFrame(columns = ['twitter_id', 'name', 'screen_name', 'description', 'tweet_count', 'friends_count',
'followers_count', 'favourites_count', 'verified', 'created_at']
)
    # Collect user information using get_user
for user in user_id:
info = api.get_user(user) # Get user information request
twitter_id = info.id
name = info.name
screen_name = info.screen_name
description = info.description
tweet_count = info.statuses_count
friends_count = info.friends_count
followers_count = info.followers_count
favourites_count = info.favourites_count
verified = info.verified
created_at = info.created_at
user_info = [twitter_id, name, screen_name, description, tweet_count, friends_count,
followers_count, favourites_count, verified, created_at]
user_info_df.loc[len(user_info_df)] = user_info
return user_info_df
# + [markdown] papermill={} tags=[]
# ## Output
# + [markdown] papermill={} tags=[]
# ### Get user data
# + papermill={} tags=[]
df = getUserInfo(user_list)
|
Twitter/Twitter_Get_user_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Nicompag/ISYS5002_portfolio/blob/main/Payslip.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="kF8ldRydTDPx"
# We need a program to print out a payslip for sales people. Consider 'Ram', who has a salary of \$25000. They have sold goods worth \$20000 and earn 2% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
# + colab={"base_uri": "https://localhost:8080/"} id="2UenEHfcTuLz" outputId="b2bb9c3f-f768-4dfb-c6d8-78666c891029"
salary = 25000
sales = 20000
commission = 0.02 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Ram')
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
# + [markdown] id="QzcJGzRBUDz2"
# Consider 'Radha', who has a salary of \$30000. They have sold goods worth \$40000 and earn 2.5% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
# + id="wFmuD9ujUgDW" colab={"base_uri": "https://localhost:8080/"} outputId="87e4a86c-a82b-44c5-a17a-8df61315fe85"
salary = 30000
sales = 40000
commission = 0.025 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Radha')
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
# + [markdown] id="UFGObxq7UlvE"
# What did we change from Ram to Radha?
# + id="zMEi9q1dUr36"
# + [markdown] id="7PrlmDmYV6iX"
# Make what we changed as inputs (parameters) to a function
# + id="cJwgf8nZWDF6"
def payslip (name, salary, sales, rate, tax_rate):
commission = rate * sales
tax = (salary + commission) * tax_rate
pay = salary + commission - tax
print('Payslip of {}'.format(name))
print('Salary:', salary,'Commission:',commission,'Tax:', tax)
print('Total pay:',pay)
# + colab={"base_uri": "https://localhost:8080/"} id="3ocjUqwW7Gve" outputId="7eccf41c-98a5-4fa0-a860-7786bd1d7ae1"
payslip('Radha', 30000, 40000, 0.025, 0.1)
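Calling the same function with Ram's numbers from the first cell reproduces his payslip (the function is repeated here so the snippet stands alone):

```python
def payslip(name, salary, sales, rate, tax_rate):
    commission = rate * sales
    tax = (salary + commission) * tax_rate
    pay = salary + commission - tax
    print('Payslip of {}'.format(name))
    print('Salary:', salary, 'Commission:', commission, 'Tax:', tax)
    print('Total pay:', pay)

payslip('Ram', 25000, 20000, 0.02, 0.1)  # Total pay: 22860.0
```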
|
Payslip.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inequality of underrepresented groups in PyData leadership
#
# ### Or what's up at the top of the PyData ecosystem?
#
# So, I am the host of [Open Source Directions](https://www.quansight.com/open-source-directions), the webinar series (and, *yes*, soon podcast) about the roadmaps of projects in the PyData / Scientific Python space. I would like for the series to be a welcoming venue for the less-often-heard voices of the PyData community. Unfortunately, by focusing on projects and their lead/core developers, I (and other project leads) believe that we are reinforcing existing biases and overrepresentation.
#
# Yes, there are steps we can take to address the diversity issues on Open Source Directions. We have adjusted our processes and procedures, with more significant changes to come in the future. I personally welcome any and all feedback in this regard. Feel free to reach out to me publicly or privately with your ideas and concerns.
#
# But this post is not about that. This post is to show that we in the NumFOCUS and PyData community have vast inequalities in representation in the leadership of our projects.
#
#
# ## How should equality be measured?
#
# One of my big pet peeves is that people usually discuss diversity & equality as a ratio between men/women. This is a terrible way to talk about this problem for a couple reasons:
#
# 1. There are many other axes of inequality, including economic class, social class, racial, ethnic, religious,
# educational, sexuality, and more!
# 2. Ratios only work for binary systems, and gender is not binary.
# 3. Equality & inequality should be bounded from $[0, 1]$, i.e. they should be fractional measures of how
# equal a system is. A ratio of men/women is instead bounded from $[0, \infty]$, where both $0$ and $\infty$
# are perfectly unequal, and $1$ represents perfect equality.
#
# I understand why people use men/women ratios.
#
# * They are deceptively easy to understand.
# * They probably track with other systemic inequalities.
# * In terms of population differences, it is (again deceptively) close to parity, with only [0.3-0.5% of the U.S. population identifying as transgender](https://www.nytimes.com/2015/06/09/upshot/the-search-for-the-best-estimate-of-the-transgender-population.html).
# * It is universal in the sense that, while economic and racial differences change from country to country, gender identity is often viewed as having the same ratio everywhere (it doesn't).
# * It is very difficult to obtain reasonable data on inequality along avenues other than gender. This makes gender the easiest metric to actually analyze.
#
# The above points do not make such ratios any less wrong.
#
# In my [PyData Carolinas Keynote](https://youtu.be/fFKztg_ZRB4) ([slides](https://docs.google.com/presentation/d/148_jbdBCC4WjGmhY8s1wo8G3Yn2dZ1bc-PDBeg_Kp-w/edit?usp=sharing)) from a few years ago, I presented (what I feel is) a much better, information-theoretic, entropy-based model of equality & inequality. For a 3-gendered partitioning scheme (female, male, nonbinary), the Generalized Entropy Inequality measure (GEI, $G$) reduces to,
#
# $G = \ln(3) - H$
#
# Where $H$ is our friend the Shannon entropy, or
#
# $H = -\sum_{i=1}^S p_i \ln p_i$
#
# $G$ has much better mathematical properties than a simple ratio. However, it is still not normalized onto the range of $[0, 1]$. To do this we need to subtract the minimal inequality (i.e. where the distribution matches the population at large, say $G(P)$) and divide by size of the domain. Thus we have a normalized inequality measure $|G|$ that is:
#
# $|G| = \frac{\ln(3)- H - G(P)}{\ln(3) - G(P)} = 1 - \frac{H}{\ln(3) - G(P)}$
#
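As a quick numeric sanity check of $|G|$: an all-male team scores exactly 1 (maximally unequal), and a team matching the population mix scores exactly 0. A minimal sketch (the helper names here are mine, not part of the analysis code):

```python
import numpy as np

def H(counts):
    # Shannon entropy of a count vector, with 0*log(0) taken as 0
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

G_P = np.log(3) - H([49.75, 49.75, 0.5])   # minimal inequality at the U.S. mix

def norm_G(counts):
    return 1.0 - H(counts) / (np.log(3) - G_P)

all_male = norm_G([0, 10, 0])                  # maximally unequal -> 1.0
at_population = norm_G([49.75, 49.75, 0.5])    # matches the population -> 0.0
```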
# ## Methodology
#
# In order to show quantitatively how unequal the leadership of PyData is, I have gone through the NumFOCUS fiscally sponsored projects and tried to determine gender of their leadership teams according to the following rules:
#
# 1. First, I looked at their websites. If they listed a steering committee (as with Jupyter) or a core team (as with conda-forge), I would use this as the representative body.
# 2. If no leadership team was clearly posted, I would go to the repository and look at their top contributors. Yes, this subjects the project to the "tyranny of code contribution." I defined top contributors as those with more than 150 commits.
# 3. I tried to be as inclusive as possible, including all past members and subcommittee members (such as in pandas).
#
# If you find problems with my counting, please put a PR into this repo that updates the `data.json` file! I welcome all fixes.
#
# I am picking on NumFOCUS here because doing so easily, discretely, and representatively limits the number of projects we have to analyze. Also, by virtue of being a NumFOCUS project, we can say that these projects are "important." Furthermore, I know that NumFOCUS can see this analysis in the spirit in which it is given: working toward a more inclusive tomorrow.
#
# ## Results
# +
# %matplotlib inline
import json
import numpy as np
import matplotlib.pyplot as plt
# +
def G(female=0.0, male=0.0, nonbinary=0.0):
total = female + male + nonbinary
p_i = np.array([female, male, nonbinary]) / total
H_i = p_i * np.log(p_i)
H_i[p_i == 0.0] = 0.0
H = - H_i.sum()
return np.log(3) - H
def norm_G(female=0.0, male=0.0, nonbinary=0.0, G_P=0.0):
total = female + male + nonbinary
p_i = np.array([female, male, nonbinary]) / total
H_i = p_i * np.log(p_i)
H_i[p_i == 0.0] = 0.0
H = - H_i.sum()
return 1.0 - H/(np.log(3) - G_P)
USA_population = G(female=49.75, male=49.75, nonbinary=0.5)
with open('data.json') as f:
data = json.load(f)
project_inequalities = {}
for project, kwargs in data.items():
project_inequalities[project] = norm_G(G_P=USA_population, **kwargs)
# +
proj_ins = sorted(project_inequalities.items(), key=lambda x: -x[1])
cm = plt.get_cmap('viridis')
projects = [x[0] for x in proj_ins]
y_pos = np.arange(len(projects))
norm_Gs = [x[1] for x in proj_ins]
colors = list(map(cm, norm_Gs))
plt.rcParams['font.size'] = 14.0
fig, ax = plt.subplots()
fig.set_figheight(8.5)
ax.barh(y_pos, norm_Gs, align='center', color=colors)
ax.set_yticks(y_pos)
ax.set_yticklabels(projects)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('$|G|$ (unitless)')
t = ax.set_title('Inequality, lower is better')
# -
# Note that while some projects are more equal than others, no project has zero inequality. Also, just to be perfectly clear, all projects skew toward overrepresenting men. Furthermore, six projects have only men in leadership roles.
#
# It is important to note again at this point that gender is only one axis of diversity, albeit an important axis. Still, keep in mind that projects which completely lack gender diversity may be representative along different equality measures (such as racial or ethnic).
#
# ## Some Further Thoughts
#
# ### Active vs. Passive Diversity Problems
#
# In discussing these issues with [Chris "CJ" Wright](https://billingegroup.github.io/people/cwright.html), the president of Columbia University's [qSTEM](https://www.facebook.com/QSTEMColumbia/) group (their LGBTQ+ STEM organization), and a close friend, there is a difference between *active* and *passive* diversity issues, where these terms are defined as:
#
# * **Active:** There is a toxic member of a community - or a toxic culture as a whole - which prevents an
# equitable system of representation from evolving or destroys existing equitable systems.
# * **Passive:** There are significant biases, in which the agents in the system will be unable to make major
# strides toward equality by simply continuing the existing day-to-day activities.
#
# I believe (anecdotally) that PyData has passive diversity issues with respect to project leadership. Over the years, we have made a ton of progress towards equality (thanks to <NAME> and other members of [DISC](https://numfocus.org/programs/diversity-inclusion) and many, many others) on issues such as conference attendees, conference speakers, keynotes, board members, etc. However, this has not translated down to project leadership.
#
# ### Nonbinary vs Having More Specific Categories
#
# Here, I lumped all non-male & non-female people into a single "nonbinary" category. However, other organizations provide more categories. For instance, the University of California provides [six categories for gender identification on their applications](https://jonathanturley.org/2015/07/29/university-of-california-gives-students-six-gender-identity-categories/). However, accurately knowing the percentage of the U.S. population that falls into each of these categories is effectively statistically impossible. My domestic partner (who is a Public Health professor) tells me that in most cases they have trouble tracking such data as it relates to population-level health issues.
#
# For the analysis here, adding more categories with zero-values would simply make the projects look even more unequal. (The $\ln(3)$ would become $\ln(6)$.) There is no reason to do this as the point of underrepresentation in leadership can be made well enough with only 3 categories.
#
# ### Quotas & Role of Media
#
# In terms of Open Source Directions and other podcasts and webinars, the idea of having "diversity quotas" occasionally comes up. These arise in the form of rules such as,
#
# > Don't have an overrepresented guest or project on the show unless they can also bring an underrepresented voice too.
#
# These sorts of edicts rub me (and other prominent members of our community) the wrong way. The main, personal argument against having quotas is that we are a technical community, and folks want to judge and to be judged on their technical merits. Yes, there can be oppression in meritocracy as in other systems of government, but people don't want to be invited to the party just because they are the token $X$, $Y$, or $Z$. Guests should have a genuine knowledge and interest in the discussion topic at hand, and should be valued for that knowledge. You know, people should be valued as individuals and not because they make a project's inequality score go down.
#
# Additionally, as a tech media outlet, there is the question of bias in our reporting on Open Source Directions. If the underlying system we are reporting on is not equal (it isn't), and we distort that perception, are we being fair? Are we perpetuating an unequal system by reporting on the system as it is? I don't have good answers to these questions. I will say that [NPR had a 3rd party study performed](https://www.wnyc.org/story/235598-conclusions-nprs-liberal-bias/) a few years back on their alleged liberal bias. Interestingly (and spoiler alert), the conclusion was that listeners would perceive a slight bias in NPR based on their own point of view. I interpret this result as saying that NPR is so middle-of-the-road that you can be disappointed in them whenever they non-negatively report on an issue from a perspective you don't agree with.
#
# I believe that ultimately the correct path is to have greater representation of currently underrepresented groups in the projects themselves. However, on Open Source Directions, I do not feel that forcing quotas is a productive path forward. Instead, we are encouraging projects to bring on guests from their development communities that are underrepresented. We are also going to be asking projects (as appropriate) what they are doing with respect to diversity in their development community. This is in an effort to bring greater awareness about these diversity issues.
#
# ### Things Can Change!
#
# While it is easy to feel hopeless about these diversity issues, I am heartened by tweets such as the following by <NAME> (co-founder of Anaconda, Inc.):
#
# <blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">We are looking to hire some OSS devs <a href="https://twitter.com/anacondainc?ref_src=twsrc%5Etfw">@anacondainc</a> for Numba, Dask, Pandas, Arrow dev. If there are underrepresented candidates for these roles that we should talk to, please let me know!</p>— <NAME> (@pwang) <a href="https://twitter.com/pwang/status/1062176775059525637?ref_src=twsrc%5Etfw">November 13, 2018</a></blockquote>
# <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
#
# Again, anecdotally, I believe that there is the will to address the disparities out there in the PyData ecosystem. We just need to channel it in a productive and inclusive direction.
|
nf-project-inequality.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### 1. What does one mean by the term "machine learning"?
# > Machine learning refers to algorithms, grounded in mathematics and statistics, that learn from available data in order to make accurate predictions.
#
#
# ### 2. Can you think of 4 distinct types of issues where it shines?
# * Chat-bots (Customer Service Automation)
# * Customer segmentation
# * Price predictions
# * Fraudulent transaction detection
#
#
# ### 3. What is a labeled training set, and how does it work?
# > A labeled training set is the part of the dataset prepared for training a machine learning model. The labels are the true values of the dependent (target) feature, stored alongside the input features. A supervised model learns by being trained on these examples paired with their true labels.
#
#
# ### 4. What are the two most important supervised tasks?
# * Regression
# * Classification
#
#
# ### 5. Can you think of four examples of unsupervised tasks?
# * Clustering
# * Dimensionality Reduction
# * Association rules (finding relationships between variables in a dataset)
# * Visualization (perception tasks)
#
#
# ### 6. Which machine learning approach would be best for making a robot walk through various unfamiliar terrains?
# > Reinforcement Learning.
#
#
# ### 7. Which algorithm will you use to divide your customers into different groups?
# > A clustering algorithm, for example DBSCAN, which is one of the most versatile and accurate general-purpose unsupervised clustering algorithms.
#
#
# ### 8.Will you consider the problem of spam detection to be a supervised or unsupervised learning problem?
# > A supervised ML problem. We feed the model labeled data, from which it learns what is spam and what is not. It is a binary classification problem.
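#
# A hedged sketch of such a binary classifier; the mini-corpus below is invented for illustration, and a real filter would train on thousands of labeled emails:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled corpus: 1 = spam, 0 = ham.
texts = ["win money now", "cheap pills win", "meeting at noon",
         "lunch tomorrow?", "win a free prize", "project status update"]
labels = [1, 1, 0, 0, 1, 0]

# Bag-of-words features plus a linear classifier, trained on the labeled examples.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["win free money", "status of the project"]))
```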
#
#
# ### 9.What is the concept of an online learning system?
# > A machine learning system that is trained incrementally, receiving data in small batches from a data stream.
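#
# A rough sketch of online learning with scikit-learn's `partial_fit`; the synthetic "stream" below is made up for illustration:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all classes must be declared up front for incremental learning

for _ in range(10):  # each iteration stands in for a new mini-batch of streaming data
    X_batch = rng.normal(size=(32, 2))
    y_batch = (X_batch[:, 0] > 0).astype(int)  # label depends only on the first feature
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.predict([[2.0, 0.0], [-2.0, 0.0]]))
```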
#
#
# ### 10.What is out-of-core learning, and how does it differ from core learning?
# > Out-of-core learning refers to training on data that cannot fit into a single computer's memory, whereas in-core learning assumes the whole dataset fits in memory.
#
#
# ### 11.What kind of learning algorithm makes predictions using a similarity measure?
# > Instance-based algorithms, such as KNN.
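#
# A minimal illustration of similarity-based prediction with KNN, assuming scikit-learn (toy data):

```python
from sklearn.neighbors import KNeighborsClassifier

# Instance-based: the model memorizes the training points and predicts by
# similarity (here, Euclidean distance) to its k nearest stored neighbours.
X = [[0], [1], [2], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
print(knn.predict([[1.5], [10.5]]))  # prints: [0 1]
```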
#
#
# ### 12.What's the difference between a model parameter and a hyperparameter in a learning algorithm?
# * Parameters -> values internal to the model that are learned from the training data during fitting.
# * Hyperparameters -> adjustable settings, chosen before training, which are tuned for better model performance.
#
#
# ### 13.What are the criteria that model-based learning algorithms look for? What is the most popular method they use to achieve success? What method do they use to make predictions?
# > Two criteria:
# * How badly does the model predict?
# * How complex is the model?
# Model-based algorithms try to optimize the model parameters over multiple evaluations of a cost function, in order to find its minimal value. To make predictions, they feed a new instance's features into the fitted model function.
#
#
# ### 14.Can you name four of the most important Machine Learning challenges?
# * Overfitting,
# * Underfitting,
# * Not enough data,
# * Not representative data.
#
#
# ### 15.What happens if the model performs well on the training data but fails to generalize the results to new situations? Can you think of three different options?
# > Overfitting. Possible solutions: get more data; simplify the model by dropping or changing the values of 'too fitted' hyperparameters (regularization); or check the condition of the dataset (clean it, remove outliers).
#
#
# ### 16.What exactly is a test set, and why would you need one?
# > The part of the original dataset reserved for testing. We use it to check how the model performs on unseen data.
#
#
# ### 17.What is a validation set's purpose?
# > The validation set is used to compare candidate models and tune hyperparameters, so that we can select the best model for the problem.
#
#
# ### 18.What precisely is the train-dev kit, when will you need it, how do you put it to use?
# > Most often, we divide the dataset into two sets: train (80%) and test (20%). But if we want to compare the performance of different models, or we are training deep learning models (which are especially hungry for data), we can split the original dataset into three parts: train (60%), dev/validation (20%), and test (20%).
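#
# One common way to get the 60/20/20 split is to call `train_test_split` twice, assuming scikit-learn (a sketch with dummy data):

```python
from sklearn.model_selection import train_test_split

X = list(range(100))
y = [i % 2 for i in X]

# First split off the test set (20%), then carve the dev set out of the remainder
# (0.25 of the remaining 80% is 20% of the original).
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)
print(len(X_train), len(X_dev), len(X_test))  # prints: 60 20 20
```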
#
#
# ### 19.What could go wrong if you use the test set to tune hyperparameters?
# > Data leakage. The production-ready model may perform very well on the train and test data, but it will be biased and will probably perform much worse on new data it has not seen.
#
|
Machine Learning Assignments/Assignment_1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework #5 Solutions
# ### Portfolio Theory and Risk Management I
# ## Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from arch import arch_model
from arch.univariate import GARCH, EWMAVariance
from sklearn import linear_model
import scipy.stats as stats
from statsmodels.regression.rolling import RollingOLS
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
pd.set_option("display.precision", 4)
sns.set(rc={'figure.figsize':(15, 10)})
# ## Data
# +
factors = pd.read_excel('../data/factor_pricing_data.xlsx', sheet_name = 1)
factors = factors.set_index('Date')
factors.head()
# -
# ## 2 The Factors
# **2.1:** Analyze the factors, similar to how you analyzed the three Fama-French factors in Homework 4. You now have three additional factors, so let’s compare their univariate statistics.
# +
def stats_dates(df, dates, annual_fac=12):
stats_df = pd.DataFrame(data=None, index = ['Mean', 'Vol', 'Sharpe', 'VaR (.05)'])
for d in dates:
for col in df.columns:
df_ = df.loc[d[0]:d[1], col]
stats_df[col + ' ' + d[0] + '-' + d[1]] = [df_.mean()*annual_fac,
df_.std()*np.sqrt(annual_fac),
(df_.mean()/df_.std())*np.sqrt(annual_fac),
df_.quantile(.05)]
return stats_df
def summary_stats(df, annual_fac=12):
ss_df = (df.mean() * annual_fac).to_frame('Mean')
ss_df['Vol'] = df.std() * np.sqrt(annual_fac)
ss_df['Sharpe'] = ss_df['Mean'] / ss_df['Vol']
return round(ss_df, 4)
# -
# Entire period:
summary_stats(factors)
# **2.2a:** Does each factor have a positive risk premium (positive expected excess return)?
#
# Over the entire period all of the factors have a positive risk premium.
# Periods explored in HW 4:
stats_dates(factors, [['1926','1980'],['1981','2001'],['2002','2021']])
# 2015-Present:
stats_dates(factors, [['2015','2021']])
# **2.2b:** How have the factors performed since the time of the case, (2015-present)?
#
# RMW and UMD are the only factors apart from the market factor that have had positive risk premia. Value (HML) has notably underperformed.
# **2.3:** Report the correlation matrix across the six factors.
factors.corr()
# **2.3a:** Does the construction method succeed in keeping correlations small?
#
# Yes, correlations between the factors are kept relatively small. The largest correlation is 0.6576, which is noticeably higher than the rest.
# **2.3b:** Fama and French say that HML is somewhat redundant in their 5-factor model. Does this seem to be the case?
#
# Yes, HML is highly correlated to CMA (this is the 0.66 correlation).
# **2.4:** Report the tangency weights for a portfolio of these 6 factors.
# +
def compute_tangency(df_tilde, diagonalize_Sigma=False):
Sigma = df_tilde.cov()
# N is the number of assets
N = Sigma.shape[0]
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
mu_tilde = df_tilde.mean()
Sigma_inv = np.linalg.inv(Sigma_adj)
weights = Sigma_inv @ mu_tilde / (np.ones(N) @ Sigma_inv @ mu_tilde)
# For convenience, I'll wrap the solution back into a pandas.Series object.
omega_tangency = pd.Series(weights, index=mu_tilde.index)
return omega_tangency, mu_tilde, Sigma_adj
omega_tangency, mu_tilde, Sigma = compute_tangency(factors)
omega_tangency.to_frame('Tangency Weights')
# -
# **2.4a:** Which factors seem most important? And Least?
#
# CMA, RMW, and the MKT seem like the most important factors as they have the largest weights. SMB, HML, and UMD have lower weights so we could say that they seem less important.
# **2.4b:** Are the factors with low mean returns still useful?
summary_stats(factors)
# Yes, CMA has one of the lower mean returns but the highest allocation.
# **2.4c:** Re-do the tangency portfolio, but this time only include MKT, SMB, HML, and UMD. Which factors get high/low tangency weights now?
# +
omega_tangency2, mu_tilde2, Sigma2 = compute_tangency(factors[['MKT','SMB','HML','UMD']])
omega_tangency2.to_frame('Tangency Weights')
# -
# HML has the highest tangency weight once we remove CMA. This makes sense as CMA had the largest weight before, and is quite correlated to HML.
#
# SMB has a very small weight now.
#
# We can conclude that the importance of these styles is very much based on correlation between the factors.
# ## 3 Testing Modern LPMs
# +
portfolios = pd.read_excel('../data/factor_pricing_data.xlsx', sheet_name = 2)
portfolios = portfolios.set_index('Date')
portfolios.head()
# -
CAPM = ['MKT']
FF_3F = ['MKT','SMB','HML']
FF_5F = ['MKT','SMB','HML','RMW','CMA']
AQR = ['MKT','HML','RMW','UMD']
# **3.1:** Test the AQR 4-Factor Model using the time-series test. (We are not doing the cross-sectional regression tests.)
def ts_test(df, factor_df, factors, test, annualization=12):
res = pd.DataFrame(data = None, index = df.columns, columns = [test + r' $\alpha$', test + r' $R^{2}$'])
for port in df.columns:
y = df[port]
X = sm.add_constant(factor_df[factors])
model = sm.OLS(y, X).fit()
res.loc[port] = [model.params[0] * annualization, model.rsquared]
return res
# **3.1a:** For each regression, report the estimated $\alpha$ and $R^{2}$.
# +
AQR_test = ts_test(portfolios, factors, AQR, 'AQR')
AQR_test
# -
# **3.1b:** Calculate the mean-absolute-error of the estimated alphas.
print('AQR MAE: ' + str(round(AQR_test[r'AQR $\alpha$'].abs().mean(), 4)))
# **3.2:** Test the CAPM, FF 3-Factor Model and the FF 5-Factor Model. Report the MAE statistic for each of these models and compare it with the AQR Model MAE. Which model fits best?
# +
factor_tests = ts_test(portfolios, factors, CAPM, 'CAPM').join(ts_test(portfolios, factors, FF_3F, 'Fama-French 3F'))\
.join(ts_test(portfolios, factors, FF_5F, 'Fama-French 5F'))
factors_MAE = factor_tests[[r'CAPM $\alpha$',
r'Fama-French 3F $\alpha$',
r'Fama-French 5F $\alpha$']].abs().mean().to_frame('MAE')
factors_MAE.index = ['CAPM','Fama-French 3F','Fama-French 5F']
factors_MAE.loc['AQR'] = AQR_test[r'AQR $\alpha$'].abs().mean()
factors_MAE
# -
# CAPM fits the best as it has the lowest MAE.
# **3.3**: Does any particular factor seem especially important or unimportant for pricing? Do you think Fama and French should use the Momentum Factor?
#
# The market factor seems very important for pricing as all models include it and the CAPM performs the best. I think Fama and French should consider using the momentum factor as AQR uses it and their model performs better in terms of MAE.
# **3.4:** This does not matter for pricing, but report the average (across n estimations) of the time-series regression r-squared statistics. Do this for each of the three models you tested. Do these models lead to high time-series r-squared stats? That is, would these factors be good in a Linear Factor Decomposition of the assets?
# +
factors_r2 = factor_tests[[r'CAPM $R^{2}$',
r'Fama-French 3F $R^{2}$',
r'Fama-French 5F $R^{2}$']].mean().to_frame(r'$R^{2}$')
factors_r2.index = ['CAPM','Fama-French 3F','Fama-French 5F']
factors_r2.loc['AQR'] = AQR_test[r'AQR $R^{2}$'].mean()
factors_r2
# -
# These models do not lead to high time-series $R^{2}$ stats. Thus, they would not be good in a Linear Factor Decomposition of the assets.
# ## 4 Extensions
# +
def ts_betas(df, factor_df, factors, intercept=False):
if intercept == True:
res = pd.DataFrame(data = None, index = df.columns, columns = ['alpha'])
res[factors] = None
else:
res = pd.DataFrame(data = None, index = df.columns, columns = factors)
for port in df.columns:
y = df[port]
if intercept == True:
X = sm.add_constant(factor_df[factors])
else:
X = factor_df[factors]
model = sm.OLS(y, X).fit()
res.loc[port] = model.params
return res
def cross_section(df, factor_df, factors, ts_int=True, annualization=12):
betas = ts_betas(df, factor_df, factors, intercept=ts_int)
res = pd.DataFrame(data = None, index = betas.index, columns = factors)
res['Predicted'] = None
res['Actual'] = None
for port in res.index:
res.loc[port, factors] = betas.loc[port]
prem = (betas.loc[port] * factor_df[factors]).sum(axis=1).mean() * annualization
res.loc[port,['Predicted','Actual']] = prem, df[port].mean() * annualization
return res
def cross_premia(df_cs, factors):
y = df_cs['Actual'].astype(float)
X = df_cs[factors].astype(float)
return sm.OLS(y,X).fit().params.to_frame('CS Premia')
def cross_premia_mae(df_cs, factors, model):
y = df_cs['Actual'].astype(float)
X = df_cs[factors].astype(float)
print(model + ' MAE: ' + str(round(sm.OLS(y,X).fit().resid.abs().mean(), 4)))
return
# +
CAPM_cs = cross_section(portfolios, factors, CAPM, ts_int=True)
FF_3F_cs = cross_section(portfolios, factors, FF_3F, ts_int=True)
FF_5F_cs = cross_section(portfolios, factors, FF_5F, ts_int=True)
AQR_cs = cross_section(portfolios, factors, AQR, ts_int=True)
AQR_cs.head()
# -
# **4.1a:** Report the time-series premia of the factors (just their sample averages) and compare to the cross-sectionally estimated premia of the factors. Do they differ substantially?
(factors.mean()*12).to_frame('TS Premia')
# Fama-French 3 Factor Premia:
cross_premia(FF_3F_cs, FF_3F)
# Fama-French 5 Factor Premia:
cross_premia(FF_5F_cs, FF_5F)
# AQR Premia:
cross_premia(AQR_cs, AQR)
# The MKT and RMW factors are similar to the sample averages, but the other cross-sectionally estimated premia vary quite a bit.
# **4.1b:** Report the MAE of the cross-sectional regression residuals for each of the four models. How do they compare to the MAE of the time-series alphas?
cross_premia_mae(CAPM_cs, CAPM, 'CAPM')
cross_premia_mae(FF_3F_cs, FF_3F, 'FF 3 Factor')
cross_premia_mae(FF_5F_cs, FF_5F, 'FF 5 Factor')
cross_premia_mae(AQR_cs, AQR, 'AQR')
# **4.2:**
def OOS_prediction(df, factor_df, factors, window):
res = pd.DataFrame(data = None, index = df.columns, columns = [r'$R^{2}$'])
exp_means = factor_df[factors].expanding().mean()
factors2 = factors.copy()
factors2.append('const')
factor_df2 = factor_df.copy()
factor_df2['const'] = 1
for port in df.columns:
model = RollingOLS(df[port], factor_df2[factors2], window = window, min_nobs = window).fit()
port_betas = model.params.dropna()
r_hat = (port_betas[factors] * exp_means.loc[port_betas.index]).sum(axis=1).shift(1).dropna()
exp_means_predict = df[port].expanding().mean().shift(1).loc[r_hat.index]
res.loc[port] = 1 - ((r_hat - df.loc[r_hat.index, port])**2).sum()\
/ ((exp_means_predict - df.loc[r_hat.index, port])**2).sum()
res[r'$R^{2}$'] = res[r'$R^{2}$'].astype(float)
return res
# **4.2a:** Report the OOS r-squared for each of the n security forecasts.
# +
port_R2 = OOS_prediction(portfolios, factors, AQR, 60)
port_R2
# +
plt.bar(port_R2.index, port_R2[r'$R^{2}$'])
plt.xticks(rotation='vertical')
plt.ylabel(r'OOS $R^{2}$')
plt.show()
# -
port_R2.describe()
# **4.2b:** Does the LPM do a good job of forecasting monthly returns? For which asset does it perform best? And worst?
#
# The LPM does a very poor job forecasting monthly returns. It performs best for ships and worst for software.
port_R2.sort_values(r'$R^{2}$')
# **4.2c:** Re-do the exercise using a window of 36 months. And 96 months. Do either of these windows work better?
# +
port_R2_36 = OOS_prediction(portfolios, factors, AQR, 36)
port_R2_36
# -
port_R2_36.describe()
# +
port_R2_96 = OOS_prediction(portfolios, factors, AQR, 96)
port_R2_96
# -
port_R2_96.describe()
# No, neither of these windows perform much better. OOS $R^{2}$ is still about zero.
# **4.2d:**
# +
port_R2_CAPM = OOS_prediction(portfolios, factors, CAPM, 60)
port_R2_CAPM
# -
port_R2_CAPM.describe()
# +
port_R2_FF5 = OOS_prediction(portfolios, factors, FF_5F, 60)
port_R2_FF5
# -
port_R2_FF5.describe()
# CAPM performs best out of all the models, but it is still not valuable for prediction.
|
solutions/Solution_HW5_2021_JMD.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tensorflow
# language: python
# name: tensorflow
# ---
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)]
y = (iris["target"] == 2).astype(np.float64)
# Pipeline takes a list of (name, estimator) steps
svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("linear_svc", LinearSVC(C=1, loss="hinge")),
])
svm_clf.fit(X, y)
svm_clf.predict([[5.5, 1.7]])
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
# fit on the moons dataset (it was imported above but never used; the iris X, y
# would defeat the purpose of the polynomial-features example)
X_moons, y_moons = make_moons(n_samples=100, noise=0.15, random_state=42)
polynomial_svm_clf = Pipeline([
    ("poly_features", PolynomialFeatures(degree=3)),
    ("scaler", StandardScaler()),
    ("svm_clf", LinearSVC(C=10, loss="hinge"))
])
polynomial_svm_clf.fit(X_moons, y_moons)
#kernel
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
#regression
from sklearn.svm import LinearSVR
svm_reg = LinearSVR(epsilon = 1.5)
svm_reg.fit(X, y)
|
ml/svm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
# scipy.misc.imread was removed in SciPy 1.2; imageio is the commonly suggested replacement
import imageio.v2 as imageio

img = imageio.imread("DSC00064-740x442.jpg")
# -
a = np.asarray(img)
a.tofile('foo.csv',sep=',',format='%10.5f')
img.shape
print(img.flatten())
data = img.flatten()
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# +
print(data.reshape(442, 740, 3))
#out = csv.writer(open("myfile.csv","w"), delimiter=',',quoting=csv.QUOTE_ALL)
#out.writerow(data.reshape(442, 740, 3))
# -
import numpy as np
np.save("data.npy", data)
np.load("data.npy")
# +
import csv

# Write the columns of lol as rows of a CSV file. The original loop iterated over
# `data` (a flat array of scalars), where x[i] would fail; `lol` is what this
# transpose-to-csv recipe expects. Python 3 csv wants text mode, not 'wb'.
lol = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
item_length = len(lol[0])
with open('test.csv', 'w', newline='') as test_file:
    file_writer = csv.writer(test_file)
    for i in range(item_length):
        file_writer.writerow([x[i] for x in lol])
# -
|
pics/Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Final Capstone Projects
#
# Please refer to the [**Final Capstone Projects**](http://nbviewer.jupyter.org/github/jmportilla/Complete-Python-Bootcamp/tree/master/Final%20Capstone%20Projects/) folder to get all the info on final capstone project ideas and possible solutions!
|
notebooks/12-Final Capstone Python Project/01-Final Capstone Project.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Topic: Memory Hierarchy and Caches
#
# ## Escuela de Ingeniería Eléctrica
# ### Universidad de Costa Rica
# ##### IE052 Digital Computer Structures II
# Brown, Belinda B61254
# Esquivel, Brandon B52571
# ## Contents
# <ol>
# <li>Summary</li>
# <li>Definitions</li>
# <li>Locality</li>
# <li>Considerations:</li>
# - Block placement <br>
# - Block identification <br>
# <li>Examples</li>
# <li>Proposed evaluation</li>
# <li>Bibliographic references</li>
# </ol>
#
#
#
# ## Summary
# This document is part of a collaborative notebook; this chapter analyzes the topic of memory hierarchy and caches. Note that most of the information comes from the book *Computer Architecture: A Quantitative Approach (5th Edition)* [2], as well as from the material provided in class.
# ## Definitions
# ### Memory hierarchy
# The memory hierarchy is the way a computer's memory is organized. This organization is done in levels, because layering the memory improves its performance. The hierarchy matters because each memory has a storage capacity, a speed relevant to the microprocessor (since the instructions it executes involve calculations and decisions), and a cost per bit. <br> <br> It is important to keep the following relationships in mind:
#
# | General | relationships |
# |------|------|
# | *Higher* | *Lower* |
# | Capacity | Cost per bit |
# | Cost | Time |
# | Capacity | Speed |
# The following table combines information from the book *Computer Architecture: A Quantitative Approach (5th Edition)* [2].
# | Memory hierarchy | of a | server |
# |------|------|------|
# | *Memory level* | *Capacity* | *Speed* |
# | CPU registers | 1000 bytes | 300 ps |
# | L1 cache | 64 kB | 1 ns |
# | L2 cache | 256 kB | 3 ns to 10 ns|
# | L3 cache | 2 MB to 4 MB | 10 ns to 20 ns|
# | Main memory | 4 GB to 16 GB | 50 ns to 100 ns|
# | Hard disk | 4 TB to 16 TB | 5 ms to 10 ms|
#
# <br>
#
# | Memory hierarchy | of a personal | mobile device |
# |------|------|------|
# | *Memory level* | *Capacity* | *Speed* |
# | CPU registers | 500 bytes | 500 ps |
# | L1 cache | 64 kB | 2 ns |
# | L2 cache | 256 kB | 10 ns to 20 ns|
# | Main memory | 256 MB to 512 MB | 50 ns to 100 ns|
# | Hard disk | 4 GB to 8 GB | 25 us to 50 us|
#
#
#
#
#
#
# ### Cache memory
# A cache is a fast-access temporary memory. It stores the data that was recently requested or used. The times shown in the tables above matter because there are several cache levels, and the access speed differs between them.
#
# <br>
#
# Keep in mind the following general access flow:
#
# <br>
#
# ~~~~~
# CPU -> Level 1 (L1) cache -> Level 2 (L2) cache -> Level 3 (L3) cache -> Main memory ...
# CPU <- <- <- <- Main memory ...
# ~~~~~
#
#
# ## Locality
# ### Types of locality of reference:
# First, it helps to state Amdahl's law:
#
# ~~~
# "The improvement in the performance of a system obtained by altering one of its components is limited by the fraction of time that component is used." - Amdahl's law [1]
# ~~~
#
# Mathematically expressed as:
#
# ~~~~
# T_ex_new = T_ex_old * ( (1 - Frac_opt) + Frac_opt/Ac_op )
#
# ~~~~
#
#
# ```
# Ac_global = T_ex_old/T_ex_new
#
# ```
#
#
# * T_ex_new: new execution time
#
# * T_ex_old: old execution time
#
# * Frac_opt: fraction of time the improvement is used
#
# * Ac_op: speedup of the improved part
#
# * Ac_global: overall speedup
#
#
#
# Locality of reference refers to where the data needed to proceed with an operation is located. The best-known types of locality are temporal and spatial, but there is an additional one known as algorithmic locality.
#
#
def Obtener_Ac_global(Fra_op, Ac_op):
T_ex_new = 1 * ( (1 - Fra_op) + Fra_op/Ac_op )
R_Ac_global = 1/T_ex_new
return R_Ac_global
# #### Temporal locality
# A point of reference is the specific memory location where the needed information resides; the tendency is for that same location to be requested again as operations proceed. Temporal means time-dependent: temporal locality is exploited to keep close at hand the information that tends to be requested most frequently, in order to provide it faster.
# #### Spatial locality
# This type of locality refers to the fact that addresses located together, or nearby, tend to be requested together in the near future.
# #### Algorithmic locality
# This type of locality arises from the behavior of the algorithm; it covers the case where the program repeatedly accesses data, or executes blocks of code, that are spread in various ways across the memory space.
# ## Considerations
# Among the considerations to take into account are the following: where is the block placed, and how is it identified? These are explained below:
# ### Block placement
# A hash function is used for this, meaning that some function maps data of arbitrary size to data of a smaller, fixed size. For caches, the address is mapped to a set index.
#
# <br>
# Consider a mapping function where the block address is given by the address modulo the number of blocks (or number of sets) the cache holds. Put differently, the block where the information is placed is obtained by taking the address requested by the CPU and computing its modulo with respect to the total number of cache blocks:
#
# ~~~~
# Loc_Block = Dir_CPU % Total_Cache
#
# Which amounts to:
#
# &.## = Dir_CPU / Total_Cache
# @ = & * Total_Cache
# Loc_Block = Dir_CPU - @
# ~~~~
#
# * Loc_Block: block where the information is placed.
#
# <br>
#
# * Dir_CPU: address requested by the CPU.
#
# <br>
#
# * Total_Cache: total number of cache blocks
#
#
def modulo(dir_cpu, total_cache):
value_just_hole_part = dir_cpu // total_cache
another = value_just_hole_part * total_cache
loc_block = dir_cpu - another
return loc_block
# Note that there are mapping schemes that are fully associative, direct-mapped, and set-associative. Fully associative means the block can go to any location in the cache; direct-mapped works like the example above; and set-associative works like the example above, except that the address maps not to a single block but to a set of blocks, i.e. a group of blocks associated with one index.
#
# **For example:**
#
# If the CPU requests address 8 and the cache has 4 blocks in total, the location of this address is:
#
#
# +
direccion = modulo(8, 4)
print("The block where the information is placed is:", direccion)
# -
# ### Block identification
# This section analyzes how a block can be identified in the cache. The main idea is to know whether it is present or not; for this we use the index and the tag. The tag checks the most significant bits.
#
#
# In general terms:
#
# ~~~
#
# | Memory address |
#
# | Block address | Offset |
#
# | Tag | Index | Offset |
#
# ~~~
#
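# A small sketch of that breakdown for a direct-mapped cache; the helper name is ours, not from the book:

```python
def split_address(addr, block_size, num_blocks):
    # The offset selects a byte within the block; the index picks the cache line;
    # the tag is whatever high-order part of the address remains.
    offset = addr % block_size
    index = (addr // block_size) % num_blocks
    tag = addr // (block_size * num_blocks)
    return tag, index, offset

# e.g. a direct-mapped cache of 4 blocks, 16 bytes each:
print(split_address(200, 16, 4))  # prints: (3, 0, 8)
```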
# ## Examples
# Consider the following examples, taken as reference from the book [2], with the values adjusted so they can be solved as practice.
#
# **Extra example 1:**
#
# Consider an enhancement that runs eleven times faster than the original machine, but is usable only 50 percent of the time. What is the overall speedup achieved by incorporating the enhancement?
#
#
#
# *Answer:*
#
# +
aceleracion_g = Obtener_Ac_global(0.5, 11)
print("The overall speedup achieved is:", aceleracion_g)
# -
# **Extra example 2:**
#
# Suppose we want to improve the CPU speed of our machine by a factor of seven (without affecting I/O performance), at seven times the cost. Suppose also that the CPU is used 70 percent of the time, and that the CPU is waiting for I/O the rest of the time. If the CPU accounts for one third of the total cost of the computer, is increasing the CPU speed by a factor of seven a good investment from a cost/performance point of view?
#
# Computing the speedup:
#
#
#
# +
aceleracion_g = Obtener_Ac_global(0.7, 7)
print("The overall speedup achieved is:", aceleracion_g)
# -
# It follows that, for the investment analysis:
#
#
#
# +
Inversion = 2/3 * 1 + 1/3 * 7
print("The investment required to carry out the project would be:", Inversion)
# -
# Answer: considering that the cost increase is greater than the improvement, we conclude it is not a good option from a cost/performance point of view.
# ## Proposed evaluation
# #### Exercise 1:
# Treat the following exercise as an open question: from the information presented, you are expected to reach a decision.
#
# Gabi owns the company G+T. Given her duties at the company, she must change her computer, but she does not fully understand certain specifications on the computer's sales sheet, especially those related to memory.
#
# **Question**
#
# Work through a short example to explain to Gabi the different memory-related specifications of a computer.
#
#
#
# #### Exercise 2:
# **Question**
#
# Compute the block where the information is placed, given that the server cache in question is L2 and, following the block-placement explanation, the first step ( &.## = Dir_CPU / Total_Cache ) gives 4.59.
# ## Bibliographic references
# [1] <NAME>., <NAME>., and Prieto, A. (2005). Arquitectura de Computadores. España: Ed. Paraninfo S.A.
#
# [2] Computer Organization and Design: The Hardware/Software Interface 5th Edition · <NAME>.; <NAME>VIER / 978-0-12-407726
|
Cache_And_Mem_Hierarchy/Cap_Cache_Mem_Hierarchy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Selecting by Callable
# This is the fourth entry in a series on indexing and selecting in pandas. In summary, this is what we've covered:
#
# * [Basic indexing, selecting by label and location](https://www.wrighters.io/indexing-and-selecting-in-pandas-part-1/)
# * [Slicing in pandas](https://www.wrighters.io/indexing-and-selecting-in-pandas-slicing/)
# * [Selecting by boolean indexing](https://www.wrighters.io/boolean-indexing-in-pandas/)
#
# In all of the discussion so far, we've focused on the three main methods of selecting data in the two main pandas data structures, ```Series``` and ```DataFrame```.
#
# * The array indexing operator, or ```[]```
# * The ```.loc``` selector, for selection using the label on the index
# * The ```.iloc``` selector, for selection using the location
#
# We noted in the last entry in the series that all three can take a boolean vector as indexer to select data from the object. It turns out that you can also pass in a callable. If you're not familiar with a callable, it can be a function, or object with a ```___call___``` method. When used for pandas selection, the callable needs to take one argument, which will be the pandas object, and return a result that will select data from the dataset. Why would you want to do this? We'll look at how this can be useful.
#
# In this series I've been grabbing data from the [Chicago Data Portal](https://data.cityofchicago.org). For this post, I've grabbed the [list of current employees](https://data.cityofchicago.org/Administration-Finance/Current-Employee-Names-Salaries-and-Position-Title/xzkq-xp2w) for the city. This includes full time and part time, salaried and hourly data.
#
# +
import pandas as pd
# you should be able to grab this dataset as an unauthenticated user, but you can be rate limited
# it also only returns 1000 rows (or at least it did for me without an API key)
df = pd.read_json("https://data.cityofchicago.org/resource/xzkq-xp2w.json")
# -
df.dtypes
df.describe()
df.shape
df = df.drop('name', axis=1) # no need to include personal info in this post
# ## Simple callables
# So we have some data, which is a subset of the total list of employees for the city of Chicago. The full dataset should be about 32,000 rows.
#
# Before we give a few examples, let's clarify what this callable should do. First, the callable will take one argument, which will be the ```DataFrame``` or ```Series``` being indexed. What it needs to return is a valid value for indexing. This could be any value that we've already discussed in earlier posts.
#
# So, if we are using the array indexing operator, on a ```DataFrame``` you'll remember that you can pass in a single column, or a list of columns to select.
# +
def select_job_titles(df):
return "job_titles"
df[select_job_titles]
# +
def select_job_titles_typical_hours(df):
return ["job_titles", "typical_hours"]
df[select_job_titles_typical_hours].dropna()
# -
# We can also return a boolean indexer, since that's a valid argument.
# +
def select_20_hours_or_less(df):
return df['typical_hours'] <= 20
df[select_20_hours_or_less].head()
# -
# You can also use callables for both the first (row indexer) and second (column indexer) arguments in a ```DataFrame```.
df.loc[lambda df: df['typical_hours'] <= 20, lambda df: ['job_titles', 'typical_hours']].head()
# ### But why?
# OK, so this all seems kind of unnecessary because you could do this much more directly. Why write a separate function to provide another level of redirection?
#
# I have to admit that before writing this post, I don't think that I've used callable indexing much, if at all. But one use case where it's helpful is something that I do all the time. Maybe you do as well.
#
# Let's say we want to find job titles with an average hourly rate below some threshold. Usually you'll do a group by followed by a selection on the resulting aggregated ```DataFrame```.
temp = df.groupby('job_titles').mean()
temp[temp['hourly_rate'] < 20]
# But with a callable, you can do this without the temporary ```DataFrame``` variable.
df.groupby('job_titles').mean().loc[lambda df: df['hourly_rate'] < 20]
# One thing to note is that there's nothing special about these callables. They still have to return the correct values for the selector you are choosing to use. So for example, you can do this using ```loc```:
df.loc[lambda df: df['department'] == 'CITY COUNCIL']
# But you can't do this, because ```.iloc``` requires a boolean vector without an index (as we talked about in [the post on boolean indexing](https://www.wrighters.io/2021/01/04/boolean-indexing-in-pandas/)).
# +
try:
df.iloc[lambda df: df['department'] == 'CITY COUNCIL']
except NotImplementedError as nie:
print(nie)
# instead, return just the boolean vector
df.iloc[lambda df: (df['department'] == 'CITY COUNCIL').to_numpy()]
# or
df.iloc[lambda df: (df['department'] == 'CITY COUNCIL').values]
# -
# Also, while I've used the ```DataFrame``` for all these examples, this works in ```Series``` as well.
# +
s = df['annual_salary']
s[lambda s: s < 30000]
# -
# In summary, indexing with a callable allows some flexibility for condensing code that would otherwise require temporary variables. The thing to remember is that the callable needs to return a result that would be acceptable in the same place where the callable itself is used.
#
# I hope you'll stay tuned for future updates. I'll plan to talk about the ```.where``` method of selection next.
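# To make the "no temporary variables" point concrete, here is a small self-contained sketch (toy data standing in for the Chicago dataset) chaining two callable selections:

```python
import pandas as pd

# toy frame standing in for the employee data
df = pd.DataFrame({
    "department": ["POLICE", "FIRE", "POLICE", "LAW"],
    "annual_salary": [90000, 85000, 72000, 110000],
})

# each lambda receives the intermediate DataFrame produced by the previous
# step, so no temporary variable is needed between the two filters
result = (
    df.loc[lambda d: d["department"] == "POLICE"]
      .loc[lambda d: d["annual_salary"] > 80000]
)
```

# The same pattern scales to any number of chained steps, since every callable sees the frame as it exists at that point in the chain.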
|
pandas/pandas_indexing_4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Imports and Convenience Functions
# +
import warnings
import numpy as np
import phe as paillier
from sonar.contracts import ModelRepository,Model
from sonar.ipfs import IPFS
from syft.he.paillier.keys import KeyPair
from syft.nn.linear import LinearClassifier
from sklearn.datasets import load_diabetes
def get_balance(account):
return repo.web3.fromWei(repo.web3.eth.getBalance(account),'ether')
warnings.filterwarnings('ignore')
# -
# ## Setup Settings
# +
# this is the address
my_wallet_address = '0xc6E1D4501D3f354E89F048F096db4424168682a9'
# -
# ### Setting up the Experiment
# +
# for the purpose of the simulation, we're going to split our dataset up amongst
# the relevant simulated users
diabetes = load_diabetes()
y = diabetes.target
X = diabetes.data
validation = (X[0:5],y[0:5])
anonymous_diabetes_users = (X[6:],y[6:])
# we're also going to initialize the model trainer smart contract, which in the
# real world would already be on the blockchain (managing other contracts) before
# the simulation begins
andreas_repo = '0xd60e1a150b59a89a8e6e6ff2c03ffb6cb4096205'
stefan_repo = '0xf0DE3D665D777d12609D0524c89C6456A6658E5E'
# ATTENTION: copy paste the correct address (NOT THE DEFAULT SEEN HERE) from truffle migrate output.
repo = ModelRepository(andreas_repo, account=my_wallet_address,ipfs=IPFS(host='127.0.0.1'), web3_host='localhost') # blockchain hosted model repository
# -
# ## Step 1: Cure Diabetes Inc Initializes a Model and Provides a Bounty
pubkey,prikey = KeyPair().generate(n_length=1024)
diabetes_classifier = LinearClassifier(desc="DiabetesClassifier",n_inputs=10,n_labels=1)
initial_error = diabetes_classifier.evaluate(validation[0],validation[1])
diabetes_classifier.encrypt(pubkey)
diabetes_model = Model(owner='0xc6E1D4501D3f354E89F048F096db4424168682a9',
syft_obj = diabetes_classifier,
bounty = 0.00000000001,
initial_error = initial_error,
target_error = 10000
)
model_id = repo.submit_model(diabetes_model)
len(repo)
# ## Step 2: An Anonymous Patient Downloads the Model and Improves It
model_id = 10
model = repo[model_id]
repo[model_id].submit_gradient(my_wallet_address,anonymous_diabetes_users[0][0],anonymous_diabetes_users[1][0])
# ## Step 3: Cure Diabetes Inc. Evaluates the Gradient
repo[model_id]
old_balance = get_balance(my_wallet_address)
print(old_balance)
grad = model[0]
grad.id
model = repo[model_id]
new_error = model.evaluate_gradient(my_wallet_address,repo[model_id][0],prikey,pubkey,validation[0],validation[1])
new_error
new_balance = get_balance(my_wallet_address)
incentive = new_balance - old_balance
print(incentive)
# ## Step 4: Rinse and Repeat
model
# +
# for i,(addr, input, target) in enumerate(anonymous_diabetics):
# try:
# model = repo[model_id]
# # patient is doing this
# model.submit_gradient(addr,input,target)
# # Cure Diabetes Inc does this
# old_balance = get_balance(addr)
# new_error = model.evaluate_gradient(cure_diabetes_inc,model[i+1],prikey,pubkey,validation[0],validation[1],alpha=2)
# print("new error = "+str(new_error))
# incentive = round(get_balance(addr) - old_balance,5)
# print("incentive = "+str(incentive))
# except:
# "Connection Reset"
# -
|
notebooks/The Helium Demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 0. Import Packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
import os
import glob
import seaborn as sns
# ## 1. Load Dataset
data = pd.read_csv('dataset/B05_discharge_soh.csv')
df = pd.DataFrame(data)
df
print("Shape of data :", np.shape(data))
# ## 2. Visualization of capacity
# +
sns.set_style("darkgrid")
plt.figure(figsize=(12, 8))
plt.scatter(df['cycle'], df['capacity'])
plt.plot(df['cycle'], len(df['cycle'])*[1.4], color = 'red')
plt.ylabel('Capacity', fontsize = 15)
plt.xlabel('cycle', fontsize = 15)
plt.title('Discharge B0005', fontsize = 15)
plt.show()
# -
# ## 3. Calculation of SoH
# +
capacity = df['capacity']
# C = capacity[0]
SOH = capacity/2  # SoH = current capacity / rated capacity (nominally 2 Ah for this cell)
print("Max of SOH :", np.max(SOH))
print("Min of SOH :", np.min(SOH))
# +
sns.set_style("darkgrid")
plt.figure(figsize=(12, 8))
plt.scatter(df['cycle'], SOH)
plt.plot(df['cycle'], len(df['cycle'])*[0.7], color = 'red')
plt.ylabel('SoH', fontsize = 15)
plt.xlabel('cycle', fontsize = 15)
plt.title('Discharge B0005', fontsize = 15)
plt.show()
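# -
# The red line at 0.7 above marks a common end-of-life threshold. As a small sketch (using made-up capacities for illustration, not the B0005 file), the first cycle crossing that threshold can be found like this:

```python
import numpy as np

def first_eol_cycle(cycles, soh, threshold=0.7):
    """Return the first cycle whose SoH falls below threshold, or None."""
    below = np.asarray(soh) < threshold
    if not below.any():
        return None
    # np.argmax on a boolean array gives the index of the first True
    return np.asarray(cycles)[np.argmax(below)]

# made-up example data for illustration
first_eol_cycle([1, 2, 3, 4], [0.90, 0.80, 0.65, 0.60])
```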
|
1_Calculation_and_Visulaliztion_of_SoH/Simple_calculation_of_SoH.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Environment Setup
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
print("Setup Complete")
# # Differential Privacy Processing
# ## Step 1 import packages
from DataSynthesizer.DataDescriber import DataDescriber
from DataSynthesizer.DataGenerator import DataGenerator
from DataSynthesizer.ModelInspector import ModelInspector
from DataSynthesizer.lib.utils import read_json_file, display_bayesian_network
# ## Step 2 user-defined parameters
# +
# input dataset
file_name = "iris"
input_data = "./data/" + file_name + ".csv"
# location of two output files
mode = 'correlated_attribute_mode'
description_file = f"./out/{mode}/" + file_name + "_synthetic_description.json"
synthetic_data = f"./out/{mode}/" + file_name + "_synthetic_data.csv"
input_data, description_file, synthetic_data
# -
input_df = pd.read_csv(input_data)
input_df.head()
input_df.shape
# +
# An attribute is categorical if its domain size is less than this threshold.
# For the iris input, 20 easily covers the domain size of "Species" (3 distinct values).
threshold_value = 20
# specify categorical attributes
# categorical_attributes = {'education': True}
categorical_attributes = {'Species': True}
# specify which attributes are candidate keys of input dataset.
# candidate_keys = {'ssn': True}
candidate_keys = {'Id': True}
# A parameter in Differential Privacy. It roughly means that removing a row in the input dataset will not
# change the probability of getting the same output more than a multiplicative difference of exp(epsilon).
# Increase epsilon value to reduce the injected noises. Set epsilon=0 to turn off differential privacy.
epsilon = 500
# The maximum number of parents in Bayesian network, i.e., the maximum number of incoming edges.
degree_of_bayesian_network = 2
# Number of tuples generated in synthetic dataset.
num_tuples_to_generate = input_df.shape[0] # Here input_df.shape[0] is the same as input dataset, but it can be set to another number.
# -
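# DataSynthesizer injects the noise internally, so nothing below is its actual implementation; but as a rough intuition for what epsilon controls, here is a minimal sketch of the classic Laplace mechanism, where the noise scale is sensitivity/epsilon:

```python
import numpy as np

def laplace_scale(sensitivity, epsilon):
    # Laplace mechanism: noise scale = sensitivity / epsilon,
    # so a larger epsilon means less noise (and weaker privacy)
    return sensitivity / epsilon

def noisy_count(true_count, sensitivity=1.0, epsilon=0.5, seed=0):
    # release a count with Laplace noise calibrated to epsilon
    rng = np.random.default_rng(seed)
    return true_count + rng.laplace(0.0, laplace_scale(sensitivity, epsilon))

# with epsilon=500 (as above) the scale is only 1/500 = 0.002,
# which is why the synthetic data stays very close to the input
laplace_scale(1.0, 500)
```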
# ## Step 3 DataDescriber
#
# 1. Instantiate a DataDescriber.
# 2. Compute the statistics of the dataset.
# 3. Save dataset description to a file on local machine.
# + jupyter={"source_hidden": true}
describer = DataDescriber(category_threshold=threshold_value)
describer.describe_dataset_in_correlated_attribute_mode(dataset_file=input_data,
epsilon=epsilon,
k=degree_of_bayesian_network,
attribute_to_is_categorical=categorical_attributes,
attribute_to_is_candidate_key=candidate_keys)
describer.save_dataset_description_to_file(description_file)
# -
display_bayesian_network(describer.bayesian_network)
# ## Step 4 generate synthetic dataset
#
# 1. Instantiate a DataGenerator.
# 2. Generate a synthetic dataset.
# 3. Save it to local machine.
generator = DataGenerator()
generator.generate_dataset_in_correlated_attribute_mode(num_tuples_to_generate, description_file)
generator.save_synthetic_data(synthetic_data)
# ## Step 5 compare the statistics of input and synthetic data (optional)
#
# The synthetic data is already saved in a file by step 4. The ModelInspector is for a quick test on the similarity between input and synthetic datasets.
#
# ### 5.1 instantiate a ModelInspector.
#
# It needs input dataset, synthetic dataset, and attribute description.
input_df.head()
synthetic_df = pd.read_csv(synthetic_data)
synthetic_df.head()
synthetic_df.columns
# +
# Read attribute description from the dataset description file.
attribute_description = read_json_file(description_file)['attribute_description']
inspector = ModelInspector(input_df, synthetic_df, attribute_description)
# -
# ### 5.2 compare histograms between input and synthetic datasets.
for attribute in synthetic_df.columns:
print(attribute)
inspector.compare_histograms(attribute)
# ### 5.3 compare pairwise mutual information
inspector.mutual_information_heatmap()
# # Other Visualization Comparisons
iris_set_data = input_df.loc[input_df['Species'] == 'Iris-setosa']
iris_ver_data = input_df.loc[input_df['Species'] == 'Iris-versicolor']
iris_vir_data = input_df.loc[input_df['Species'] == 'Iris-virginica']
# +
# Histograms for each species
sns.distplot(a=iris_set_data['Petal Length (cm)'], label="Iris-setosa", kde=False)
sns.distplot(a=iris_ver_data['Petal Length (cm)'], label="Iris-versicolor", kde=False)
sns.distplot(a=iris_vir_data['Petal Length (cm)'], label="Iris-virginica", kde=False)
# Add title
plt.title("Histogram of Petal Lengths, by Species")
# Force legend to appear
plt.legend()
# -
iris_set_synthetic_data = synthetic_df.loc[synthetic_df['Species'] == 'Iris-setosa']
iris_ver_synthetic_data = synthetic_df.loc[synthetic_df['Species'] == 'Iris-versicolor']
iris_vir_synthetic_data = synthetic_df.loc[synthetic_df['Species'] == 'Iris-virginica']
# +
# Histograms for each species
sns.distplot(a=iris_set_synthetic_data['Petal Length (cm)'], label="Iris-setosa", kde=False)
sns.distplot(a=iris_ver_synthetic_data['Petal Length (cm)'], label="Iris-versicolor", kde=False)
sns.distplot(a=iris_vir_synthetic_data['Petal Length (cm)'], label="Iris-virginica", kde=False)
# Add title
plt.title("Histogram of Petal Lengths, by Species")
# Force legend to appear
plt.legend()
# -
|
notebooks/.ipynb_checkpoints/PrivDataVis-iris-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="Z8AKDFPlT4aR" outputId="ada37d25-8626-47ef-f931-ae5818e75b66"
# !git clone https://github.com/omidrk/RPcovidActiveLearning.git
# + colab={"base_uri": "https://localhost:8080/"} id="XDqeZR5hT8KD" outputId="a03bedeb-c854-4ccd-edbf-8c2ef9a3fcc9"
# !pip install captum
# + id="lWCa6DZ9T-B-"
import os
os.chdir('/content/RPcovidActiveLearning')
# + [markdown] id="vu0NTzgyW6l9"
# First the network is trained with 1 epoch in each of the 4 active learning loops.
# + colab={"base_uri": "https://localhost:8080/"} id="7Za2txIDUDv3" outputId="45a087d7-cb89-45ec-9265-bcf2478ad292"
# !python main.py
# + [markdown] id="Py1jWIVvcony"
# Train the network with 3 active learning loops; each loop samples 10000 data points and chooses 100 for labeling.
# + colab={"base_uri": "https://localhost:8080/"} id="gCHh34NzZ2Tx" outputId="ef0cccab-3a4d-49e5-ab1d-6c07fcdff98a"
# !python main.py
|
Experiments/ActiveLearningSettingWithoutExplanation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow]
# language: python
# name: conda-env-tensorflow-py
# ---
# +
'''Code for fine-tuning a pretrained CNN (VGG16 below) for a new task.
Start with the pretrained network, not including the last fully connected layers.
Train new fully connected layers on top of these.
'''
import numpy as np
import keras
import random
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Dense, Activation, Flatten, Dropout
import inception_v3 as inception
import vgg16 as VGG
import prepare.collect as pc
'''
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
set_session(tf.Session(config=config))
'''
N_CLASSES = 2
IMSIZE = (224, 224)
XML_DIR = "../data/annotations/xmls/"
IMG_DIR = "../data/images/"
VAL_RATIO = 0.3
# TODO: Replace these with paths to the downloaded data.
# Training directory
# train_dir = '../data/catdog/train'
# Testing directory
# test_dir = '../data/catdog/validation'
# Start with an Inception V3 model, not including the final softmax layer.
base_model = VGG.VGG16(weights='imagenet')
print ('Loaded vgg16 model')
# +
# Turn off training on base model layers
for layer in base_model.layers:
layer.trainable = False
# Add on new fully connected layers for the output classes.
# x = Dense(1024, activation='relu')(base_model.get_layer('fc2').output)
# x = Dropout(0.5)(x)
# predictions = Dense(N_CLASSES, activation='softmax', name='predictions')(x)
base_model_last = base_model.get_layer('flatten').output
x = Dense(4096, activation='relu', name='fc1-1')(base_model_last)
x = Dense(4096, activation='relu', name='fc1-2')(x)
predictions = Dense(N_CLASSES, activation='softmax', name='predictions')(x)
# y = Dense(4096, activation='relu', name='fc2-1')(base_model_last)
# y = Dense(4096, activation='relu', name='fc2-2')(y)
# aux_predictions = Dense(4, name='aux_predictions')(y)
model = Model(input=base_model.input, output=predictions)
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# +
# Show some debug output
print (model.summary())
print ('Trainable weights')
#model.save_weights('catdog_pretrain.h5')
#print (model.trainable_weights)
# +
xmlFiles = pc.listAllFiles(XML_DIR)
infoList = list(map(lambda f:pc.getInfoTupleForXml(f,IMG_DIR) ,xmlFiles))
random.shuffle(infoList)
cutIndex = int(len(infoList)*VAL_RATIO)
val_files = infoList[:cutIndex]  # VAL_RATIO (30%) of the data goes to validation
train_files = infoList[cutIndex:]
# +
#print(val_files)
np.random.seed()
img_datagen = ImageDataGenerator(rescale=1./255)
def to_categorical(y, num_classes=None):
"""Converts a class vector (integers) to binary class matrix.
E.g. for use with categorical_crossentropy.
# Arguments
y: class vector to be converted into a matrix
(integers from 0 to num_classes).
num_classes: total number of classes.
# Returns
A binary matrix representation of the input.
"""
y = np.array(y, dtype='int').ravel()
if not num_classes:
num_classes = np.max(y) + 1
n = y.shape[0]
categorical = np.zeros((n, num_classes))
categorical[np.arange(n), y] = 1
return categorical
def my_load_img(img_path,img_datagen,size):
img = image.load_img(img_path, target_size=size)
x = image.img_to_array(img)
# x = img_datagen.img_to_array(img)
x = img_datagen.random_transform(x)
x = img_datagen.standardize(x)
#x = np.expand_dims(x, axis=0)
return x
def my_img_generator(files,img_datagen,batch_size):
# index_array = np.random.permutation(len(files))
index = 0
count = 0
img_datas=[]
img_labels=[]
while 1:
# create numpy arrays of input data
# and labels, from each line in the file
if count < batch_size:
img_datas.append(my_load_img(files[index][1],img_datagen,IMSIZE))
# lable=[0.0,0.0]
# lable[files[index][1]]=1.0
img_labels.append(files[index][2])
index=(index+1)%len(files)
count+=1
else:
count=0
#print(img_datas)
one_hot_labels=to_categorical(img_labels, num_classes=2)
yield (np.array(img_datas),np.array(one_hot_labels))
img_datas = []
img_labels = []
# random.shuffle(files)
batch_size=32
# t = next(my_img_generator(train_files,img_datagen,batch_size))
# model.load_weights('catdog_pretrain_nf.h5')
# train_data
# train_data.shape
my_train_generator = my_img_generator(train_files,img_datagen,batch_size)
my_val_generator = my_img_generator(val_files,img_datagen,batch_size)
#train_datagen = ImageDataGenerator(rescale=1./255)
# train_generator = train_datagen.flow_from_directory(
# train_dir, # this is the target directory
# target_size=IMSIZE, # all images will be resized to 299x299 Inception V3 input
# batch_size=batch_size,
# class_mode='categorical')
#test_datagen = ImageDataGenerator(rescale=1./255)
# test_generator = test_datagen.flow_from_directory(
# test_dir, # this is the target directory
# target_size=IMSIZE, # all images will be resized to 299x299 Inception V3 input
# batch_size=batch_size,
# class_mode='categorical')
# print(a[1].shape)
# print(a[1])
# +
# my_train_generator = my_img_generator(train_files,img_datagen,32)
# my_val_generator = my_img_generator(val_files,img_datagen,32)
# model.fit_generator(
# my_train_generator,
# samples_per_epoch=128,
# nb_epoch=10,
# validation_data=test_datagen,
# verbose=2,
# nb_val_samples=128)
model.fit_generator(
my_train_generator,
samples_per_epoch=128,
nb_epoch=10,
validation_data=my_val_generator,
verbose=2,
nb_val_samples=128)
# +
#model.load_weights('catdog_pretrain_nf.h5')
# Data generators for feeding training/testing images to the model.
# Note: train_dir and test_dir are commented out near the top and must be set for this cell to run.
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir, # this is the target directory
target_size=IMSIZE, # all images will be resized to 299x299 Inception V3 input
batch_size=32,
class_mode='categorical')
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir, # this is the target directory
target_size=IMSIZE, # all images will be resized to 299x299 Inception V3 input
batch_size=32,
class_mode='categorical')
model.fit_generator(
train_generator,
samples_per_epoch=128,
nb_epoch=10,
validation_data=test_generator,
verbose=2,
nb_val_samples=128)
#model.save_weights('catdog_pretrain_nf.h5') # always save your weights after training or during training
img_path = '../data/sport3/validation/hockey/img_2997.jpg'
#img_path = '../data/catdog/test/2.jpg'
img = image.load_img(img_path, target_size=IMSIZE)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = inception.preprocess_input(x)
preds = model.predict(x)
print('Predicted:', preds)
#classes= model.predict_classes(x)
#print('Classes:', classes)
# +
#model.load_weights('catdog_pretrain.h5')
#img_path = '../data/sport3/validation/hockey/img_2997.jpg'
img_path = '../data/cat2.jpg'
img_path = '../data/catdog/test/58.jpg'
img = image.load_img(img_path, target_size=IMSIZE)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = inception.preprocess_input(x)
preds = model.predict(x)
print('Predicted:', preds)
#classes= model.predict_classes(x)
#print('Classes:', classes)
|
code/notebook - class_working.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + _cell_guid="a3cb0ee3-7bca-4b2b-8a27-be198d18818e" _uuid="075ab0f3fc310e293828b3681f1d80642f88c106" language="html"
# <style>
# .h1_cell, .just_text {
# box-sizing: border-box;
# padding-top:5px;
# padding-bottom:5px;
# font-family: "Times New Roman", Georgia, Serif;
# font-size: 125%;
# line-height: 22px; /* 5px +12px + 5px */
# text-indent: 25px;
# background-color: #fbfbea;
# padding: 10px;
# }
#
# hr {
# display: block;
# margin-top: 0.5em;
# margin-bottom: 0.5em;
# margin-left: auto;
# margin-right: auto;
# border-style: inset;
# border-width: 2px;
# }
# </style>
# -
# <h1>
# <center>
# Module 4 - Gothic author identification
# </center>
# </h1>
# <div class=h1_cell>
# <p>
# This week we are going to take on the task of identifying authors of gothic novels. Our authors to choose from are these three:
# <ol>
# <li>EAP - Edgar Allan Poe (https://en.wikipedia.org/wiki/Edgar_Allan_Poe): American writer who wrote poetry and short stories that revolved around tales of mystery and the grisly and the grim. Arguably his most famous work is the poem "The Raven", and he is also widely considered the pioneer of the genre of detective fiction.</li>
# <p>
# <li>HPL - HP Lovecraft (https://en.wikipedia.org/wiki/H._P._Lovecraft): Best known for authoring works of horror fiction, the stories that he is most celebrated for revolve around the fictional mythology of the infamous creature "Cthulhu" - a hybrid chimera mix of Octopus head and humanoid body with wings on the back.</li>
# <p>
# <li>MWS - Mary Shelley (https://en.wikipedia.org/wiki/Mary_Shelley): Seemed to have been involved in a whole panoply of literary pursuits - novelist, dramatist, travel-writer, biographer. She is most celebrated for the classic tale of Frankenstein (subtitled "The Modern Prometheus"), in which the scientist Victor Frankenstein creates the Monster that comes to be associated with his name.</li>
# </ol>
# <p>
# What we have is a table of sentences from their books. Each sentence is labeled with the author who wrote it. Given a new sentence our job is to predict who the author is of that sentence.
# <p>
# The sentences are all jumbled up, i.e., we do not have paragraph or chapter level info.
# <p>
# <h2>Why is this interesting?</h2>
# <p>
# One application of this style of analysis is in literary studies. An ancient book is found but the author is unknown. Or perhaps the author is known but there is a suspicion that someone else ghost wrote it. Or even looking at plagiarism: some portions of a book by author X look like they were lifted from author Y.
# <p>
# Let's bring in the table and look at it.
# </div>
# +
import pandas as pd
gothic_table = pd.read_csv('https://docs.google.com/spreadsheets/d/e/2PACX-1vQqRwyE0ceZREKqhuaOw8uQguTG6Alr5kocggvAnczrWaimXE8ncR--GC0o_PyVDlb-R6Z60v-XaWm9/pub?output=csv',
encoding='utf-8')
# -
gothic_table.head()
len(gothic_table)
# <h2>
# Let's devise a plan
# </h2>
# <div class=h1_cell>
# <p>
# This looks similar to our tweet problem in prior weeks. We are given some text. The text has a label. Our goal is to build a model that will predict the label using the content of the text.
# <p>
# <ul>
# <li>Instead of using a bag of hashtags, let's use a bag of words.
# <p>
# <li>Naive Bayes worked well for us in tweet problem so let's try it again here.
# <p>
# <li>I want to do a bit of wrangling of the text in a sentence, more than we did for the tweets.
# </ul>
# <p>
# I'll tackle the wrangling first.
# </div>
# <h2>
# Wrangling a sentence into words
# </h2>
# <div class=h1_cell>
# <p>
# Once we start dealing with English, the complexity goes up a notch. One problem is that we will find words with apostrophes. For example, contractions like "I'll go", "it's easy", "won't quit". Or possesives like "John's game", "Tess' party".
# <p>
# A second problem is that we typically want to remove words that are so common they are useless in differentiating.
# Let's start with this second problem first. We will start to use the nltk package this week. nltk is like pandas in that it has lots of functions for doing a wide array of NLP tasks. For now, I know that nltk has a built-in set of words that are very common. They are called "stop words" (https://en.wikipedia.org/wiki/Stop_words). The general idea is that we want to delete these words from a sentence before doing any analysis.
# <p>
# Here they are.
# </div>
from nltk.corpus import stopwords # see more at http://xpo6.com/list-of-english-stop-words/
swords = stopwords.words('english')
swords
# <div class=h1_cell>
# <p>
# If you scroll through them, you will see the contractions. But with a big caveat: You will see pieces of the contraction but not the full contraction. For instance, you see "ll" I assume from "I'll". You see "doesn" I assume from "doesn't". What does this mean? It means that some other wrangling tool must be applied before we start looking for stop words. That tool is called a word tokenizer. nltk also has a sentence tokenizer but we don't need that - some nice person already broke the books into sentences for us. The word tokenizer takes a sentence as input and produces a list of words. nltk has several word tokenizers built in. Let's look at 2 of them below.
# <p>
# </div>
# +
from nltk.tokenize import WordPunctTokenizer
word_punct_tokenizer = WordPunctTokenizer() #instantiate class
from nltk.tokenize import TreebankWordTokenizer
treeb_tokenizer = TreebankWordTokenizer() #instantiate class
# -
# <div class=h1_cell>
# <p>
# I am going to have a bake-off between the 2 tokenizers. What I want is to use a tokenizer and then remove the stop words from the tokenized list: a tokenizer produces a list of words. I'll try a test sentence out with each tokenizer and get its list of words. I'll then run through the stop words to see how many I remove.
# </div>
# +
#First up: the punctuation tokenizer
test_sentence = "I'll say it's 6 o'clock!"
word_tokes = word_punct_tokenizer.tokenize(test_sentence)
for item in word_tokes:
print(item)
# -
# <div class=h1_cell>
# <p>
# You can see it treats an apostrophe as a separate "word". So "I'll" becomes three words: "I", "'", "ll". This feels like what we want to match against stop words. Let's do that now and see how many we match.
# <p>
# </div>
# +
#How many matches in stop words?
for word in swords:
c = word_tokes.count(word)
if c > 0:
print(word)
# -
# <div class=h1_cell>
# <p>
# I like the first 3. Would have preferred "oclock" instead of "o", "'", "clock". And we still have the apostrophes in the list - 3 of them. We can deal with them later.
# <p>
# Next batter up.
# <p>
# </div>
#http://www.nltk.org/_modules/nltk/tokenize/treebank.html
word_tokes = treeb_tokenizer.tokenize(test_sentence)
for item in word_tokes:
print(item)
# <div class=h1_cell>
# <p>
# Not looking good. We will not match "'ll" nor "'s" I predict. However, I do like that o'clock stays together.
# <p>
# </div>
for word in swords:
c = word_tokes.count(word)
if c > 0:
print(word)
# <div class=h1_cell>
# <p>
# Only one word removed. The winner, at least in terms of stop word matching, is the punct tokenizer. So I'll use that.
# <p>
# As an aside, there is nothing magical about tokenizers. You can see their code with a little digging. They mostly are made up of a bunch of re pattern matches. Nothing stopping you from extending a tokenizer to your own taste. For instance, would not be hard to change one-word contractions into their full two-word equivalent using the re sub method.
# <p>
# Aside part 2: you can check out how various nltk tokenizers do on sentences you type in here:
# http://textanalysisonline.com/nltk-word-tokenize
# </div>
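# As a tiny illustration of that aside (a hypothetical helper, not part of nltk), a handful of one-word contractions can be expanded into their two-word equivalents with the re sub method before tokenizing:

```python
import re

# hypothetical helper: expand a few common contractions so stop-word
# matching sees the full two-word forms instead of fragments like "ll"
CONTRACTIONS = {
    r"\bI'll\b": "I will",
    r"\bwon't\b": "will not",
    r"\bdoesn't\b": "does not",
    r"\bit's\b": "it is",  # ambiguous: could also mean "it has"
}

def expand_contractions(sentence):
    for pattern, replacement in CONTRACTIONS.items():
        sentence = re.sub(pattern, replacement, sentence)
    return sentence

expand_contractions("I'll say it's 6 o'clock!")
```

# A full helper would need a much longer mapping and some care with ambiguous forms, but the re.sub pattern is the same.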
# <h2>
# Challenge 1
# </h2>
# <div class=h1_cell>
# <p>
# I'd like you to work on a function `sentence_wrangler`. It will take a raw sentence from a row and tokenize it. It will then remove the following from that word list:
# <p>
# <ul>
# <li>The stop words we have been using.
# <p>
# <li>Words that contain any punctuation (see string package).
# <p>
# </ul>
# <p>
# Have it return 2 lists for debugging: the list of wrangled words and the list of removed words.
# </div>
import string
punctuation = string.punctuation
punctuation
def sentence_wrangler(sentence, swords, punctuation):
word_tokes = word_punct_tokenizer.tokenize(sentence)
removed = []
wrangled = []
for word in word_tokes:
word = word.lower()
if word in swords or all(char in punctuation for char in word):
removed.append(word)
else:
wrangled.append(word)
return(wrangled, removed)
sentence_wrangler(test_sentence, swords, punctuation)
# <div class=h1_cell>
# <p>
# Ok, let's try it on first 10 sentences in the table. I'll print out the raw sentence and then the words I remove.
# <p>
# If you are matching my results, move on to challenge 2.
# </div>
for i in range(10):
text = gothic_table.loc[i, 'text']
print(text+'\n')
print(' '.join(sentence_wrangler(text, swords, punctuation)[1]).encode('ascii'))
print('='*10)
# <h1>
# Challenge 2
#
# </h1>
# <div class=h1_cell>
# Fill out `all_words` below to produce the bag of words. Use your sentence_wrangler.
# <p>
# Remember that we now have 3 predicted values. So you will need to follow each word with a list of 3 numbers. Make the first number in list a count of EAP, the second number a count of HPL and the third number the count of MWS.
# </div>
def all_words(table, swords, punctuation):
all_author_words = {}
for i, row in table.iterrows():
wrangled_sentence = sentence_wrangler(row['text'], swords, punctuation)[0]
for word in wrangled_sentence:
if word not in all_author_words.keys():
all_author_words[word] = [0, 0, 0]
if row['author'] == 'EAP':
all_author_words[word][0]+=1
elif row['author'] == 'HPL':
all_author_words[word][1]+=1
else:
all_author_words[word][2]+=1
return all_author_words
bag_of_words = all_words(gothic_table, swords, punctuation)
len(bag_of_words) #unique words
# <h2>
# Do you match my length?
# </h2>
# <div class=h1_cell>
# If not, your `sentence_wrangler` is not matching mine I suspect.
# </div>
sorted(bag_of_words.items())[:5]
# <h2>
# Do you match my content?
# </h2>
# <div class=h1_cell>
# If not, you might have list ordering screwed up in `all_words`.
# </div>
# <h1>
# Challenge 3
# </h1>
# <div class=h1_cell>
# Let's take a look at words that are odd. Build a list of keys in the bag of words that contain at least one character that is not a letter. I am calling these odd words.
# </div>
#build odd_words
odd_words = []
for key in bag_of_words.iterkeys():
if key.encode('utf-8').isalpha() != True:
odd_words.append(key)
len(odd_words)
odd_words
# <div class=h1_cell>
# These are words that slipped through our `sentence_wrangler`.
# You can look the byte codes up, e.g., google for "\xe9". I suppose we could add further wrangling to `sentence_wrangler` at this point to clean up even more punctuation, but I am ready to move on.
# </div>
# <h2>
# Challenge 4
# </h2>
# <div class=h1_cell>
# <p>
# Get ready for Naive Bayes. What are we missing? We have bag_of_words that gives us the triple values we need. We are missing `P(O)`: the total count of the sentences for each author. Build that now in `total_count`.
# </div>
total_count = [0, 0, 0]
for i, row in gothic_table.iterrows():
if row['author'] == 'EAP':
total_count[0] += 1
elif row['author'] == 'HPL':
total_count[1] += 1
else:
total_count[2] += 1
total_count
# <h2>
# Challenge 5
# </h2>
# <div class=h1_cell>
# <p>
# Ok, let's get to it. Define `naive_bayes_gothic`. Fill in my function below and match my results. As last week, I expect your function to return the 3 probabilities for each of EAP, HPL, MWS.
# </div>
# added the arguments needed to call sentence_wrangler
def naive_bayes_gothic(raw_sentence, bag, counts, swords, punctuation):
    valuesInText = sentence_wrangler(raw_sentence, swords, punctuation)[0]
    countValues = [1] * len(counts)
    for word in valuesInText:
        if word in bag:  # membership test works on the dict directly; no need for .keys()
            for j in range(len(counts)):
                countValues[j] *= bag[word][j] / float(counts[j])
    total = sum(counts)
    probValues = [count / float(total) for count in counts]  # the priors P(O)
    finalValues = [countValues[i] * probValues[i] for i in range(len(counts))]
    return tuple(finalValues)
for i in range(5):
print(naive_bayes_gothic(gothic_table.loc[i, 'text'], bag_of_words, total_count, swords, punctuation))
print(gothic_table.loc[i, 'author'])
#
# <div class=h1_cell>
# Not bad. Five out of five correct. Notice all those zeros. We are doing well because we are finding a word that only appears in the book of a specific author. For example, I can see under challenge 2 that `abaft` only appears in MWS. Hence, P(abaft|EAP) and P(abaft|HPL) will both be 0. That will zero-out the numerator (no matter how many other words do match) and return 0 as result.
# </div>
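The zero-out problem described above is usually handled with Laplace (add-one) smoothing. Here is a minimal sketch on hypothetical toy counts (not the real `bag_of_words`), working in log space so long products of small probabilities don't underflow either:

```python
# Toy add-one smoothing sketch: every word count gets +1, so an author who
# never used a word still gets a small nonzero likelihood instead of zeroing
# out the whole product.
import math

def smoothed_log_prob(words, bag, counts, vocab_size):
    """Per-author log numerators of P(author|words) with add-one smoothing."""
    n_authors = len(counts)
    total = sum(counts)
    # start from the log prior log P(O)
    logs = [math.log(c / total) for c in counts]
    for word in words:
        word_counts = bag.get(word, [0] * n_authors)
        for j in range(n_authors):
            # add-one smoothing: the numerator is never zero
            logs[j] += math.log((word_counts[j] + 1) / (counts[j] + vocab_size))
    return logs

toy_bag = {'abaft': [0, 0, 3], 'sea': [2, 5, 1]}  # hypothetical counts
toy_counts = [10, 10, 10]
logs = smoothed_log_prob(['abaft', 'sea'], toy_bag, toy_counts,
                         vocab_size=len(toy_bag))
# 'abaft' only appears for the third author, so that author wins, but the
# other two scores stay finite instead of collapsing to zero.
best = logs.index(max(logs))
```

With the smoothed scores, an author who merely lacks one word is penalized, not eliminated outright.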
# <h2>
# Challenge 6
# </h2>
# <div class=h1_cell>
# <p>
# Generate your predictions, get actuals and zip it up. I ended up timing prediction generation but it took only 13 seconds. Gotta love NB and use of fast dictionary look-up in Python.
# </div>
import time
# +
start = time.time()
predictions = []
for i,row in gothic_table.iterrows():
    if i % 1000 == 0: print('processed', i)
pair = naive_bayes_gothic(gothic_table.loc[i, 'text'], bag_of_words, total_count, swords, punctuation)
if pair[0] >= pair[1] and pair[0] >= pair[2]:
predictions.append(0)
elif pair[1] > pair[0] and pair[1] >= pair[2]:
predictions.append(1)
else:
predictions.append(2)
end = time.time()
print(end - start) # in seconds
# -
# <div class=h1_cell>
# <p>
# Go ahead and build `zipped`.
# <p>
#
# </div>
#build zipped
actuals = []
for i in range(len(gothic_table)):
if gothic_table.loc[i, 'author'] == "EAP":
actuals.append(0)
elif gothic_table.loc[i, 'author'] == "HPL":
actuals.append(1)
else:
actuals.append(2)
zipped = list(zip(predictions, actuals))  # list() so it can be sliced, indexed, and reused in Python 3
zipped[:20]
correct = 0
for i in range(len(zipped)):
if zipped[i] == (0,0) or zipped[i] == (1,1) or zipped[i] == (2,2):
correct += 1
1.0*correct/len(zipped)
# <h2>
# Go Naive Bayes!
# </h2>
# <div class=h1_cell>
# <p>
# I'm claiming 94% accuracy. I like it.
# </div>
# <h2>
# Multinomial versus Bernoulli
# </h2>
# <div class=h1_cell>
# <p>
# We are using Multinomial Naive Bayes because we are counting how many times a word occurs for an author. We could also use Bernoulli Naive Bayes where we look for features that are true or false, e.g., a sentence is greater than 10 words in length. This paper discusses the difference between the two: http://www.kamalnigam.com/papers/multinomial-aaaiws98.pdf.
# </div>
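The distinction is easiest to see in the features themselves. A toy illustration (not from the notebook above): multinomial features count word occurrences, Bernoulli features only record presence or absence.

```python
# Multinomial NB works on occurrence counts; Bernoulli NB on True/False
# presence features. Same sentence, two different feature dictionaries.
def multinomial_features(sentence):
    counts = {}
    for w in sentence.split():
        counts[w] = counts.get(w, 0) + 1  # how many times each word occurs
    return counts

def bernoulli_features(sentence):
    return {w: True for w in sentence.split()}  # only whether the word occurs

s = "the sea the sea the ship"
mn = multinomial_features(s)   # 'the' counted 3 times
bn = bernoulli_features(s)     # 'the' simply present
```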
# <h2>
# Write your bag_of_words out to file
# </h2>
# <div class=h1_cell>
# Here is code I used to write my bag_of_words out to file. I then read it back in just to make sure of round-trip. You should do the same. You will need this file for the midterm.
# +
import json
with open('bag_of_words.txt', 'w') as file:
file.write(json.dumps(bag_of_words))
# -
bag2 = json.load(open("bag_of_words.txt")) # making sure I can read it in again
sorted(bag2.items())[:5]
bag2 == bag_of_words
|
UpperDivisionClasses/Data_Science/week4/all_goth_handout.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Simple Linear Regression using ML
#Importing packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
#Importing the dataset
dataset=pd.read_csv("C:/Users/Bharathi/Downloads/salary_data.csv")
dataset.shape
dataset.head()
dataset.info()
X=dataset.iloc[:,:-1].values
Y=dataset.iloc[:,1].values
sb.histplot(dataset['YearsExperience'], kde=True)  # distplot is deprecated/removed in recent seaborn
sb.scatterplot(x=dataset['YearsExperience'], y=dataset['Salary'])  # keyword args required in seaborn >= 0.12
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X,Y,test_size=0.2, random_state=0)
#fitting simple linear regression to the training set
from sklearn.linear_model import LinearRegression
regressor=LinearRegression()
regressor.fit(X_train, y_train)
regressor.score(X_test,y_test)
# predicting the test set results
y_pred=regressor.predict(X_test)
y_pred
regressor.predict([[1.5]])
#Visualising the Training Set Results
plt.scatter(X_train, y_train, color='red')
plt.plot(X_train,regressor.predict(X_train),color='blue')
plt.title('Salary vs Experience (training set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
#visualising the Test set results
plt.scatter(X_test, y_test, color='red')
plt.plot(X_train,regressor.predict(X_train),color='blue')
plt.title('Salary vs Experience (test set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
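As a sanity check on what `LinearRegression` is doing, the slope and intercept of simple linear regression can be recomputed with the closed-form least-squares formulas. A minimal sketch on synthetic data (so it runs without `salary_data.csv`):

```python
# Closed-form simple linear regression:
#   slope = cov(x, y) / var(x),  intercept = mean(y) - slope * mean(x)
# Synthetic noise-free data with slope 2 and intercept 1.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0

slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
```

On real data the fitted values will match `regressor.coef_[0]` and `regressor.intercept_` up to floating-point error.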
|
Simple linear regression using ML.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
data = np.loadtxt("data.csv",delimiter=',')
#len(data[0]),len(data[1])
M = len(data[0])
x = data[:,0:(M-1)]
y = data[:,M-1:]
z = np.ones((len(data),1))
new_d = np.hstack((x,z))
new_d = np.hstack((new_d,y))
def cost(points, m):
    # mean squared error; the intercept c is already folded into m via the
    # column of ones appended to the data, so only m is needed
    costt = 0
    M = len(points)
    for i in range(M):
        y = points[i, len(points[0]) - 1]   # last column is y
        x_total = 0
        for j in range(len(points[0]) - 1):
            x_total += m[j] * points[i, j]  # prediction m1*x1 + m2*x2 + ...
        costt += (1 / M) * ((y - x_total) ** 2)
    return costt
# # Generic Gradient Descent Code Start
#iterate through all the data points and find the slope
#for generic gradient descent you have only one array of m and the last c have a coeff of 1
def step_grad(points, learning_rate, m):
    m_slope = np.zeros(len(points[0]) - 1)  # one slope per feature (c included via the 1s column)
    M = len(points)
    for i in range(M):
        # the prediction is the full dot product m1*x1 + m2*x2 + ... , not a single x
        x_total = 0
        for j in range(len(points[0]) - 1):
            x_total += m[j] * points[i, j]
        y = points[i, len(points[0]) - 1]
        for k in range(len(points[0]) - 1):
            m_slope[k] += (-2 / M) * (y - x_total) * points[i, k]
    new_m = [m[i] - learning_rate * m_slope[i] for i in range(len(m))]
    return new_m
#as said in gd defnition we have to start m&c with any random value
#substract the slope from the m or c till num_iter
def gd(points, learning_rate, num_iter):
    m = np.zeros(len(points[0]) - 1)  # one weight per column of the data (minus the y column)
    for i in range(num_iter):
        m = step_grad(points, learning_rate, m)
        # cost is not used in the update itself; print it to watch convergence
        print(i, "cost:", cost(points, m))
    return m
#we have to load data and send it to gd function to figure out m & c
#gd requires learning rate & number of iter
#generic data requires addition of 1 as the coeff of c to each row
def run():
data = np.loadtxt("data.csv",delimiter=',')
M = len(data[0])
x = data[:,0:(M-1)]
y = data[:,M-1:]
z = np.ones((len(data),1))
new_d = np.hstack((x,z))
new_d = np.hstack((new_d,y))
learning_rate = 0.0001
num_iter = 100
m = gd(new_d,learning_rate,num_iter)
print(m)
m = run()
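The nested loops above can be collapsed into NumPy matrix operations. A vectorized sketch on synthetic data (not the `data.csv` used above) — `X` already carries the column of ones, so the intercept is simply the last weight:

```python
# Vectorized batch gradient descent for least squares.
# Gradient of MSE: (-2/n) * X^T (y - X m)
import numpy as np

def gd_vectorized(X, y, learning_rate=0.01, num_iter=1000):
    m = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(num_iter):
        grad = (-2.0 / n) * (X.T @ (y - X @ m))
        m -= learning_rate * grad
    return m

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(200, 1))
X = np.hstack([x, np.ones((200, 1))])  # append the 1s column for the intercept
y = 3.0 * x[:, 0] + 2.0                # true slope 3, intercept 2, no noise
m = gd_vectorized(X, y, learning_rate=0.1, num_iter=5000)
```

With noise-free data the recovered weights converge to the true slope and intercept.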
|
.ipynb_checkpoints/genericGradientDescentTry2-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# WordCount for K8s
# -
from pyspark.sql import SparkSession
inputFile = "hdfs:///data/ghEmployees.txt"
outputFile = "hdfs:///tmp/jwcsturm.txt"
#create a SparkSession; master and app name are supplied by the K8s spark-submit config
spark = (SparkSession.builder.getOrCreate())
# read file
spark.sparkContext.setLogLevel("ERROR")
lines = spark.sparkContext.textFile(inputFile)  # avoid shadowing the builtin `input`
counts = lines.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
# write the result to hdfs
counts.saveAsTextFile(outputFile)
print(counts.collect())
spark.stop()
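The flatMap/map/reduceByKey pipeline above can be sketched in plain Python, which is handy for checking the logic without a cluster. `lines` below stands in for the HDFS input file:

```python
# Plain-Python equivalent of the Spark word count:
# flatMap  -> split each line into words
# map      -> pair each word with 1
# reduceByKey -> sum the 1s per word
def word_count(lines):
    counts = {}
    for line in lines:
        for word in line.split(" "):
            counts[word] = counts.get(word, 0) + 1
    return counts

lines = ["alice bob", "bob carol bob"]
counts = word_count(lines)
```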
|
exercises/WordCountK8s.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ylee267/colab-rnn/blob/master/Untitled0.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="AlcfZypAcuvL" colab_type="code" outputId="c3ed53a3-ffff-4f84-8d06-4ed78e72a308" colab={"base_uri": "https://localhost:8080/", "height": 252}
from __future__ import print_function
import json
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import Adam
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
path = 'yelp_100_3.txt'
text = open(path).read().lower()
print('corpus length:', len(text))
char_indices = json.loads(open('char_indices.txt').read())
indices_char = json.loads(open('indices_char.txt').read())
chars = sorted(char_indices.keys())
print(indices_char)
#chars = sorted(list(set(text)))
print('total chars:', len(chars))
#char_indices = dict((c, i) for i, c in enumerate(chars))
#indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 256
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print('Vectorization...')
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)  # np.bool is removed in NumPy >= 1.24
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
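The windowing and one-hot scheme above is easier to see on a toy string (separate from the corpus loaded here): slide a window of `maxlen` characters with the given `step`, take the following character as the target, then mark each (position, character) pair in a boolean tensor.

```python
# Toy illustration of the semi-redundant windowing + one-hot vectorization.
import numpy as np

toy_text = "abcabc"
toy_chars = sorted(set(toy_text))
toy_idx = {c: i for i, c in enumerate(toy_chars)}
maxlen_t, step_t = 3, 1

# windows of maxlen_t characters, and the character that follows each window
windows = [toy_text[i:i + maxlen_t] for i in range(0, len(toy_text) - maxlen_t, step_t)]
targets = [toy_text[i + maxlen_t] for i in range(0, len(toy_text) - maxlen_t, step_t)]

# one-hot: Xt[window, position-in-window, character] = True
Xt = np.zeros((len(windows), maxlen_t, len(toy_chars)), dtype=bool)
for i, w in enumerate(windows):
    for t, ch in enumerate(w):
        Xt[i, t, toy_idx[ch]] = True
```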
# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(1024, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(LSTM(512, return_sequences=False))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = Adam(lr=0.002)  # on newer Keras versions the argument is learning_rate=0.002
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.load_weights("transfer_weights")
def sample(preds, temperature=.6):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
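A numerically safer variant of the sampling helper above is sketched below: clamp before the log to avoid `log(0)`, and shift the logits by their maximum before exponentiating, which also avoids the occasional "sum of pvals > 1" error from `np.random.multinomial`.

```python
# Temperature sampling from a softmax output, with a max-shift for stability.
import numpy as np

def sample_stable(preds, temperature=0.6):
    preds = np.asarray(preds).astype('float64')
    logits = np.log(np.maximum(preds, 1e-12)) / temperature  # clamp avoids log(0)
    logits -= np.max(logits)                                 # shift: exp never overflows
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# At very low temperature the distribution sharpens toward the argmax.
np.random.seed(0)
idx = sample_stable([0.1, 0.7, 0.2], temperature=0.01)
```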
# train the model, output generated text after each iteration
for iteration in range(1, 60):
print()
print('-' * 50)
print('Iteration', iteration)
x = np.zeros((1, maxlen, len(chars)))
preds = model.predict(x, verbose=0)[0]
model.fit(X, y, batch_size=128, epochs=1)
start_index = random.randint(0, len(text) - maxlen - 1)
#start_index = char_indices["{"]
for diversity in [0.2, 0.4, 0.6, 0.8]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
#print(next_index)
#print (indices_char)
next_char = indices_char[str(next_index)]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
model.save_weights("transfer_weights")
|
Untitled0.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Tony607/mmdetection_object_detection_demo/blob/master/mmdetection_train_custom_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="An830Vi4rMz6"
# # [How to train an object detection model with mmdetection](https://www.dlology.com/blog/how-to-train-an-object-detection-model-with-mmdetection/) | DLology blog
# + colab_type="code" id="IUB0TkVJJ751" colab={}
# You can add more model configs like below.
MODELS_CONFIG = {
'faster_rcnn_r50_fpn_1x': {
'config_file': 'configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712.py'
},
'cascade_rcnn_r50_fpn_1x': {
'config_file': 'configs/cascade_rcnn_r50_fpn_1x.py',
},
'retinanet_r50_fpn_1x': {
'config_file': 'configs/retinanet_r50_fpn_1x.py',
}
}
# + [markdown] colab_type="text" id="Sr1Gn9OTrfKr"
# ## Your settings
# + colab_type="code" id="GGTdkzBJJ71E" colab={}
# TODO: change URL to your fork of my repository if necessary.
git_repo_url = 'https://github.com/Tony607/mmdetection_object_detection_demo'
# Pick the model you want to use
# Select a model in `MODELS_CONFIG`.
selected_model = 'faster_rcnn_r50_fpn_1x' # 'cascade_rcnn_r50_fpn_1x'
# Total training epochs.
total_epochs = 8
# Name of the config file.
config_file = MODELS_CONFIG[selected_model]['config_file']
# + [markdown] colab_type="text" id="9FZrzBIazgqm"
# ## Install Open MMLab Detection Toolbox
# Restart the runtime if you have issue importing `mmdet` later on.
#
# + colab_type="code" id="tsAkdXVP99NC" colab={}
import os
from os.path import exists, join, basename, splitext
# %cd /content
project_name = os.path.abspath(splitext(basename(git_repo_url))[0])
mmdetection_dir = os.path.join(project_name, "mmdetection")
if not exists(project_name):
# clone "depth 1" will only get the latest copy of the relevant files.
# !git clone -q --recurse-submodules --depth 1 $git_repo_url
print("Update mmdetection repo")
# !cd {mmdetection_dir} && git checkout master && git pull
# dependencies
# !pip install -q mmcv terminaltables
# build
# !cd {mmdetection_dir} && python setup.py install
# !pip install -r {os.path.join(mmdetection_dir, "requirements.txt")}
import sys
sys.path.append(mmdetection_dir)
import time
import matplotlib
import matplotlib.pylab as plt
plt.rcParams["axes.grid"] = False
# + [markdown] colab_type="text" id="JRg5LpevakhO"
# ## Stash the repo if you want to re-modify `voc.py` and config file.
# + colab_type="code" id="mxd1V4deJTBz" colab={}
# # !cd {mmdetection_dir} && git config --global user.email "<EMAIL>" && git config --global user.name "Tony607" && git stash
# + [markdown] colab_type="text" id="ONzS8Y7JJPXu"
# ## Modify `voc.py`
# ### parse data classes
# + colab_type="code" id="yLdioJJzIDB8" outputId="1527e9d9-91f4-4d19-fa84-74c49c6e98b8" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd {project_name}
# + colab_type="code" id="ngMDpgzhJGRB" colab={}
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET
# + colab_type="code" id="trb6ARYuJGNK" colab={}
anno_path = os.path.join(project_name, "data/VOC2007/Annotations")
voc_file = os.path.join(mmdetection_dir, "mmdet/datasets/voc.py")
# + colab_type="code" id="dAK6aaTHJTJz" outputId="34cb4992-116f-46e3-bd58-f4d95f915ba3" colab={"base_uri": "https://localhost:8080/", "height": 35}
classes_names = []
xml_list = []
for xml_file in glob.glob(anno_path + "/*.xml"):
tree = ET.parse(xml_file)
root = tree.getroot()
for member in root.findall("object"):
classes_names.append(member[0].text)
classes_names = list(set(classes_names))
classes_names.sort()
classes_names
# + colab_type="code" id="YBpJeiKPJTFr" outputId="78d9381f-937c-4f30-a4d9-b042cf9d953c" colab={"base_uri": "https://localhost:8080/", "height": 290}
import re
fname = voc_file
with open(fname) as f:
s = f.read()
s = re.sub('CLASSES = \(.*?\)',
'CLASSES = ({})'.format(", ".join(["\'{}\'".format(name) for name in classes_names])), s, flags=re.S)
with open(fname, 'w') as f:
f.write(s)
# !cat {voc_file}
# + [markdown] colab_type="text" id="M3c6a77tJ_oF"
# ## Modify config file
# + colab_type="code" id="85KCI0q7J7wE" outputId="82cd8944-7cac-422f-d7aa-a26a9a695a81" colab={"base_uri": "https://localhost:8080/", "height": 55}
import os
config_fname = os.path.join(project_name, 'mmdetection', config_file)
assert os.path.isfile(config_fname), '`{}` not exist'.format(config_fname)
config_fname
# + colab_type="code" id="7AcgLepsKFX-" colab={}
fname = config_fname
with open(fname) as f:
s = f.read()
work_dir = re.findall(r"work_dir = \'(.*?)\'", s)[0]
# Update `num_classes` including `background` class.
s = re.sub('num_classes=.*?,',
'num_classes={},'.format(len(classes_names) + 1), s)
s = re.sub('ann_file=.*?\],',
"ann_file=data_root + 'VOC2007/ImageSets/Main/trainval.txt',", s, flags=re.S)
s = re.sub('total_epochs = \d+',
'total_epochs = {} #'.format(total_epochs), s)
if "CocoDataset" in s:
s = re.sub("dataset_type = 'CocoDataset'",
"dataset_type = 'VOCDataset'", s)
s = re.sub("data_root = 'data/coco/'",
"data_root = 'data/VOCdevkit/'", s)
s = re.sub("annotations/instances_train2017.json",
"VOC2007/ImageSets/Main/trainval.txt", s)
s = re.sub("annotations/instances_val2017.json",
"VOC2007/ImageSets/Main/test.txt", s)
s = re.sub("train2017", "VOC2007", s)
s = re.sub("val2017", "VOC2007", s)
else:
    s = re.sub('img_prefix=.*?\],',
               "img_prefix=data_root + 'VOC2007/',", s)
with open(fname, 'w') as f:
f.write(s)
# !cat {config_fname}
# + colab_type="code" id="Hr-UALVtKFeX" colab={}
# %cd {mmdetection_dir}
# !python setup.py install
# + colab_type="code" id="LeuRcv-ZTSLS" outputId="a29ffb54-a784-42d3-de39-0562395a09bd" colab={"base_uri": "https://localhost:8080/", "height": 35}
os.makedirs("data/VOCdevkit", exist_ok=True)
voc2007_dir = os.path.join(project_name, "data/VOC2007")
os.system("ln -s {} data/VOCdevkit".format(voc2007_dir))
# + colab_type="code" id="xm-a0NkgKb4m" outputId="459cc2dd-4ba1-44e5-e4f0-79349cbf3bde" colab={"base_uri": "https://localhost:8080/", "height": 219}
# !python tools/train.py {config_fname}
# + colab_type="code" id="yFrN7i2Qpr1H" outputId="71469028-097d-4533-829e-3a0592b062dd" colab={"base_uri": "https://localhost:8080/", "height": 55}
checkpoint_file = os.path.join(mmdetection_dir, work_dir, "latest.pth")
assert os.path.isfile(
checkpoint_file), '`{}` not exist'.format(checkpoint_file)
checkpoint_file
# + [markdown] colab_type="text" id="4uCmYPNCVpqP"
# ## Test predict
#
# Turn down the `score_thr` if you think the model is missing any bbox.
# Turn up the `score_thr` if you see too much overlapping bboxes with low scores.
# + colab_type="code" id="FNTFhKuVVhMr" colab={}
import time
import matplotlib
import matplotlib.pylab as plt
plt.rcParams["axes.grid"] = False
import mmcv
from mmcv.runner import load_checkpoint
import mmcv.visualization.image as mmcv_image
# fix for colab
def imshow(img, win_name='', wait_time=0):
    plt.figure(figsize=(50, 50))
    plt.imshow(img)
mmcv_image.imshow = imshow
from mmdet.models import build_detector
from mmdet.apis import inference_detector, show_result, init_detector
# + colab_type="code" id="qVJBetouno4q" outputId="43f80734-86a6-4261-b4ea-ecefd0726c47" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd {mmdetection_dir}
score_thr = 0.8
# build the model from a config file and a checkpoint file
model = init_detector(config_fname, checkpoint_file)
# test a single image and show the results
img = 'data/VOCdevkit/VOC2007/JPEGImages/15.jpg'
result = inference_detector(model, img)
show_result(img, result, model.CLASSES,
score_thr=score_thr, out_file="result.jpg")
# + colab_type="code" id="smM4hrXBo9_E" outputId="92a8445e-2941-423b-b6b0-ad611b795a8c" colab={"base_uri": "https://localhost:8080/", "height": 617}
from IPython.display import Image
Image(filename='result.jpg')
# + [markdown] colab_type="text" id="1xXO9qELNnOw"
# ## Download the config file
# + colab_type="code" id="ufug_6bONd9d" colab={}
from google.colab import files
files.download(config_fname)
# + [markdown] colab_type="text" id="R2YcLN7GObJZ"
# ## Download checkpoint file.
# + [markdown] colab_type="text" id="rMCrafjhN61s"
# ### Option1 : upload the checkpoint file to your Google Drive
# Then download it from your Google Drive to local file system.
#
# During this step, you will be prompted to enter the token.
# + colab_type="code" id="eiRj6p4vN5iT" colab={}
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once in a notebook.
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fname = os.path.basename(checkpoint_file)
# Create & upload a text file.
uploaded = drive.CreateFile({'title': fname})
uploaded.SetContentFile(checkpoint_file)
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
# + [markdown] colab_type="text" id="bqC9kHuVOOw6"
# ### Option2 : Download the checkpoint file directly to your local file system
# This method may not be stable when downloading large files like the model checkpoint file. Try **option 1** instead if not working.
# + colab_type="code" id="AQ5RHYbTOVnN" colab={}
files.download(checkpoint_file)
|
mmdetection_train_custom_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (Data-structure-Algorithms-using-Python)
# language: python
# name: pycharm-a7c1b96d
# ---
# +
class Node:
def __init__(self, data):
self.data = data
self.next = None
def take_input():
    Input = [int(element) for element in input().split()]
    tail = None
    head = None
    for current_data in Input:
        if current_data == -1:
            break  # -1 marks the end of input; a bare `return` here would discard the list built so far
        new_Node = Node(current_data)
        if head is None:
            head = new_Node
            tail = new_Node
        else:
            tail.next = new_Node
            tail = new_Node
    return head
def Print_LL(head):
while head is not None:
print(str(head.data) + " -> ",end="")
head = head.next
print("None")
return
def reverse_LL(head):
previous = None
while head:
temp = head
if head.next is None:
temp.next = previous
break
head = head.next
temp.next = previous
previous = temp
return head
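The reversal above works, but the textbook three-pointer form is shorter and needs no special case for the last node. A self-contained sketch (with its own copy of `Node` so it runs on its own):

```python
# Standard iterative linked-list reversal: walk the list once, pointing each
# node back at its predecessor; `previous` ends up as the new head.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def reverse_iterative(head):
    previous = None
    while head is not None:
        nxt = head.next       # save the rest of the list
        head.next = previous  # point the current node backwards
        previous = head
        head = nxt
    return previous           # previous is the new head

# build 1 -> 2 -> 3 and reverse it
head = Node(1); head.next = Node(2); head.next.next = Node(3)
rev = reverse_iterative(head)
values = []
while rev:
    values.append(rev.data)
    rev = rev.next
```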
# + pycharm={"name": "#%%\n"}
heady = take_input()
Print_LL(heady)
x = reverse_LL(heady)
Print_LL(x)
|
11. Linked List-1/reverse_LL.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## The Kruskal-Wallis H-test tests the null hypothesis that the population median of all of the groups are equal. It is a non-parametric version of ANOVA. The test works on 2 or more independent samples, which may have different sizes. Note that rejecting the null hypothesis does not indicate which of the groups differs. Post-hoc comparisons between groups are required to determine which groups are different.
#
# ## The Kruskal-Wallis H and Friedman tests for comparing more than two data samples: the nonparametric version of the ANOVA and repeated measures ANOVA tests.
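Before applying the test to the spreadsheet below, here is a minimal, self-contained check of `scipy.stats.kruskal` on synthetic samples (toy data, not the measurements loaded later): two similar groups and one clearly shifted group should reject the null hypothesis.

```python
# Kruskal-Wallis H-test on three small synthetic samples.
from scipy import stats

group_a = [1.1, 1.3, 1.2, 1.4]
group_b = [1.2, 1.1, 1.3, 1.2]   # overlaps group_a
group_c = [3.1, 3.4, 3.2, 3.3]   # clearly shifted

stat, p = stats.kruskal(group_a, group_b, group_c)
# group_c occupies the top ranks, so p falls below 0.05 and H0 is rejected
```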
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob
import scipy
import scikit_posthocs as sp
from scipy import stats
# %matplotlib inline
fn=glob.glob("*xlsx")
print(fn)
FA=pd.read_excel(fn[0])
#FA=pd.read_excel(fn[0],header=False)
COLS=list(FA.columns.values)
print(COLS)
FA
GTYPES=list(set(FA[COLS[0]]))
FA.groupby(COLS[0]).size()
values_per_group = {col_name:col for col_name, col in FA.groupby(COLS[0])[COLS[1]]}
#print(values_per_group.values())
stat,p = stats.kruskal(*values_per_group.values())
print('Statistics=%.3f, p=%.20f' % (stat, p))
print(p)
# interpret
alpha = 0.05
if p > alpha:
print('Same distributions (fail to reject H0)')
else:
print('Different distributions (reject H0)')
# # P value tells us we may reject the null hypothesis that the population medians of all of the groups are equal. To learn what groups (species) differ in their medians we need to run post hoc tests. scikit-posthocs provides a lot of non-parametric tests mentioned above. Let's choose Conover's test.
pc=sp.posthoc_conover(FA, val_col=COLS[1], group_col=COLS[0], p_adjust = 'holm')
print(pc)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20,10))
heatmap_args = {'linewidths': 0.25, 'linecolor': '0.5', 'clip_on': False, 'square': True, 'cbar_ax_bbox': [0.80, 0.35, 0.04, 0.3]}
sp.sign_plot(pc, **heatmap_args)
pc2=sp.posthoc_dunn(FA, val_col=COLS[1], group_col=COLS[0], p_adjust = 'holm')
print(pc2)
# ## Post hoc pairwise test for multiple comparisons of mean rank sums (Dunn’s test). May be used after Kruskal-Wallis one-way analysis of variance by ranks to do pairwise comparisons.
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20,10))
heatmap_args = {'linewidths': 0.25, 'linecolor': '0.5', 'clip_on': False, 'square': True, 'cbar_ax_bbox': [0.80, 0.35, 0.04, 0.3]}
sp.sign_plot(pc2, **heatmap_args)
import seaborn as sns
# +
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20,10))
bplot=sns.boxplot(y=COLS[1],x=COLS[0],data=FA)
bplot=sns.swarmplot(y=COLS[1],x=COLS[0],data=FA,color='black',alpha=0.75)
plt.yticks(fontsize=24)
plt.xticks(fontsize=22, rotation="vertical")
plt.ylabel(r"$\lambda$ [nm]", fontsize=34)
plt.xlabel(" ", fontsize=34)
plt.savefig("boxplotlambda_alltechniques.jpeg",dpi=400,bbox_inches ="tight")
# +
fig, ax = plt.subplots(1, 1, figsize=(20, 10))
plt.yticks(size=24)
plt.xticks(fontsize=22, rotation="vertical")
bplot=sns.violinplot(y=COLS[1],x=COLS[0],
data=FA,
width=0.90,
alpha=0.17,
inner=None,
palette="colorblind")
bplot=sns.swarmplot(y=COLS[1],x=COLS[0],data=FA,color='black',alpha=0.75)
plt.ylabel(r"$\lambda$ [nm]", fontsize=34)
plt.xlabel(" ", fontsize=34)
plt.savefig("violinplot_lambda_alltechniques.png",dpi=400,bbox_inches="tight")
# -
FA
Sn=list(set(FA['sample']))
mu=pd.DataFrame()
A=[]
B=[]
C=[]
for k in Sn:
fj=FA.loc[FA['sample']==k]
u=fj['lambda'].describe()
#print(k)
#print(u)
#print('median:', fj['lambda'].median())
A.append(k)
B.append(fj['lambda'].median())
C.append(fj['lambda'].mean())
#print('*******************')
mu['sample']=A
mu['median-lambda']=B
mu['mean-lambda']=C
print(mu)
|
UPFIGS/Lambda measurement-alltechniques.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import os
import matplotlib.pyplot as plt
import PIL.Image as pil
import tensorflow as tf
depth_gt = np.load("./models/gt_data_kitti/gt_depth.npy")
depth_gt.shape
print(depth_gt[0].shape)
plt.imshow(depth_gt[200])
pred_depth = np.load("./models/foobar/model-800/output/model.npy")
pred_depth[0].shape
plt.imshow(pred_depth[0])
fil = np.load("/mnt/carla_data/npy_files/2.npy")
fil.item().keys()
a = np.load("/mnt/carla_data/npy_files/13.npy")
a.item().keys()
for time, data_dict in a.item().items():
for key, value in data_dict.items():
print(key)
a = list(a.item().values())[0]
a = sorted(a.items())
ctr = 0
for key, _ in a:
if 'rgb' in key:
ctr = ctr + 1
print(key)
ctr
data_dict = list(fil.item().values())[0]
data_dict = sorted(data_dict.items())
data_dict
for key, _ in data_dict:
print(key)
dict(data_dict)['486_depth'].shape
ss = np.load("/mnt/carla_data/test_files/frame_460/952_ss.npy")
ss[:,:,0]
ss[:,:,0].tofile('abcd.raw')
plt.imshow(ss[:,:,0])
with open('/mnt/carla_data/test_files/test_files_eigen.txt') as f:
test_files = f.readlines()
test_files = ['/mnt/carla_data/test_files/' + t[:-1] for t in test_files]
test_files
for fil in test_files:
im_tgt = plt.imread(fil)
im_tgt[:,:,2]
raw_im = pil.open(test_files[0])  # raw_im was never defined; load one test image (assumption) so the resize below runs
scaled_im = raw_im.resize((416, 128), pil.ANTIALIAS)
# + active=""
#
# -
np.array(raw_im).shape
img = plt.imread("/mnt/carla_data/test_files/frame_476/968_rgb.png")
img.shape
img = plt.imread("/mnt/carla_data/data/frame_456/968_rgb.png")
img.shape
def read_npy_file(item):
data = np.load(item.decode())
return data.astype(np.uint8)
with open("/mnt/carla_data/carla_train_full/train.txt", 'r') as f:
frames = f.readlines()
subfolders = [x.split(' ')[0] for x in frames]
frame_ids = [x.split(' ')[1][:-1] for x in frames]
ins_file_list = [os.path.join("/mnt/carla_data/carla_train_full", subfolders[i],
frame_ids[i] + '_rgb_instance_new.npy') for i in range(len(frames))][:1]
ins_file_list
import tensorflow as tf
print(len(ins_file_list))
ins_paths_queue = tf.train.string_input_producer(
ins_file_list, shuffle=False)
ins_reader = tf.WholeFileReader()
ins_keys, ins_contents = ins_reader.read(ins_paths_queue)
# +
# ins_seq = tf.reshape(tf.decode_raw(ins_contents, tf.uint8), [1, opt.img_height, (opt.num_source+1) * opt.img_width, 2])
# -
# ins_seq = tf.reshape(tf.decode_raw(ins_contents, tf.uint8), [1, 128, 1248, 2])
ins_seq = tf.py_func(read_npy_file, [ins_keys], [tf.uint8,])
ins_seq[0].set_shape((128, 1248))
with tf.Session() as sess:
print(ins_seq[0].get_shape())
print(sess.run(ins_seq[0]))
file_list = tf.train.string_input_producer(['/mnt/carla_data/carla_train_full/frame_444/968_rgb_instance_new.raw'])
_, file_contents = tf.WholeFileReader().read(file_list)  # read() returns a (key, value) pair; decode only the value
decoded = tf.decode_raw(file_contents, tf.uint8)
with tf.Session() as sess:
print(sess.run(decoded))
|
parse_depth.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/varsha2509/Springboard-DS/blob/master/Capstone2/Colab/DeepSat6_CNN_Model_Comparison.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="SPWPb2hqysw-" outputId="b42ae34c-8c3a-491a-cc49-97088df09d26" colab={"base_uri": "https://localhost:8080/", "height": 35}
#Install Packages and Mount Google Drive
import pandas as pd
import numpy as np
import cv2
import h5py
import csv
from scipy.io import loadmat
import matplotlib.pyplot as plt
from google.colab import drive
import os
from os import listdir
from numpy import asarray
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from PIL import Image
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
import tensorflow as tf
from tensorflow.keras import layers, models
from keras.preprocessing import image
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard
from keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten, ZeroPadding2D
from keras.models import Sequential, Model
from keras.applications import vgg16
from keras import backend as K
from keras import models
from keras.models import load_model
from keras.models import model_from_json
from sklearn.metrics import balanced_accuracy_score
from matplotlib.colors import ListedColormap
from multiprocessing.pool import ThreadPool
from keras.callbacks import EarlyStopping, ModelCheckpoint
# load vgg model
from keras.applications.vgg16 import VGG16
from skimage.io import imread
from glob import glob
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Springboard/Capstone Projects/Capstone-2/DeepSat-6-Dataset/"
#Mount the drive to colab notebook
drive.mount('/content/gdrive')
# + id="OH0ZlLboyxj4" outputId="826cc2a5-828b-418f-9b68-4fec7dcec9a1" colab={"base_uri": "https://localhost:8080/", "height": 35}
#Change the current working directory
# %cd /content/gdrive/My\ Drive/Springboard/Capstone\ Projects/Capstone-2/DeepSat-6-Dataset/
# + [markdown] id="IncMoV1uy4dP"
# # Loading weights for the different models
# + [markdown] id="C33m5wtjy9ff"
# ## Baseline CNN model
# + id="M4yGWXy5y2i4" outputId="cf06f9ec-dd1e-47e4-f6f3-89ae0b7c477f" colab={"base_uri": "https://localhost:8080/", "height": 35}
# load json and create model
json_file = open('cnn-baseline.json', 'r')
baseline_cnn_json = json_file.read()
json_file.close()
baseline_cnn = model_from_json(baseline_cnn_json)
# load weights into new model
baseline_cnn.load_weights("cnn-baseline.h5")
print("Loaded CNN baseline model from disk")
# # evaluate loaded model on test data
# loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# score = loaded_model.evaluate(X, Y, verbose=0)
# print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
# + [markdown] id="kngoX6OK1I6D"
# ## TL-1 model - Transfer Learning with VGG16 and Padding input image
# + id="StGTa4Xa1Hcj" outputId="ca9fb0cf-1557-463b-b211-d5e96c117006" colab={"base_uri": "https://localhost:8080/", "height": 35}
# load json and create model
json_file = open('vgg-base-padding.json', 'r')
tl_1_json = json_file.read()
json_file.close()
tl_1 = model_from_json(tl_1_json)
# load weights into new model
tl_1.load_weights("vgg-base-padding.h5")
print("Loaded TL-1 model from disk")
# # evaluate loaded model on test data
# loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# score = loaded_model.evaluate(X, Y, verbose=0)
# print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
# + [markdown] id="H9D5AP_z1kag"
# ## TL-2 model - Transfer Learning with VGG16 and Upsampling input image
# + id="andmq0j81iDj" outputId="c6308990-9968-4bad-ead3-9a67f2427bbe" colab={"base_uri": "https://localhost:8080/", "height": 35}
# load json and create model
json_file = open('vgg-base-upsampling.json', 'r')
tl_2_json = json_file.read()
json_file.close()
tl_2 = model_from_json(tl_2_json)
# load weights into new model
tl_2.load_weights("vgg-base-upsampling.h5")
print("Loaded TL-2 model from disk")
# # evaluate loaded model on test data
# loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# score = loaded_model.evaluate(X, Y, verbose=0)
# print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
# + [markdown] id="wO8KTOE91yg0"
# ## TL-3 model - Transfer Learning with VGG16 with fine tuning and padding input image
# + id="wQs8adup1vzP" outputId="26f40346-42d8-47ec-8443-2d222f19486a" colab={"base_uri": "https://localhost:8080/", "height": 35}
# load json and create model
json_file = open('vgg-finetuning-padding.json', 'r')
tl_3_json = json_file.read()
json_file.close()
tl_3 = model_from_json(tl_3_json)
# load weights into new model
tl_3.load_weights("vgg-finetuning-padding.h5")
print("Loaded TL-3 model from disk")
# # evaluate loaded model on test data
# loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# score = loaded_model.evaluate(X, Y, verbose=0)
# print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
# + [markdown] id="0q_ENU9S20LG"
# ## Bar graph comparing the balanced accuracy score for each model
# + id="V-nkgJBr2MUL"
## Create a dataframe with balanced accuracy scores for different models
model_comparison_scores = pd.DataFrame()
model_comparison_scores['scores'] = [89.69, 94.04, 92.13, 90.00, 95.12]
model_comparison_scores['names'] = ['Baseline Random Forest', 'Baseline CNN', 'Transfer Learning with Vgg16 (Padding Input Image)', 'Transfer Learning with Vgg16 (Upsampling Input Image)', 'Transfer Learning with Vgg16 and Fine Tuning']
model_comparison_scores['model'] = ['brf', 'b_cnn', 'tl_1', 'tl_2', 'tl_3']
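The scores above were computed during model training elsewhere; as a reminder of what they measure, balanced accuracy is the mean of per-class recall, which guards against class imbalance in the DeepSat-6 labels. A minimal NumPy sketch on toy labels (illustration only, not the actual model predictions):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: robust to class imbalance."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy labels: class 0 recall = 1.0, class 1 recall = 0.5
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 1, 0])
print(balanced_accuracy(y_true, y_pred))  # 0.75
```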
# + id="y0bo0FFI4Yeb" outputId="4cd851df-36a2-44c2-90fe-768f29106c85" colab={"base_uri": "https://localhost:8080/", "height": 322}
ax = model_comparison_scores.plot.barh(x='names', y='scores', color='grey', figsize=(8, 8))
# Highlight the best-scoring model's bar
ax.patches[4].set_color('skyblue')
ax.set_ylabel('Models', fontsize = 14)
ax.set_xlabel('Balanced Accuracy Score (%)', fontsize = 14)
ax.tick_params(axis='both', which='major', labelsize=14)
ax.get_legend().remove()
ax.set_title('Balanced Accuracy Scores for different models', fontsize = 14)
plt.show()
# + [markdown] id="PHH_tReK_8gV"
# ## Bar graph comparing the F1 score for Transfer learning Vgg16 and baseline CNN
# + id="YcXGXEgn4bjw"
## Per-class F1 scores for the baseline CNN and the fine-tuned VGG16 model
f1_comparison_scores = pd.DataFrame()
f1_comparison_scores['classes'] = ['Barren Land', 'Building', 'Grassland','Road','Trees','Water']
f1_comparison_scores['b_cnn'] = [0.93,0.96,0.90,0.82,0.97,1.00]
f1_comparison_scores['tl_3'] = [0.96,0.95,0.92,0.85,0.98,1.00]
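The per-class F1 values above come from `classification_report` runs elsewhere; the metric itself is the harmonic mean of precision and recall for each class. A small NumPy sketch on toy labels (illustration only, not the DeepSat predictions):

```python
import numpy as np

def f1_for_class(y_true, y_pred, cls):
    """Harmonic mean of precision and recall for one class."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
print(f1_for_class(y_true, y_pred, 1))  # 0.8 (precision 2/3, recall 1.0)
```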
# + id="7_XUYVLB6KOy" outputId="e2eca067-45a7-4ef3-d8f0-d29b9753cd0a" colab={"base_uri": "https://localhost:8080/", "height": 238}
f1_comparison_scores.head(6)
# + id="jDHFz2bCCQf0"
f1_comparison_scores.set_index('classes', inplace=True)
# + id="quTqwDiqA1VP" outputId="c2b7e541-61c4-46f9-ab50-49f215291acf" colab={"base_uri": "https://localhost:8080/", "height": 430}
fig=plt.figure(figsize=(10,5))
ax = fig.add_subplot(111) # Create matplotlib axes
#ax2 = ax.twinx() # Create another axes that shares the same x-axis as ax.
width = 0.25
f1_comparison_scores.b_cnn.plot(kind='bar', color='lightgrey', ax=ax, width=width, position=1, label = 'Baseline CNN Model')
f1_comparison_scores.tl_3.plot(kind='bar', color='lightblue', ax=ax, width=width, position=0, label = 'Transfer Learning with VGG16')
ax.set_xlabel('Classes', fontsize = 14)
ax.set_ylabel('F1 Scores', fontsize = 14)
ax.legend(loc = 'best')
ax.tick_params(axis='both', which='major', labelsize=14)
#ax2.set_ylabel('Transfer Learning with Fine Tuning')
ax.set_title('F1-Score comparison for Baseline CNN model and Transfer Learning with Fine Tuning', fontsize = 14)
plt.show()
# + id="58L8nBYjBSV0"
|
Colab/DeepSat6_CNN_Model_Comparison.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src='./img/cams_header.png' alt='Logo EU Copernicus ECMWF' align='right' width='100%'></img>
# <br>
# # CAMS wildfire emissions
# ### About
# This notebook provides you a practical introduction to CAMS global atmospheric forecasts and shows you how you can use the variable `Total Aerosol Optical Depth at 550nm` for wildfire monitoring. The workflow shows the Total Aerosol Optical Depth at 550nm that originated from the devastating wildfires that caused record emissions around the Northern Hemisphere in August and September 2021.
# The notebook has the following outline:
# * [1 - Request data from the ADS programmatically via the CDS API](#request_data_fire)
# * [2 - Unzip the downloaded data file](#unzip_fire)
# * [3 - Load and browse CAMS global atmospheric composition forecast of Total Aerosol Optical Depth at 550nm](#load_fire)
# * [4 - Visualize the analysis of Total Aerosol AOD at 550nm](#visualize_fire)
# * [5 - Animate 12-hourly analysis of Total AOD at 550nm over the northern hemisphere from 1 to 8 August 2021](#animate_fire)
# ### Data
# This notebook introduces you to the [CAMS global atmospheric composition forecasts](https://ads.atmosphere.copernicus.eu/cdsapp#!/dataset/cams-global-atmospheric-composition-forecasts?tab=overview). The data has the following specifications:
#
# > **Data**: `CAMS global atmospheric composition forecasts` <br>
# > **Temporal coverage**: `12-hourly analysis for the period from 1 to 8 August 2021` <br>
# > **Spatial coverage**: `Geographical subset for northern hemisphere` <br>
# > **Format**: `zipped NetCDF`
#
# ### How to access the notebook
# * [**Kaggle**](https://kaggle.com/kernels/welcome?src=https://github.com/ecmwf-projects/copernicus-training/blob/master/Atmos-Training_CAMS-Fire-Monitoring.ipynb)
# * [**Binder**](https://hub-binder.mybinder.ovh/user/ecmwf-projects--rnicus-training-ulg9z83u/lab/tree/Atmos-Training_CAMS-Fire-Monitoring.ipynb)
# * [**Colab**](https://colab.research.google.com/github/ecmwf-projects/copernicus-training/blob/master/Atmos-Training_CAMS-Fire-Monitoring.ipynb)
# * [**nbviewer**](https://nbviewer.org/github/ecmwf-projects/copernicus-training/blob/master/Atmos-Training_CAMS-Fire-Monitoring.ipynb)
# ### Further resources
# * [Copernicus: A summer of wildfires saw devastation and record emissions around the Northern Hemisphere](https://atmosphere.copernicus.eu/copernicus-summer-wildfires-saw-devastation-and-record-emissions-around-northern-hemisphere)
# <hr>
# ### Install CDS API via pip
# !pip install cdsapi
# ### Load libraries
# +
# CDS API
import cdsapi
import os
# Libraries for working with multi-dimensional arrays
import numpy as np
import xarray as xr
import pandas as pd
# Libraries for plotting and visualising data
import matplotlib.path as mpath
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
import time
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cartopy.feature as cfeature
from IPython.display import clear_output
clear_output(wait=True)
# -
# <hr>
# ### <a id='request_data_fire'></a> 1. Request data from the ADS programmatically with the CDS API
# The first step is to request data from the Atmosphere Data Store programmatically with the help of the CDS API. Let us make use of the option to manually set the CDS API credentials. First, you have to define two variables: `URL` and `KEY` which build together your CDS API key. Below, you have to replace the `#########` with your personal ADS key. Please find [here](https://ads.atmosphere.copernicus.eu/api-how-to) your personal ADS key.
URL = 'https://ads.atmosphere.copernicus.eu/api/v2'
KEY = '#######################'
# <br>
# The next step is to request the data with the help of the CDS API. Below, we request `Total Aerosol Optical Depth` for the northern hemisphere from 1 to 8 August 2021 from the [CAMS global atmospheric composition forecasts](https://ads.atmosphere.copernicus.eu/cdsapp#!/dataset/cams-global-atmospheric-composition-forecasts?tab=overview) dataset. Although the request `type` is `forecast`, it effectively retrieves `analysis` data, as we only request leadtime hour 0 for the two run times at 00:00 and 12:00 UTC.
#
# Let us store the dataset under the name `202108_northern_hemisphere_totalAOD.zip`.
# +
c = cdsapi.Client(url=URL, key=KEY)
c.retrieve(
'cams-global-atmospheric-composition-forecasts',
{
'variable': 'total_aerosol_optical_depth_550nm',
'date': '2021-08-01/2021-08-08',
'time': [
'00:00', '12:00',
],
'leadtime_hour': '0',
'type': 'forecast',
'area': [
90, -180, 0,
180,
],
'format': 'netcdf_zip',
},
'./202108_northern_hemisphere_totalAOD.zip')
# -
# <br>
# ### <a id='unzip_fire'></a> 2. Unzip the downloaded data file
# CAMS global atmospheric composition forecasts can be retrieved either in `GRIB` or in a `zipped NetCDF`. Above, we requested the data in a zipped NetCDF and for this reason, we have to unzip the file before we can open it. You can unzip `zip archives` in Python with the Python package `zipfile` and the function `extractall()`.
import zipfile
with zipfile.ZipFile('./202108_northern_hemisphere_totalAOD.zip', 'r') as zip_ref:
zip_ref.extractall('./')
# <br>
# ### <a id='load_fire'></a> 3. Load and browse CAMS global atmospheric composition forecast of Total Aerosol Optical Depth at 550nm
# A netCDF file with the name `data.nc` is extracted from the zip archive. You can load the NetCDF file with the Python library [xarray](http://xarray.pydata.org/en/stable/) and the function `open_dataset()`. The function loads a `xarray.Dataset`, which is a collection of one or more data variables that share the same dimensions. You see that the data file has three dimensions, `latitude`, `longitude` and `time` and one variable, `aod550`.
ds = xr.open_dataset('./data.nc')
ds
# Let us now extract from the Dataset above the data variable `aod550` as `xarray.DataArray`. You can load a data array from a xarray.Dataset by specifying the name of the variable (`aod550`) in square brackets.
# While an xarray **dataset** may contain multiple variables, an xarray **data array** holds a single multi-dimensional variable and its coordinates. Below you see that the variable `aod550` represents Total Aerosol Optical Depth at 550 nm.
aod550 = ds['aod550']
aod550
# <br>
# Let us define variables for the two attributes `units` and `long_name`, which we can use during the visualisation of the data.
aod550_unit = aod550.units
aod550_long_name = aod550.long_name
# <br>
# ### <a id='visualize_fire'></a>4. Visualize the analysis of Total AOD at 550nm
# And now we can plot the `Total AOD at 550 nm`. The visualisation code below can be divided in five main parts:
# * **Initiate a matplotlib figure:** with `plt.figure()` and an axes object
# * **Plotting function**: plot the data array with the matplotlib function `pcolormesh()`
# * **Define a geographic extent of the map**: use the minimum and maximum latitude and longitude bounds of the data
# * **Add additional mapping features**: such as coastlines, grid or a colorbar
# * **Set a title of the plot**: you can combine the `species name` and `time` information for the title
# +
# Index of analysis step
time_index = 2
# Initiate a matplotlib figure
fig = plt.figure(figsize=(15,15))
ax = plt.subplot(1,1,1, projection=ccrs.Stereographic(central_latitude=90))
# Plotting function with pcolormesh
im = plt.pcolormesh(aod550['longitude'].values, aod550['latitude'].values,
aod550[time_index,:,:], cmap='afmhot_r', transform=ccrs.PlateCarree())
# Define geographic extent of the map
#ax.set_extent([aod550.longitude.min(),aod550.longitude.max(),aod550.latitude.min(),aod550.latitude.max()], crs=ccrs.PlateCarree())
# Add additional features such as coastlines, grid and colorbar
ax.coastlines(color='black')
ax.gridlines(draw_labels=True, linewidth=1, color='gray', alpha=0.5, linestyle='--')
cbar = plt.colorbar(im,fraction=0.046, pad=0.05)
cbar.set_label(aod550_unit, fontsize=14)
# Set the title of the plot
ax.set_title(aod550_long_name + ' over the northern hemisphere - ' + str(aod550.time[time_index].values)[:-10]+'\n', fontsize=18)
# -
# <br>
# ### <a id='animate_fire'></a>5. Animate 12-hourly analysis of Total AOD at 550nm over the northern hemisphere from 1 to 8 August 2021
# In the last step, you can animate the `Total AOD at 550nm` in order to see how the aerosol plumes develop over a period of eight days, from 1 to 8 August 2021.
# You can do animations with matplotlib's function `animation`. Jupyter's function `HTML` can then be used to display HTML and video content.
# The animation function consists of 4 parts:
# - **Setting the initial state:**<br>
# Here, you define the general plot your animation shall use to initialise the animation. You can also define the number of frames (time steps) your animation shall have.
#
#
# - **Functions to animate:**<br>
# An animation consists of three functions: `draw()`, `init()` and `animate()`. `draw()` is the function where individual frames are passed on and the figure is returned as image. In this example, the function redraws the plot for each time step. `init()` returns the figure you defined for the initial state. `animate()` returns the `draw()` function and animates the function over the given number of frames (time steps).
#
#
# - **Create a `animate.FuncAnimation` object:** <br>
# The functions defined before are now combined to build an `animate.FuncAnimation` object.
#
#
# - **Play the animation as video:**<br>
# As a final step, you can integrate the animation into the notebook with the `HTML` class. You take the generated animation object and convert it to an HTML5 video with the `to_html5_video` function.
# +
# Setting the initial state:
# 1. Define figure for initial plot
fig = plt.figure(figsize=(15,15))
ax = plt.subplot(1,1,1, projection=ccrs.Stereographic(central_latitude=90))
# Plotting function with pcolormesh
im = plt.pcolormesh(aod550['longitude'].values, aod550['latitude'].values,
aod550[time_index,:,:], cmap='afmhot_r', transform=ccrs.PlateCarree())
ax.coastlines(color='black')
ax.gridlines(draw_labels=True, linewidth=1, color='gray', alpha=0.5, linestyle='--')
cbar = plt.colorbar(im,fraction=0.046, pad=0.05)
cbar.set_label(aod550_unit, fontsize=14)
# Set the title of the plot
ax.set_title(aod550_long_name + ' over the northern hemisphere - ' + str(aod550.time[time_index].values)[:-10]+'\n', fontsize=18)
frames = 15
def draw(i):
img = plt.pcolormesh(aod550.longitude,
aod550.latitude,
aod550[i,:,:],
cmap='afmhot_r',
transform=ccrs.PlateCarree(),
vmin=0,
vmax=7,
shading='auto')
ax.set_title(aod550_long_name + ' '+ str(aod550.time[i].data)[:-10], fontsize=18, pad=20.0)
return img
def init():
return fig
def animate(i):
return draw(i)
ani = animation.FuncAnimation(fig, animate, frames, interval=500, blit=False,
init_func=init, repeat=True)
HTML(ani.to_html5_video())
plt.close(fig)
# -
# <br>
# **Play the animation video as HTML5 video**
HTML(ani.to_html5_video())
# <br>
# The animation clearly shows the high values of Total Aerosol Optical Depth originating from the many different wildfires burning across North America in this period. This includes the Dixie fire, which, by August 6, had grown to become the largest single (i.e. non-complex) wildfire in California's history, and the second-largest wildfire overall.
#
# The animation also shows these high values crossing the continent to the east coast where it strongly affected local air quality.
# <br>
# <hr>
# <p><img src='./img/copernicus_logo.png' align='right' alt='Logo EU Copernicus' width='20%'></img></p>
# <br><br><br><br><br>
# <span style='float:right'><p style="text-align:right;">This project is licensed under <a href="./LICENSE">APACHE License 2.0</a>. | <a href="https://github.com/ecmwf-projects/copernicus-training">View on GitHub</a></p></span>
|
Atmos-Training_CAMS-Fire-Monitoring.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: openvino_env
# language: python
# name: openvino_env
# ---
# # Background Removal From Images With U$^2$-Net and OpenVINO
#
# This notebook demonstrates background removal in images with U$^2$-Net and [OpenVINO](https://github.com/openvinotoolkit/openvino).
#
# For more information about U$^2$-Net, including source code and test data, see their Github page at https://github.com/xuebinqin/U-2-Net and their research paper: [U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https://arxiv.org/pdf/2005.09007.pdf)
# + [markdown] tags=["hide"]
# The PyTorch U$^2$-Net model is converted to ONNX and loaded with OpenVINO. The model source is https://github.com/xuebinqin/U-2-Net. For a more detailed overview of loading PyTorch models in OpenVINO, including how to load an ONNX model in OpenVINO directly, without converting to IR format, check out the [PyTorch/ONNX](../102-pytorch-onnx-to-openvino) notebook.
# + [markdown] tags=["hide"]
# ## Prepare
# + [markdown] id="QB4Yo-rGGLmV" tags=["hide"]
# ### Import the PyTorch Library and U2NET
# + id="2ynWRum4iiTz"
import os
import sys
import time
from collections import namedtuple
from pathlib import Path
import cv2
import matplotlib.pyplot as plt
import mo_onnx
import numpy as np
import torch
from IPython.display import HTML, FileLink, display
from model.u2net import U2NET, U2NETP
from openvino.inference_engine import IECore
# + [markdown] tags=["hide"]
# ### Settings
#
# This notebook supports the original U^2-Net salient object detection model, as well as the smaller U2NETP version. Two sets of weights are supported for the original model: salient object detection and human segmentation.
# +
IMAGE_DIR = "data"
model_config = namedtuple("ModelConfig", ["name", "url", "model", "model_args"])
u2net_lite = model_config(
"u2net_lite",
"https://drive.google.com/uc?id=1rbSTGKAE-MTxBYHd-51l2hMOQPT_7EPy",
U2NETP,
()
)
u2net = model_config(
"u2net",
"https://drive.google.com/uc?id=1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ",
U2NET,
(3, 1)
)
u2net_human_seg = model_config(
"u2net_human_seg",
"https://drive.google.com/uc?id=1-Yg0cxgrNhHP-016FPdp902BR-kSsA4P",
U2NET,
(3, 1)
)
# Set u2net_model to one of the three configurations listed above
u2net_model = u2net_lite
# + tags=["hide_output", "hide"]
# The filenames of the downloaded and converted models
MODEL_DIR = "model"
model_path = (
Path(MODEL_DIR)
/ u2net_model.name
/ Path(u2net_model.name).with_suffix(".pth")
)
onnx_path = model_path.with_suffix(".onnx")
ir_path = model_path.with_suffix(".xml")
# + [markdown] id="u5xKw0hR0jq6" tags=["hide"]
# ### Load the U2NET Model
#
# The U$^2$-Net human segmentation model weights are stored on Google Drive. They will be downloaded if they have not been downloaded yet. The next cell loads the U2NET model and the pretrained weights.
# + tags=["hide"]
if not model_path.exists():
import gdown
os.makedirs(model_path.parent, exist_ok=True)
print("Start downloading model weights file... ")
with open(model_path, "wb") as model_file:
gdown.download(u2net_model.url, output=model_file)
print(f"Model weights have been downloaded to {model_path}")
# + tags=["hide"]
# Load the model
net = u2net_model.model(*u2net_model.model_args)
net.eval()
# Load the weights
print(f"Loading model weights from: '{model_path}'")
net.load_state_dict(torch.load(model_path, map_location="cpu"))
# Save the model if it doesn't exist yet
if not model_path.exists():
print("\nSaving the model")
torch.save(net.state_dict(), str(model_path))
print(f"Model saved at {model_path}")
# + [markdown] id="Rhc_7EObUypw" tags=["hide"]
# ## Convert PyTorch U$^2$-Net model to ONNX and IR
#
# ### Convert PyTorch model to ONNX
#
# The output for this cell will show some warnings. These are most likely harmless. Conversion succeeded if the last line of the output says `ONNX model exported to [filename].onnx.`
# + colab={"base_uri": "https://localhost:8080/"} id="ipQWpbgQUxoo" outputId="bbc1734a-c2a2-4261-ed45-264b9e3edd00" tags=["hide"]
if not onnx_path.exists():
dummy_input = torch.randn(1, 3, 512, 512)
torch.onnx.export(net, dummy_input, onnx_path, opset_version=11)
print(f"ONNX model exported to {onnx_path}.")
else:
print(f"ONNX model {onnx_path} already exists.")
# + [markdown] id="6JSoEIk60uxV" tags=["hide"]
# ### Convert ONNX model to OpenVINO IR Format
#
# Call the OpenVINO Model Optimizer tool to convert the ONNX model to OpenVINO IR format, with FP16 precision. The models are saved to the current directory. We add the mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. The mean and standard deviation values can be found in the [dataloader](https://github.com/xuebinqin/U-2-Net/blob/master/data_loader.py) file in the [U^2-Net repository](https://github.com/xuebinqin/U-2-Net/) and multiplied by 255 to support images with pixel values from 0-255.
#
# See the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer.
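As a sanity check on the `--mean_values` and `--scale_values` passed to Model Optimizer below: assuming the dataloader uses the standard ImageNet per-channel normalisation constants (an assumption, not stated in the U^2-Net repository itself), multiplying them by 255 reproduces those values exactly:

```python
import numpy as np

# Assumed ImageNet per-channel mean and std on a 0-1 pixel scale
imagenet_mean = np.array([0.485, 0.456, 0.406])
imagenet_std = np.array([0.229, 0.224, 0.225])

print(imagenet_mean * 255)  # 123.675, 116.28, 103.53
print(imagenet_std * 255)   # 58.395, 57.12, 57.375
```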
# + [markdown] tags=["hide"]
# Call the OpenVINO Model Optimizer tool to convert the ONNX model to OpenVINO IR, with FP16 precision. Executing this command may take a while. There may be some errors or warnings in the output. Model Optimization was successful if the last lines of the output include `[ SUCCESS ] Generated IR version 10 model.`
# + tags=["hide"]
# Get the path to the Model Optimizer script
mo_path = str(Path(mo_onnx.__file__))
# Construct the command for Model Optimizer
# Set log_level to CRITICAL to suppress warnings that can be ignored for this demo
mo_command = f""""{sys.executable}"
"{mo_path}"
--input_model "{onnx_path}"
--input_shape "[1,3, 512, 512]"
--mean_values="[123.675, 116.28 , 103.53]"
--scale_values="[58.395, 57.12 , 57.375]"
--data_type FP16
--output_dir "{model_path.parent}"
--log_level "CRITICAL"
"""
mo_command = " ".join(mo_command.split())
print("Model Optimizer command to convert the ONNX model to OpenVINO:")
print(mo_command)
# + id="6YUwrq7QWSzw" tags=["hide"]
if not ir_path.exists():
print("Exporting ONNX model to IR... This may take a few minutes.")
# mo_result = %sx $mo_command
print("\n".join(mo_result))
else:
print(f"IR model {ir_path} already exists.")
# + [markdown] id="JyD5EKka34Wd" tags=["hide"]
# ## Load and Pre-process Input Image
#
# The U2NET IR model expects images in RGB format. OpenCV reads images in BGR. We convert the images to RGB, resize them to 512 by 512, and transpose the dimensions to the format expected by the IR model.
# + colab={"base_uri": "https://localhost:8080/"} id="DGFW5VXL3x9G" outputId="300eacff-c6de-4eb5-e99a-8def5260da1a" tags=["hide"]
IMAGE_PATH = Path(IMAGE_DIR) / "coco_hollywood.jpg"
image = cv2.cvtColor(
cv2.imread(str(IMAGE_PATH)),
cv2.COLOR_BGR2RGB,
)
resized_image = cv2.resize(image, (512, 512))
# Convert the image to the shape and data type expected by the network
# for OpenVINO IR model: (1, 3, 512, 512)
input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)
# + [markdown] id="FnEiEbNq4Csh" tags=["hide"]
# ## Do inference on IR model
#
# Load the IR model to Inference Engine and do inference
# + id="otfT6EDk03KV" tags=["hide"]
# Load network to Inference Engine
ie = IECore()
net_ir = ie.read_network(model=ir_path)
exec_net_ir = ie.load_network(network=net_ir, device_name="CPU")
# Get names of input and output layers
input_layer_ir = next(iter(exec_net_ir.input_info))
output_layer_ir = next(iter(exec_net_ir.outputs))
# Run the Inference on the Input image...
start_time = time.perf_counter()
res_ir = exec_net_ir.infer(inputs={input_layer_ir: input_image})
res_ir = res_ir[output_layer_ir]
end_time = time.perf_counter()
print(
f"Inference finished. Inference time: {end_time-start_time:.3f} seconds, "
f"FPS: {1/(end_time-start_time):.2f}."
)
# + [markdown] tags=["hide"]
# ## Visualize results
#
# Show the original image, the segmentation result, and the original image with the background removed.
# + tags=["hide"]
# Resize the network result to the image shape and round the values
# to 0 (background) and 1 (foreground)
# Network result has shape (1,1,512,512), np.squeeze converts this to (512, 512)
resized_result = np.rint(
cv2.resize(np.squeeze(res_ir), (image.shape[1], image.shape[0]))
).astype(np.uint8)
# Create a copy of the image and set all background values to 255 (white)
bg_removed_result = image.copy()
bg_removed_result[resized_result == 0] = 255
fig, ax = plt.subplots(1, 3, figsize=(20, 7))
ax[0].imshow(image)
ax[1].imshow(resized_result, cmap="gray")
ax[2].imshow(bg_removed_result)
for a in ax:
a.axis("off")
# + [markdown] tags=["hide"]
# ### Add a background image
#
# In the segmentation result, all foreground pixels have a value of 1, all background pixels a value of 0. Replace the background image as follows:
#
# - Load a new image `background_image`
# - Resize this image to the same size as the original image
# - In the `background_image` set all the pixels where the resized segmentation result has a value of 1 - the foreground pixels in the original image - to 0.
# - Add the `bg_removed_result` from the previous step - the part of the original image that only contains foreground pixels - to the `background_image`.
# + tags=["hide"]
BACKGROUND_FILE = "data/wall.jpg"
OUTPUT_DIR = "output"
os.makedirs(OUTPUT_DIR, exist_ok=True)
background_image = cv2.cvtColor(cv2.imread(BACKGROUND_FILE), cv2.COLOR_BGR2RGB)
background_image = cv2.resize(
background_image, (image.shape[1], image.shape[0])
)
# Set all the foreground pixels from the result to 0
# in the background image and add the background-removed image
background_image[resized_result == 1] = 0
new_image = background_image + bg_removed_result
# Save the generated image
new_image_path = Path(
f"{OUTPUT_DIR}/{IMAGE_PATH.stem}-{Path(BACKGROUND_FILE).stem}.jpg"
)
cv2.imwrite(str(new_image_path), cv2.cvtColor(new_image, cv2.COLOR_RGB2BGR))
# Display the original image and the image with the new background side by side
fig, ax = plt.subplots(1, 2, figsize=(18, 7))
ax[0].imshow(image)
ax[1].imshow(new_image)
for a in ax:
a.axis("off")
plt.show()
# Create a link to download the image
image_link = FileLink(new_image_path)
image_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"The generated image <code>{new_image_path.name}</code> is saved in "
f"the directory <code>{new_image_path.parent}</code>. You can also "
"download the image by clicking on this link: "
f"{image_link._repr_html_()}"
)
)
# + [markdown] tags=["hide"]
# # References
#
# * PIP install openvino-dev: https://github.com/openvinotoolkit/openvino/blob/releases/2021/3/docs/install_guides/pypi-openvino-dev.md
# * OpenVINO ONNX support: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_ONNX_Support.html
# * Model Optimizer Documentation: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html
# * U^2-Net: https://github.com/xuebinqin/U-2-Net
# * U^2-Net research paper: [U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https://arxiv.org/pdf/2005.09007.pdf)
|
notebooks/205-vision-background-removal/205-vision-background-removal.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !date
# # Supplementary Figure 2
# +
import glob
import pandas as pd
import numpy as np
import scipy as scp
import sklearn
import itertools
from scipy.optimize import fsolve
from upsetplot import generate_data, plot, from_memberships
from collections import Counter
from matplotlib.ticker import FormatStrFormatter
from matplotlib.ticker import StrMethodFormatter
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import matplotlib.patches as mpatches
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({'font.size': 22})
# %config InlineBackend.figure_format = 'retina'
# +
v2_names = np.array(['SRR8599150_v2',
'heart1k_v2', 'SRR8611943_v2',
'SRR8257100_v2', 'EMTAB7320_v2',
'SRR7299563_v2', 'SRR8513910_v2',
'SRR8639063_v2', 'SRR8524760_v2',
'SRR6956073_v2', 'hgmm1k_v2',
'SRR8206317_v2', 'SRR8327928_v2',
'SRR6998058_v2'], dtype=object)
v3_names = np.array(['pbmc_1k_v3', 'hgmm10k_v3',
'neuron_10k_v3', 'pbmc_10k_v3',
'heart1k_v3', 'hgmm1k_v3'], dtype=object)
# +
v2_data = {}
v2_data["EMTAB7320_v2"] = {"n_reads": 335147976}
v2_data["heart1k_v2"] = {"n_reads": 88872840}
v2_data["hgmm1k_v2"] = {"n_reads": 75225120}
v2_data["SRR6956073_v2"] = {"n_reads": 161274652}
v2_data["SRR6998058_v2"] = {"n_reads": 37227612}
v2_data["SRR7299563_v2"] = {"n_reads": 112176350}
v2_data["SRR8206317_v2"] = {"n_reads": 85992089}
v2_data["SRR8257100_v2"] = {"n_reads": 189337914}
v2_data["SRR8327928_v2"] = {"n_reads": 190094560}
v2_data["SRR8513910_v2"] = {"n_reads": 146617182}
v2_data["SRR8524760_v2"] = {"n_reads": 97106426}
v2_data["SRR8599150_v2"] = {"n_reads": 8860361}
v2_data["SRR8611943_v2"] = {"n_reads": 21574502}
v2_data["SRR8639063_v2"] = {"n_reads": 416437344}
v2_data["EMTAB7320_v2"]["n_cells"] = 4510
v2_data["heart1k_v2"]["n_cells"] = 712
v2_data["hgmm1k_v2"]["n_cells"] = 1079
v2_data["SRR6956073_v2"]["n_cells"] = 4168
v2_data["SRR6998058_v2"]["n_cells"] = 575
v2_data["SRR7299563_v2"]["n_cells"] = 1660
v2_data["SRR8206317_v2"]["n_cells"] = 4418
v2_data["SRR8257100_v2"]["n_cells"] = 11685
v2_data["SRR8327928_v2"]["n_cells"] = 10396
v2_data["SRR8513910_v2"]["n_cells"] = 726
v2_data["SRR8524760_v2"]["n_cells"] = 3064
v2_data["SRR8599150_v2"]["n_cells"] = 3949
v2_data["SRR8611943_v2"]["n_cells"] = 5194
v2_data["SRR8639063_v2"]["n_cells"] = 6614
# +
v3_data = {}
v3_data["hgmm1k_v3"] = {"n_reads": 63105786}
v3_data["neuron_10k_v3"] = {"n_reads": 357111595}
v3_data["pbmc_10k_v3"] = {"n_reads": 638901019}
v3_data["pbmc_1k_v3"] = {"n_reads": 66601887}
v3_data["heart1k_v3"] = {"n_reads": 84512390}
v3_data["hgmm10k_v3"] = {"n_reads": 721180737}
v3_data["hgmm1k_v3"]["n_cells"] = 1011
v3_data["neuron_10k_v3"]["n_cells"] = 11477
v3_data["pbmc_10k_v3"]["n_cells"] = 11790
v3_data["pbmc_1k_v3"]["n_cells"] = 1045
v3_data["heart1k_v3"]["n_cells"] = 1227
v3_data["hgmm10k_v3"]["n_cells"] = 11692
# +
w = 67365891
c = 345420
u = 2013414
v2_data["heart1k_v2"]["barcode_error_correction"] = (w, c, u)
w = 57345535
c = 176786
u = 1849405
v3_data["heart1k_v3"]["barcode_error_correction"] = (w, c, u)
w = 58523823
c = 358110
u = 2035210
v2_data["hgmm1k_v2"]["barcode_error_correction"] = (w, c, u)
w = 46243317
c = 132278
u = 1394347
v3_data["hgmm1k_v3"]["barcode_error_correction"] = (w, c, u)
w = 499346666
c = 2613284
u = 20298095
v3_data["hgmm10k_v3"]["barcode_error_correction"] = (w, c, u)
w = 227709973
c = 659929
u = 7299697
v3_data["neuron_10k_v3"]["barcode_error_correction"] = (w, c, u)
w = 353379492
c = 1912254
u = 14819352
v3_data["pbmc_10k_v3"]["barcode_error_correction"] = (w, c, u)
w = 39178903
c = 190366
u = 1538993
v3_data["pbmc_1k_v3"]["barcode_error_correction"] = (w, c, u)
w = 28344188
c = 231718
u = 625557
v2_data["SRR6998058_v2"]["barcode_error_correction"] = (w, c, u)
w = 66294966
c = 782287
u = 1728840
v2_data["SRR8206317_v2"]["barcode_error_correction"] = (w, c, u)
w = 111254198
c = 1567548
u = 4904318
v2_data["SRR8327928_v2"]["barcode_error_correction"] = (w, c, u)
w = 348557155
c = 1857224
u = 1836077
v2_data["SRR8639063_v2"]["barcode_error_correction"] = (w, c, u)
w = 258864227
c = 4111830
u = 9256167
v2_data["EMTAB7320_v2"]["barcode_error_correction"] = (w, c, u)
w = 107572180
c = 1082195
u = 2639035
v2_data["SRR6956073_v2"]["barcode_error_correction"] = (w, c, u)
w = 64690144
c = 477618
u = 1520183
v2_data["SRR7299563_v2"]["barcode_error_correction"] = (w, c, u)
w = 173540630
c = 1094514
u = 4191648
v2_data["SRR8257100_v2"]["barcode_error_correction"] = (w, c, u)
w = 131004911
c = 910116
u = 3772762
v2_data["SRR8513910_v2"]["barcode_error_correction"] = (w, c, u)
w = 3420063
c = 38493
u = 117197
v2_data["SRR8599150_v2"]["barcode_error_correction"] = (w, c, u)
w = 16021922
c = 206410
u = 518515
v2_data["SRR8611943_v2"]["barcode_error_correction"] = (w, c, u)
w = 68514365
c = 615351
u = 1748491
v2_data["SRR8524760_v2"]["barcode_error_correction"] = (w, c, u)
# +
# (inwhitelist, correct, uncorrected)
def correction_fractions(data, names, idx):
    # fraction of reads in category idx (0: in whitelist, 1: corrected, 2: uncorrected)
    return [data[n]["barcode_error_correction"][idx] / sum(data[n]["barcode_error_correction"]) for n in names]

w = correction_fractions(v2_data, v2_names, 0) + correction_fractions(v3_data, v3_names, 0)
c = correction_fractions(v2_data, v2_names, 1) + correction_fractions(v3_data, v3_names, 1)
u = correction_fractions(v2_data, v2_names, 2) + correction_fractions(v3_data, v3_names, 2)
# +
nreads = [v2_data[i]["n_reads"] for i in v2_names] + [v3_data[i]["n_reads"] for i in v3_names]
idx_sorted = np.argsort(nreads)
names = np.append(v2_names, v3_names)[idx_sorted]
sorted_nreads = np.sort(nreads)
w = np.array(w)[idx_sorted]
c = np.array(c)[idx_sorted]
u = np.array(u)[idx_sorted]
data = [w, c, u]
p = data[1]/(16*data[0] + data[1])
p = p.mean()
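# The estimate above treats each whitelisted read as 16 error-free barcode bases and each corrected read as exactly one erroneous base, so p = c / (16w + c); the curve plotted in the next cell is then the binomial chance of exactly one error in a barcode of length L. A minimal sketch of both quantities (the helper names are ours, not part of the analysis code):

```python
def estimate_p(whitelisted, corrected, barcode_len=16):
    # per-base error rate: each corrected read carries exactly one bad base,
    # each whitelisted read carries barcode_len good bases
    return corrected / (barcode_len * whitelisted + corrected)

def hd1_error_chance(length, p):
    # binomial probability of exactly one erroneous base in the barcode:
    # C(length, 1) * p * (1 - p)**(length - 1)
    return length * p * (1 - p) ** (length - 1)

# heart1k_v2 counts from the cells above
p_hat = estimate_p(whitelisted=67365891, corrected=345420)
chance_16bp = hd1_error_chance(16, p_hat)
```

# With the heart1k_v2 counts, the estimated per-base error rate is on the order of 3e-4, giving roughly a half-percent chance of a Hamming-distance-1 error in a 16 bp barcode.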
# +
fig, ax1 = plt.subplots(figsize=(10, 8))
L = np.linspace(1, 30, 200)
ax1.plot(L, L*p*(1-p)**(L-1)*100, color="black", linewidth=3)
ax1.set_xlabel('Length of Barcode')
ax1.set_ylabel('% Chance of a Hamming Distance 1 Error')
plt.gca().xaxis.set_major_formatter(StrMethodFormatter('{x:,.0f}'))
plt.gca().yaxis.set_major_formatter(StrMethodFormatter('{x:,.1f}'))
plt.tight_layout()
plt.savefig("p_barcode_correct.pdf")
plt.show()
|
Supplementary_Figure_2/supp_figure_2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys  ###kldtest (this cell through the graph below)
sys.path.append('../scripts/')
from robot import *  # convenience import that pulls in all of the main modules
from scipy.stats import norm, chi2  # norm: Gaussian distribution (used later); chi2: chi-squared distribution
def num(epsilon, delta, binnum):  # number of particles required
    return math.ceil(chi2.ppf(1.0 - delta, binnum-1)/(2*epsilon))  # round fractions up
# +
fig, (axl, axr) = plt.subplots(ncols=2, figsize=(10,4))  # prepare two figures side by side
bs = np.arange(2, 10)
n = [num(0.1, 0.01, b) for b in bs]  # particle counts for 2 to 10 bins
axl.set_title("bin: 2-10")
axl.plot(bs, n)
bs = np.arange(2, 100000)
n = [num(0.2, 0.01, b) for b in bs]  # particle counts for 2 to 100000 bins
axr.set_title("bin: 2-100000")
axr.plot(bs, n)
plt.show()
# -
def num_wh(epsilon, delta, binnum):  ###kldtestwh (to the end)
    dof = binnum-1
    z = norm.ppf(1.0 - delta)
    return math.ceil(dof/(2*epsilon)*(1.0 - 2.0/(9*dof) + math.sqrt(2.0/(9*dof))*z )**3)
for binnum in 2, 4, 8, 1000, 10000, 100000:  # compare across various numbers of bins
    print("bins:", binnum, "ε=0.1, δ=0.01", num(0.1, 0.01, binnum), num_wh(0.1, 0.01, binnum))
    print("bins:", binnum, "ε=0.5, δ=0.01", num(0.5, 0.01, binnum), num_wh(0.5, 0.01, binnum))
    print("bins:", binnum, "ε=0.5, δ=0.05", num(0.5, 0.05, binnum), num_wh(0.5, 0.05, binnum))
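# num_wh() above is the Wilson-Hilferty approximation of the chi-squared quantile, so it can be reproduced with the standard library alone. A sketch under that assumption, using statistics.NormalDist in place of scipy.stats.norm:

```python
import math
from statistics import NormalDist

def kld_particles_wh(epsilon, delta, binnum):
    # Wilson-Hilferty approximation of the chi-squared (1 - delta) quantile,
    # mirroring num_wh() above but without a scipy dependency
    dof = binnum - 1
    z = NormalDist().inv_cdf(1.0 - delta)
    return math.ceil(dof / (2 * epsilon)
                     * (1.0 - 2.0 / (9 * dof) + math.sqrt(2.0 / (9 * dof)) * z) ** 3)
```

# For epsilon=0.1, delta=0.01 and 2 bins this gives a few dozen particles, and the required count grows roughly linearly with the number of bins, matching the plots above.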
|
section_advanced_localization/kld_test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.7 64-bit (''study_tcav'': conda)'
# name: python37764bitstudytcavconda020a20cf98c9452bb4f3db1404ae5785
# ---
import tcav.utils_plot as utils_plot
from tcav.utils import pickle_load
from config import root_dir, model_to_run, bottlenecks, concepts, version, num_random_exp, max_examples, run_parallel, num_workers
from tcav.utils_analysis import get_score_dist, get_random_cav, get_logit_grad, cos_sim
import matplotlib.pyplot as plt
import numpy as np
import itertools
import os
# +
# project_names =['GoogleNet_mixed3a_mixed4a_mixed4c_mixed4e_mixed5b:fire engine:blue_green_red_yellow_0','GoogleNet_mixed3a_mixed4a_mixed4c_mixed4e_mixed5b:zebra:dotted_striped_zigzagged_0']
# results_path = [root_dir + 'log/' + name + '/tcavs/Results' for name in project_names]
# + tags=[]
project_names = '2layers-colored-mnist-number_10'
keyword = 'mnist-0'
keyword2 = 'mnist'
exceptword = 'logit'
exceptword2 = 'grad'
results_lst = []
cavs_path = root_dir + 'log/' + project_names + '/cavs/'
random_tcav_path = root_dir + 'log/' + project_names + '/random100/'
random_cav_path = root_dir + 'log/' + project_names + '/random100/cavs'
# load results (excluding random-vs-random runs) and merge them with the random runs
results_path = os.listdir(root_dir + 'log/' + project_names + '/tcavs')
results_path.sort()
results_all_lst = []
results_non_dup_lst = []
for path in results_path:
if exceptword2 not in path and exceptword not in path and keyword in path and keyword2 in path:
try:
#random_all = pickle_load(random_tcav_path + path.split(':')[1] + '_results_all:grad_nomalize')
#random_non_dup = pickle_load(random_tcav_path + path.split(':')[1] + '_results_non_dup:grad_nomalize')
random_all = pickle_load(random_tcav_path + path.split(':')[1] + '_results_all')
random_non_dup = pickle_load(random_tcav_path + path.split(':')[1] + '_results_non_dup')
except:
continue
results_all = pickle_load(root_dir + 'log/' + project_names + '/tcavs/' + path)
results_non_dup = pickle_load(root_dir + 'log/' + project_names + '/tcavs/' + path)
results_all.extend(random_all)
results_non_dup.extend(random_non_dup)
results_all_lst.append(results_all)
results_non_dup_lst.append(results_non_dup)
print(path)
# -
# ## TCAV scores
# + tags=[]
import importlib
import seaborn as sns
sns.set()
importlib.reload(utils_plot)
for i,results in enumerate(results_all_lst):
#utils_plot.plot_results(results, num_random_exp=num_random_exp, save_path = root_dir + 'log/' + project_names + '/plot',keyword='color_'+str(i))
utils_plot.plot_results(results, num_random_exp=num_random_exp)
# -
import importlib
importlib.reload(utils_plot)
for i,results in enumerate(results_all_lst):
utils_plot.plot_sensitivity_results(results, num_random_exp=num_random_exp, save_path = root_dir + 'log/' + project_names + '/plot',keyword='simi'+str(i))
#utils_plot.plot_sensitivity_results(results)
# + tags=[]
random_i_ups = {}
for result in results_all_lst[0]:
# store random
if 'random500_' in result['cav_concept']:
print(result['val_directional_dirs'])
# +
bottlenecks = ['conv1']
targets = ['mnist-0','mnist-1','mnist-4','mnist-5','mnist-6']
targets = ['mnist-0']
cos_simirarity_score = {}
simirarity_score = {}
for target in targets:
simirarity_score[target] = {}
cos_simirarity_score[target] = {}
for bottleneck in bottlenecks:
if bottleneck not in simirarity_score[target]:
simirarity_score[target][bottleneck] = {}
simirarity_score[target][bottleneck]['mean'] = []
simirarity_score[target][bottleneck]['std'] = []
cos_simirarity_score[target][bottleneck] = {}
cos_simirarity_score[target][bottleneck]['mean'] = []
cos_simirarity_score[target][bottleneck]['std'] = []
logit_grad = get_logit_grad(cavs_path, bottleneck, target)
cav = get_random_cav(random_cav_path, bottleneck)
s_simi_lst = []
cos_s_simi_lst = []
for c in cav:
for lg in logit_grad:
cos_s_simi = np.array(cos_sim(lg,c))
s_simi = np.array(np.dot(lg,c))
cos_s_simi_lst.append(cos_s_simi)
s_simi_lst.append(s_simi)
simirarity_score[target][bottleneck]['mean'].append(np.mean(s_simi_lst))
simirarity_score[target][bottleneck]['std'].append(np.std(s_simi_lst))
cos_simirarity_score[target][bottleneck]['mean'].append(np.mean(cos_s_simi_lst))
cos_simirarity_score[target][bottleneck]['std'].append(np.std(cos_s_simi_lst))
# -
simirarity_score
cos_simirarity_score
plot_concepts = ['Random']
for target in targets:
save_path = root_dir + 'log/' + project_names + '/plot'
num_bottlenecks = len(bottlenecks)
num_concepts = len(plot_concepts)
bar_width = 0.35
index = np.arange(num_concepts) * bar_width * (num_bottlenecks + 1)
# matplotlib
fig, ax = plt.subplots()
# draw all bottlenecks individually
for i, [bn, vals] in enumerate(simirarity_score[target].items()):
bar = ax.bar(index + i * bar_width, vals['mean'],
bar_width, yerr=vals['std'], label=bn)
target_class = target.title()
ax.set_title('{} cos similarity'.format(target_class))
ax.set_ylabel('cos similarity')
# ax.set_xlabel(xlabel_name)
y_range = 0.005
ax.set_ylim(-y_range, y_range)
ax.set_xticks(index + num_bottlenecks * bar_width / 2)
#plt.xticks(fontsize=8)
ax.set_xticklabels(plot_concepts)
ax.legend(loc='upper left',bbox_to_anchor=(1.05, 1))
fig.tight_layout()
#plt.savefig(f'{save_path}/{target}:{num_concepts}:cos_similarity.eps')
plt.show()
# ## Visualizing the distribution of TCAV scores
# + tags=[]
for results in results_all_lst[:1]:
dist = get_score_dist(results)
for concept in dist:
if concept != 'random':
for bottleneck in dist[concept]:
concept_label = '{} N = {}'.format(concept, len(dist[concept][bottleneck]))
random_label = 'random N = {}'.format(len(dist['random'][bottleneck]))
plt.figure()
plt.hist(dist[concept][bottleneck], bins=100, alpha=0.3, histtype='stepfilled', color='r',label= concept_label)
plt.hist(dist['random'][bottleneck], bins=100, alpha=0.3, histtype='stepfilled', color='b',label = random_label)
plt.legend()
plt.title(bottleneck)
plt.show()
# -
# random
#for results in results_non_dup_lst[:1]:
for results in results_all_lst[:1]:
dist = get_score_dist(results)
for concept in dist:
if concept == 'random':
for bottleneck in dist[concept]:
concept_label = '{} N = {}'.format(concept, len(dist[concept][bottleneck]))
random_label = 'random N = {}'.format(len(dist['random'][bottleneck]))
plt.figure()
plt.hist(dist['random'][bottleneck], bins=100, alpha=0.3, histtype='stepfilled', color='b',label = random_label)
plt.legend()
plt.title(bottleneck)
plt.show()
# ## Checking the separation accuracy of the CAVs
# per concept
for results in results_all_lst[:1]:
cav_accuracy = utils_plot.get_cav_accuracy(results)
for concept in cav_accuracy:
if concept != 'random':
for bottleneck in cav_accuracy[concept]:
print('---------------------------------')
print('{}:{} vs random'.format(bottleneck,concept))
concept_acc = [list(ele.values())[0] for ele in cav_accuracy[concept][bottleneck]]
random_acc = [list(ele.values())[1] for ele in cav_accuracy[concept][bottleneck]]
print('CAV concept accuracy : {} ± {}'.format(np.mean(concept_acc),np.std(concept_acc)))
print('CAV random accuracy : {} ± {}'.format(np.mean(random_acc),np.std(random_acc)))
else:
for bottleneck in cav_accuracy[concept]:
print('---------------------------------')
print('{}:random vs random'.format(bottleneck))
acc = [list(ele.values())[2] for ele in cav_accuracy[concept][bottleneck]]
print('CAV random accuracy : {} ± {}'.format(np.mean(acc),np.std(acc)))
# aggregated over concepts
for results in results_all_lst[:1]:
cav_accuracy = utils_plot.get_cav_accuracy(results)
for bottleneck in cav_accuracy['random']:
print('===============================')
print('bottleneck is {}'.format(bottleneck))
concept_acc = []
random_acc = []
for concept in cav_accuracy:
if concept != 'random':
concept_acc.extend([list(ele.values())[0] for ele in cav_accuracy[concept][bottleneck]])
random_acc.extend([list(ele.values())[1] for ele in cav_accuracy[concept][bottleneck]])
print('---------------------------------')
print('concept vs random')
print('CAV concept accuracy : {} ± {}'.format(np.mean(concept_acc),np.std(concept_acc)))
print('CAV random accuracy : {} ± {}'.format(np.mean(random_acc),np.std(random_acc)))
print('---------------------------------')
        print('random vs random')
acc = [list(ele.values())[2] for ele in cav_accuracy['random'][bottleneck]]
print('CAV random accuracy : {} ± {}'.format(np.mean(acc),np.std(acc)))
bottlenecks = ['conv2']
sim_all = []
for bottleneck in bottlenecks:
cav = get_random_cav(random_cav_path, bottleneck)
sim_lst = []
for v in itertools.combinations(np.arange(len(cav)), 2):
sim = cos_sim(cav[v[0]],cav[v[1]])
sim_lst.append(sim)
sim_all.append(sim_lst)
#plt.title('Cos Similarity in ' + concept + ' CAV')
colors = ['b', 'r']
for i, bn in enumerate(bottlenecks):  # iterate so a single-bottleneck run does not index out of range
    plt.hist(sim_all[i], bins=100, alpha=0.3, histtype='stepfilled', color=colors[i % len(colors)], label=bn)
plt.xlabel('cos_similarity')
plt.ylim(0,50)
plt.legend()
#plt.savefig(root_dir + 'log/' + project_names + '/plot/cav_simi-'+concept)
plt.show()
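# The histogram above reduces to cos_sim evaluated over itertools.combinations of CAV pairs. A self-contained sketch of that computation (the toy vectors are made up for illustration):

```python
import itertools
import math

def cos_sim(u, v):
    # cosine of the angle between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

cavs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy stand-ins for random CAVs
sims = [cos_sim(cavs[i], cavs[j])
        for i, j in itertools.combinations(range(len(cavs)), 2)]
```

# For n vectors this produces C(n, 2) pairwise similarities, which is what gets binned into the histogram.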
import seaborn as sns
sns.set()
plt.hist(sim_all[0], bins=100, alpha=0.6, histtype='stepfilled', color='black',label=bottlenecks[0]);
plt.title('cosine similarity in Random CAV')
plt.xlabel('cosine similarity')
plt.ylim(0,600000)
plt.xlim(-1,1)
plt.ylabel('frequency')
plt.savefig(root_dir + 'log/' + project_names + '/plot/cav_simi_hist-random.eps')
plt.show()
# + tags=[]
random_cav_path
# -
|
show_results_plus_append_rand.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Image网 Submission `128x128`
# This contains a submission for the Image网 leaderboard in the `128x128` category.
#
# In this notebook we:
# 1. Train on 1 pretext task:
#    - Train a network with contrastive learning on Image网's `/train`, `/unsup` and `/val` images.
# 2. Train on 4 downstream tasks:
# - We load the pretext weights and train for `5` epochs.
# - We load the pretext weights and train for `20` epochs.
# - We load the pretext weights and train for `80` epochs.
# - We load the pretext weights and train for `200` epochs.
#
# Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
# +
import json
import torch
import numpy as np
from functools import partial
from fastai2.basics import *
from fastai2.vision.all import *
# -
torch.cuda.set_device(3)
# +
# Chosen parameters
lr=2e-2
sqrmom=0.99
mom=0.95
beta=0.
eps=1e-4
bs=64
sa=1
m = xresnet34
act_fn = Mish
pool = MaxPool
nc=20
# -
source = untar_data(URLs.IMAGEWANG_160)
len(get_image_files(source/'unsup')), len(get_image_files(source/'train')), len(get_image_files(source/'val'))
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
m_part = partial(m, c_out=nc, act_cls=torch.nn.ReLU, sa=sa, pool=pool)
model_meta[m_part] = model_meta[xresnet34]
save_name = 'imagewang_contrast_kornia'
# ## Pretext Task: Contrastive Learning
# +
#export
from pytorch_metric_learning import losses
class XentLoss(losses.NTXentLoss):
def forward(self, output1, output2):
stacked = torch.cat((output1, output2), dim=0)
labels = torch.arange(output1.shape[0]).repeat(2)
return super().forward(stacked, labels, None)
class ContrastCallback(Callback):
run_before=Recorder
def __init__(self, size=256, aug_targ=None, aug_pos=None, temperature=0.1):
self.aug_targ = ifnone(aug_targ, get_aug_pipe(size))
self.aug_pos = ifnone(aug_pos, get_aug_pipe(size))
self.temperature = temperature
def update_size(self, size):
pipe_update_size(self.aug_targ, size)
pipe_update_size(self.aug_pos, size)
def begin_fit(self):
self.old_lf = self.learn.loss_func
self.old_met = self.learn.metrics
self.learn.metrics = []
self.learn.loss_func = losses.NTXentLoss(self.temperature)
def after_fit(self):
        self.learn.loss_func = self.old_lf
self.learn.metrics = self.old_met
def begin_batch(self):
xb, = self.learn.xb
xb_targ = self.aug_targ(xb)
xb_pos = self.aug_pos(xb)
self.learn.xb = torch.cat((xb_targ, xb_pos), dim=0),
self.learn.yb = torch.arange(xb_targ.shape[0]).repeat(2),
# -
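# XentLoss above stacks the two augmented batches and pairs row k of one view with row k of the other. The NT-Xent objective itself can be sketched in plain Python; this is a toy re-implementation for intuition, not the pytorch_metric_learning code:

```python
import math

def _cos(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(view1, view2, temperature=0.1):
    # view1[k] and view2[k] are embeddings of two augmentations of sample k;
    # each anchor must pick its positive out of all 2B-1 other embeddings
    z = view1 + view2
    n, b = len(z), len(view1)
    total = 0.0
    for i in range(n):
        pos = (i + b) % n                      # index of the positive pair
        logits = [_cos(z[i], z[j]) / temperature for j in range(n) if j != i]
        pos_idx = pos if pos < i else pos - 1  # position after dropping self
        m = max(logits)                        # stable log-sum-exp
        lse = m + math.log(sum(math.exp(l - m) for l in logits))
        total += lse - logits[pos_idx]         # cross-entropy for this anchor
    return total / n
```

# Aligned views (each positive pointing at a near-duplicate embedding) drive the loss toward zero, while shuffled positives drive it up.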
#export
def pipe_update_size(pipe, size):
for tf in pipe.fs:
if isinstance(tf, RandomResizedCropGPU):
tf.size = size
# +
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
folders = ['unsup', 'val'] if dogs_only else ['train', 'val']
files = get_image_files(source, folders=folders)
tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=0.9)],
[parent_label, Categorize()]]
# dsets = Datasets(files, tfms=tfms, splits=GrandparentSplitter(train_name='unsup', valid_name='val')(files))
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
# batch_tfms = [IntToFloatTensor, *aug_transforms(p_lighting=1.0, max_lighting=0.9)]
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
# +
size = 128
bs = 256
dbunch = get_dbunch(160, bs)
len(dbunch.train.dataset)
# -
dbunch.show_batch()
# +
# # xb = TensorImage(torch.randn(1, 3,128,128))
# afn_tfm, lght_tfm = aug_transforms(p_lighting=1.0, max_lighting=0.8, p_affine=1.0)
# # lght_tfm.split_idx = None
# xb.allclose(afn_tfm(xb)), xb.allclose(lght_tfm(xb, split_idx=0))
# -
import kornia
#export
def get_aug_pipe(size, stats=None, s=.8):
stats = ifnone(stats, imagenet_stats)
rrc = kornia.augmentation.RandomResizedCrop((size,size), scale=(0.2, 1.0), ratio=(3/4, 4/3))
rhf = kornia.augmentation.RandomHorizontalFlip()
rcj = kornia.augmentation.ColorJitter(0.8*s, 0.8*s, 0.8*s, 0.2*s)
tfms = [rrc, rhf, rcj, Normalize.from_stats(*stats)]
pipe = Pipeline(tfms)
pipe.split_idx = 0
return pipe
aug = get_aug_pipe(size)
aug2 = get_aug_pipe(size)
cbs = ContrastCallback(size=size, aug_targ=aug, aug_pos=aug2, temperature=0.1)
xb,yb = dbunch.one_batch()
nrm = Normalize.from_stats(*imagenet_stats)
xb_dec = nrm.decodes(aug(xb))
show_images([xb_dec[0], xb[0]])
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[], loss_func=CrossEntropyLossFlat(), cbs=cbs, pretrained=False,
config={'custom_head':ch}
)
# +
# state_dict = torch.load(f'imagewang_contrast_simple_ep80.pth')
# learn.model[0].load_state_dict(state_dict, strict=False)
# -
learn.unfreeze()
learn.fit_one_cycle(10, 2e-2, wd=1e-2)
torch.save(learn.model[0].state_dict(), f'{save_name}.pth')
# +
# learn.save(save_name)
# -
# ## Downstream Task: Image Classification
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
if dogs_only:
dog_categories = [f.name for f in (source/'val').ls()]
dog_train = get_image_files(source/'train', folders=dog_categories)
valid = get_image_files(source/'val')
files = dog_train + valid
splits = [range(len(dog_train)), range(len(dog_train), len(dog_train)+len(valid))]
else:
files = get_image_files(source)
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
# +
def do_train(size=128, bs=64, lr=1e-2, epochs=5, runs=5, dogs_only=False, save_name=None):
dbunch = get_dbunch(size, bs, dogs_only=dogs_only)
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, normalize=False,
metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch})
if save_name is not None:
state_dict = torch.load(f'{save_name}.pth')
learn.model[0].load_state_dict(state_dict)
# state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth')
# learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_one_cycle(epochs, lr, wd=1e-2)
# -
# ### 5 Epochs
epochs = 5
runs = 1
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name)
# ### Random weights - ACC = 0.337999
do_train(epochs=epochs, runs=1, dogs_only=False, save_name=None)
# ### 20 Epochs
epochs = 20
runs = 1
# not inpainting model
do_train(epochs=20, runs=runs, lr=1e-2, dogs_only=False, save_name=save_name)
# inpainting model
do_train(epochs=20, runs=runs, lr=1e-2, dogs_only=False, save_name=save_name)
do_train(epochs=20, runs=runs, lr=1e-2, dogs_only=False, save_name=None)
# ### 80 Epochs
epochs = 80
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=None)
# Accuracy: **62.18%**
# ### 200 epochs
epochs = 200
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
# Accuracy: **62.03%**
|
old_experiments/02_ImageWang_ContrastLearning_13_kornia.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={}
# # Introduction to Akori Web Project Dataset Analysis
# + [markdown] pycharm={}
# ## Contents
# * [Abstract](#abstract)
# * [Dataset](#dataset)
# - [Experimental Protocol](#experiment)
# - [Subjects](#age_gender)
# * [Heatmap](#heatmap)
# * [Salience Map](#salience)
# - [Pre-trained Performance](#pretrained)
# - [Trained Performance](#trained)
# * [Spatial Bias](#spacial_bias)
# * [Optimized Performance](#optimized)
# * [Conclusions](#conclusions)
# * [References](#references)
# + [markdown] pycharm={}
# <a id='abstract'></a>
# ## Abstract
# ### Intro and presentation of challenge
# When studying the cognitive mechanisms that shape attention in the human brain, eye-movement behaviour is a rich source of information, since the visual pathways of the brain have been amply studied. The processes that influence attentional behaviour are commonly represented in a salience model built from three principal sub-processes: a bottom-up influence, which captures pre-attentional stimuli inherently present in the image; a top-down influence, which gathers attentional effects considered high-level processing in the brain; and a spatial inhibitory influence, needed to reward the exploration of new areas in the visual field. Bottom-up influence has been the most studied because of its direct relation to the visual cortical areas that process colour, light, orientation, and so on, leaving top-down influence and inhibitory patterns as a complement that encapsulates, as a bias, behaviours which are otherwise poorly explained. This approach does not allow a complete understanding of many human tasks and is only effective for very basic experiments. Given a certain task, how can we distinguish which top-down channels are most relevant when characterizing eye movement, and thus attention? Answering this question helps us build a better-performing model and also gives insight into other neuronal pathways that may modulate attention.
#
# ### Solution proposal and expectations
# This study proposes a method to explore top-down influence channels using a human free-viewing experiment in a web-page environment. First, given that everyday life trains humans heavily in web exploration, we expected a strong top-down bias in their behaviour (and thus poor prediction by bottom-up models such as Itti-Koch). Second, we expected to discern which top-down patterns are most influential in this particular task. Finally, we establish a mathematical method for applying this work to other tasks, which allows further development of the theory of how bottom-up and top-down patterns interact and are processed in the human brain.
#
# ### Results and conclusions
# Using *Normalized Scanpath Saliency* (NSS) as the performance metric, we found that a pre-trained neural-network algorithm (<a href="https://github.com/marcellacornia/sam" target="_blank">LSTM-based Saliency Attentive Model</a>) already performs better than a trained Itti-Koch model, with $NSS_{pre}=1.28$ where usually $NSS_{itti} \approx 1.06$ (according to the <a href="http://saliency.mit.edu/results_cat2000.html" target="_blank">MIT Database</a>). From this result we conclude that the neural network is able to grasp modulatory influences that are not included in Itti-Koch's bottom-up model. Our next step is therefore to quantify two new models. First, we train the same model on the web-page database we created and expect better performance, which would demonstrate the existence of modulatory influences unique to the web-page environment compared to other image sets. Second, we add a new channel to the network that encapsulates the spatial bias present in the dataset; with this final result we expect the best performance, and would conclude both that spatial bias modulates attention on a web page and that adding extra channels helps specialize salience-prediction models for more complex tasks.
# + [markdown] pycharm={}
# <a id='dataset'></a>
# ## Dataset used
# The Akori-web dataset is a collection of 80 subjects recorded at <a href="http://neurosistemas.cl" target="_blank"><NAME>'s Neurosistemas Laboratory</a> at Universidad de Chile. Subjects' ages ranged from 18 to 63 years, with 38 women and 42 men *(see age [histogram](#age_gender) below)*. In this experiment subjects watched a set of 14 commercial webpages, each page divided into a subset of 5 images to eliminate the effect of page scrolling (70 images total), plus an additional central fixation point at the start of each page *(see [experimental protocol](#experiment) figure below)*. In this way we fix a neutral starting fixation point for each webpage exploration and can later remove this fixation to unbias the final behavioural results.
# + [markdown] pycharm={}
# <a id='experiment'></a>
# <img src="https://storage.googleapis.com/akoriweb_figures/protocol.png" alt="Experimental Protocol" style="width:600px;height:140px" class="center">
# <p style="text-align: center;">
# Experimental protocol for one iteration; this process is repeated 14 times for each subject.
# </p>
# + [markdown] pycharm={}
# <a id='age_gender'></a>
# ### Age and gender stacked histogram
# The following code makes a `matplotlib` histogram. It is important to note that two male subjects' ages are marked as -1, which means their age was not reported. We retrieved this information from `subject_info.json`, a simple JSON file from the Akori web database *(if curious you can download the file from <a href="http://storage.googleapis.com/akoriweb_misc/subject_Info.json" target="_blank">here</a>)*.
# + pycharm={}
import scripts.data_loader as load
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams['figure.dpi'] = 100 # image quality
# load subjects' gender and age from google cloud database
bucket = 'akoriweb_misc'
file = 'subject_info.json'
request = load.request_from_bucket(bucket, file) # load subject info from my cloud database
data = request.json()
age = data['age']
ismale = data['ismale']
# separate two groups
female_ages = [x for index, x in enumerate(age) if ismale[index] == 0]
male_ages = [x for index, x in enumerate(age) if ismale[index] == 1]
plt.style.use("grayscale") # some styles: grayscale, default, seaborn, dark_background, bmh
plt.hist([male_ages, female_ages], stacked=True, label=["male", "female"])
plt.legend()
plt.xlabel("Age")
plt.ylabel("count")
plt.show()
# + [markdown] pycharm={}
# <a id='heatmap'></a>
# ## Making a heatmap
# First, we explore what subjects' eye movements look like with a heatmap visualization. Given a certain image, we make a heatmap over all subjects; to achieve this we need a function that loads a subject from the dataset as a fixation map, then overlay a Gaussian-filtered version on top of the webpage. First we need `scripts/data_loader.py` to request subject data from the cloud database *(you can download the full database from <a href="http://storage.googleapis.com/akoriweb_misc/dataset_json.7z" target="_blank">here</a>)*. Changing `map='priority'` lets us make a priority heatmap instead of a time heatmap, which may be relevant because what is seen first is not necessarily what is seen for the longest time.
#
# + pycharm={}
import scripts.data_loader as load
from scripts.qol import make_heatmap
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams['figure.dpi'] = 180 # image quality
page = 'VTR' # page name
number = 1 # page number
sigma = 30 # gaussian parameter
ignored = 2 # ignore first n-fixations
# make a list of subjects: "sujeto1.json", "sujeto2.json", ..., etc
dataset_dir = ["sujeto"+str(num+1)+".json" for num in range(80)]
# make the accumulated fixmap
total_fixmap, webpage = load.fixmap(dataset_dir, page, number, ignore=ignored, map='time', norm='norm')
# now make heatmap
heatmap = make_heatmap(total_fixmap, sigma)
# visualize results
plt.style.use("grayscale")
plt.figure()
plt.imshow(webpage)
plt.imshow(heatmap, alpha=0.9)
plt.tick_params(labelleft=False, labelbottom=False, bottom=False, left=False)
plt.show()
# + [markdown] pycharm={}
# <a id='salience'></a>
# ## Computing Salience Map
# The final goal of this project is predicting users' ocular behaviour when exploring web pages. We believe the best results come from fusing the best-performing algorithms from engineering with expertise in attentional mechanisms from neuroscience. To test this, we start with a model pre-trained on a non-web database *TODO: add an explanation of the MIT benchmark*, followed by a version of the same algorithm trained on our data, and a final optimization that specializes the algorithm to web-page environments. For each model we use NSS (Normalized Scanpath Saliency) to quantify performance.
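# For reference, NSS has a standard definition: z-score the saliency map, then average it over the fixated pixels. The sketch below illustrates that definition; the project's `scripts/salience_metrics.nss` used later may differ in details:

```python
import numpy as np

def nss_sketch(salmap, fixmap):
    # z-score the saliency map, then average it at fixated locations
    sal = (salmap - salmap.mean()) / salmap.std()
    return sal[fixmap > 0].mean()

# toy example: the single fixation lands on the most salient pixel,
# so the score is well above chance (chance NSS is ~0)
salmap = np.array([[0.0, 1.0], [0.0, 0.0]])
fixmap = np.array([[0, 1], [0, 0]])
score = nss_sketch(salmap, fixmap)
```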
# + [markdown] pycharm={}
# <a id='pretrained'></a>
# ### Pre-trained SAM-salience model vs All database
# To save time and processing power, we use precomputed results from *TODO: Cornia's model explanation*, obtained with the pretrained SAM-ResNet neural network.
#
# + pycharm={}
from scripts.salience_metrics import nss
from scripts.qol import log_progress
from PIL import Image
import scripts.data_loader as load
import numpy as np
from bs4 import BeautifulSoup
from io import BytesIO
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams['figure.dpi'] = 100 # image quality
scores = []
ignored = 2 # ignore first n-fixations
# make a list of subjects: "sujeto1.json", "sujeto2.json", ..., etc
subject_set = ["sujeto"+str(num+1)+".json" for num in range(80)]
# make the list of pages from this directory
bucket = "akoriweb_pages"
request = load.request_from_bucket(bucket, '')
soup = BeautifulSoup(request.text, 'html.parser')
keys = soup.find_all('key')
page_set = [key.get_text() for key in keys]
print("processing {np} pages x {ns} subjects, this may take a while...".format(np=len(page_set), ns=len(subject_set)))
for a_page in log_progress(page_set, every=1):
# get page and number from path
last = a_page.find('/')
page = a_page[:last] # page is until first '/'
first = a_page.rfind(' ')
last = a_page.rfind('.')
number = int(a_page[first:last]) # num is between last space and '.jpg'
# Make fixmap
total_fixmap, webpage = load.fixmap(subject_set, page, number, ignore=ignored, map='time')
# Load pre-trained salience from cloud
image_id = "{page} {num}_salience.jpg".format(page=page, num=number)
request = load.request_from_bucket('akoriweb_pretrained_sam', image_id)
salmap = np.array(Image.open(BytesIO(request.content)))
salmap_normalized = salmap/salmap.max() # normalize saliency map
# Compute score
scores.append(nss(salmap_normalized, total_fixmap))
# make histogram of scores and mean value
plt.style.use("grayscale") # some styles: grayscale, default, seaborn, dark_background, bmh
plt.hist(scores)
plt.title("Mean NSS-score = "+str(np.mean(scores)))
plt.xlabel("NSS")
plt.ylabel("count")
plt.show()
# + [markdown] pycharm={}
# ### Trained SAM-Salience model
# *TODO*
|
WebReport.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
__author__ = 'saeedamen' # <NAME>
#
# Copyright 2016 Cuemacro
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
# License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
# See the License for the specific language governing permissions and limitations under the License.
#
"""
backtest
Gives several examples of backtesting simple trading strategies, using Backtest (a lower level class)
"""
# -
# for backtest and loading data
from finmarketpy.backtest import BacktestRequest, Backtest
from findatapy.market import Market, MarketDataRequest, MarketDataGenerator
from findatapy.util.fxconv import FXConv
# for logging
from findatapy.util.loggermanager import LoggerManager
# for signal generation
from finmarketpy.economics import TechIndicator, TechParams
# for plotting
from chartpy import Chart, Style
# +
# housekeeping
logger = LoggerManager().getLogger(__name__)
import datetime
# +
# pick USD crosses in G10 FX
# note: we are calculating returns from spot (it is much better to use total return
# indices for FX, which include carry)
logger.info("Loading asset data...")
tickers = ['EURUSD', 'USDJPY', 'GBPUSD', 'AUDUSD', 'USDCAD',
'NZDUSD', 'USDCHF', 'USDNOK', 'USDSEK']
vendor_tickers = ['FRED/DEXUSEU', 'FRED/DEXJPUS', 'FRED/DEXUSUK', 'FRED/DEXUSAL', 'FRED/DEXCAUS',
'FRED/DEXUSNZ', 'FRED/DEXSZUS', 'FRED/DEXNOUS', 'FRED/DEXSDUS']
md_request = MarketDataRequest(
start_date="01 Jan 1989", # start date
finish_date=datetime.date.today(), # finish date
freq='daily', # daily data
data_source='quandl', # use Quandl as data source
tickers=tickers, # ticker (findatapy)
fields=['close'], # which fields to download
vendor_tickers=vendor_tickers, # ticker (Quandl)
    vendor_fields=['close'],                # which Quandl fields to download
cache_algo='internet_load_return') # how to return data
market = Market(market_data_generator=MarketDataGenerator())
asset_df = market.fetch_market(md_request)
spot_df = asset_df
# +
backtest = Backtest()
br = BacktestRequest()
fxconv = FXConv()
# set backtest dates and parameters
br.start_date = "02 Jan 1990"
br.finish_date = datetime.datetime.utcnow()
br.spot_tc_bp = 0    # bid/ask spread in bp (0 = no transaction costs; try 2.5 for a more realistic run)
br.ann_factor = 252
# have vol target for each signal
br.signal_vol_adjust = True
br.signal_vol_target = 0.05
br.signal_vol_max_leverage = 3
br.signal_vol_periods = 60
br.signal_vol_obs_in_year = 252
br.signal_vol_rebalance_freq = 'BM'
br.signal_vol_resample_freq = None
tech_params = TechParams()
tech_params.bb_period = 200
tech_params.bb_mult = 0.5
indicator = 'BB'
logger.info("Running backtest...")
# use technical indicator to create signals
# (we could obviously create whatever function we wanted for generating the signal dataframe)
tech_ind = TechIndicator()
tech_ind.create_tech_ind(spot_df, indicator, tech_params);
signal_df = tech_ind.get_signal()
contract_value_df = None
# use the same data for generating signals
backtest.calculate_trading_PnL(br, asset_df, signal_df, contract_value_df, run_in_parallel=False)
port = backtest.portfolio_cum()
port.columns = [indicator + ' = ' + str(tech_params.bb_period) + ' ' + str(backtest.portfolio_pnl_desc()[0])]
signals = backtest.portfolio_signal()
# print the last positions (we could also save as CSV etc.)
print(signals.tail(1))
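# The `signal_vol_*` settings above ask the backtester to scale signals toward an annualized volatility target, capping leverage. The sketch below illustrates the mechanic under simplifying assumptions; it is not finmarketpy's actual implementation:

```python
import numpy as np
import pandas as pd

def vol_target_leverage(returns, vol_target=0.05, max_leverage=3,
                        periods=60, obs_in_year=252):
    # annualized realized volatility over a rolling window
    realized = returns.rolling(periods).std() * np.sqrt(obs_in_year)
    # leverage that would bring the strategy to the target vol, capped
    return (vol_target / realized).clip(upper=max_leverage)

rng = np.random.RandomState(0)
daily_returns = pd.Series(rng.normal(0.0, 0.01, 500))  # synthetic daily returns
leverage = vol_target_leverage(daily_returns)
```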
# +
style = Style()
style.title = "FX trend strategy"
style.source = 'Quandl'
style.scale_factor = 1
style.file_output = 'fx-trend-example.png'
Chart().plot(port, style=style)
|
.venv/lib/python3.8/site-packages/finmarketpy_examples/notebooks/backtest_example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import json
from datetime import datetime
import matplotlib.ticker as ticker
# loading announcements
mazda=pd.read_json('dataset_mazda.json')
# ### Comparison of the models
# +
# order of the models on the plot
mazda_sort = mazda.groupby(['model-pojazdu'])['id'].aggregate(np.size).reset_index().sort_values('id', ascending=False)
# style and size of the plot
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(14, 12), sharex=True)
plt.xticks(rotation=70)
sns.set(style='whitegrid')
sns.despine(left=True)
# plot 1
sns.barplot(x='model-pojazdu', y='cena', data=mazda, ax = ax1,
ci=None, estimator=np.size, palette='Oranges_r', order = mazda_sort['model-pojazdu'])
# plot 2
sns.barplot(x='model-pojazdu', y='cena', data=mazda, ax = ax2,
ci=None, estimator=np.mean, palette='Oranges_r', order = mazda_sort['model-pojazdu'])
# plot 3
sns.barplot(x='model-pojazdu', y='cena', data=mazda, ax = ax3,
ci=None, estimator=np.median, palette='Oranges_r', order = mazda_sort['model-pojazdu'])
# description of the plot
ax1.set_title('The comparison of the models')
ax1.set_ylabel('Number of announcements')
ax2.set_ylabel('Mean of price')
ax3.set_ylabel('Median of price')
# +
# change int to str
mazda['rok-produkcji_str'] = mazda['rok-produkcji'].apply(lambda x: 'Rok: ' + str(x))
# exclusion of the models with a small number of ads (<20)
model_counts = mazda['model-pojazdu'].value_counts()
popular_models = model_counts[model_counts >= 20].index
mazda_filtred = mazda[mazda['model-pojazdu'].isin(popular_models)].sort_values(
    by=['rok-produkcji'], ascending=False)
# plot
print('Number of announcements for each model per year of production')
sns.catplot(x='id', y='rok-produkcji_str', col='model-pojazdu',
data= mazda_filtred,
kind='bar', estimator =np.size, height=5, aspect=0.7 ,col_wrap=5,
palette='Oranges_r')
# -
# ### More detailed data on one model example
# ##### What is the average price of Mazda 3?
# +
# choose mazda 3
mazda_3 = mazda[mazda['model-pojazdu'].isin(['3'])]
# style and size of the plot
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(10, 10), sharex=True)
plt.xticks(rotation=70)
sns.set(style='whitegrid')
sns.despine(left=True)
# plot 1
sns.barplot(x='rok-produkcji', y='cena', data=mazda_3, ax = ax1,
ci=None, estimator=np.size, palette='Oranges_r')
# plot 2
sns.barplot(x='rok-produkcji', y='cena', data=mazda_3, ax = ax2,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 3
sns.barplot(x='rok-produkcji', y='cena', data=mazda_3, ax = ax3,
ci=None, estimator=np.median, palette='Oranges_r')
# description of the plot
ax1.set_title('The Mazda 3 per year of production')
ax1.set_ylabel('Number of announcements')
ax2.set_ylabel('Mean of price')
ax3.set_ylabel('Median of price')
# -
# ##### How do the basic parameters affect the price?
# create a new column
country = ['Polska', None]
mazda_3 = mazda_3.assign(pochodzenie = mazda_3['kraj-pochodzenia'].apply(lambda x: 'Polska'
if x in country else 'Sprowadzane'))
# +
# style of the plot
sns.set(style='whitegrid')
sns.despine(left=True)
# plot
g = sns.relplot(data=mazda_3.sort_values(by=['generacja'],ascending=True),
x='rok-produkcji', y='cena', hue='generacja',
height=7, aspect=1.5, palette='Dark2' )
# description and parameters of the plot
plt.title('The relationship between price, year of production and model generation')
plt.ylim(0, 170000)
g.ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
# -
# focus on the one generation of mazda 3
mazda_3_IIIgen = mazda_3[(mazda_3['generacja'].isin(['III (2013-)'])) &
(mazda_3['rok-produkcji'] >=2013)].sort_values(by=['rok-produkcji'])
# +
# style of the plot
sns.set(style='whitegrid')
sns.despine(left=True)
#plot
g = sns.relplot(data= mazda_3_IIIgen,
x='moc', y='cena', hue = 'rok-produkcji_str', style = 'rodzaj-paliwa', size = 'rodzaj-paliwa', sizes =(50,200),
height=6, aspect=1.5, palette='Oranges')
# description and parameters of the plot
plt.title('The relationship between price, horsepower, year of production and type of fuel')
g.ax.xaxis.set_major_locator(ticker.MultipleLocator(10))
# +
# style of the plot
sns.set(style='whitegrid')
sns.despine(left=True)
# plot
g = sns.relplot(data=mazda_3_IIIgen,
x='przebieg', y='cena', hue = 'rok-produkcji_str', style = 'rodzaj-paliwa', size = 'rodzaj-paliwa',
sizes =(50,200),
height=6, aspect=1.5, palette='Oranges')
# description and parameters of the plot
plt.title('The relationship between price, mileage, year of production and type of fuel')
plt.xlim(0, 250000)
# -
# ##### How does the car's origin affect the price?
# +
# plot
print('The average price depends on year of production, place of production and seller')
sns.catplot(data= mazda_3_IIIgen,
x='rok-produkcji', y='cena', hue='pochodzenie', col='oferta-od',
kind='bar', estimator=np.mean, height=4, aspect=1.2, ci=None,
palette='Oranges_r');
# +
# plot
print('The average price depends on year of production, place of production and status "not crashed"')
sns.catplot(data= mazda_3_IIIgen,
x='rok-produkcji', y='cena', hue='bezwypadkowy', col='pochodzenie',
kind='bar', estimator=np.mean, height=4, aspect=1.2, ci=None,
palette='Oranges_r');
# +
print('The average price depends on year of production, place of production, seller and status "not crashed"')
#table
mazda_3_IIIgen.pivot_table(values='cena', index=['pochodzenie','oferta-od','bezwypadkowy'],
columns='rok-produkcji',aggfunc='mean').round(2)
# +
# order of states
regions = ['Zachodniopomorskie','Pomorskie', 'Warminsko-mazurskie', 'Podlaskie',
'Lubuskie', 'Wielkopolskie', 'Kujawsko-pomorskie', 'Mazowieckie',
'Dolnoslaskie', 'Lodzkie', 'Swietokrzyskie', 'Lubelskie',
'Opolskie', 'Slaskie', 'Malopolskie', 'Podkarpackie'] # , 'Inne'
# style of the plot
sns.set(style='whitegrid')
sns.despine(left=True)
print('The average price depends on where the car comes from')
# plot
sns.relplot(
data=mazda_3_IIIgen[mazda_3_IIIgen['lokalizacja_wojewodztwo'] != 'inne'],
x='rok-produkcji', y='cena', col='lokalizacja_wojewodztwo', hue='pierwszy-wlasciciel', # 'serwisowany-w-aso'
kind='line', linewidth=4, col_wrap=4, ci = None,
height=3, aspect=1.3, palette='Oranges' , col_order = regions)
# +
# plot parameters
f, ax = plt.subplots(figsize=(16, 5))
sns.set(style='whitegrid')
sns.despine(left=True)
plt.xticks(rotation=70)
# plot
sns.barplot(data=mazda_3_IIIgen,
x='lokalizacja_wojewodztwo', y='cena', hue = 'rok-produkcji',
estimator=np.mean, ci=None, palette='Oranges_r', order=regions)
ax.set_title('The average price depends on where the car comes from')
# +
print('The average price depends on where the car comes from')
#table
mazda_3_IIIgen.pivot_table(values='cena', index=['lokalizacja_wojewodztwo'],
columns='rok-produkcji',aggfunc='mean').round(2)
# -
# ##### How does the car equipment affect the price?
# +
# style and size of the plot
f, ((ax1, ax2, ax3), (ax4, ax5, ax6), (ax7, ax8, ax9)) = plt.subplots(3, 3, figsize=(17, 12), sharex=True)
#plt.xticks(rotation=70)
sns.set(style='whitegrid')
sns.despine(left=True)
print('The average price depends on the car equipment')
# plot 1
sns.barplot(x='rok-produkcji', y='cena', hue = 'bluetooth', data=mazda_3_IIIgen, ax = ax1,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 2
sns.barplot(x='rok-produkcji', y='cena', hue = 'czujnik-martwego-pola', data=mazda_3_IIIgen, ax = ax2,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 3
sns.barplot(x='rok-produkcji', y='cena', hue = 'kamera-cofania', data=mazda_3_IIIgen, ax = ax3,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 4
sns.barplot(x='rok-produkcji', y='cena', hue = 'czujnik-martwego-pola', data=mazda_3_IIIgen, ax = ax4,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 5
sns.barplot(x='rok-produkcji', y='cena', hue = 'klimatyzacja-dwustrefowa', data=mazda_3_IIIgen, ax = ax5,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 6
sns.barplot(x='rok-produkcji', y='cena', hue = 'nawigacja-gps', data=mazda_3_IIIgen, ax = ax6,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 7
sns.barplot(x='rok-produkcji', y='cena', hue = 'skrzynia-biegow', data=mazda_3_IIIgen, ax = ax7,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 8
sns.barplot(x='rok-produkcji', y='cena', hue = 'system-start-stop', data=mazda_3_IIIgen, ax = ax8,
ci=None, estimator=np.mean, palette='Oranges_r')
# plot 9
sns.barplot(x='rok-produkcji', y='cena', hue = 'tapicerka-skorzana', data=mazda_3_IIIgen, ax = ax9,
ci=None, estimator=np.mean, palette='Oranges_r')
# -
# ##### What is the price of a car with selected parameters?
# +
# table
mazda_3_IIIgen[(mazda_3_IIIgen['rok-produkcji'].isin(['2013','2014']))
&(mazda_3_IIIgen['rodzaj-paliwa'] == 'Benzyna')
#&(mazda_3_IIIgen['bezwypadkowy'] == True)
&(mazda_3_IIIgen['lokalizacja_wojewodztwo'].isin(['Zachodniopomorskie','Pomorskie',
'Warminsko-mazurskie', 'Podlaskie',
'Wielkopolskie', 'Kujawsko-pomorskie',
'Mazowieckie']))
&(mazda_3_IIIgen['bluetooth'] == True)
&(mazda_3_IIIgen['skrzynia-biegow'] == 'Manualna')
&(mazda_3_IIIgen['klimatyzacja-dwustrefowa'] == True)
].groupby(['bezwypadkowy','lokalizacja_wojewodztwo'])['cena'].agg(
[np.min, np.max, np.mean, np.median, np.size])
#'przebieg', , 'moc', 'oferta-od', 'pierwszy-wlasciciel' ,'pochodzenie'
|
cars - statistics and visualization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# #### This notebook investigates why diagonal noise appears at the intersection of the open boundary domains, and whether the artefact is caused by having different data at those grid points.
#
#
import numpy as np
import numpy.ma as ma
import netCDF4 as nc
import matplotlib.pyplot as plt
import matplotlib as mpl
from salishsea_tools import viz_tools, geo_tools,nc_tools
from scipy.interpolate import griddata, interp1d
import matplotlib.cm as cm
# ### First we will run a check on our 2d files
# +
west_bdy = nc.Dataset('/ocean/ssahu/CANYONS/bdy_files/2d_west_m04.nc');
west_ssh = west_bdy.variables['sossheig'];
north_bdy = nc.Dataset('/ocean/ssahu/CANYONS/bdy_files/2d_north_m04.nc');
north_ssh = north_bdy.variables['sossheig'];
south_bdy = nc.Dataset('/ocean/ssahu/CANYONS/bdy_files/2d_south_m04.nc');
south_ssh = south_bdy.variables['sossheig'];
# -
print(np.where(west_ssh == north_ssh), np.where(west_ssh == south_ssh))
print(np.mean(west_ssh), np.mean(south_ssh) , np.mean(north_ssh))
nc_tools.show_dimensions(west_bdy)
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
viz_tools.set_aspect(ax)
mesh = ax.pcolormesh(south_ssh[0,...], cmap =cm.ocean)
fig.colorbar(mesh)
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(100, 10))
viz_tools.set_aspect(ax)
mesh = ax.pcolormesh(south_ssh[0,...], cmap=cm.nipy_spectral_r)  # 'spectral' was renamed 'nipy_spectral' in newer matplotlib
fig.colorbar(mesh)
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(100, 10))
viz_tools.set_aspect(ax)
mesh = ax.pcolormesh(west_ssh[0,...], cmap=cm.nipy_spectral_r)
fig.colorbar(mesh)
plt.show()
print(west_ssh.shape, south_ssh.shape)
|
grid/Resolving_NEMO_bdy_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
#
# # Representational Similarity Analysis
#
#
# Representational Similarity Analysis is used to perform summary statistics
# on supervised classifications where the number of classes is relatively high.
# It consists in characterizing the structure of the confusion matrix to infer
# the similarity between brain responses and serves as a proxy for characterizing
# the space of mental representations [1]_ [2]_ [3]_.
#
# In this example, we perform RSA on responses to 24 object images (among
# a list of 92 images). Subjects were presented with images of human, animal
# and inanimate objects [4]_. Here we use the 24 unique images of faces
# and body parts.
#
# <div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not
# build the images below.</p></div>
#
# References
# ----------
#
# .. [1] <NAME>. "Multidimensional scaling, tree-fitting, and clustering."
# Science 210.4468 (1980): 390-398.
# .. [2] <NAME>. & <NAME>.. "Content and cluster analysis:
# assessing representational similarity in neural systems." Philosophical
# psychology 13.1 (2000): 47-76.
# .. [3] <NAME>., <NAME>., & Bandettini. P. "Representational
# similarity analysis-connecting the branches of systems neuroscience."
# Frontiers in systems neuroscience 2 (2008): 4.
# .. [4] <NAME>., <NAME>., & <NAME>. "Resolving human object
# recognition in space and time." Nature neuroscience (2014): 17(3),
# 455-462.
#
# +
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
# -
# Let's restrict the number of conditions to speed up computation
#
#
max_trigger = 24
conds = conds[:max_trigger] # take only the first 24 rows
# Define stimulus - trigger mapping
#
#
conditions = []
for c in conds.values:
cond_tags = list(c[:2])
cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
for k, i in enumerate(c[2:], 2)]
conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
# Let's make the event_id dictionary
#
#
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
# Read MEG data
#
#
# +
n_runs = 4 # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block, verbose='error')
for block in range(n_runs)] # ignore filename warnings
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
# -
# Epoch data
#
#
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
picks=picks, tmin=-.1, tmax=.500, preload=True)
# Let's plot some conditions
#
#
epochs['face'].average().plot()
epochs['not-face'].average().plot()
# Representational Similarity Analysis (RSA) is a neuroimaging-specific
# appellation for statistics applied to the confusion matrix,
# also referred to as the representational dissimilarity matrix (RDM).
#
# Compared to the approach from Cichy et al., we'll use a multiclass
# classifier (multinomial logistic regression) while the paper uses
# all pairwise binary classification tasks to build the RDM.
# Also, we use the ROC-AUC as the performance metric while the
# paper uses accuracy. Finally, for the sake of time, we apply
# RSA to a single window of data while Cichy et al. did it for all time
# instants separately.
#
#
# +
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='liblinear',
multi_class='auto'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test])
# -
# Compute confusion matrix using ROC-AUC
#
#
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj]
# Plot
#
#
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
# Confusion matrices related to mental representations have historically been
# summarized with dimensionality reduction using multi-dimensional scaling [1].
# See how the face samples cluster together.
#
#
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
|
dev/_downloads/a68128275cc59b074b8c9782296d1d4a/decoding_rsa.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sklearn.datasets import load_boston  # note: deprecated and removed in scikit-learn 1.2
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import pandas as pd
import numpy as np
# +
boston_dataset = load_boston()
boston_dataset
dados = pd.DataFrame(data= boston_dataset.data, columns= boston_dataset.feature_names)
dados = dados.drop(['INDUS', 'CHAS'], axis= 1)
log_precos = np.log(boston_dataset.target)
objetivo = pd.DataFrame(data= log_precos, columns= ['PRICE'])
# +
CRIME_IDX = 0
ZN_IDX = 1
CHAS_IDX = 2
RM_IDX = 4
PTRATIO_IDX = 8
# start from the mean value of every feature, as a single-row sample
casa_stat = dados.mean().values.reshape(1, 11)
casa_stat
# +
regr = LinearRegression().fit(dados, objetivo)
previsto = regr.predict(dados)
MSE = mean_squared_error(objetivo, previsto, squared= True)
RMSE = mean_squared_error(objetivo, previsto, squared= False)
print("MSE: ", MSE)
print("RMSE: ", RMSE)
# -
def obter_estimativa_log(n_quartos,
estudantes_por_turma,
proximo_rio= False,
alta_fidelidade= True):
    # configure the feature vector with the chosen parameters
casa_stat[0][RM_IDX] = n_quartos
casa_stat[0][PTRATIO_IDX] = estudantes_por_turma
casa_stat[0][CHAS_IDX] = proximo_rio
    # predict the log price
estimativa_log = regr.predict(casa_stat)[0][0]
if(alta_fidelidade):
margem_acima = estimativa_log + 2*RMSE
margem_abaixo = estimativa_log - 2*RMSE
intervalo = 95
else:
margem_acima = estimativa_log + RMSE
margem_abaixo = estimativa_log - RMSE
intervalo = 68
return estimativa_log, margem_acima, margem_abaixo, intervalo
# +
ZILLOW_MEDIAN_PRICE = 583.3
FATOR = ZILLOW_MEDIAN_PRICE / np.median(boston_dataset.target)
estimativa_log, acima, abaixo, intervalo = obter_estimativa_log(9,
estudantes_por_turma= 15,
proximo_rio= False,
alta_fidelidade= False)
# +
def obter_estimativa_em_dolar(rm, ptratio,
chas=False,
large_range= True):
""" Retorna o valor da estimativa de preço em dolares, de uma casa em Boston a partir
dos parametros
Parametros:
---------------
rm: Numero de quartos na residencia
ptratio: Numero de estudantes por professor na area
chas: True se a casa for perto do rio, falso se não
large_range: True para uma precisão de 95%, False para 68%
"""
if rm < 1 or ptratio < 1:
print("Valor inválido, tente novamente")
return
    est_log, maximo, minimo, intervalo = obter_estimativa_log(n_quartos= rm,
                                                             estudantes_por_turma= ptratio,
                                                             proximo_rio= chas,
                                                             alta_fidelidade= large_range)
est_dolar = np.e**est_log * 1000
est_maxima = np.e**maximo * 1000
est_minima = np.e**minimo * 1000
est_aprox = np.around(est_dolar, -3)
    est_aprox_min = np.around(est_minima, -3)
    est_aprox_max = np.around(est_maxima, -3)
return est_aprox, est_aprox_min, est_aprox_max, intervalo
# -
obter_estimativa_em_dolar(9, 15, chas= True, large_range= False)
|
Secao-5/Ferramenta de Avaliacao.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Cross-validation for parameter tuning, model selection, and feature selection
# *From the video series: [Introduction to machine learning with scikit-learn](https://github.com/justmarkham/scikit-learn-videos)*
# ## Agenda
#
# - What is the drawback of using the **train/test split** procedure for model evaluation?
# - How does **K-fold cross-validation** overcome this limitation?
# - How can cross-validation be used for selecting **tuning parameters**, choosing between **models**, and selecting **features**?
# - What are some possible **improvements** to cross-validation?
# ## Review of model evaluation procedures
# **Motivation:** Need a way to choose between machine learning models
#
# - Goal is to estimate likely performance of a model on **out-of-sample data**
#
# **Initial idea:** Train and test on the same data
#
# - But, maximizing **training accuracy** rewards overly complex models which **overfit** the training data
#
# **Alternative idea:** Train/test split
#
# - Split the dataset into two pieces, so that the model can be trained and tested on **different data**
# - **Testing accuracy** is a better estimate than training accuracy of out-of-sample performance
# - But, it provides a **high variance** estimate since changing which observations happen to be in the testing set can significantly change testing accuracy
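# The high-variance point is easy to demonstrate: re-running the split with different seeds gives noticeably different test accuracies. The sketch below uses the modern `sklearn.model_selection` module rather than the older `sklearn.cross_validation` used in this notebook:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
accs = []
for seed in range(10):
    # same model, same data — only the random split changes
    X_tr, X_te, y_tr, y_te = train_test_split(iris.data, iris.target,
                                              random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    accs.append(knn.score(X_te, y_te))
print(min(accs), max(accs))  # accuracy varies with the split
```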
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# +
# read in the iris data
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
# +
# use train/test split with different random_state values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
# check classification accuracy of KNN with K=5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
# -
# **Question:** What if we created a bunch of train/test splits, calculated the testing accuracy for each, and averaged the results together?
#
# **Answer:** That's the essence of cross-validation!
# ## Steps for K-fold cross-validation
# 1. Split the dataset into K **equal** partitions (or "folds").
# 2. Use fold 1 as the **testing set** and the union of the other folds as the **training set**.
# 3. Calculate **testing accuracy**.
# 4. Repeat steps 2 and 3 K times, using a **different fold** as the testing set each time.
# 5. Use the **average testing accuracy** as the estimate of out-of-sample accuracy.
# Diagram of **5-fold cross-validation:**
#
# 
# +
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.cross_validation import KFold
kf = KFold(25, n_folds=5, shuffle=False)
# print the contents of each training and testing set
print '{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations')
for iteration, data in enumerate(kf, start=1):
print '{:^9} {} {:^25}'.format(iteration, data[0], data[1])
# -
# - Dataset contains **25 observations** (numbered 0 through 24)
# - 5-fold cross-validation, thus it runs for **5 iterations**
# - For each iteration, every observation is either in the training set or the testing set, **but not both**
# - Every observation is in the testing set **exactly once**
# ## Comparing cross-validation to train/test split
# Advantages of **cross-validation:**
#
# - More accurate estimate of out-of-sample accuracy
# - More "efficient" use of data (every observation is used for both training and testing)
#
# Advantages of **train/test split:**
#
# - Runs K times faster than K-fold cross-validation
# - Simpler to examine the detailed results of the testing process
# ## Cross-validation recommendations
# 1. K can be any number, but **K=10** is generally recommended
# 2. For classification problems, **stratified sampling** is recommended for creating the folds
# - Each response class should be represented with equal proportions in each of the K folds
# - scikit-learn's `cross_val_score` function does this by default
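# A minimal sketch of the stratified idea — split each response class across the folds separately so class proportions are preserved in every fold. This is a simplification for illustration, not what scikit-learn actually does internally:

```python
from collections import defaultdict

def stratified_fold_assignment(y, n_folds):
    """Assign observation indices to folds, round-robin within each class."""
    by_class = defaultdict(list)
    for idx, label in enumerate(y):
        by_class[label].append(idx)
    folds = [[] for _ in range(n_folds)]
    for indices in by_class.values():
        for j, idx in enumerate(indices):
            folds[j % n_folds].append(idx)  # spread each class evenly across folds
    return folds

# iris-like response: 3 classes of 50 observations -> each of 5 folds gets 10 of each class
y = [0] * 50 + [1] * 50 + [2] * 50
folds = stratified_fold_assignment(y, 5)
```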
# ## Cross-validation example: parameter tuning
# **Goal:** Select the best tuning parameters (aka "hyperparameters") for KNN on the iris dataset
from sklearn.cross_validation import cross_val_score
# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
# search for an optimal value of K for KNN
k_range = range(1, 31)
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print(k_scores)
# +
import matplotlib.pyplot as plt
# %matplotlib inline
# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
# -
# ## Cross-validation example: model selection
# **Goal:** Compare the best KNN model with logistic regression on the iris dataset
# 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
# 10-fold cross-validation with logistic regression
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
print(cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())
# ## Cross-validation example: feature selection
# **Goal**: Select whether the Newspaper feature should be included in the linear regression model on the advertising dataset
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
# read in the advertising dataset
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# +
# create a Python list of three feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the DataFrame (X)
X = data[feature_cols]
# select the Sales column as the response (y)
y = data.Sales
# -
# 10-fold cross-validation with all three features
lm = LinearRegression()
scores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')
print(scores)
# fix the sign of MSE scores
mse_scores = -scores
print(mse_scores)
# convert from MSE to RMSE
rmse_scores = np.sqrt(mse_scores)
print(rmse_scores)
# calculate the average RMSE
print(rmse_scores.mean())
# 10-fold cross-validation with two features (excluding Newspaper)
feature_cols = ['TV', 'Radio']
X = data[feature_cols]
print(np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean())
# ## Improvements to cross-validation
# **Repeated cross-validation**
#
# - Repeat cross-validation multiple times (with **different random splits** of the data) and average the results
# - More reliable estimate of out-of-sample performance by **reducing the variance** associated with a single trial of cross-validation
#
# **Creating a hold-out set**
#
# - "Hold out" a portion of the data **before** beginning the model building process
# - Locate the best model using cross-validation on the remaining data, and test it **using the hold-out set**
# - More reliable estimate of out-of-sample performance since hold-out set is **truly out-of-sample**
#
# **Feature engineering and selection within cross-validation iterations**
#
# - Normally, feature engineering and selection occurs **before** cross-validation
# - Instead, perform all feature engineering and selection **within each cross-validation iteration**
# - More reliable estimate of out-of-sample performance since it **better mimics** the application of the model to out-of-sample data
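# The repeated cross-validation idea from the first bullet can be sketched in plain Python. The score function below is a stand-in for fitting and scoring a real model; only the splitting-and-averaging logic is the point:

```python
import random

def repeated_cv_mean(score_fn, n_samples, n_folds, n_repeats, seed=0):
    """Average a per-split score over several K-fold rounds with different shuffles."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_repeats):
        indices = list(range(n_samples))
        rng.shuffle(indices)  # a different random split of the data each repeat
        fold_size = n_samples // n_folds
        for i in range(n_folds):
            test = indices[i * fold_size:(i + 1) * fold_size]
            train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
            scores.append(score_fn(train, test))
    return sum(scores) / len(scores)

# dummy "accuracy": fraction of even-numbered observations in the testing fold
mean_score = repeated_cv_mean(
    lambda train, test: sum(1 for i in test if i % 2 == 0) / len(test),
    n_samples=20, n_folds=5, n_repeats=3)
```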
# ## Resources
#
# - scikit-learn documentation: [Cross-validation](http://scikit-learn.org/stable/modules/cross_validation.html), [Model evaluation](http://scikit-learn.org/stable/modules/model_evaluation.html)
# - scikit-learn issue on GitHub: [MSE is negative when returned by cross_val_score](https://github.com/scikit-learn/scikit-learn/issues/2439)
# - Section 5.1 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) (11 pages) and related videos: [K-fold and leave-one-out cross-validation](https://www.youtube.com/watch?v=nZAM5OXrktY) (14 minutes), [Cross-validation the right and wrong ways](https://www.youtube.com/watch?v=S06JpVoNaA0) (10 minutes)
# - <NAME>: [Accurately Measuring Model Prediction Error](http://scott.fortmann-roe.com/docs/MeasuringError.html)
# - Machine Learning Mastery: [An Introduction to Feature Selection](http://machinelearningmastery.com/an-introduction-to-feature-selection/)
# - Harvard CS109: [Cross-Validation: The Right and Wrong Way](https://github.com/cs109/content/blob/master/lec_10_cross_val.ipynb)
# - Journal of Cheminformatics: [Cross-validation pitfalls when selecting and assessing regression and classification models](http://www.jcheminf.com/content/pdf/1758-2946-6-10.pdf)
|
ML/DAT8-master/notebooks/13_cross_validation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Step 1: Function to print out a board.
# +
#from IPython.display import clear_output
def display_board(board):
#clear_output()
print(board[7]+'|'+board[8]+'|'+board[9])
print('─────')
print(board[4]+'|'+board[5]+'|'+board[6])
print('─────')
print(board[1]+'|'+board[2]+'|'+board[3])
# +
#Test 1:
test_board = ['#','X','O','X','O','X','O','X','O','X']
display_board(test_board)
# -
# ### Step 2: Function to take input & assign X or O.
def player_input():
""" OUTPUT = (Player 1 marker, PLayer 2 marker) """
marker = ''
#can use ~ while not (marker == 'X' or marker == 'O')
while marker != 'X' and marker != 'O':
marker = input("Player 1, choose X or O: ").upper()
if marker == 'X':
return ('X','O')
else:
return ('O','X')
# +
#Test 2:
player_input()
# -
player1_marker, player2_marker = player_input()
player1_marker
player2_marker
# ### Step 3: Function takes X or O, desired position & assign it.
def place_marker(board, marker, position):
board[position] = marker
# +
#Test 3:
place_marker(test_board,'$',8)
display_board(test_board)
# -
test_board
# ### Step 4: Function to take board list in, mark X or O, and to check if mark has won:
def win_check(board, mark):
# can do ~ (board[1] == board[2] == board[3] == mark) ...
# similarly for other 2 rows, 3 columns and 2 diagonals.
return ((board[7] == mark and board[8] == mark and board[9] == mark) or # across the top
(board[4] == mark and board[5] == mark and board[6] == mark) or # across the middle
(board[1] == mark and board[2] == mark and board[3] == mark) or # across the bottom
(board[7] == mark and board[4] == mark and board[1] == mark) or # down the left
(board[8] == mark and board[5] == mark and board[2] == mark) or # down the middle
(board[9] == mark and board[6] == mark and board[3] == mark) or # down the right
(board[7] == mark and board[5] == mark and board[3] == mark) or # diagonal 1
(board[9] == mark and board[5] == mark and board[1] == mark)) # diagonal 2
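# The comment at the top of `win_check` hints at a more compact form. One way to write it — shown here only as an illustrative alternative, not used by the game below — is to check every line in a list of winning position triples:

```python
WIN_LINES = [(7, 8, 9), (4, 5, 6), (1, 2, 3),   # rows
             (7, 4, 1), (8, 5, 2), (9, 6, 3),   # columns
             (7, 5, 3), (9, 5, 1)]              # diagonals

def win_check_compact(board, mark):
    """Equivalent to win_check: True if mark occupies every cell of any winning line."""
    return any(all(board[pos] == mark for pos in line) for line in WIN_LINES)

# same test board as used for win_check above
board = ['#', 'X', 'O', 'X', 'O', 'X', 'O', 'X', 'O', 'X']
```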
# +
#Test 4:
display_board(test_board)
win_check(test_board,'X')
# -
# ### Step 5: Function to randomly choose which player goes 1st.
# +
import random
def choose_first():
flip = random.randint(0,1)
if flip == 0:
return "Player 1"
else:
return "Player 2"
# -
# ### Step 6: Function to tell if free space is available.
def space_check(board,position):
return board[position] == ' '
# ### Step 7: Function to tell if board is full. If full true, else false.
def full_board_check(board):
for i in range(1,10):
if space_check(board,i):
return False
return True
# ### Step 8: Ask for next position, check step 6, & use position later.
def player_choice(board):
    position = 0
    while position not in range(1,10) or not space_check(board,position):
        choice = input("Choose a position (1-9): ")
        if choice.isdigit():  # guard against non-numeric input crashing int()
            position = int(choice)
    return position
# ### Step 9: Ask whether the players want to play again & return a boolean value.
def replay():
    choice = input("Play again? Y or N: ")
    return choice.upper() == "Y"
# ### Step 10: Use logic to sequentially set functions to run full game:
# +
# While loop to keep running the game
print ("Welcome to Tic Tac Toe game.")
while True:
# Play the game
## Set up - Board, Who's first, markers X,O
the_board = [' ']*10
player1_marker, player2_marker = player_input()
turn = choose_first()
print (turn + " will go first.")
play_game = input("Ready to play? Y or N?")
if play_game == "Y":
game_on = True
else:
game_on = False
## Game play
    ### Player one turn
while game_on:
if turn == "Player 1":
# Show the board
display_board(the_board)
# Choose the position
position = player_choice(the_board)
# Place the marker on position
place_marker(the_board, player1_marker, position)
# Check if they won
if win_check(the_board,player1_marker):
display_board(the_board)
print ("Player 1 has won!")
game_on = False
# Check if there's tie
else:
if full_board_check(the_board):
display_board(the_board)
print ("The game is tie!")
game_on = False
# No tie and no win? Player 2 turn
else:
turn = "Player 2"
### Player two turn
else:
# Show the board
display_board(the_board)
# Choose the position
position = player_choice(the_board)
# Place the marker on position
place_marker(the_board, player2_marker, position)
# Check if they won
if win_check(the_board,player2_marker):
display_board(the_board)
print ("Player 2 has won!")
game_on = False
# Check if there's tie
else:
if full_board_check(the_board):
display_board(the_board)
print ("The game is tie!")
game_on = False
# No tie and no win? Player 1 turn
else:
turn = "Player 1"
if not replay():
break
# Break out of while loop by replay()
|
Step wise game.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="jcmd36RffJbd" colab_type="code" outputId="c00d06d4-7e06-4bfb-9fb0-f8a3af7a9eaa" executionInfo={"status": "ok", "timestamp": 1584768926077, "user_tz": 0, "elapsed": 3734, "user": {"displayName": "D\u00e9<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 70}
#@title Check GPU
#@markdown Run this to connect to a Colab Instance, and see what GPU Google gave you.
# gpu = !nvidia-smi --query-gpu=gpu_name --format=csv
print(gpu[1])
print("The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.")
print("Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.")
# + id="lBAZ0KukZuq4" colab_type="code" outputId="49967231-4965-4733-8e49-685e849080de" executionInfo={"status": "ok", "timestamp": 1584772929782, "user_tz": 0, "elapsed": 1586, "user": {"displayName": "D\u00e9<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
# drive.flush_and_unmount()
drive.mount('/content/drive',force_remount=True)
# + id="96tKP5jQKwky" colab_type="code" outputId="3b755f61-585c-4908-8dc1-3da362aa4e0d" executionInfo={"status": "ok", "timestamp": 1584768948263, "user_tz": 0, "elapsed": 8382, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 105}
# !git clone https://github.com/minyuanye/SIUN.git
# + id="iZb59uxnK_DN" colab_type="code" outputId="b85ed6d5-2560-458c-d538-9988e9915db7" executionInfo={"status": "ok", "timestamp": 1584768948266, "user_tz": 0, "elapsed": 7966, "user": {"displayName": "D\u00<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd SIUN
# + id="yCpnTEeaLAAz" colab_type="code" outputId="a543a776-9e7f-4cb3-b81e-d9a095696038" executionInfo={"status": "ok", "timestamp": 1584768948267, "user_tz": 0, "elapsed": 7498, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd code
# + id="DnikyMqLRrB5" colab_type="code" colab={}
import shutil
# + id="IGr8AbA6Liul" colab_type="code" colab={}
# !mkdir input
# !mkdir output
#upload files
# + id="xNk9KpFWaIo_" colab_type="code" outputId="ca7840c9-7dc3-44a1-96c9-3ce7dbed2caa" executionInfo={"status": "ok", "timestamp": 1584768957307, "user_tz": 0, "elapsed": 15324, "user": {"displayName": "D\u00e9nes Csala", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 70}
# !mkdir "//content/drive/My Drive/kontext/deold/6"
# !mkdir "//content/drive/My Drive/kontext/deold/6b"
# !mkdir "//content/drive/My Drive/kontext/deold/6c"
# + id="mW3egLV1RMh9" colab_type="code" colab={}
files={'bp1':'bp1_2.00x_640x480.mp4',
'bp2':'bp2_2.00x_984x720.mp4',
'bp3':'bp3_2.00x_960x720.mp4',
'bp4':'bp4_2.00x_960x640.mp4',
'bp5':'bp5_2.00x_1280x720.mp4',
'ed':'ed_2.00x_960x720.mp4',
'kv1':'kv1_2.00x_960x720.mp4',
'kv2':'kv2_2.00x_640x480.mp4',
'szf2':'szf2_2.00x_1600x972.mp4'
}
# + id="VmRfZKG8RC4T" colab_type="code" outputId="3d5550df-6648-4471-89b7-66c349d852c4" executionInfo={"status": "ok", "timestamp": 1584779113256, "user_tz": 0, "elapsed": 5905987, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhZWBtkZa7W61uoK9OHg7aKMhRAw8hLh6aIf5r4Zw=s64", "userId": "11284580895088960951"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
for vid in ['bp2']:
# for file in [files[vid],files[vid][:-4]+'_12.mp4',files[vid][:-4]+'_21.mp4']:
# for k in ['','b','c']:
for file in [files[vid],files[vid][:-4]+'_21.mp4']:
for k in ['b']:
print(file,k)
#prepare env
shutil.rmtree('input')
shutil.rmtree('output')
# !mkdir output
# !mkdir input
#video to frames
import cv2
vidcap = cv2.VideoCapture('//content/drive/My Drive/kontext/deold/5'+k+'/'+file)
success,image = vidcap.read()
count = 0
while success:
cv2.imwrite("input/frame%.5d.jpg" % count, image) # save frame as JPEG file
success,image = vidcap.read()
count += 1
print('Read finished.')
# deblur
# !python deblur.py --apply --dir-path=input --result-dir=output --gpu=0
# frames to video
import numpy as np
import os
from os.path import isfile, join
pathIn= 'output/'
pathOut = '//content/drive/My Drive/kontext/deold/6'+k+'/'+file
fps = vidcap.get(cv2.CAP_PROP_FPS)
frame_array = []
fs = [f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]
#for sorting the file names properly
fs.sort(key = lambda x: x[5:-4])
for i in range(len(fs)):
filename=pathIn + fs[i]
#reading each files
img = cv2.imread(filename)
height, width, layers = img.shape
size = (width,height)
#inserting the frames into an image array
frame_array.append(img)
out = cv2.VideoWriter(pathOut,cv2.VideoWriter_fourcc(*'DIVX'), fps, size)
for i in range(len(frame_array)):
# writing to a image array
out.write(frame_array[i])
out.release()
# + id="5Cer-pJbJbCJ" colab_type="code" colab={}
|
deold/6_deblur.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 3 - Visualization
# %matplotlib inline
# The special command above will make all the `matplotlib` images appear in the notebook.
# +
import numpy as np
import random as py_random
import numpy.random as np_random
import time
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid")
from IPython.display import YouTubeVideo, Image
# -
# ## Directions
#
# **Failure to follow the directions will result in a "0"**
#
# The due dates for each are indicated in the Syllabus and the course calendar. If anything is unclear, please email <EMAIL> the official email for the course or ask questions in the Lab discussion area on Blackboard.
#
# The Labs also present technical material that augments the lectures and "book". You should read through the entire lab at the start of each module.
#
# ### General Instructions
#
# 1. You will be submitting your assignment to Blackboard. If there are no accompanying files, you should submit *only* your notebook and it should be named using *only* your JHED id: fsmith79.ipynb for example if your JHED id were "fsmith79". If the assignment requires additional files, you should name the *folder/directory* your JHED id and put all items in that folder/directory, ZIP it up (only ZIP...no other compression), and submit it to Blackboard.
#
# * do **not** use absolute paths in your notebooks. All resources should appear in the same directory as the rest of your assignments.
# * the directory **must** be named your JHED id and **only** your JHED id.
#
# 2. Data Science is as much about what you write (communicating) as the code you execute (researching). In many places, you will be required to execute code and discuss both the purpose and the result. Additionally, Data Science is about reproducibility and transparency. This includes good communication with your team and possibly with yourself. Therefore, you must show **all** work.
#
# 3. Avail yourself of the Markdown/Codecell nature of the notebook. If you don't know about Markdown, look it up. Your notebooks should not look like ransom notes. Don't make everything bold. Clearly indicate what question you are answering.
#
# 4. Submit a cleanly executed notebook. The first code cell should say `In [1]` and each successive code cell should increase by 1 throughout the notebook.
# ## Exercises
#
# For each of the following charts, you should:
#
# 1. What do you think the main story of the chart is? Does the chart really tell it?
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
# You may insert as many Markdown and Code cells below each chart as you need.
# **Chart 1.**
Image( "resources/chart_01.jpg", width=500)
#
# 1. What do you think the main story of the chart is. Does the chart really tell it?
# The chart shows the amount of sales of the major fast food chains
#
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
# (1) I don't think it violates Cleveland's principle. It's not hard to decode it to quantitative values.
# (2) I think it violates Ware's principles, as we have only limited ability to perceive differences in sizes. For example, KFC, Wendy's and Burger King look almost the same to me.
# (3) I think it adheres to Gestalt's principles. For example, the description text box is connected to the logo. This is an example of the Principle of Connection.
#
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
# +
sales = [4.1, 4.3, 8, 8.2, 9.4,11.3,21]
x = range( len( sales))
width = 1/1.5
figure, axes = plt.subplots()
axes.bar(x, sales, width, color="steelblue", align="center")
axes.set_xticks([0, 1, 2, 3, 4,5,6])
axes.set_xticklabels(["Starbucks", "Taco Bell", "Pizza Hut", "KFC", "Wendy's", "Burger King", "McDonald's"])
axes.yaxis.grid( b=True, which="major")
axes.set_ylim((0, 30))
plt.show()
# -
# **Chart 2.**
Image( "resources/chart_02.jpg", width=500)
# + active=""
# 1. What do you think the main story of the chart is. Does the chart really tell it?
# The chart shows two things:
# (1) the percent of imported food in each food category
# (2) the percent of imported food by country for each food category
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
# (1) I think it adheres to Cleveland's principle. The description and the quantitative numbers help us decode the chart.
# (2) I think it violates Ware's principles, as we have only limited ability to perceive differences in sizes. For example, it is very hard to tell from the fillet size that the imported seafood is 12%. Also, the imported honey seems much bigger than 60%.
# (3) I think it violates Gestalt's principles. For example, the fish fillet is next to the list of countries. Since countries and foods are different things, the chart violates both the Principle of Continuity and the Principle of Proximity.
#
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
#
# +
#We don't want to use pie charts. I think a data table is better than any chart here.
from IPython.display import HTML, display
data = [["","Fruit","Honey","Vege","Lamb","Seafood"],
["Import","51%","61%","20%","52%","88%"],
["Not Import","49%","39%","80%","48%","12%"],
]
display(HTML(
'<table><tr>{}</tr></table>'.format(
'</tr><tr>'.join(
'<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row)) for row in data)
)
))
# -
# **Chart 3.**
Image( "resources/chart_03.jpg", width=500)
# 1. What do you think the main story of the chart is. Does the chart really tell it?
# The chart shows the minimum age to drink in Canada.
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
# (1) I think it adheres to Cleveland's principle. The description and the quantitative numbers can help us decode the chart very easily.
# (2) I think it adheres to Ware's principles, as we have good ability to perceive differences in length and height.
#
# (3) I think it adheres to Gestalt's principles. For example, all states are in blue as they are states in Canada. This adheres to the Principle of Similarity
#
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
#
# +
#I think a data table is better than any chart here.
from IPython.display import HTML, display
data = [['Ontario','Quebec','Nova Scotia','New Brunswick','Manitoba','British Columbia','Prince Edward Island','Saskatchewan','Alberta','Newfoundland and Labrador'],
['<h1 style="background-color:Tomato;">18','<h1 style="background-color:DodgerBlue;">19',
'<h1 style="background-color:DodgerBlue;">19','<h1 style="background-color:DodgerBlue;">19',
'<h1 style="background-color:Tomato;">18',
'<h1 style="background-color:DodgerBlue;">19','<h1 style="background-color:DodgerBlue;">19','<h1 style="background-color:DodgerBlue;">19',
'<h1 style="background-color:Tomato;">18','<h1 style="background-color:DodgerBlue;">19'],
]
display(HTML(
'<table><tr>{}</tr></table>'.format(
'</tr><tr>'.join(
'<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row)) for row in data)
)
))
# -
# **Chart 4**
Image( "resources/chart_04.jpg", width=500)
# 1. What do you think the main story of the chart is. Does the chart really tell it?
# I think the chart compares the percent of >25 and <=25 enrollment.
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
# The chart has exactly the same problem as the example in the Module 4 instruction document. "The problem with this chart is that perceptually we want to look perpendicular differences between the curves, as if they were a map and we had to navigate on a ship. "
#
# (1) I think it adheres to Cleveland's principle. The description and the quantitative numbers can help us decode the chart very easily.
# (2) Also, the start point of Y is not 0.
# (3) I think it violates Ware's principles, as we cannot perceive differences in sizes and shapes accurately.
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
# +
over25 = [28, 29.2, 32.8, 33.6, 33.0]
younger25 = [72, 70.8, 67.2, 66.4, 67.0]
diff = [a_i - b_i for a_i, b_i in zip(younger25, over25)]
x = range( len( diff))
width = 1/1.5
figure, axes = plt.subplots()
axes.bar(x, diff, width, color="steelblue", align="center")
axes.set_xticks([0, 1, 2, 3, 4])
axes.set_xticklabels(["1972", "1973", "1974", "1975", "1976"])
axes.yaxis.grid( b=True, which="major")
axes.set_ylim((0, 50))
plt.show()
# -
# **Chart 5**
Image( "resources/chart_05.jpg", width=500)
# 1. What do you think the main story of the chart is. Does the chart really tell it?
#
# I think the chart shows the job loss by quarter.
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
#
# (1) I do not think the Y starts from 0, as the height of 7 million seems much shorter than the height of 15 million.
# (2) I think it adheres to Ware's principles, as we have good ability to perceive differences in length and height.
# (3) I think it adheres to Gestalt's principles, especially the Principle of Similarity and Principle of Continuity
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
# +
figure = plt.figure(figsize=(10, 6))
xs = ["Dec 07", "Sep 08", "March 09", "Jun 10"]
ys = [7, 9, 13.5, 15]
axes = figure.add_subplot(1, 1, 1)
axes.plot(xs, ys, "o-", color="steelblue")
axes.set_xlim((0, 3))
axes.set_ylim((0, 20))
axes.set_xticks(xs)
#axes.set_xticklabels("Dec 07", "Sep 08", "March 09", "Jun 10")
# -
# **Chart 6**
Image( "resources/chart_06.jpg", width=500)
# 1. What do you think the main story of the chart is. Does the chart really tell it?
#
# I think the chart shows the unemployment rate per month in 2011.
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
#
# (1) Y does not start from 0. By first impression, you would think 8.6% is much lower than 9.0%.
# (2) I think it adheres to Ware's principles, as we have good ability to perceive differences in height.
# (3) I think it adheres to Gestalt's principles, especially the Principle of Similarity and Principle of Continuity
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
#
# +
figure = plt.figure(figsize=(10, 6))
xs = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov"]
ys = [9.0,8.9,8.8,9.0,9.1,9.2,9.1,9.1,9.1,9.0,8.6]
axes = figure.add_subplot(1, 1, 1)
axes.plot(xs, ys, "o-", color="steelblue")
axes.set_xlim((0, 10))
axes.set_ylim((0, 12))
axes.set_xticks(xs)
# -
# ** Chart 7 **
Image( "resources/chart_07.jpg", width=500)
# 1. What do you think the main story of the chart is. Does the chart really tell it?
#
# I think the chart shows the border apprenhensions October - April in 2011, 2012 and 2013.
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
#
# (1) Y does not start from 0. By first impression, you would think 165,244 is much lower than 192,298
# (2) I think it adheres to Ware's principles, as we have good ability to perceive differences in height.
# (3) I think it adheres to Gestalt's principles, especially the Principle of Similarity and Principle of Continuity
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
# +
y = [165244, 170223, 192298]
x = range( len( y))
width = 1/1.5
figure, axes = plt.subplots()
axes.bar(x, y, width, color="steelblue", align="center")
axes.set_xticks([0, 1, 2])
axes.set_xticklabels(["2011", "2012", "2013"])
axes.yaxis.grid( b=True, which="major")
axes.set_ylim((0, 200000))
plt.show()
# -
# ** Chart 8 **
Image( "resources/chart_08.jpg", width=500)
# 1. What do you think the main story of the chart is. Does the chart really tell it?
#
# I think the chart shows a list of frequencies.
#
# 2. Discuss how the chart either adheres to or violates the principles discussed in the module (Cleveland, Ware and Gestalt).
#
# (1) We should never use a pie chart.
#
# (2) I think it violates Ware's principles, as we have poor ability to perceive differences in 2D/3D size, shape and color. I don't understand the meaning of the different colors.
#
# (3) I think it adheres to Cleveland's principles, as we can easily decode the chart to a table.
#
# 3. As best as you can, decode the values and present an alternatives using HTML, `matplotlib` and possibly `seaborn` **following the guidelines laid down in the module**. Why do you think this is a better alternative? What does it fix from the original? The solution may involve more than one chart or even no chart at all (a table).
# +
y = [2, 8, 19, 42, 29]
x = range( len( y))
width = 1/1.5
figure, axes = plt.subplots()
axes.bar(x, y, width, color="steelblue", align="center")
axes.set_xticks([0, 1, 2,3,4])
axes.set_xticklabels(["Daily","Every few days","Every few weeks","Every few months","Never"])
axes.yaxis.grid( b=True, which="major")
axes.set_ylim((0, 50))
plt.show()
# -
# **Chart 9**
#
# The following chart is from an interesting article on data science, [What do Data Scientists do all Day](http://www.wsj.com/articles/what-data-scientists-do-all-day-at-work-1457921541?mod=e2fb). To me, and perhaps to you, this is the most interesting quote:
#
# >WSJ: How do you think data science will change in the next five years?
#
# >DR. NARASIMHAN: I’m now coming to the feeling that data science is no longer a zero or one thing, where you either are or are not a data scientist. All of us, whether we have the title or not, are making data-driven decisions. All of us are getting savvier.
#
# >I think techniques in data science are becoming more commoditized and are applicable to more of us, like in fantasy sports with the handicapping of different games or in making our own financial decisions. Take shopping. We want to buy the same thing our friends are buying, and we want the absolute lowest price we could find. Amazon has done this to some extent, but even when we know something is on sale for just an hour, there could be enough tools listening to do that.
#
# >Data science will percolate into all of our daily lives. Pick dating and online relationships. Think of when you get profiled—if an algorithm picks a partner, she will have certain attributes and it’s probably a good [match]. If any field today is not data driven, those are ripe for disruption.
Image( "resources/chart_13.jpg", width=500)
# What's wrong with this set of charts? Can you fix it?
# +
import pandas as pd
df = pd.DataFrame()
df['y'] = [11,32,46,12,19,42,31,7]
df['type'] = ["Basic Exp","Basic Exp","Basic Exp","Basic Exp","Data Cln","Data Cln","Data Cln","Data Cln"]
df['hr'] = ["<=1","1-4","1-3","4","<=1","1-4","1-3","4"]
# We don't want to use a pie chart here either;
# a grouped bar chart shows the same values more clearly:
ax = sns.barplot(x="type", y="y", hue="hr", data=df)
# -
|
lab4/apl/yxu29.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + papermill={"duration": 0.120073, "end_time": "2020-07-13T12:59:07.453859", "exception": false, "start_time": "2020-07-13T12:59:07.333786", "status": "completed"} tags=[]
from edc import setup_environment_variables
setup_environment_variables()
# + papermill={"duration": 0.047311, "end_time": "2020-07-13T12:59:07.522944", "exception": false, "start_time": "2020-07-13T12:59:07.475633", "status": "completed"} tags=[]
from edc import check_compatibility
check_compatibility("user-0.19.6")
# + [markdown] papermill={"duration": 0.014435, "end_time": "2020-07-13T12:59:07.554797", "exception": false, "start_time": "2020-07-13T12:59:07.540362", "status": "completed"} tags=[]
# # EDC Sentinel Hub: Data Fusion
#
# ## Example 1: Filling clouds in a Sentinel-2 image with Sentinel-1 SAR data
#
# This notebook shows how to request [S1 GRD](https://docs.sentinel-hub.com/api/latest/#/data/Sentinel-1-GRD) and [S2 L2A](https://docs.sentinel-hub.com/api/latest/#/data/Sentinel-2-L2A) data from the [Sentinel Hub API](https://docs.sentinel-hub.com/api/latest/#/API/), and how to merge the data in an [evalscript](https://docs.sentinel-hub.com/api/latest/#/Evalscript/).
#
# The S2 true-color composite is returned (R: B04, G: B03, B: B02), while cloud-covered areas are replaced by the S1 VV polarization band.
#
# ### Prerequisites
#
# This notebook requires an active subscription to:
#
# - EDC Sentinel Hub
#
# + papermill={"duration": 0.062443, "end_time": "2020-07-13T12:59:07.641306", "exception": false, "start_time": "2020-07-13T12:59:07.578863", "status": "completed"} tags=[]
# Request tools imports
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
# + papermill={"duration": 0.189269, "end_time": "2020-07-13T12:59:07.847803", "exception": false, "start_time": "2020-07-13T12:59:07.658534", "status": "completed"} tags=[]
# Utilities
import os
import shapely.geometry
import IPython.display
# + papermill={"duration": 0.394387, "end_time": "2020-07-13T12:59:08.265425", "exception": false, "start_time": "2020-07-13T12:59:07.871038", "status": "completed"} tags=[]
# %matplotlib inline
# + [markdown] papermill={"duration": 0.030435, "end_time": "2020-07-13T12:59:08.318967", "exception": false, "start_time": "2020-07-13T12:59:08.288532", "status": "completed"} tags=[]
# ### Setup
# + papermill={"duration": 0.038397, "end_time": "2020-07-13T12:59:08.381609", "exception": false, "start_time": "2020-07-13T12:59:08.343212", "status": "completed"} tags=[]
# Pass the Sentinel Hub Client ID and Secret to variables to be used in the request.
# my_client_id = %env SH_CLIENT_ID
# my_client_secret = %env SH_CLIENT_SECRET
# + papermill={"duration": 0.034877, "end_time": "2020-07-13T12:59:08.438578", "exception": false, "start_time": "2020-07-13T12:59:08.403701", "status": "completed"} tags=[]
# Create an OAuth2 session based on the client ID
client = BackendApplicationClient(client_id=my_client_id)
oauth = OAuth2Session(client=client)
# + [markdown] papermill={"duration": 0.048362, "end_time": "2020-07-13T12:59:08.519921", "exception": false, "start_time": "2020-07-13T12:59:08.471559", "status": "completed"} tags=[]
# The Sentinel Hub API uses OAuth2 authentication and requires an [access token](https://docs.sentinel-hub.com/api/latest/#/API/authentication). Tokens expire after a while, but a new one can be requested with the following command.
# + papermill={"duration": 0.159111, "end_time": "2020-07-13T12:59:08.702303", "exception": false, "start_time": "2020-07-13T12:59:08.543192", "status": "completed"} tags=[]
# Get a token for the session
token = oauth.fetch_token(token_url='https://services.sentinel-hub.com/oauth/token',
client_id=my_client_id, client_secret=my_client_secret)
# + [markdown] papermill={"duration": 0.016321, "end_time": "2020-07-13T12:59:08.748302", "exception": false, "start_time": "2020-07-13T12:59:08.731981", "status": "completed"} tags=[]
# ## Setting an area of interest
#
# We will download Sentinel-1 and Sentinel-2 imagery of the hills located to the north of Ljubljana, Slovenia.
# The bounding box is in the WGS84 coordinate system and consists of the longitude and latitude coordinates of the lower-left and upper-right corners.
# + papermill={"duration": 0.038592, "end_time": "2020-07-13T12:59:08.809297", "exception": false, "start_time": "2020-07-13T12:59:08.770705", "status": "completed"} tags=[]
# Set bounding box coordinates for the area of interest
bbox = (14.37595, 46.09347, 14.56924, 46.19266)
# + papermill={"duration": 0.031245, "end_time": "2020-07-13T12:59:08.864244", "exception": false, "start_time": "2020-07-13T12:59:08.832999", "status": "completed"} tags=[]
# Display the coordinates on a map
IPython.display.GeoJSON(shapely.geometry.box(*bbox).__geo_interface__)
# + [markdown] papermill={"duration": 0.031888, "end_time": "2020-07-13T12:59:08.922704", "exception": false, "start_time": "2020-07-13T12:59:08.890816", "status": "completed"} tags=[]
# ## API request
#
# We need to specify the arguments of the POST request; more information about all the available options is [here](https://docs.sentinel-hub.com/api/latest/reference/#operation/process):
#
# - The access point URL: set it to `https://services.sentinel-hub.com/api/v1/process`
#
# - `json` - the contents of the request containing:
#     - `input`: (required) specifies the data to request:
#         - `bounds`: defines the request bounds by specifying the bounding box and/or geometry for the request.
#         - `data`: describes the data being requested along with certain processing and filtering parameters.
#     - `output`: (optional) lets you set parameters for the returned data:
#         - `width`: the requested image width, an integer between 1 and 2500.
#         - `height`: the requested image height, an integer between 1 and 2500.
#     - `evalscript`: calculates the output values for each pixel
#
# + [markdown] papermill={"duration": 0.029336, "end_time": "2020-07-13T12:59:08.969703", "exception": false, "start_time": "2020-07-13T12:59:08.940367", "status": "completed"} tags=[]
# #### Step 1
# First, set the input options, in the form of a dictionary of key/value pairs.
#
# 1. Specify the bounds of your request using the bounding box previously described and specifying the [projection](
# https://docs.sentinel-hub.com/api/latest/#/API/crs?id=crs-support) (one of the CRS supported by Sentinel Hub API).
#
# 2. Specify the data to fetch. Since this example uses two data sources (Sentinel-1 and Sentinel-2), the options for each sensor are given in a list of dictionaries.
# + papermill={"duration": 0.040174, "end_time": "2020-07-13T12:59:09.037366", "exception": false, "start_time": "2020-07-13T12:59:08.997192", "status": "completed"} tags=[]
# 1. Bounds
bounds = {"properties":{"crs":"http://www.opengis.net/def/crs/OGC/1.3/CRS84"}, "bbox": bbox}
# 2. Data
data = [{"id": "s1",
"type": "S1GRD",
"dataFilter": {"timeRange": {"from": "2020-02-01T00:00:00Z",
"to": "2020-02-06T00:00:00Z"},
"mosaickingOrder": "mostRecent"}},
{"id": "l2a",
"type": "S2L2A",
"dataFilter": {"timeRange": {"from": "2020-02-01T00:00:00Z",
"to": "2020-02-06T00:00:00Z"},
"mosaickingOrder": "mostRecent"}}]
# Set the options to the input
input_options = {"bounds": bounds, "data": data,}
# + [markdown] papermill={"duration": 0.026794, "end_time": "2020-07-13T12:59:09.087459", "exception": false, "start_time": "2020-07-13T12:59:09.060665", "status": "completed"} tags=[]
# #### Step 2
#
# Then set the output options. The default output size of the requested image is 256x256 pixels, so here we will request a larger output. Since the result will only be displayed, we also set the output format to JPEG.
# + papermill={"duration": 0.034114, "end_time": "2020-07-13T12:59:09.156300", "exception": false, "start_time": "2020-07-13T12:59:09.122186", "status": "completed"} tags=[]
output_options = {"width": 640, "height": 640, "responses": [{"format": {"type": "image/jpeg",
"quality": 90}}]}
# + [markdown] papermill={"duration": 0.024024, "end_time": "2020-07-13T12:59:09.215518", "exception": false, "start_time": "2020-07-13T12:59:09.191494", "status": "completed"} tags=[]
# #### Step 3
#
# The [evalscript](https://docs.sentinel-hub.com/api/latest/#/Evalscript/) is a piece of JavaScript code that lets you process the pixels of the returned images.
#
# In the current example, the S1 and S2 bands needed are called in the `setup` function of the script.
#
# The `evaluatePixel` function returns the S1 VV polarization band for the pixels in the S2-L2A SCL band (see [here](https://docs.sentinel-hub.com/api/latest/#/data/Sentinel-2-L2A)) classified as:
#
# - 7 - Clouds low probability / Unclassified
# - 8 - Clouds medium probability
# - 9 - Clouds high probability
# - 10 - Cirrus
#
# For all the other pixels the S2 [True Color](https://www.sentinel-hub.com/eoproducts/true-color) visualisation is returned (scaled by 2.5 for better viewing).
# + papermill={"duration": 0.033356, "end_time": "2020-07-13T12:59:09.273068", "exception": false, "start_time": "2020-07-13T12:59:09.239712", "status": "completed"} tags=[]
evascript = """
//VERSION=3
function setup (){
return {
input: [
{datasource: "s1", bands:["VV"]},
{datasource: "l2a", bands:["B02", "B03", "B04", "SCL"], units:"REFLECTANCE"}],
output: [
{id: "default", bands: 3, sampleType: SampleType.AUTO}
]
}
}
function evaluatePixel(samples, inputData, inputMetadata, customData, outputMetadata) {
var sample = samples.s1[0]
var sample2 = samples.l2a[0]
if ([7, 8, 9, 10].includes(sample2.SCL)) {
return {
default: [sample.VV, sample.VV, sample.VV]
}
} else {
return {
default: [sample2.B04*2.5, sample2.B03*2.5, sample2.B02*2.5]
}
}
}
"""
# + [markdown] papermill={"duration": 0.02396, "end_time": "2020-07-13T12:59:09.322036", "exception": false, "start_time": "2020-07-13T12:59:09.298076", "status": "completed"} tags=[]
# #### Step 4 (final)
#
# The different parts of the request built above are merged together in the `oauth.post` command and the request is posted. If all the elements are correct, the command should return a `200` status.
# + papermill={"duration": 1.255128, "end_time": "2020-07-13T12:59:10.595203", "exception": false, "start_time": "2020-07-13T12:59:09.340075", "status": "completed"} tags=[]
response = oauth.post('https://services.sentinel-hub.com/api/v1/process',
json={"input": input_options,
"evalscript": evascript,
"output": output_options,
})
print("Request status: %s, %s" % (response.status_code, response.reason))
# + [markdown] papermill={"duration": 0.019933, "end_time": "2020-07-13T12:59:10.639156", "exception": false, "start_time": "2020-07-13T12:59:10.619223", "status": "completed"} tags=[]
# If the request was successful, you can now observe the image:
# + papermill={"duration": 0.049754, "end_time": "2020-07-13T12:59:10.706440", "exception": false, "start_time": "2020-07-13T12:59:10.656686", "status": "completed"} tags=[]
IPython.display.Image(response.content)
|
notebooks/contributions/EDC_SentinelHub_DataFusion_Basic.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# name: python3
# ---
# +
# Load data
import os
import numpy as np
from pydub import AudioSegment
import csv
import scipy.io.wavfile
def load_data(path):
"""
Output structure:
list of dictionaries
playlist[n] = {
name (string)
audio_array (np.array)
sampling_rate (double)
...
real_bpm (Int)
}
"""
print(f"Loading data from {path}...")
playlist = []
for root, dirs, files in os.walk(path, topdown=False):
for file in files:
if file == ".DS_Store":
continue
audio_file = AudioSegment.from_wav(os.path.join(root, file))
if audio_file.channels > 1:
# make sure we are only using one channel. It may not matter.
audio_file = audio_file.split_to_mono()[0]
audio_array = np.array(audio_file.get_array_of_samples(), dtype=float)
song_name, artist_name = extract_names(file)
song_dict = {
"artist_name": artist_name,
"song_name": song_name,
"audio_segment": audio_file,
"audio_array": audio_array,
"song_path": os.path.join(root, file),
}
playlist.append(song_dict)
playlist = basic_feature_extraction(playlist)
# playlist = load_true_bpm(playlist)
print(f"\t{len(playlist)} songs loaded")
return playlist
def extract_names(file):
    # expects filenames like "01 Song Name - Artist Name.wav"
    song_name, _, artist_name = file.partition(" - ")
    song_name = song_name[3:]  # drop the leading track number and space
    artist_name, _, _ = artist_name.partition(".")
    return song_name, artist_name
def basic_feature_extraction(playlist):
"""
Output structure:
list of dictionaries
playlist[n] = {
name (string)
audio_array (np.array)
sampling_rate (double)
...
}
"""
for song in playlist:
song["frame_rate"] = song["audio_segment"].frame_rate
return playlist
def load_true_bpm(playlist):
# load csv with the bpms
with open("songs.csv", "r") as file:
csv_reader = csv.DictReader(file, delimiter=",")
playlist_true_bpm = list(csv_reader)
for song in playlist:
flag = 0
for song_ref in playlist_true_bpm:
if song["song_name"] == song_ref["song_name"]:
song["true_bpm"] = song_ref["bpm"]
flag = 1
        if flag == 0:
            # no reference entry found; warn rather than abort the whole load
            print("No true bpm found for song:", song["song_name"])
return playlist
def store_song(mix, path):
scipy.io.wavfile.write(
path, rate=mix["frame_rate"], data=mix["audio_array"].astype("int32")
)
# +
# Relevant feature extraction
# Beat detection
# Key detection
# Structural segmentation
# from librosa.util.utils import frame
import numpy as np
import scipy.signal
import sklearn
from madmom.features.beats import RNNBeatProcessor
from madmom.features.beats import DBNBeatTrackingProcessor
from madmom.features.key import CNNKeyRecognitionProcessor
from madmom.features.key import key_prediction_to_label
import librosa
import essentia
from essentia.standard import FrameGenerator, PeakDetection
import utils
def feature_extraction(playlist):
print('Extracting features')
for i, song in enumerate(playlist):
print(f'\tSong {i+1} / {len(playlist)}')
print('\t\tEstimating beat...')
        beats_frames, bpm = beat_detection(song)
        song['beat_times'] = beats_frames  # sample-length array of ones/zeros marking where each beat occurs
        song['estimated_bpm'] = bpm  # float
        print('\t\tEstimating key...')
        key_probabilities, key_label = key_detection(song)
        song['estimated_key'] = key_label.split(' ')[0]  # key name, e.g. 'C'
        song['estimated_mode'] = key_label.split(' ')[1]  # 'major' or 'minor'
        song['key_probabilities'] = key_probabilities
        print('\t\tEstimating cue-points...')
        cue_points = structural_segmentation(song)
        song['cue_points'] = cue_points  # sample-length array of ones/zeros marking cue-point positions
        # Possible improvement: cut silences, or drop cue-points that fall
        # too close to the beginning or end of the song
return playlist
# FEATURES
def beat_detection(song):
proc = DBNBeatTrackingProcessor(fps=100)
act = RNNBeatProcessor()(song["song_path"])
beat_times = proc(act)
# create the array of ones and zeros
beat_frames = convert_to_frames(beat_times,song)
# compute the bpm of the song
bpm = beats_per_minute(beat_times,song)
return beat_frames, bpm
def convert_to_frames(beat_times, song):
beat_frames = (beat_times*song["frame_rate"]).astype(int)
beat_frames_mapped = np.zeros_like(song["audio_array"])
beat_frames_mapped[beat_frames] = 1
return beat_frames_mapped
def beats_per_minute(beat_times, song):
song_length = len(song["audio_array"])/song["frame_rate"]/60
beats_count = len(beat_times)
bpm = beats_count/song_length # We could have problems with the first and the last beat
return bpm
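As the comment above notes, counting beats over the whole duration is sensitive to the first and last beats. A hedged alternative sketch (`bpm_from_beat_times` is a hypothetical helper, not part of the pipeline above): estimate BPM from the median inter-beat interval, which is robust to missed or spurious beats at the edges.

```python
import numpy as np

def bpm_from_beat_times(beat_times_sec):
    # beat_times_sec: beat positions in seconds, as returned by the DBN tracker
    intervals = np.diff(beat_times_sec)
    # the median interval ignores outliers caused by missed or extra beats
    return 60.0 / np.median(intervals)
```

For beats exactly 0.5 s apart this returns 120, regardless of where the first and last detected beats fall.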
def key_detection(song):
    # key detection via madmom's CNN key recognition
proc = CNNKeyRecognitionProcessor()
key_probabilities = proc(song["song_path"])
key_label = key_prediction_to_label(key_probabilities)
return key_probabilities, key_label
def structural_segmentation(song):
kernel_dim = 32
samples_per_beat = int(1.0/(song['estimated_bpm']/(60.0 * song['frame_rate'])))
frame_size = int(0.5 * samples_per_beat)
hop_size = int(0.25 * samples_per_beat)
mfcc_ssm = mfcc_structural_similarity_matrix(song, frame_size=frame_size, hop_size=hop_size)
rms_ssm = rms_structural_similarity_matrix(song, frame_size=frame_size, hop_size=hop_size)
kernel = get_checkboard_kernel(kernel_dim)
mfcc_novelty = apply_kernel(mfcc_ssm, kernel)
rms_novelty = apply_kernel(rms_ssm, kernel)
size_dif = mfcc_novelty.size - rms_novelty.size
if size_dif > 0:
rms_novelty = np.pad(rms_novelty, (0, np.abs(size_dif)), mode='edge')
else:
mfcc_novelty = np.pad(mfcc_novelty, (0, np.abs(size_dif)), mode='edge')
novelty = mfcc_novelty * rms_novelty
peaks_rel_pos, peaks_amp = detect_peaks(novelty)
"""
    save_cmap(mfcc_ssm, 'figures/mfcc_ssm.png', 'MFCC Self-Similarity Matrix')
    save_cmap(rms_ssm, 'figures/rms_ssm.png', 'RMS Self-Similarity Matrix')
    save_cmap(kernel, 'figures/kernel', 'Checkerboard Gaussian Kernel')
save_line(range(len(novelty)), novelty, 'figures/novelty.png', 'Novelty function', 'Frames', 'Amplitude')
save_line(peaks_rel_pos, peaks_amp, 'figures/peaks.png', 'Novelty peaks', 'Frames', 'Amplitude', '.')
"""
peaks_abs_pos = peaks_rel_pos * hop_size
peak_times = np.zeros_like(song['audio_array'])
for i in range(len(peaks_abs_pos)):
beat_peak = find_near_beat(peaks_abs_pos[i], song['beat_times'])
peak_times[beat_peak] = 1
return peak_times
def mfcc_structural_similarity_matrix(song, frame_size, hop_size):
mspec = librosa.feature.melspectrogram(song['audio_array'], sr=song['frame_rate'], n_mels=128, n_fft=frame_size, window="hann", win_length=frame_size, hop_length=hop_size,)
log_mspec = librosa.power_to_db(mspec, ref=np.max)
mfcc = librosa.feature.mfcc(S = log_mspec, sr=song['frame_rate'], n_mfcc=13)
ssm = sklearn.metrics.pairwise.cosine_similarity(mfcc.T, mfcc.T)
ssm -= np.average(ssm)
m = np.min(ssm)
M = np.max(ssm)
ssm -= m
ssm /= np.abs(m) + M
return ssm
def rms_structural_similarity_matrix(song, frame_size, hop_size):
rms_list = []
for frame in FrameGenerator(essentia.array(song['audio_array']), frameSize = frame_size, hopSize = hop_size):
rms_list.append(np.average(frame**2))
ssm = sklearn.metrics.pairwise.pairwise_distances(np.array(rms_list).reshape(-1, 1))
ssm -= np.average(ssm)
m = np.min(ssm)
M = np.max(ssm)
ssm -= m
ssm /= np.abs(m) + M
return ssm
def get_checkboard_kernel(dim):
gaussian_x = scipy.signal.gaussian(2*dim, std = dim/2.0).reshape((-1,1))
gaussian_y = scipy.signal.gaussian(2*dim, std = dim/2.0).reshape((1,-1))
kernel = np.dot(gaussian_x,gaussian_y)
kernel[:dim,dim:] *= -1
kernel[dim:,:dim] *= -1
return kernel
def apply_kernel(ssm, kernel):
kernel_dim = int(kernel.shape[0]/2)
ssm_dim = ssm.shape[0]
novelty = np.zeros(ssm_dim)
ssm_padded = np.pad(ssm, kernel_dim, mode='edge')
for index in range(ssm_dim):
frame = ssm_padded[index:index+2*kernel_dim, index:index+2*kernel_dim]
novelty[index] = np.sum(frame * kernel)
novelty /= np.max(novelty)
return novelty
def detect_peaks(novelty):
threshold = np.max(novelty) * 0.025
peakDetection = PeakDetection(interpolate=False, maxPeaks=100, orderBy='amplitude', range=len(novelty), maxPosition=len(novelty), threshold=threshold)
peaks_pos, peaks_ampl = peakDetection(novelty.astype('single'))
peaks_ampl = peaks_ampl[np.argsort(peaks_pos)]
peaks_pos = peaks_pos[np.argsort(peaks_pos)]
return peaks_pos, peaks_ampl
def find_near_beat(position, beat_times):
position = int(position)
i_low = 0
i_up = 0
while(position - i_low > 0 and beat_times[position-i_low] == 0):
i_low += 1
while(position + i_up < len(beat_times) and beat_times[position+i_up] == 0):
i_up += 1
if i_low < i_up:
return position - i_low
else:
return position + i_up
def evaluate(playlist):
for song in playlist:
# Evaluating sort of acc in bpm detection
pass
# print or store or whatever
# +
# Choosing the first song
# either:
# iteratively choosing next song
# tree search for optimal sequence
circle_of_fifths = {
"major": ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"],
"minor": ["A", "E", "B", "F#", "C#", "G#", "D#", "Bb", "F", "C", "G", "D"],
}
scale = ["C", "Db", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
def get_song_sequence(playlist):
print("Selecting tracks order...")
not_in_queue = playlist.copy()
not_in_queue.sort(key=lambda song: song["estimated_bpm"])
queue = []
queue.append(not_in_queue.pop(0))
while not_in_queue:
next_song = pick_next_song(queue[-1], not_in_queue)
queue.append(next_song)
not_in_queue.remove(next_song)
return queue
def pick_next_song(current, options):
"""
Explore several strategies
Example:
- Selecting candidate inside a +- bpm bounds
- Picking the most similar one in key
(see the paper for inspiration in distances between keys)
"""
threshold = 4
selection = None
current_bpm = current["estimated_bpm"]
current_key_distance = 12 # Maximum distance
while not selection:
for song in options:
if (
song["estimated_bpm"] >= current_bpm - threshold
and song["estimated_bpm"] <= current_bpm + threshold
):
optional_key_distance = key_distance_fifths(
current["estimated_key"],
current["estimated_mode"],
song["estimated_key"],
song["estimated_mode"],
)
if optional_key_distance < current_key_distance:
selection = song
current_key_distance = optional_key_distance
threshold += 2
return selection
def key_distance_semitones(key1, key2):
idx1 = scale.index(key1)
idx2 = scale.index(key2)
diff = abs(idx1 - idx2)
distance = min(diff, 12 - diff)
return distance
def key_distance_fifths(key1, mode1, key2, mode2):
idx1 = circle_of_fifths[mode1].index(key1)
idx2 = circle_of_fifths[mode2].index(key2)
diff = abs(idx1 - idx2)
distance = min(diff, 12 - diff)
return distance
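A quick sanity check of the circular distance used by both helpers (a standalone sketch; `scale` is repeated here so the snippet is self-contained):

```python
scale = ["C", "Db", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def circular_distance(key1, key2):
    # same arithmetic as key_distance_semitones / key_distance_fifths:
    # on a 12-step circle, two keys are never more than 6 steps apart
    diff = abs(scale.index(key1) - scale.index(key2))
    return min(diff, 12 - diff)

# C -> G is 7 steps one way around the circle but only 5 the other,
# so the circular distance is 5
```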
# +
# Iteratively:
# Create the transition for one pair of songs
# - Time warping (progressively better)
# - Key changing (explore strategies)
# - Align the whole sequence according to the modified beats (ideally using downbeats)
# - Volume fades to mix both
import numpy as np
import rubberband as rb
def create_transitions(queue):
mix = queue[0]
print("Creating_transitions...")
for i in range(1, len(queue)):
print(f"\tMixing tracks {i} and {i+1}...")
mix = mix_pair(mix, queue[i])
return mix
def mix_pair(previous_mix, next_song):
"""
output
mix = {
name ([string])
audio_array (np.array)
sampling_rate ([double])
...
real_bpm ([Int])
estimated_bpm ([Int])
estimated_key ([String])
cue_points (np.array)
}
"""
    # select the actual cue point from all the possibilities
previous_mix_cue_point = select_cue_points(previous_mix)
print("\t\tAligning songs...")
next_song_aligned = align(next_song)
print("\t\tMixing beats...")
previous_mix_stretched,next_song_stretched,previous_ending,next_beginning = time_wrap(previous_mix, next_song_aligned, previous_mix_cue_point)
#print("\t\tTransposing keys...")
# previous_mix, next_song = key_change(previous_mix_stretched, next_song_stretched)
print("\t\tFading transition...")
previous_mix_faded, next_song_faded = fade(previous_mix_stretched, next_song_stretched, previous_ending, next_beginning)
print("\t\tCombining tracks...")
mix = combine_songs(previous_mix_faded, next_song_faded, previous_ending)
return mix #, previous_mix_faded, next_song_faded
def select_cue_points(previous_mix):
cue_point = np.zeros_like(previous_mix["cue_points"])
possible_idx = np.where(previous_mix["cue_points"] == 1)[0]
flag = False
i = 1
while flag == False:
        # pick the latest cue point that is at least 20 s from the end
if (len(previous_mix["audio_array"]) - possible_idx[-i]) / previous_mix["frame_rate"] >= 20:
cue_point[possible_idx[-i]] = 1
flag = True
i += 1
return cue_point
def align(next_song):
first_beat = np.where(next_song["beat_times"] == 1)[0][0]
new_next = next_song.copy()
new_next["audio_array"] = next_song["audio_array"][first_beat:]
new_next["beat_times"] = next_song["beat_times"][first_beat:]
new_next["cue_points"] = next_song["cue_points"][first_beat:]
return new_next
def time_wrap(previous_mix, next_song, previous_mix_cue_point):
avg_bpm = (previous_mix["estimated_bpm"] + next_song["estimated_bpm"]) / 2
ending_stretching_ratio = previous_mix["estimated_bpm"] / avg_bpm
beginning_stretching_ratio = next_song["estimated_bpm"] / avg_bpm
cue_point_idx = np.where(previous_mix_cue_point == 1)[0][0]
transition_length_seconds = 20
transition_length_prev_frames_stretched = transition_length_seconds * previous_mix["frame_rate"]
transition_length_prev_frames = int(transition_length_prev_frames_stretched / ending_stretching_ratio)
transition_length_next_frames_stretched = transition_length_seconds * next_song["frame_rate"]
transition_length_next_frames = int(transition_length_next_frames_stretched / beginning_stretching_ratio)
"""
print('beg len samp: ', transition_length_next_frames)
print('end len samp: ', transition_length_prev_frames)
print('beg bpm', previous_mix["estimated_bpm"])
print('end bpm', next_song["estimated_bpm"])
print('beg stretch', beginning_stretching_ratio)
print('end stretch', ending_stretching_ratio)
"""
ending_audio = previous_mix["audio_array"][cue_point_idx : cue_point_idx + transition_length_prev_frames]
ending_beats = previous_mix["beat_times"][cue_point_idx : cue_point_idx + transition_length_prev_frames]
beginning_audio = next_song["audio_array"][:transition_length_next_frames]
beginning_beats = next_song["beat_times"][:transition_length_next_frames]
"""
# ending_length_samples = previous_mix["audio_array"].size - cue_point_idx
ending_length_samples = 20 * previous_mix["frame_rate"]
transition_length = ending_length_samples * ending_stretching_ratio
transition_length_seconds = transition_length / previous_mix["frame_rate"]
# if transition_length_seconds > 20:
# transition_length_seconds = 20
print(transition_length_seconds)
beginning_length_stretched = transition_length_seconds * next_song["frame_rate"]
beginning_length_samples = int(beginning_length_stretched * beginning_stretching_ratio)
print('beg len samp: ', beginning_length_samples)
print('end len samp: ', ending_length_samples)
ending_audio = previous_mix["audio_array"][cue_point_idx : cue_point_idx + ending_length_samples]
ending_beats = previous_mix["beat_times"][cue_point_idx : cue_point_idx + ending_length_samples]
beginning_audio = next_song["audio_array"][:beginning_length_samples]
beginning_beats = next_song["beat_times"][:beginning_length_samples]
"""
ending_audio_stretched = rb.stretch(np.array(ending_audio, dtype="int32"),rate=previous_mix["frame_rate"],ratio=ending_stretching_ratio,crispness=6,formants=False,precise=True)
beginning_audio_stretched = rb.stretch(np.array(beginning_audio, dtype="int32"),rate=next_song["frame_rate"],ratio=beginning_stretching_ratio,crispness=6,formants=False,precise=True)
"""
print("end: ", len(ending_audio_stretched))
print("start: ", len(beginning_audio_stretched))
"""
ending_beats_stretched = stretch_beats(ending_beats, ending_stretching_ratio, ending_audio_stretched.size)
beginning_beats_stretched = stretch_beats(beginning_beats, beginning_stretching_ratio, beginning_audio_stretched.size)
previous_mix["estimated_bpm"] = next_song["estimated_bpm"]
new_previous = previous_mix.copy()
#new_previous["audio_array"] = np.concatenate((new_previous["audio_array"][:-ending_length_samples], ending_audio_stretched))
#new_previous["beat_times"] = np.concatenate((new_previous["beat_times"][:-ending_length_samples], ending_beats_stretched))
#new_previous["cue_points"] = np.concatenate((new_previous["cue_points"][:-ending_length_samples],np.zeros(ending_audio_stretched.size, dtype=previous_mix["cue_points"].dtype)))
new_previous["audio_array"] = np.concatenate((new_previous["audio_array"][:cue_point_idx], ending_audio_stretched))
new_previous["beat_times"] = np.concatenate((new_previous["beat_times"][:cue_point_idx], ending_beats_stretched))
new_previous["cue_points"] = np.concatenate((new_previous["cue_points"][:cue_point_idx],np.zeros(ending_audio_stretched.size, dtype=previous_mix["cue_points"].dtype)))
new_next = next_song.copy()
new_next["audio_array"] = np.concatenate((beginning_audio_stretched, new_next["audio_array"][transition_length_next_frames:]))
new_next["beat_times"] = np.concatenate((beginning_beats_stretched, new_next["beat_times"][transition_length_next_frames:]))
new_next["cue_points"] = np.concatenate((np.zeros(beginning_audio_stretched.size, dtype=next_song["cue_points"].dtype),next_song["cue_points"][transition_length_next_frames:]))
#return (new_previous,new_next,new_previous["audio_array"][:-ending_length_samples].size,beginning_audio_stretched.size)
return (new_previous,new_next,new_previous["audio_array"][:cue_point_idx].size,beginning_audio_stretched.size)
def stretch_beats(beat_times, stretching_ratio, desired_length):
new_beats = []
zero_sequence_length = 0
for i in beat_times:
if i == 0:
zero_sequence_length += 1
elif i == 1:
new_beats += [0] * int(zero_sequence_length * stretching_ratio)
new_beats += [1]
zero_sequence_length = 0
diff = desired_length - len(new_beats)
if diff > 0:
new_beats += [0] * diff
return np.array(new_beats, dtype=int)
def key_change(previous_mix, next_song, previous_mix_cue_point, next_song_cue_point):
# rubberband
# Choose to change the key of next_song completely or only the transition part
return previous_mix, next_song
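`key_change` is currently a stub. One illustrative way to close a semitone gap between tracks is resampling-based pitch shifting. This naive sketch is my assumption, not the author's method: it also changes duration, which is exactly why the comment points to rubberband, whose pitch shifting preserves length.

```python
import numpy as np

def pitch_shift_naive(audio, n_steps):
    # shift pitch by n_steps semitones via resampling; a positive shift
    # raises the pitch but also shortens the signal by the same factor
    factor = 2.0 ** (n_steps / 12.0)
    positions = np.arange(0, len(audio), factor)
    return np.interp(positions, np.arange(len(audio)), audio)
```

Shifting up one octave (12 semitones) halves the length, which makes the duration side-effect obvious.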
def fade(previous_mix, next_song, previous_mix_cue_point, next_song_cue_point):
    fade_seconds = 20
    fade_frames = fade_seconds * previous_mix["frame_rate"]
    # linear cross-fade: the previous mix fades out over its last fade_frames
    # samples while the next song fades in over its first fade_frames samples
    # (indexing from -1 avoids the [-0] bug that touched the first sample)
    for i in range(fade_frames):
        gain = i / fade_frames
        # exponential fades (alternative):
        # previous gain: 1.1 - np.exp(2.398 * (1 - i / fade_frames)) * 0.1
        # next gain: 0.1 * np.exp(2.398 * i / fade_frames) - 0.1
        previous_mix["audio_array"][-1 - i] = previous_mix["audio_array"][-1 - i] * gain
        next_song["audio_array"][i] = next_song["audio_array"][i] * gain
    return previous_mix, next_song
def combine_songs(previous_mix, next_song, previous_ending):
mix = previous_mix.copy()
next_audio_padded = np.pad(next_song["audio_array"], (previous_ending, 0), constant_values=0)
next_beat_padded = np.pad(next_song["beat_times"], (previous_ending, 0), constant_values=0)
next_cue_padded = np.pad(next_song["cue_points"], (previous_ending, 0), constant_values=0)
mix["audio_array"] = next_audio_padded
mix["beat_times"] = next_beat_padded
mix["cue_points"] = next_cue_padded
mix["audio_array"][: previous_mix["audio_array"].size] += previous_mix["audio_array"]
mix["beat_times"][: previous_mix["beat_times"].size] += previous_mix["beat_times"]
mix["cue_points"][: previous_mix["cue_points"].size] += previous_mix["cue_points"]
return mix
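# The overlap-add in `combine_songs` boils down to padding the next song so it starts where the previous mix's outro begins, then summing the two arrays; a toy check with made-up sample counts:

```python
import numpy as np

# Same padding-plus-sum scheme as combine_songs, on tiny arrays.
prev = np.array([1.0, 1.0, 1.0, 1.0])
nxt = np.array([2.0, 2.0, 2.0])
previous_ending = 2  # hypothetical start of the previous mix's outro, in samples

mix = np.pad(nxt, (previous_ending, 0), constant_values=0)
mix[: prev.size] += prev
# the overlapping samples carry both songs summed
```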
# +
import matplotlib.pyplot as plt
def save_cmap(matrix, filename, title='', xlabel='', ylabel='', colorbar=False):
fig, ax = plt.subplots()
c = ax.pcolormesh(matrix, shading='auto', cmap='magma')
ax.set_title(title)
ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
if colorbar:
fig.colorbar(c, ax=ax)
plt.savefig(filename)
def save_line(x, y, filename, title='', xlabel='', ylabel='', style=''):
fig, ax = plt.subplots()
plt.plot(x, y, style)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.savefig(filename)
# -
load_path = "songs/dev_songs_pop2020s/"
store_path = "songs/dev_songs_pop2020s_output/song_mix_new_1.wav"
playlist = load_data(load_path)
playlist_features = feature_extraction(playlist)
queue = get_song_sequence(playlist_features)
for song in queue:
print(song['estimated_bpm'], ' ', song['song_name'])
mix = create_transitions(queue)
store_song(mix, store_path)
#store_song(previous_mix_faded, "songs/dev_songs_house_output/prev_mix_faded_linear.wav")
#store_song(next_song_faded, "songs/dev_songs_house_output/new_song_faded_linear.wav")
|
dev_notebooks/testing_notebook_2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
# %matplotlib inline
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
import sys, os
sys.path.insert(0, os.path.abspath('bin'))
import physicellProjectNanohub
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 43, "hidden": false, "row": 0, "width": 11}, "report_default": {"hidden": false}}}}
physicellProjectNanohub.gui
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {}, "report_default": {"hidden": true}}}}
#from debug import debug_view
#debug_view
|
physicellProjectNanohub.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="QNxEZK2pPA1P" colab_type="text"
# ##### Copyright © 2019 The TensorFlow Authors.
# + id="7JwKPOmN2-15" colab_type="code" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="23R0Z9RojXYW" colab_type="text"
# # Using the Apache Beam Orchestrator for TFX
#
# This notebook demonstrates how to use Apache Beam as an orchestrator for TFX. Pipelines are also created in Apache Beam, which means that Beam sequences tasks according to the dependencies of each task, running each task as its dependencies are met. Beam is also highly scalable and runs tasks in parallel in a distributed environment. That makes Beam very powerful as an orchestrator for other pipelines, including TFX.
#
# When using the InteractiveContext in a notebook, running each cell orchestrates the creation and running of each of the components in the TFX pipeline. When using a separate orchestrator, as in this example, the components are only run once the TFX pipeline DAG has been defined and the orchestrator has been triggered to start an execution run.
#
# In this example you will define all the supporting code for the TFX components before instantiating the components and running the TFX pipeline using an Apache Beam orchestrator. This is the pattern which is typically used in a production deployment of TFX.
# + [markdown] id="2GivNBNYjb3b" colab_type="text"
# ## Setup
# First, we install the necessary packages, download data, import modules and set up paths.
#
# ### Install TFX and TensorFlow
#
# > #### Note
# > Because of some of the updates to packages, you must use the button at the bottom of the output of this cell to restart the runtime. After the restart, rerun this cell.
# + id="7zhcJBLoMXQE" colab_type="code" colab={}
# !pip install -q -U \
# tensorflow==2.0.0 \
# tfx==0.15.0rc0 \
# pyarrow==0.14.1
# + [markdown] id="N-ePgV0Lj68Q" colab_type="text"
# ### Import packages
# We import necessary packages, including standard TFX component classes.
# + id="YIqpWK9efviJ" colab_type="code" colab={}
import os
import tempfile
import urllib
import tensorflow as tf
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.model_validator.component import ModelValidator
from tfx.components.pusher.component import Pusher
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.proto import evaluator_pb2
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx.utils.dsl_utils import external_input
# + [markdown] id="KfS6otsARvJC" colab_type="text"
# Check the versions
# + id="XZY7Pnoxmoe8" colab_type="code" colab={}
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
# + [markdown] id="n2cMMAbSkGfX" colab_type="text"
# ### Download example data
# We download the sample dataset for use in our TFX pipeline. We're working with a variant of the [Online News Popularity](https://archive.ics.uci.edu/ml/datasets/online+news+popularity) dataset, which summarizes a heterogeneous set of features about articles published by Mashable in a period of two years. The goal is to predict how popular the article will be on social networks. Specifically, in the original dataset the objective was to predict the number of times each article will be shared on social networks. In this variant, the goal is to predict the article's popularity percentile. For example, if the model predicts a score of 0.7, then it means it expects the article to be shared more than 70% of all articles.
# + id="BywX6OUEhAqn" colab_type="code" colab={}
# Download the example data.
DATA_PATH = 'https://raw.githubusercontent.com/ageron/open-datasets/master/' \
'online_news_popularity_for_course/online_news_popularity_for_course.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
# + [markdown] id="pXu5IR6dSDwJ" colab_type="text"
# Take a quick look at the CSV file
# + id="Hqn4wST2Bex5" colab_type="code" colab={}
# !head {_data_filepath}
# + [markdown] id="ufJKQ6OvkJlY" colab_type="text"
# ### Set up pipeline paths
#
# We create the filenames for the Python modules for the Transform and Trainer components, and a directory for the Serving model.
# + id="ad5JLpKbf6sN" colab_type="code" colab={}
# Set up paths.
_constants_module_file = 'online_news_constants.py'
_transform_module_file = 'online_news_transform.py'
_trainer_module_file = 'online_news_trainer.py'
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/online_news_simple')
# + [markdown] id="0TlleGZPSqdn" colab_type="text"
# Define some constants and functions for both the `Transform` component and the `Trainer` component. Define them in a Python module, in this case saved to disk using the `%%writefile` magic command since you are working in a notebook.
# + id="_GpU9-JNXw-_" colab_type="code" colab={}
# %%writefile {_constants_module_file}
DENSE_FLOAT_FEATURE_KEYS = [
"timedelta", "n_tokens_title", "n_tokens_content",
"n_unique_tokens", "n_non_stop_words", "n_non_stop_unique_tokens",
"n_hrefs", "n_self_hrefs", "n_imgs", "n_videos", "average_token_length",
"n_keywords", "kw_min_min", "kw_max_min", "kw_avg_min", "kw_min_max",
"kw_max_max", "kw_avg_max", "kw_min_avg", "kw_max_avg", "kw_avg_avg",
"self_reference_min_shares", "self_reference_max_shares",
"self_reference_avg_shares", "is_weekend", "global_subjectivity",
"global_sentiment_polarity", "global_rate_positive_words",
"global_rate_negative_words", "rate_positive_words", "rate_negative_words",
"avg_positive_polarity", "min_positive_polarity", "max_positive_polarity",
"avg_negative_polarity", "min_negative_polarity", "max_negative_polarity",
"title_subjectivity", "title_sentiment_polarity", "abs_title_subjectivity",
"abs_title_sentiment_polarity"]
VOCAB_FEATURE_KEYS = ["data_channel"]
BUCKET_FEATURE_KEYS = ["LDA_00", "LDA_01", "LDA_02", "LDA_03", "LDA_04"]
CATEGORICAL_FEATURE_KEYS = ["weekday"]
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [6]
#UNUSED: date, slug
LABEL_KEY = "n_shares_percentile"
VOCAB_SIZE = 10
OOV_SIZE = 5
FEATURE_BUCKET_COUNT = 10
def transformed_name(key):
return key + '_xf'
# + [markdown] id="QH9NP8fYTEPE" colab_type="text"
# Now define a module containing the `preprocessing_fn()` function that we will pass to the `Transform` component.
# + id="v3EIuVQnBfH7" colab_type="code" colab={}
# %%writefile {_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
from online_news_constants import *
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=VOCAB_SIZE,
num_oov_buckets=OOV_SIZE)
for key in BUCKET_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), FEATURE_BUCKET_COUNT,
always_return_num_quantiles=False)
for key in CATEGORICAL_FEATURE_KEYS:
outputs[transformed_name(key)] = _fill_in_missing(inputs[key])
# How popular is this article?
outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
# + [markdown] id="7bkbpcULTt4p" colab_type="text"
# Create a Python module containing a `trainer_fn` function, which must return an estimator. If you prefer creating a Keras model, you can do so and then convert it to an estimator using `keras.model_to_estimator()`.
# + id="CaFFTBBeB4wf" colab_type="code" colab={}
# %%writefile {_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from online_news_constants import *
def transformed_names(keys):
return [transformed_name(key) for key in keys]
# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _build_estimator(config, hidden_units=None, warm_start_from=None):
"""Build an estimator for predicting the popularity of online news articles
Args:
config: tf.estimator.RunConfig defining the runtime environment for the
estimator (including model_dir).
hidden_units: [int], the layer sizes of the DNN (input layer first)
warm_start_from: Optional directory to warm start from.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in transformed_names(DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=VOCAB_SIZE + OOV_SIZE, default_value=0)
for key in transformed_names(VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=FEATURE_BUCKET_COUNT, default_value=0)
for key in transformed_names(BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES)
]
return tf.estimator.DNNLinearCombinedRegressor(
config=config,
linear_feature_columns=categorical_columns,
dnn_feature_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25],
warm_start_from=warm_start_from)
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Build the serving in inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
Tensorflow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Build everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- Tensorflow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
features = serving_input_receiver.features.copy()
transformed_features = tf_transform_output.transform_raw_features(features)
# NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=serving_input_receiver.receiver_tensors,
labels=transformed_features[transformed_name(LABEL_KEY)])
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: [str] list of CSV files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: int First dimension size of the Tensors returned by input_fn
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
filenames, batch_size, transformed_feature_spec, reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
# TFX will call this function
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Holds hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
train_batch_size = 40
eval_batch_size = 40
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=train_batch_size)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=eval_batch_size)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('online-news', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='online-news-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=999, keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = _build_estimator(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
],
config=run_config,
warm_start_from=hparams.warm_start_from)
# Create an input receiver for TFMA processing
receiver_fn = lambda: _eval_input_receiver_fn(
tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
# + [markdown] id="qAika7-6gLvI" colab_type="text"
# ## Create the Pipeline
#
# Creating the pipeline defines the dependencies between components and the artifacts that they require as input, which in turn defines the order in which they can be run. In this example you also use an ML-Metadata database, which in this case is backed by SQLite.
# + id="gNvMj9AWsmSt" colab_type="code" colab={}
_pipeline_name = 'online_news_beam'
_pipeline_root = tempfile.mkdtemp(prefix='tfx-pipelines')
_pipeline_root = os.path.join(_pipeline_root, 'pipelines', _pipeline_name)
# Sqlite ML-metadata db path.
_metadata_root = tempfile.mkdtemp(prefix='tfx-metadata')
_metadata_path = os.path.join(_metadata_root, 'metadata.db')
def _create_pipeline(pipeline_name, pipeline_root, data_root,
transform_module_file, trainer_module_file,
serving_model_dir, metadata_path):
"""Implements the online news pipeline with TFX."""
examples = external_input(data_root)
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = CsvExampleGen(input_base=examples)
# Computes statistics over data for visualization and example validation.
statistics_gen = StatisticsGen(input_data=example_gen.outputs.examples)
# Generates schema based on statistics files.
infer_schema = SchemaGen(
stats=statistics_gen.outputs.output)
# Performs anomaly detection based on statistics and data schema.
validate_stats = ExampleValidator(
stats=statistics_gen.outputs.output, schema=infer_schema.outputs.output)
# Performs transformations and feature engineering in training and serving.
transform = Transform(
input_data=example_gen.outputs.examples,
schema=infer_schema.outputs.output,
module_file=transform_module_file)
# Uses user-provided Python function that implements a model using
# TensorFlow's Estimators API.
trainer = Trainer(
module_file=trainer_module_file,
transformed_examples=transform.outputs.transformed_examples,
schema=infer_schema.outputs.output,
transform_output=transform.outputs.transform_output,
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000))
  # Uses TFMA to compute evaluation statistics over features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs.examples,
model_exports=trainer.outputs.output,
feature_slicing_spec=evaluator_pb2.FeatureSlicingSpec(specs=[
evaluator_pb2.SingleSlicingSpec(
column_for_slicing=['weekday'])
]))
# Performs quality validation of a candidate model (compared to a baseline).
model_validator = ModelValidator(
examples=example_gen.outputs.examples, model=trainer.outputs.output)
# Checks whether the model passed the validation steps and pushes the model
# to a file destination if check passed.
pusher = Pusher(
model_export=trainer.outputs.output,
model_blessing=model_validator.outputs.blessing,
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=serving_model_dir)))
return pipeline.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=[
example_gen, statistics_gen, infer_schema, validate_stats, transform,
trainer, model_analyzer, model_validator, pusher
],
enable_cache=True,
metadata_connection_config=metadata.sqlite_metadata_connection_config(
metadata_path),
additional_pipeline_args={},
)
online_news_pipeline = _create_pipeline(
pipeline_name=_pipeline_name,
pipeline_root=_pipeline_root,
data_root=_data_root,
transform_module_file=_transform_module_file,
trainer_module_file=_trainer_module_file,
serving_model_dir=_serving_model_dir,
metadata_path=_metadata_path)
# + [markdown] id="41nLEd9uW0UW" colab_type="text"
# ### Run the pipeline
#
# Create a `BeamDagRunner` and use it to run the pipeline.
#
# >#### Note
# > This same pattern is also used to create pipelines with other orchestrators; the only difference here is that a Beam orchestrator is used. When using a Beam orchestrator, running the pipeline also triggers an execution run, while other orchestrators may only load the pipeline and wait for a trigger event.
#
# #### Results
#
# Running this pipeline produces a lot of log messages, which can be instructive to read through. For example, log messages like these show the sequencing of components through the pipeline.
#
# ```
# INFO:tensorflow:Component CsvExampleGen is running.
# INFO:tensorflow:Run driver for CsvExampleGen
# ...
# INFO:tensorflow:Run executor for CsvExampleGen
# ...
# INFO:tensorflow:Run publisher for CsvExampleGen
# ...
# INFO:tensorflow:Component CsvExampleGen is finished.
# INFO:tensorflow:Component StatisticsGen is running.
# ```
# + id="qqB2lRBbrvOt" colab_type="code" colab={}
BeamDagRunner().run(online_news_pipeline)
|
tfx_labs/Lab_3_Beam_Orchestrator.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# So far, you've used lambda functions to write short, simple functions as well as to redefine functions with simple functionality. The best use case for lambda functions, however, is when you want these simple functionalities to be anonymously embedded within larger expressions. What that means is that the functionality is not stored in the environment, unlike a function defined with def. To understand this idea better, you will use a lambda function in the context of the map() function.
#
# Recall from the video that map() applies a function over an object, such as a list. Here, you can use lambda functions to define the function that map() will use to process the object. For example:
#
# nums = [2, 4, 6, 8, 10]
#
# result = map(lambda a: a ** 2, nums)
# You can see here that a lambda function, which raises a value a to the power of 2, is passed to map() alongside a list of numbers, nums. The map object that results from the call to map() is stored in result. You will now practice the use of lambda functions with map(). For this exercise, you will map the functionality of the add_bangs() function you defined in previous exercises over a list of strings.
# In the map() call, pass a lambda function that concatenates the string '!!!' to a string item; also pass the list of strings, spells. Assign the resulting map object to shout_spells.
# Convert shout_spells to a list and print out the list.
#
# +
# Create a list of strings: spells
spells = ["protego", "accio", "expecto patronum", "legilimens"]
# Use map() to apply a lambda function over spells: shout_spells
shout_spells = map(lambda x: x+'!!!', spells)
# Convert shout_spells to a list: shout_spells_list
shout_spells_list=list(shout_spells)
# Print the result
print(shout_spells_list)
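# The lambda above is just an anonymous stand-in for a named function; a hypothetical `add_bangs` (reconstructed here from the earlier exercise's description) gives identical results:

```python
# Named-function equivalent of the lambda passed to map().
# add_bangs is a reconstruction of the helper from the earlier exercise.
def add_bangs(word):
    return word + '!!!'

spells = ["protego", "accio", "expecto patronum", "legilimens"]
named = list(map(add_bangs, spells))
anonymous = list(map(lambda x: x + '!!!', spells))
print(named == anonymous)
```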
|
Python Data Science Toolbox -Part 1/Lambda functions and error-handling/02.Map() and lambda functions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp core
# -
# # Dezero Core
#
# > Variable and Function
#export
from dezero.imports import *
#hide
from nbdev.showdoc import *
# # Variable
#export
class Variable():
def __init__(self, data):
self.data = data
# ### Variables as boxes
# ~~~python
# class Variable():
# def __init__(self, data):
# self.data = data
# ~~~
# Think of a variable as a box: the data is what goes inside the box.
#
# 
x = Variable(np.array(1.0))
x.data
x.data = np.array(2.0)
x.data
# # Function
#export
class Function():
def __call__(self, input:Variable):
x = input.data
y = self.forward(x)
output = Variable(y)
return output
def forward(self, x):
raise NotImplementedError()
# A function defines the correspondence between one variable and another
#
# $y=f(x)$
#
# 
# ### Function to create a variable
# For example, implement $f = x^2$
# ~~~python
# class Function():
# def __call__(self, input:Variable):
# x = input.data
# y = self.forward(x)
# output = Variable(y)
# return output
#
# def forward(self, x):
# raise NotImplementedError()
# ~~~
class Square(Function):
def forward(self, x):
return x ** 2
f = Square()
f(Variable(2)).data
# ### Connecting Functions
class Exp(Function):
def forward(self, x):
return np.exp(x)
# Multiple functions can be chained to build more complex operations, for example $f(x)=(e^{x^2})^2$
# +
A = Square()
B = Exp()
C = Square()
x = Variable(np.array(0.5))
a = A(x)
b = B(a)
y = C(b)
y.data
# -
# The computational graph
#
# 
#
# ### Numerical Differentiation
# > Numerical differentiation estimates a function's derivative (or higher-order derivatives) at a point from its values at a few discrete points. Typically a difference quotient stands in for the differential quotient, or the derivative of a simpler differentiable function that approximates it (such as a polynomial or spline) is used as the approximate value.
# > $$f'(x)=\lim_{h \to 0}\frac{f(x+h)-f(x)}{h}$$
# > The secant slope differs from the tangent slope by an amount roughly proportional to h; as h approaches 0, the secant slope approaches the tangent slope.
#
# 
# Another two-point estimate uses the secant through $(x-h,f(x-h))$ and $(x+h,f(x+h))$, whose slope is $\frac{f(x+h)-f(x-h)}{2h}$.
#
# This formula is called the symmetric (central) difference. Its first-order error terms cancel, so the gap between secant and tangent slope is proportional to $h^2$; for small h it is more accurate than the one-sided approximation. Note that although the formula estimates the slope at x, it never uses the function value at x itself.
#
# 
#
# As the figure shows, the symmetric difference is closer to the true slope than the one-sided difference. Below is a simple implementation of the symmetric-difference algorithm.
def numerical_diff(f, x, eps=1e-4):
x0 = Variable(x.data-eps)
x1 = Variable(x.data+eps)
y0 = f(x0)
y1 = f(x1)
return (y1.data - y0.data)/(2*eps)
f = Square()
x = Variable(np.array(2.0))
dy = numerical_diff(f, x)
dy
# The true derivative, $f'(x)=2x$, equals 4 here; the symmetric-difference estimate carries a small error.
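# The gap between the one-sided and the symmetric difference can be checked directly; a small sketch for $f(x)=x^2$ at $x=2$, where the true derivative is 4:

```python
# One-sided vs symmetric difference for f(x) = x**2 at x = 2.
f = lambda x: x ** 2
x, eps = 2.0, 1e-4
forward = (f(x + eps) - f(x)) / eps              # error ~ eps
central = (f(x + eps) - f(x - eps)) / (2 * eps)  # error ~ eps**2
print(forward - 4.0, central - 4.0)
```

# For a quadratic the symmetric difference is exact up to floating-point rounding, which is why its error is orders of magnitude smaller here.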
def f(x):
A = Square()
B = Exp()
C = Square()
return C(B(A(x)))
x = Variable(0.5)
dy = numerical_diff(f, x)
dy
# In this way we can automatically differentiate composite functions. In most cases the error is small, but certain computations can incur larger errors. Moreover, a neural network has millions of parameters, and computing all their derivatives by numerical differentiation would be far too expensive.
|
nbs/00_core.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ml-mipt]
# language: python
# name: conda-env-ml-mipt-py
# ---
# # week1_01: Fun with Word Embeddings
#
# Today we're gonna play with word embeddings: train our own little embedding, load one from the gensim model zoo and use it to visualize text corpora.
#
# This whole thing is gonna happen on top of an embedding dataset.
#
# __Requirements:__ `pip install --upgrade nltk gensim bokeh` , but only if you're running locally.
# + tags=[]
# Download the data. For installing wget on macOS run "brew install wget"
# !wget https://www.dropbox.com/s/obaitrix9jyu84r/quora.txt?dl=1 -O ../datasets/quora.txt
# Alternative download link:
# # !wget https://yadi.sk/i/BPQrUu1NaTduEw -O ../datasets/quora.txt
# +
with open("../datasets/quora.txt") as data_file:
data = data_file.readlines()
data[50]
# -
# __Tokenization:__ a typical first step for an nlp task is to split raw data into words.
# The text we're working with is in raw format: with all the punctuation and smiles attached to some words, so a simple str.split won't do.
#
# Let's use __`nltk`__ - a library that handles many nlp tasks like tokenization, stemming or part-of-speech tagging.
from nltk.tokenize import WordPunctTokenizer
# +
tokenizer = WordPunctTokenizer()
tokenizer.tokenize(data[50])
# + jupyter={"outputs_hidden": true}
# YOUR CODE HERE: lowercase everything and extract tokens with tokenizer.
# data_tok should be a list of lists of tokens for each line in data.
data_tok = None
# + jupyter={"outputs_hidden": true}
def is_latin(token):
return all("a" <= x.lower() <= "z" for x in token)
assert all(
isinstance(row, (list, tuple)) for row in data_tok
), "please convert each line into a list of tokens (strings)"
assert all(
all(isinstance(tok, str) for tok in row) for row in data_tok
), "please convert each line into a list of tokens (strings)"
assert all(
map(lambda l: not is_latin(l) or l.islower(), map(" ".join, data_tok))
), "please make sure to lowercase the data"
# -
[" ".join(row) for row in data_tok[:2]]
# __Word vectors:__ as the saying goes, there's more than one way to train word embeddings. There's Word2Vec and GloVe with different objective functions. Then there's fasttext that uses character-level models to train word embeddings.
#
# The choice is huge, so let's start someplace small: __gensim__ is another nlp library that features many vector-based models including word2vec.
from gensim.models import Word2Vec
# + jupyter={"outputs_hidden": true}
model = Word2Vec(
data_tok,
size=32, # embedding vector size
    min_count=5,  # consider words that occurred at least 5 times
window=5, # define context as a 5-word window around the target word
).wv
# +
# now you can get word vectors!
model.get_vector("anything")
# +
# or query similar words directly. Go play with it!
model.most_similar("bread")
# -
# ## Using pre-trained model
#
# That took a while, huh? Now imagine training life-sized (100~300D) word embeddings on gigabytes of text: wikipedia articles or twitter posts.
#
# Thankfully, nowadays you can get a pre-trained word embedding model in 2 lines of code (no sms required, promise).
import gensim.downloader as api
# + jupyter={"outputs_hidden": true}
model = api.load("glove-twitter-100")
# -
model.most_similar(positive=["coder", "money"], negative=["brain"])
# ## Visualizing word vectors
#
# One way to see if our vectors are any good is to plot them. Thing is, those vectors are in 30D+ space and we humans are more used to 2-3D.
#
# Luckily, we machine learners know about __dimensionality reduction__ methods.
#
# Let's use that to plot 1000 most frequent words
import numpy as np
# +
words = sorted(model.vocab.keys(), key=lambda word: model.vocab[word].count, reverse=True)[:1000]  # gensim < 4.0 API; 4.x replaced vocab with key_to_index
print(words[::100])
# +
# YOUR CODE HERE: for each word, compute its vector with model
word_vectors = None
# + jupyter={"outputs_hidden": true}
assert isinstance(word_vectors, np.ndarray)
assert word_vectors.shape == (len(words), 100)
assert np.isfinite(word_vectors).all()
# -
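# One possible shape for the exercise above, as a sketch: the `ToyModel` stub below is hypothetical and merely stands in for the gensim `model`, which exposes `get_vector`:

```python
import numpy as np

class ToyModel:
    """Hypothetical stand-in for gensim KeyedVectors: anything with get_vector(word)."""
    def __init__(self, dim=100, seed=0):
        self.dim = dim
        self._rng = np.random.RandomState(seed)
        self._cache = {}

    def get_vector(self, word):
        # deterministic pseudo-random vector per word
        if word not in self._cache:
            self._cache[word] = self._rng.randn(self.dim).astype(np.float32)
        return self._cache[word]

toy = ToyModel(dim=100)
toy_words = ["the", "cat", "sat"]
# stack the per-word vectors into an (n_words, dim) matrix, as the asserts expect
toy_vectors = np.array([toy.get_vector(w) for w in toy_words])
print(toy_vectors.shape)  # (3, 100)
```

# With the real model the comprehension is the same idea: an array built from per-word `get_vector` calls.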
# #### Linear projection: PCA
#
# The simplest linear dimensionality reduction method is __P__rincipal __C__omponent __A__nalysis.
#
# In geometric terms, PCA tries to find axes along which most of the variance occurs. The "natural" axes, if you wish.
#
# <img src="https://github.com/yandexdataschool/Practical_RL/raw/master/yet_another_week/_resource/pca_fish.png" style="width:30%">
#
#
# Under the hood, it attempts to decompose object-feature matrix $X$ into two smaller matrices: $W$ and $\hat W$ minimizing _mean squared error_:
#
# $$\|(X W) \hat{W} - X\|^2_2 \to_{W, \hat{W}} \min$$
# - $X \in \mathbb{R}^{n \times m}$ - object matrix (**centered**);
# - $W \in \mathbb{R}^{m \times d}$ - matrix of direct transformation;
# - $\hat{W} \in \mathbb{R}^{d \times m}$ - matrix of reverse transformation;
# - $n$ samples, $m$ original dimensions and $d$ target dimensions;
#
#
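# The objective above can be illustrated with plain numpy via the SVD; this is a minimal sketch of PCA on synthetic data, not sklearn's implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 32)
X = X - X.mean(axis=0)              # PCA assumes a centered object-feature matrix

d = 2
# SVD: X = U S V^T; the top-d right singular vectors span the principal axes
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:d].T                        # direct transformation,  shape (m, d)
W_hat = Vt[:d]                      # reverse transformation, shape (d, m)

X_proj = X @ W                      # n x d projection
X_rec = X_proj @ W_hat              # reconstruction back in m dimensions
mse = np.mean((X_rec - X) ** 2)     # the quantity PCA minimizes over rank-d linear maps
print(X_proj.shape, mse < np.mean(X ** 2))
```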
from sklearn.decomposition import PCA # noqa: F401
# + jupyter={"outputs_hidden": true}
# YOUR CODE HERE: map word vectors onto 2d plane with PCA. Use good old sklearn api (fit, transform)
# after that, normalize vectors to make sure they have zero mean and unit variance
word_vectors_pca = None
# + jupyter={"outputs_hidden": true}
assert word_vectors_pca.shape == (len(word_vectors), 2), "there must be a 2d vector for each word"
assert max(abs(word_vectors_pca.mean(0))) < 1e-5, "points must be zero-centered"
assert max(abs(1.0 - word_vectors_pca.std(0))) < 1e-2, "points must have unit variance"
# -
# ### Let's draw it!
import bokeh.models as bm
import bokeh.plotting as pl
from bokeh.io import output_notebook
# +
output_notebook()
def draw_vectors(
x, y, radius=10, alpha=0.25, color="blue", width=600, height=400, show=True, **kwargs
):
"""Draws an interactive plot for data points with auxilirary info on hover"""
if isinstance(color, str):
color = [color] * len(x)
data_source = bm.ColumnDataSource({"x": x, "y": y, "color": color, **kwargs})
fig = pl.figure(active_scroll="wheel_zoom", width=width, height=height)
fig.scatter("x", "y", size=radius, color="color", alpha=alpha, source=data_source)
fig.add_tools(bm.HoverTool(tooltips=[(key, "@" + key) for key in kwargs.keys()]))
if show:
pl.show(fig)
return fig
# +
draw_vectors(word_vectors_pca[:, 0], word_vectors_pca[:, 1], token=words)
# hover a mouse over there and see if you can identify the clusters
# -
# ### Visualizing neighbors with t-SNE
# PCA is nice but it's strictly linear and thus only able to capture coarse high-level structure of the data.
#
# If we instead want to focus on keeping neighboring points near, we could use TSNE, which is itself an embedding method. Here you can read __[more on TSNE](https://distill.pub/2016/misread-tsne/)__.
from sklearn.manifold import TSNE
# +
# YOUR CODE HERE: map word vectors onto 2d plane with TSNE. Hint: use
# verbose=100 to see what it's doing. Normalize them in the same way as with PCA
word_tsne = None
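# A sketch of the intended steps on tiny synthetic data (assumes scikit-learn is available; the parameter values are illustrative, not prescribed by the exercise):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
toy_vecs = rng.randn(50, 16).astype(np.float32)

# perplexity must be smaller than the number of samples
toy_2d = TSNE(n_components=2, perplexity=10, init="random",
              random_state=0).fit_transform(toy_vecs)

# normalize exactly as with PCA: zero mean, unit variance per axis
toy_2d = (toy_2d - toy_2d.mean(axis=0)) / toy_2d.std(axis=0)
print(toy_2d.shape)
```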
# + jupyter={"outputs_hidden": true}
draw_vectors(word_tsne[:, 0], word_tsne[:, 1], color="green", token=words)
# -
# ### Visualizing phrases
#
# Word embeddings can also be used to represent short phrases. The simplest way is to take __an average__ of vectors for all tokens in the phrase with some weights.
#
# This trick is useful to identify what data you are working with: find whether there are any outliers, clusters or other artefacts.
#
# Let's try this new hammer on our data!
#
# + jupyter={"outputs_hidden": true}
def get_phrase_embedding(phrase):
"""
Convert phrase to a vector by aggregating its word embeddings. See description above.
"""
# 1. lowercase phrase
# 2. tokenize phrase
# 3. average word vectors for all words in tokenized phrase
# skip words that are not in model's vocabulary
# if all words are missing from vocabulary, return zeros
vector = np.zeros([model.vector_size], dtype="float32")
# YOUR CODE HERE
return vector
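# The aggregation logic can be sketched against a toy two-word vocabulary; the dict below is hypothetical and stands in for the gensim model:

```python
import numpy as np

toy_vocab = {
    "hello": np.array([1.0, 0.0], dtype=np.float32),
    "world": np.array([0.0, 1.0], dtype=np.float32),
}

def toy_phrase_embedding(phrase, vocab, dim=2):
    tokens = phrase.lower().split()                   # 1-2: lowercase + naive tokenization
    vecs = [vocab[t] for t in tokens if t in vocab]   # skip out-of-vocabulary words
    if not vecs:                                      # all words missing -> zeros
        return np.zeros(dim, dtype=np.float32)
    return np.mean(vecs, axis=0)                      # 3: average the word vectors

print(toy_phrase_embedding("Hello WORLD", toy_vocab))    # [0.5 0.5]
print(toy_phrase_embedding("unknown words", toy_vocab))  # [0. 0.]
```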
# + jupyter={"outputs_hidden": true}
vector = get_phrase_embedding("I'm very sure. This never happened to me before...")
assert np.allclose(
vector[::10],
np.array(
[
0.31807372,
-0.02558171,
0.0933293,
-0.1002182,
-1.0278689,
-0.16621883,
0.05083408,
0.17989802,
1.3701859,
0.08655966,
],
dtype=np.float32,
),
)
# + jupyter={"outputs_hidden": true}
# let's only consider ~1000 phrases for a first run.
chosen_phrases = data[:: len(data) // 1000]
# YOUR CODE HERE: compute vectors for chosen phrases
phrase_vectors = None
# + jupyter={"outputs_hidden": true}
assert isinstance(phrase_vectors, np.ndarray) and np.isfinite(phrase_vectors).all()
assert phrase_vectors.shape == (len(chosen_phrases), model.vector_size)
# + jupyter={"outputs_hidden": true}
# map vectors into 2d space with pca, tsne or your other method of choice
# don't forget to normalize
phrase_vectors_2d = TSNE(verbose=1000).fit_transform(phrase_vectors)
ph_mean = phrase_vectors_2d.mean(axis=0)
ph_std = phrase_vectors_2d.std(axis=0)
phrase_vectors_2d = (phrase_vectors_2d - ph_mean) / ph_std
# + jupyter={"outputs_hidden": true}
draw_vectors(
phrase_vectors_2d[:, 0],
phrase_vectors_2d[:, 1],
phrase=[phrase[:50] for phrase in chosen_phrases],
radius=20,
)
# -
# Finally, let's build a simple "similar question" engine with phrase embeddings we've built.
from sklearn.metrics.pairwise import cosine_similarity # noqa: F401
# + jupyter={"outputs_hidden": true}
# compute vector embedding for all lines in data
data_vectors = np.array([get_phrase_embedding(line) for line in data])
# + jupyter={"outputs_hidden": true}
def find_nearest(query, k=10):
"""
Given text line (query), return k most similar lines from data, sorted from most to least similar.
Similarity should be measured as cosine between query and line embedding vectors.
Hint: it's okay to use global variables: data and data_vectors.
See also: np.argpartition, np.argsort
"""
# YOUR CODE HERE: top-k lines starting from most similar
return None
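# The retrieval itself is just cosine similarity plus argsort; here is a self-contained sketch on toy data (not the real question lines):

```python
import numpy as np

toy_data = ["how to cook rice", "how to cook pasta", "tallest mountain on earth"]
toy_vectors = np.array([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]])

def toy_find_nearest(query_vec, k=2):
    # cosine similarity = dot product of the L2-normalized vectors
    sims = (toy_vectors @ query_vec) / (
        np.linalg.norm(toy_vectors, axis=1) * np.linalg.norm(query_vec)
    )
    top = np.argsort(-sims)[:k]  # indices sorted from most to least similar
    return [toy_data[i] for i in top]

print(toy_find_nearest(np.array([1.0, 0.0])))  # the two cooking lines come first
```

# For large datasets, np.argpartition avoids the full sort, as the hint suggests.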
# + jupyter={"outputs_hidden": true}
results = find_nearest(query="How do i enter the matrix?", k=10)
print("".join(results))
assert len(results) == 10 and isinstance(results[0], str)
assert results[0] == "How do I get to the dark web?\n"
assert results[3] == "What can I do to save the world?\n"
# + jupyter={"outputs_hidden": true}
find_nearest(query="How does Trump?", k=10)
# + jupyter={"outputs_hidden": true}
find_nearest(query="Why don't i ask a question myself?", k=10)
# -
# __Now what?__
# * Try running TSNE on all data, not just 1000 phrases
# * See what other embeddings are there in the model zoo: `gensim.downloader.info()`
# * Take a look at [FastText](https://github.com/facebookresearch/fastText) embeddings
# * Optimize find_nearest with locality-sensitive hashing: use [nearpy](https://github.com/pixelogik/NearPy) or `sklearn.neighbors`.
# File: week1_01_word_embeddings/week1_01_fun_with_embeddings.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Top Down Experiments
#
# These are the experiments to test the performance of the top-down strategy. If you are running an experiment to test the validity of a particular design space, please make sure to load the appropriate OFA network before doing so. The cells corresponding to each design space have been marked with "DESIGN SPACE: <NAME>" at the top of the cell.
#
# We do not recommend running any of the experiments as they may take a while (we average over 10 runs). Please run the cell under the DEMO section to see top-down in action. Before running it, make sure you run all cells in the PREP section
# # PREP
import os
import torch
import torch.nn as nn
from torchvision import transforms, datasets
import numpy as np
import time
import random
import math
import copy
from matplotlib import pyplot as plt
import ofa
import ofa.model_zoo
import ofa.tutorial
import ofa.nas.accuracy_predictor.arch_encoder
import ofa.nas.search_algorithm
import ofa.nas.efficiency_predictor.latency_lookup_table
# +
random_seed = 1
random.seed(random_seed)
np.random.seed(random_seed)
torch.manual_seed(random_seed)
print('Successfully imported all packages and configured random seed to %d!'%random_seed)
cuda_available = torch.cuda.is_available()
if cuda_available:
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True
torch.cuda.manual_seed(random_seed)
print('Using GPU.')
else:
print('Using CPU.')
# -
ofa_network = ofa.model_zoo.ofa_net('ofa_mbv3_d234_e346_k357_w1.2', pretrained=True)
#ofa_network = ofa.model_zoo.ofa_net('ofa_resnet50', pretrained=True)
print('The OFA Network is ready.')
target_hardware = 'note10'
latency_table = ofa.tutorial.LatencyTable(device=target_hardware)
print('The Latency lookup table on %s is ready!' % target_hardware)
if cuda_available:
# path to the ImageNet dataset
print("Please input the path to the ImageNet dataset.\n")
imagenet_data_path = input()
#imagenet_data_path = 'C:\School\once-for-all-master\imgnet'
# if 'imagenet_data_path' is empty, download a subset of ImageNet containing 2000 images (~250M) for test
if not os.path.isdir(imagenet_data_path):
os.makedirs(imagenet_data_path, exist_ok=True)
ofa.utils.download_url('https://hanlab.mit.edu/files/OnceForAll/ofa_cvpr_tutorial/imagenet_1k.zip', model_dir='data')
# ! cd data && unzip imagenet_1k 1>/dev/null && cd ..
# ! cp -r data/imagenet_1k/* $imagenet_data_path
# ! rm -rf data
print('%s is empty. Download a subset of ImageNet for test.' % imagenet_data_path)
print('The ImageNet dataset files are ready.')
else:
print('Since GPU is not found in the environment, we skip all scripts related to ImageNet evaluation.')
if cuda_available:
# The following function builds the data transforms for test
def build_val_transform(size):
return transforms.Compose([
transforms.Resize(int(math.ceil(size / 0.875))),
transforms.CenterCrop(size),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
),
])
data_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(
root=os.path.join(imagenet_data_path, 'val'),
transform=build_val_transform(224)
),
batch_size=250, # test batch size
shuffle=True,
num_workers=16, # number of workers for the data loader
pin_memory=True,
drop_last=False,
)
print('The ImageNet dataloader is ready.')
else:
data_loader = None
print('Since GPU is not found in the environment, we skip all scripts related to ImageNet evaluation.')
# +
#accuracy_predictor = ofa.nas.accuracy_predictor.AccuracyPredictor(
accuracy_predictor = ofa.tutorial.AccuracyPredictor(
pretrained=True,
device='cuda:0' if cuda_available else 'cpu'
)
print('The accuracy predictor is ready!')
# -
def run_top_down_evolutionary_search(latency_constraint):
# latency_constraint = (30,25,20) # ms, suggested range [15, 33] ms
P = 100 # The size of population in each generation
N = 500 # How many generations of population to be searched
N2 = 63
r = 0.25 # The ratio of networks that are used as parents for next generation
params = {
'constraint_type': target_hardware, # Let's do FLOPs-constrained search
'efficiency_constraint': latency_constraint,
'mutate_prob': 0.1, # The probability of mutation in evolutionary search
'mutation_ratio': 0.5, # The ratio of networks that are generated through mutation in generation n >= 2.
'efficiency_predictor': latency_table, # To use a predefined efficiency predictor.
'accuracy_predictor': accuracy_predictor, # To use a predefined accuracy_predictor predictor.
'population_size': P,
'max_time_budget': N,
'max_time_budget2': N2,
'parent_ratio': r,
}
# build the evolution finder
finder = ofa.tutorial.EvolutionFinder(**params)
# start searching
result_lis = []
st = time.time()
best_valids, best_info = finder.run_evolution_search_multi_mixed()
for i in range(len(latency_constraint)):
result_lis.append(best_info[i])
print('Found best architecture on %s with latency <= %.2f ms '
'It achieves %.2f%s predicted accuracy with %.2f ms latency on %s.' %
(target_hardware, latency_constraint[i], best_info[i][0] * 100, '%', best_info[i][-1], target_hardware))
# visualize the architecture of the searched sub-net
_, net_config, latency = best_info[i]
ofa_network.set_active_subnet(ks=net_config['ks'], d=net_config['d'], e=net_config['e'])
print('Architecture of the searched sub-net:')
print(ofa_network.module_str)
ed = time.time()
print("Time:", ed-st)
return ed-st
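# The roles of the parameters above (population size P, parent ratio r, mutation probability) can be seen in a generic toy evolutionary loop. This sketch maximizes an arbitrary fitness function and is not OFA's EvolutionFinder:

```python
import numpy as np

rng = np.random.RandomState(0)

def fitness(ind):
    # toy stand-in for "predicted accuracy under a latency constraint"
    return -np.sum((ind - 0.5) ** 2)

P, N, r, mutate_prob = 20, 30, 0.25, 0.1
pop = rng.rand(P, 8)                                    # random initial population

for gen in range(N):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(-scores)[: int(P * r)]]    # top-r fraction become parents
    children = parents[rng.randint(len(parents), size=P)].copy()
    mask = rng.rand(*children.shape) < mutate_prob      # mutate a few genes at random
    children[mask] = rng.rand(int(mask.sum()))
    children[0] = parents[0]                            # elitism: keep the current best
    pop = children

best = max(pop, key=fitness)
print(round(float(fitness(best)), 4))
```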
# # Experiments
# DESIGN SPACE: MobileNetV3
ofa_network = ofa.model_zoo.ofa_net('ofa_mbv3_d234_e346_k357_w1.2', pretrained=True)
#ofa_network = ofa.model_zoo.ofa_net('ofa_resnet50', pretrained=True)
print('The OFA Network is ready.')
# DESIGN SPACE: Resnet50D
ofa_network = ofa.model_zoo.ofa_net('ofa_resnet50', pretrained=True)
#ofa_network = ofa.model_zoo.ofa_net('ofa_resnet50', pretrained=True)
print('The OFA Network is ready.')
# DESIGN SPACE: ProxylessNAS
ofa_network = ofa.model_zoo.ofa_net('ofa_proxyless_d234_e346_k357_w1.3', pretrained=True)
#ofa_network = ofa.model_zoo.ofa_net('ofa_resnet50', pretrained=True)
print('The OFA Network is ready.')
# ## 1. Running time for k Latency Constraints
# ### MobileNetV3
latency_constraints = (35, 30, 25, 20, 15)
times = {}
for i in range(len(latency_constraints)):
latency_constraint = latency_constraints[:i+1]
temp_times = 0
for j in range(10):
temp_times += run_top_down_evolutionary_search(latency_constraint)
times[latency_constraint] = temp_times / 10
latency_constraints = (60, 55, 50, 45, 40, 35, 30, 25, 20, 15)
temp_times = 0
for j in range(10):
temp_times += run_top_down_evolutionary_search(latency_constraints)
times[latency_constraints] = temp_times / 10
times
# ### ResNet50D
latency_constraints = (35, 30, 25, 20, 15)
times = {}
for i in range(len(latency_constraints)):
latency_constraint = latency_constraints[:i+1]
temp_times = 0
for j in range(10):
temp_times += run_top_down_evolutionary_search(latency_constraint)
times[latency_constraint] = temp_times / 10
latency_constraints = (60, 55, 50, 45, 40, 35, 30, 25, 20, 15)
temp_times = 0
for j in range(10):
temp_times += run_top_down_evolutionary_search(latency_constraints)
times[latency_constraints] = temp_times / 10
times
# ### ProxylessNAS
latency_constraints = (35, 30, 25, 20, 15)
times = {}
for i in range(len(latency_constraints)):
latency_constraint = latency_constraints[:i+1]
temp_times = 0
for j in range(10):
temp_times += run_top_down_evolutionary_search(latency_constraint)
times[latency_constraint] = temp_times / 10
latency_constraints = (60, 55, 50, 45, 40, 35, 30, 25, 20, 15)
temp_times = 0
for j in range(10):
temp_times += run_top_down_evolutionary_search(latency_constraints)
times[latency_constraints] = temp_times / 10
times
# ## 2. Accuracy of Discovered Subnetworks
# ### MobileNetV3
latency_constraints = (60, 55, 50, 45, 40, 35, 30, 25, 20, 15)
run_top_down_evolutionary_search(latency_constraints)
# ### ResNet50D
latency_constraints = (60, 55, 50, 45, 40, 35, 30, 25, 20, 15)
run_top_down_evolutionary_search(latency_constraints)
# ### ProxylessNAS
latency_constraints = (60, 55, 50, 45, 40, 35, 30, 25, 20, 15)
run_top_down_evolutionary_search(latency_constraints)
# ## Cost of finding a single latency constraint subnetwork
times = {}
latency_constraints = (15, 20, 25, 30, 35, 40, 45, 50, 55, 60)
for i in range(len(latency_constraints)):
lc = latency_constraints[i:i+1]
times[lc] = run_top_down_evolutionary_search(lc)
times.values()
# # DEMO
#
# Run the cell below to see the top down strategy in action!
latency_constraints = (35, 30, 25, 20, 15)
run_top_down_evolutionary_search(latency_constraints)
# File: topdownexperiments.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## String operations
from __future__ import print_function
import numpy as np
author = "kyubyong. https://github.com/Kyubyong/numpy_exercises"
np.__version__
# Q1. Concatenate x1 and x2.
x1 = np.array(['Hello', 'Say'], dtype=str)  # np.str was removed in numpy 1.24; use the builtin str
x2 = np.array([' world', ' something'], dtype=str)
print(np.char.add(x1,x2))
# Q2. Repeat x three times element-wise.
x = np.array(['Hello ', 'Say '], dtype=str)
x1 = np.array(['Hello ', 'Say '], dtype=str)
print(np.char.multiply(x1,3))
# Q3-1. Capitalize the first letter of x element-wise.<br/>
# Q3-2. Lowercase x element-wise.<br/>
# Q3-3. Uppercase x element-wise.<br/>
# Q3-4. Swapcase x element-wise.<br/>
# Q3-5. Title-case x element-wise.<br/>
x = np.array(['Hello', 'Say'], dtype=str)
print(np.char.capitalize(x))
print(np.char.lower(x))
print(np.char.upper(x))
print(np.char.swapcase(x))
print(np.char.title(x))
x = np.array(['heLLo woRLd', 'Say sOmething'], dtype=str)
capitalized = ...
lowered = ...
uppered = ...
swapcased = ...
titlecased = ...
print("capitalized =", capitalized)
print("lowered =", lowered)
print("uppered =", uppered)
print("swapcased =", swapcased)
print("titlecased =", titlecased)
# Q4. Make the length of each element 20 and the string centered / left-justified / right-justified with paddings of `_`.
# +
x = np.array(['hello world', 'say something'], dtype=str)
centered = ...
left = ...
right = ...
print("centered =", centered)
print("left =", left)
print("right =", right)
# +
x = np.array(['hello world', 'say something'], dtype=str)
centered = np.char.center(a=x,fillchar="_",width=20)
left = np.char.ljust(a=x,fillchar="_",width=20)
right = np.char.rjust(a=x,fillchar="_",width=20)
print("centered =", centered)
print("left =", left)
print("right =", right)
# -
# Q5. Encode x in cp500 and decode it again.
x = np.array(['hello world', 'say something'], dtype=str)
encoded = ...
decoded = ...
print("encoded =", encoded)
print("decoded =", decoded)
x = np.array(['hello world', 'say something'], dtype=str)
encoded = np.char.encode(a=x,encoding='cp500')
decoded = np.char.decode(a=encoded,encoding='cp500')
print("encoded =", encoded)
print("decoded =", decoded)
# Q6. Insert a space between characters of x.
x = np.array(['hello world', 'say something'], dtype=str)
x = np.array(['hello world', 'say something'], dtype=str)
np.char.join(" ",x)
# Q7-1. Remove the leading and trailing whitespaces of x element-wise.<br/>
# Q7-2. Remove the leading whitespaces of x element-wise.<br/>
# Q7-3. Remove the trailing whitespaces of x element-wise.
x = np.array([' hello world ', '\tsay something\n'], dtype=str)
stripped = ...
lstripped = ...
rstripped = ...
print("stripped =", stripped)
print("lstripped =", lstripped)
print("rstripped =", rstripped)
x = np.array([' hello world ', '\tsay something\n'], dtype=str)
stripped = np.char.strip(a=x)
lstripped = np.char.lstrip(x)
rstripped = np.char.rstrip(x)
print("stripped =", stripped)
print("lstripped =", lstripped)
print("rstripped =", rstripped)
# Q8. Split the element of x with spaces.
x = np.array(['Hello my name is John'], dtype=str)
x = np.array(['Hello my name is John'], dtype=str)
print(np.char.split(x,sep=" "))
# Q9. Split the element of x to multiple lines.
x = np.array(['Hello\nmy name is John'], dtype=str)
x = np.array(['Hello\nmy name is John'], dtype=str)
np.char.splitlines(x)
# Q10. Make x a numeric string of 4 digits with zeros on its left.
x = np.array(['34'], dtype=str)
x = np.array(['34'], dtype=str)
np.char.rjust(x,fillchar="0",width=4)
np.char.zfill(x, 4)
# Q11. Replace "John" with "Jim" in x.
x = np.array(['Hello my name is John'], dtype=str)
x = np.array(['Hello my name is John'], dtype=str)
np.char.replace(x,old='John',new='Jim')
# ## Comparison
# Q12. Return x1 == x2, element-wise.
x1 = np.array(['Hello', 'my', 'name', 'is', 'John'], dtype=str)
x2 = np.array(['Hello', 'my', 'name', 'is', 'Jim'], dtype=str)
x1 = np.array(['Hello', 'my', 'name', 'is', 'John'], dtype=str)
x2 = np.array(['Hello', 'my', 'name', 'is', 'Jim'], dtype=str)
comp = list()
for i in range(5):
comp.append(x1[i]==x2[i])
print(comp)
np.char.equal(x1, x2)
# Q13. Return x1 != x2, element-wise.
x1 = np.array(['Hello', 'my', 'name', 'is', 'John'], dtype=str)
x2 = np.array(['Hello', 'my', 'name', 'is', 'Jim'], dtype=str)
x1 = np.array(['Hello', 'my', 'name', 'is', 'John'], dtype=str)
x2 = np.array(['Hello', 'my', 'name', 'is', 'Jim'], dtype=str)
comp = list()
for i in range(5):
comp.append(x1[i]!=x2[i])
print(comp)
np.char.not_equal(x1,x2)
# ## String information
# Q14. Count the number of "l" in x, element-wise.
x = np.array(['Hello', 'my', 'name', 'is', 'Lily'], dtype=str)
x = np.array(['Hello', 'my', 'name', 'is', 'Lily'], dtype=str)
np.char.count(x,sub='l')
# Q15. Find the lowest index of "l" in x, element-wise.
# +
x = np.array(['Hello', 'my', 'name', 'is', 'Lily'], dtype=str)
# -
x = np.array(['Hello', 'my', 'name', 'is', 'Lily'], dtype=str)
np.char.find(x,"l")
# Q16-1. Check if each element of x is composed of digits only.<br/>
# Q16-2. Check if each element of x is composed of lower case letters only.<br/>
# Q16-3. Check if each element of x is composed of upper case letters only.
x = np.array(['Hello', 'I', 'am', '20', 'years', 'old'], dtype=str)
out1 = ...
out2 = ...
out3 = ...
print("Digits only =", out1)
print("Lower cases only =", out2)
print("Upper cases only =", out3)
x = np.array(['Hello', 'I', 'am', '20', 'years', 'old'], dtype=str)
out1 = np.char.isdigit(x)
out2 = np.char.islower(x)
out3 = np.char.isupper(x)
print("Digits only =", out1)
print("Lower cases only =", out2)
print("Upper cases only =", out3)
# Q17. Check if each element of x starts with "hi".
x = np.array(['he', 'his', 'him', 'his'], dtype=str)
x = np.array(['he', 'his', 'him', 'his'], dtype=str)
np.char.startswith(x,'hi')
# File: 3_String_operations.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Q6_S5ifuoT-3"
# <div align="center">
# <img alt="if" height="200" style="border-radius:55px;" src=https://www.ifpb.edu.br/imagens/logotipos/ifpb-1>
#
# # **Applied Statistics for Computing/Telematics**
#
# **BACHELOR'S PROGRAM IN COMPUTER ENGINEERING**
#
# **PROFESSOR:** *<NAME>*
#
# **STUDENT:** *<NAME>*
#
#
# ### **EXERCISE LIST**
#
#
# ---------
# + [markdown] id="FzrrCwD-oYQu"
# **Question 1**
# + [markdown] id="hhhoEsYWnXLa"
#
# **Lists:** A list (list) in Python is an ordered sequence or collection of values. The values that make up a list are called elements or items.
#
# + colab={"base_uri": "https://localhost:8080/"} id="5FR57c5cpKm2" outputId="7a713f9d-cffd-4e95-e93d-8f466ca11f0f"
dado=[1,2,3,4,5,6]
print(dado)
# + [markdown] id="73DKJIG3oiW9"
# **Dictionaries:** A dictionary is a collection type that, unlike lists, is built as a mapping from keys (defined by strings) to values.
# + colab={"base_uri": "https://localhost:8080/"} id="3DMm3VJcpZ-d" outputId="a3b00de3-95f7-491b-df69-d3bacad35d97"
dados={'nome':"Thiago", 'idade':"19"}
print(dados['nome'])
print(dados['idade'])
# + [markdown] id="t6QhoQTMqDMS"
# **Numpy arrays:** data structures provided by the numpy library. These arrays are designed to handle large datasets efficiently and easily, especially numeric data. It is also possible to create numpy arrays with more than one dimension.
# + id="VAKvG71hqZNi"
import numpy as np
array_numpy = np.array([1, 2, 3, 4, 5])
array_numpy
# + id="7DmrJegbqtxZ"
array_numpy2 = np.array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])
array_numpy2
# + [markdown] id="Ot-1uSHuq6PA"
# **Series** are one-dimensional labeled arrays capable of holding data of any type (integers, strings, floats, Python objects, etc.). The axis labels are collectively referred to as the index.
# + colab={"base_uri": "https://localhost:8080/"} id="PeoWjkQaN9Xf" outputId="d19f8f76-7a72-4a69-e04b-1e57150e7fc3"
import pandas as pd
ser = pd.Series([1,2,3,4,5,6,7])
print(ser)
# + [markdown] id="Jcq7zroPr0TW"
# **Question 2**
# + colab={"base_uri": "https://localhost:8080/", "height": 269} id="hcipYjBAs7oN" outputId="7a3c57cb-840b-4215-bb1c-0b68aaf5f7a3"
import pandas as pd
import numpy as np
ser = pd.Series(np.random.randint(1, 10, 35))
ser_matriz = np.array(ser).reshape(7,5)
ser_df = pd.DataFrame(ser_matriz)
ser_df.head(7)
# + [markdown] id="UPFkespo4ze-"
# **Question 3**
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="CbEfgtZw416V" outputId="13235c5f-7686-498e-efbc-5167a3d80d57"
import pandas as pd
df1=pd.read_csv("https://raw.githubusercontent.com/selva86/datasets/master/BostonHousing.csv",usecols=["crim","medv"])
dataframe1=pd.DataFrame(df1)
dataframe1.head()
# + [markdown] id="zD6nWqYwEzDY"
# **Question 4**
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="gq4-ayCWE7XB" outputId="b5a755ea-1b31-47a8-c932-c69ce604a1c6"
df= pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/Cars93_miss.csv')
df.rename(columns={'Type':'CarType'}, inplace=True)
df2=list(df.keys())
tamanho=len(df2)
dic={}
for i in range(tamanho):
try:
x=df2[i].replace(".","_")
except:
continue
else:
dic[df2[i]]=x
df.rename(columns=dic)
# + [markdown] id="v83d4yhiPUFt"
# **Question 5**
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="V-Ra2345i_YH" outputId="430e32a8-5772-488f-cb8f-47188b87a8f8"
import pandas as pd
import numpy as np
df=pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/Cars93_miss.csv')
df.isnull().values.any()
# + [markdown] id="MfULNmcykVX0"
# **Question 6**
# + colab={"base_uri": "https://localhost:8080/"} id="H9yJpi6jkXVT" outputId="8306172d-e055-4244-deb4-531c4975f1aa"
import pandas as pd
import numpy as np
df=pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/Cars93_miss.csv')
df2=df.isnull().sum()
print(sum(df2))
print(df2.idxmax())
# + [markdown] id="4N_7-XO-lxiR"
# **Question 7**
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="PLAuiIoCl2C5" outputId="a51cba47-5159-416b-97db-82105838577a"
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(20).reshape(-1, 5), columns=list('abcde'))
dataframe1=pd.DataFrame(df["a"])
dataframe1.head()
# + [markdown] id="9pH2gp-JnIkX"
# **Question 8**
# + [markdown] id="IzOAA4tBpvZB"
# Part **A**
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="o7A8QQY6nLcv" outputId="477e30f7-5770-4040-84ef-c639f6bb4514"
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(20).reshape(-1, 5), columns=list('abcde'))
df=df[['c','b','a','d','e']]
df
# + [markdown] id="F_GWhDDDp0JR"
# Part **B**
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="FokBMecyqiSg" outputId="d68a2030-9ad9-4e3d-c677-3a09e5df4b6a"
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(20).reshape(-1, 5), columns=list('abcde'))
cols = list(df)
cols = [cols[-1]] + cols[:-1]
df = df[cols]
df
# + [markdown] id="flAIyBOxrgHn"
# Part **C**
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="Zlx9Ew5urii3" outputId="5a3b2776-4abc-4f06-b147-a0197fa48a40"
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(20).reshape(-1, 5), columns=list('abcde'))
x=df.columns.to_list()
x.reverse()
df[x]
# + [markdown] id="h9VLECoGszQD"
# **Question 9**
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="CxLDkzK6s1Ys" outputId="f02fb6f9-c223-4fdf-f124-3f334c0ef09b"
import pandas as pd
import numpy as np
df=pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/Cars93_miss.csv')
df2=df.loc[0:10]
df3=df2.iloc[:,0:10]
df3
# + [markdown] id="gOhhk36XuW85"
# **Question 10**
# + colab={"base_uri": "https://localhost:8080/", "height": 707} id="zy0uUCr0ugpJ" outputId="81849a80-e0b4-4f7f-8029-0a19d50601da"
import pandas as pd
import numpy as np
df=pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/Cars93_miss.csv')
df2=df.loc[0:20]
df3=df2.iloc[:,0:3]
df3
# + [markdown] id="WZQkGIRJuyNA"
# **Question 11**
# + colab={"base_uri": "https://localhost:8080/", "height": 112} id="HdBzqIiDu1ev" outputId="1285e644-db99-489e-8d61-be068003a91e"
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(10, 40, 60).reshape(-1, 4), columns=list(range(1,5)))
df = df.loc[df.sum(axis=1) > 100]
df.tail(2)
# + [markdown] id="ZvGXVpGPvrcV"
# **Question 12**
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="meCsCUDzvuxN" outputId="875672af-932e-409a-ab38-1b9de8aacbac"
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(25).reshape(5, -1))
df.loc[[0,2,1,3,4]]
# File: ATIVIDADES/Lista 1/Atividade_Estatistica_Aplicada 1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="eLVaBudcMH4K" executionInfo={"status": "ok", "timestamp": 1605519905548, "user_tz": -180, "elapsed": 7327, "user": {"displayName": "\u00d6z<NAME>\u00fcnde\u015f", "photoUrl": "<KEY>", "userId": "18283115603852129832"}} outputId="3cdd53d5-0903-47a1-c89d-63af0b3d2ef9" colab={"base_uri": "https://localhost:8080/"}
# !pip install gdown
# !pip install gensim
# + id="EoaTSElSSl3n" executionInfo={"status": "ok", "timestamp": 1605520616186, "user_tz": -180, "elapsed": 8313, "user": {"displayName": "\u00d6z<NAME>\u00fcnde\u015f", "photoUrl": "<KEY>", "userId": "18283115603852129832"}} outputId="011a34a0-7db4-41fc-8feb-dd03f919585c" colab={"base_uri": "https://localhost:8080/"}
# !gdown https://drive.google.com/uc?id=1NNpbV5N9MDj4vn-k5ANT1nqY9-z_vGRa
# !gdown https://drive.google.com/uc?id=18iBU4VFLg8zCrh5iZ66mbLJ51iJaWs4Z
#Get pretrained word2vec vectors - this file size is around 350 MB
# !gdown https://drive.google.com/uc?id=1r0A1UNRIlZy9QgTU_aMDgpw1FcI8HnDE
# + id="rXQlYHeG-Nxv"
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# some libraries emit FutureWarnings about upcoming releases; this suppresses them to keep the output clean
# + [markdown] id="WA4gLEPs-R9C"
# # WORD2VEC OVERVIEW
# + id="8gBilbeU-S39"
from gensim.models import KeyedVectors
word2vec = KeyedVectors.load_word2vec_format(r"/content/turkish_word2vec_300D" , binary=True)
# + [markdown] id="gaAOatP_-bqO"
# All word vectors are 300 dimensional dense vectors.
# + id="EqSplCCa-cgf" executionInfo={"status": "ok", "timestamp": 1605520311576, "user_tz": -180, "elapsed": 697, "user": {"displayName": "\u00d6zcan G\u00fcnde\u015f", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhGIhvRjWYIAWgTEo9MZUquRSLycnEWlRtvZg8y=s64", "userId": "18283115603852129832"}} outputId="98dc0282-aa01-42d6-a09b-7d7ab349aecc" colab={"base_uri": "https://localhost:8080/"}
word2vec["gözlük"].shape
# + [markdown] id="aC5UEQMU-gxR"
# Thanks to these dense vectors, semantic and syntactic similarities between words can be captured, because vectors of similar words are closer to each other in the 300-d vector space than vectors of unrelated words.
# + id="RsPsosmc-gST" executionInfo={"status": "ok", "timestamp": 1605520345919, "user_tz": -180, "elapsed": 642, "user": {"displayName": "\u00d6z<NAME>\u00fcnde\u015f", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhGIhvRjWYIAWgTEo9MZUquRSLycnEWlRtvZg8y=s64", "userId": "18283115603852129832"}} outputId="86244913-b971-4a0d-805d-f6bfd565133c" colab={"base_uri": "https://localhost:8080/"}
word2vec.most_similar("kalem",topn=3) # top 3 similar words for "kalem" (pen): kalemi (syntactic similarity) and mürekkepli & hokka (semantic similarity)
# + id="Ssum_3-n-eI5"
word2vec.most_similar("hollanda",topn=3)
# + [markdown] id="MiqVOfWe-kzy"
# By utilizing these relationships, it is possible to create both syntactic and semantic analogies.
# + id="jnOPcY0M-nPw"
def analogy(x1, x2, y1):
result = word2vec.most_similar(positive=[y1, x2], negative=[x1])
return result[0][0]
# + id="w5JbOJok-nMV"
#semantic analogies
analogy('kral', 'erkek', 'kraliçe')
# + id="gj79p4DS-qX5"
#syntactic analogies
analogy('uzun', 'uzunluk', 'kalın')
# + [markdown] id="RoLYp7id-uub"
# Let's see the words and their positions in the 2-dimensional vector space by using dimension reduction techniques (PCA)!
# + id="XMer1pCv-vsX"
import numpy as np
from sklearn.decomposition import PCA
# %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def display_pca_scatterplot(model, words):
word_vectors = np.array([model[w] for w in words])
twodim = PCA().fit_transform(word_vectors)[:,:2]
plt.figure(figsize=(8,8))
plt.scatter(twodim[:,0], twodim[:,1], edgecolors='k', c='r')
for word, (x,y) in zip(words, twodim):
plt.text(x+0.05, y+0.05, word)
plt.show()
# + id="eTajUpIH-xyS"
words=['kahve', 'çay', 'bira', 'şarap', 'süt',
'makarna', 'sosis', 'pizza', 'kebap',
'fransa', 'italya', 'almanya', 'türkiye']
display_pca_scatterplot(word2vec,words)
# + [markdown] id="Xq6J5zfS-1vL"
# # kitapyurdu.com - REVIEWS AND RATINGS DATASET
# + [markdown] id="coY9c68Z07m8"
# The original kitapyurdu.com dataset contains around 125K reviews and ratings. However, training a model on that amount takes a lot of time, so we prepared a smaller, highly balanced subset in advance. **Handling imbalanced data is an advanced topic; if you are interested, [you can follow this paper.](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5128907)** The script containing the data-preparation steps will be shared with participants after the workshop.
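As a rough illustration of the balancing idea (not the actual preparation script; the `review`/`label` column names are assumed to mirror the workshop data), downsampling every class to the size of the smallest one yields a balanced subset:

```python
import pandas as pd

# Hypothetical miniature of the raw data, for illustration only.
raw = pd.DataFrame({
    'review': ['a'] * 6 + ['b'] * 3 + ['c'] * 2,
    'label':  [2] * 6 + [0] * 3 + [1] * 2,
})

# Downsample every class to the size of the smallest class.
n = raw['label'].value_counts().min()
balanced = pd.concat(
    g.sample(n, random_state=42) for _, g in raw.groupby('label')
).reset_index(drop=True)

print(balanced['label'].value_counts())  # every class now has n rows
```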
# + id="3ybq5z-VTIpL"
import pandas as pd
df_train = pd.read_csv(r"train.csv",index_col=[0])
# Report the number of sentences.
print('Number of training sentences: {:,}\n'.format(df_train.shape[0]))
print()
print(df_train["label"].value_counts())
df_test = pd.read_csv(r"test.csv",index_col=[0])
# Report the number of sentences.
print()
print('Number of test sentences: {:,}\n'.format(df_test.shape[0]))
print()
print(df_test["label"].value_counts())
# + [markdown] id="BszOL51aTtqg"
# **Classes:**
#
#
#
# 0. Negative
# 1. Neutral
# 2. Positive
#
# + id="OBn9PCAmhMj1"
df_train.head()
# + id="q6ms7mlP21GO"
df_test.head()
# + [markdown] id="NKa4xUcZUUxv"
# ## TEXT PREPROCESSING
# + id="e6weP98lz794"
df_train.review[0]
# + id="EFfBYYckIyFy"
import numpy as np
import nltk
import string as s
import re
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from wordcloud import WordCloud
from sklearn.metrics import f1_score,accuracy_score
from sklearn.metrics import confusion_matrix,classification_report
# + id="sMVVlIeMw90L"
def preprocess(text,remove_stop_punc=False):
text=text.lower()
text=text.replace("\n"," ")
    #remove URLs (the https? pattern also matches plain http links)
    text = re.sub(r'https?:\/\/.*[\r\n]*', '', text)
    #decode leftover HTML entities: &amp; -> "and", &lt; -> <, &gt; -> >
    text=text.replace('&amp;',' and ')
    text=text.replace('&lt;','<')
    text=text.replace('&gt;','>')
#remove hashtags
text=re.sub(r"#[A-Za-z0-9]+","",text)
#remove \
text=re.sub(r"\\ "," ",text)
#remove punctuations and stop words
stop_words=stopwords.words('turkish')
tokens=nltk.word_tokenize(text)
if remove_stop_punc:
tokens_new=[i for i in tokens if not i in stop_words and i.isalpha()] #isalpha() method returns True if all the characters are alphabet letters
else:
tokens_new=tokens
#remove excess whitespace
text= ' '.join(tokens_new)
return text
df_train["review"]=df_train["review"].apply(preprocess,remove_stop_punc=True)
df_test["review"]=df_test["review"].apply(preprocess,remove_stop_punc=True)
#Remove reviews left with fewer than two words after preprocessing
df_train["Text_length"] = [len(text.split(' ')) for text in df_train.review]
df_train = df_train[df_train["Text_length"]>1]
#Remove reviews left with fewer than two words after preprocessing
df_test["Text_length"] = [len(text.split(' ')) for text in df_test.review]
df_test = df_test[df_test["Text_length"]>1]
# + id="rXkB_rhm3hWN"
df_train.review[0]
# + [markdown] id="AtO8WRMT4J3o"
# ##### **BEFORE PREPROCESSING:**
#
# > Kitabı bugün 3. defa kaldığım yerden devam etmek için elime aldım. Ancak yine daha iyi anlamak için baştan başladım(3. defa). Hatta kararsızım okuyup okumamakta, çünkü bir türlü ilerleyemiyorum kitapta bayağı sıkıyor beni. Hadi bakalım inşallah bu sefer bitireceğim kitabı. Son yapılan yoruma istinaden umarım 140. sayfadan sonra ben de aynı düşüncelere sahip olurum.
#
# + [markdown] id="2uMXRAqONVhW"
# # FEATURE EXTRACTION WITH WORD2VEC
# + [markdown] id="_yVYL1hM_KA3"
# To obtain a vector representation for a whole review, the word vectors must be aggregated. There are many approaches in practice; in this workshop, we take the average of the word vectors.
#
# Also, the related function includes an if condition that checks whether each word exists in the vocabulary, because Word2vec does not have a vector for every possible word. Words missing from the vocabulary give rise to the out-of-vocabulary (OOV) problem.
# + id="kXTPQlnq_Nis"
def vocab_control(word):
if word in word2vec.vocab.keys():
print("{} exists in the vocabulary".format(word))
else:
print("{} does not exist in the vocabulary and it is the example for OOV problem".format(word))
# + id="FU-l7oSm_OlV"
vocab_control("isveç")
# + id="-Cr1zcUc_OXG"
vocab_control("avusturalya") #typo exists on purpose
# + id="7sU5NBaf_SmS"
from nltk.tokenize import WordPunctTokenizer
WPT = WordPunctTokenizer()
def news_embed(sentence):
sentence=sentence.lower()
tokens=WPT.tokenize(sentence)
new_tokens=[token for token in tokens]
sent_list=[]
for word in new_tokens:
if word in word2vec.vocab.keys():
wv=word2vec[word]
sent_list.append(wv)
else:
continue
if len(sent_list)<1:
dummy=np.random.normal(0.5, 0.25, 300)
sent_list.append(dummy)
sent_embed= np.mean(sent_list,axis=0)
return sent_embed
# + id="enTsfO00_VJz"
sentence="Principai özelleştirilmiş NLP çözümleri sunan bir şirkettir."
# + id="mjalYSkL_VfD"
a=news_embed(sentence)
a.shape
# + id="ovq3Srr3_XGz" executionInfo={"status": "ok", "timestamp": 1605520995742, "user_tz": -180, "elapsed": 434, "user": {"displayName": "\u00d6<NAME>\u00fcnde\u015f", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhGIhvRjWYIAWgTEo9MZUquRSLycnEWlRtvZg8y=s64", "userId": "18283115603852129832"}} outputId="6cd1c32e-43fc-4b53-f100-832f8b2024c7" colab={"base_uri": "https://localhost:8080/"}
type(a)
# + id="uCfpccZ88oXv"
texts = df_train.review
labels = df_train.label
from sklearn.model_selection import train_test_split
train_x, valid_x, train_y, valid_y = train_test_split(texts, labels, random_state=42, test_size=0.2)
test_x=df_test.review
test_y=df_test.label
# + id="rlKrauOgNaWt"
train_array=np.array([news_embed(news) for news in train_x])
valid_array=np.array([news_embed(news) for news in valid_x])
test_array=np.array([news_embed(news) for news in test_x])
train_array.shape
# + [markdown] id="Kep8nsKO869a"
# # MODELLING
# + [markdown] id="UA5HEfqbUmKR"
# The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). However, **it requires non-negative inputs.** Since w2v vectors may have negative values, we will apply the Linear Support Vector Classifier algorithm via the [scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC) library.
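As an aside, if you did want to try Multinomial Naive Bayes anyway, a common workaround (sketched here with plain NumPy; not part of the workshop pipeline) is to rescale each embedding dimension into a non-negative range first:

```python
import numpy as np

def min_max_scale(X):
    # Rescale each feature column to [0, 1], the same transform
    # scikit-learn's MinMaxScaler applies with default settings.
    mins = X.min(axis=0)
    rng = X.max(axis=0) - mins
    rng[rng == 0] = 1.0  # guard against division by zero for constant columns
    return (X - mins) / rng

X = np.array([[-1.0, 2.0], [0.5, -3.0], [1.0, 0.0]])
X_scaled = min_max_scale(X)
print(X_scaled.min(), X_scaled.max())  # 0.0 1.0
```

Rescaling changes the feature values, so results would differ from the LinearSVC pipeline used below.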
# + id="LD-G4BlpPPOd"
from sklearn.metrics import classification_report
from sklearn.svm import LinearSVC
model = LinearSVC(random_state=0,max_iter=2000)
model.fit(train_array, train_y)
pred=model.predict(valid_array)
from sklearn.metrics import accuracy_score,confusion_matrix
print("\nAccuracy of W2V and LinearSVC over validation set is:",accuracy_score(valid_y, pred))
print(classification_report(valid_y, pred))
# + [markdown] id="acW1Hhw1Ah_U"
# # TESTING THE MODEL
# + id="aB4qjYQkRLpT"
test_pred=model.predict(test_array)
from sklearn.metrics import accuracy_score,confusion_matrix
print("\nAccuracy of W2V and LinearSVC over test set is:",accuracy_score(test_y, test_pred))
print(classification_report(test_y, test_pred))
# + id="0-_1WqWtBVml"
cm = confusion_matrix(test_y, test_pred)
labels = ['Negative', 'Neutral', 'Positive']
from mlxtend.plotting import plot_confusion_matrix
plt.figure()
plot_confusion_matrix(cm, figsize=(8,8), hide_ticks=True, cmap=plt.cm.Blues)
plt.xticks(range(3), labels, fontsize=12)
plt.yticks(range(3), labels, fontsize=12)
plt.show()
# + id="mJzjFwmWIu1q"
Sentiment Analysis - TR/Sentiment_Analysis_TR_W2V.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Equations
# The equations in the previous lab included one variable, for which you solved the equation to find its value. Now let's look at equations with multiple variables. For reasons that will become apparent, equations with two variables are known as linear equations.
#
# ## Solving a Linear Equation
# Consider the following equation:
#
# \begin{equation}2y + 3 = 3x - 1 \end{equation}
#
# This equation includes two different variables, **x** and **y**. These variables depend on one another; the value of x is determined in part by the value of y and vice-versa; so we can't solve the equation and find absolute values for both x and y. However, we *can* solve the equation for one of the variables and obtain a result that describes a relative relationship between the variables.
#
# For example, let's solve this equation for y. First, we'll get rid of the constant on the right by adding 1 to both sides:
#
# \begin{equation}2y + 4 = 3x \end{equation}
#
# Then we'll use the same technique to move the constant on the left to the right to isolate the y term by subtracting 4 from both sides:
#
# \begin{equation}2y = 3x - 4 \end{equation}
#
# Now we can deal with the coefficient for y by dividing both sides by 2:
#
# \begin{equation}y = \frac{3x - 4}{2} \end{equation}
#
# Our equation is now solved. We've isolated **y** and defined it as <sup>3x-4</sup>/<sub>2</sub>
#
# While we can't express **y** as a particular value, we can calculate it for any value of **x**. For example, if **x** has a value of 6, then **y** can be calculated as:
#
# \begin{equation}y = \frac{3\cdot6 - 4}{2} \end{equation}
#
# This gives the result <sup>14</sup>/<sub>2</sub> which can be simplified to 7.
#
# You can view the values of **y** for a range of **x** values by applying the equation to them using the following Python code:
# +
import pandas as pd
# Create a dataframe with an x column containing values from -10 to 10
df = pd.DataFrame ({'x': range(-10, 11)})
# Add a y column by applying the solved equation to x
df['y'] = (3*df['x'] - 4) / 2
#Display the dataframe
df
# -
# We can also plot these values to visualize the relationship between x and y as a line. For this reason, equations that describe a relative relationship between two variables are known as *linear equations*:
# +
# %matplotlib inline
from matplotlib import pyplot as plt
plt.plot(df.x, df.y, color="grey", marker = "o")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.show()
# -
# In a linear equation, a valid solution is described by an ordered pair of x and y values. For example, valid solutions to the linear equation above include:
# - (-10, -17)
# - (0, -2)
# - (9, 11.5)
#
# The cool thing about linear equations is that we can plot the points for some specific ordered pair solutions to create the line, and then interpolate the x value for any y value (or vice-versa) along the line.
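For instance, rearranging y = <sup>3x-4</sup>/<sub>2</sub> gives x = <sup>2y+4</sup>/<sub>3</sub>, so we can recover the x value for any y on the line:

```python
def x_for_y(y):
    # From y = (3x - 4) / 2:  2y = 3x - 4,  so  x = (2y + 4) / 3
    return (2 * y + 4) / 3

print(x_for_y(-2))    # 0.0  -> the ordered pair (0, -2)
print(x_for_y(11.5))  # 9.0  -> the ordered pair (9, 11.5)
```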
# ## Intercepts
# When we use a linear equation to plot a line, we can easily see where the line intersects the X and Y axes of the plot. These points are known as *intercepts*. The *x-intercept* is where the line intersects the X (horizontal) axis, and the *y-intercept* is where the line intersects the Y (vertical) axis.
#
# Let's take a look at the line from our linear equation with the X and Y axis shown through the origin (0,0).
# +
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
## add axis lines for 0,0
plt.axhline()
plt.axvline()
plt.show()
# -
# The x-intercept is the point where the line crosses the X axis, and at this point, the **y** value is always 0. Similarly, the y-intercept is where the line crosses the Y axis, at which point the **x** value is 0. So to find the x-intercept, we solve the equation for **x** when **y** is 0; to find the y-intercept, we solve for **y** when **x** is 0.
#
# For the x-intercept, our equation looks like this:
#
# \begin{equation}0 = \frac{3x - 4}{2} \end{equation}
#
# Which can be reversed to make it look more familiar with the x expression on the left:
#
# \begin{equation}\frac{3x - 4}{2} = 0 \end{equation}
#
# We can multiply both sides by 2 to get rid of the fraction:
#
# \begin{equation}3x - 4 = 0 \end{equation}
#
# Then we can add 4 to both sides to get rid of the constant on the left:
#
# \begin{equation}3x = 4 \end{equation}
#
# And finally we can divide both sides by 3 to get the value for x:
#
# \begin{equation}x = \frac{4}{3} \end{equation}
#
# Which simplifies to:
#
# \begin{equation}x = 1\frac{1}{3} \end{equation}
#
# So the x-intercept is 1<sup>1</sup>/<sub>3</sub> (approximately 1.333).
#
# To get the y-intercept, we solve the equation for y when x is 0:
#
# \begin{equation}y = \frac{3\cdot0 - 4}{2} \end{equation}
#
# Since 3 times 0 is 0, this can be simplified to:
#
# \begin{equation}y = \frac{-4}{2} \end{equation}
#
# -4 divided by 2 is -2, so:
#
# \begin{equation}y = -2 \end{equation}
#
# This gives us our y-intercept, so we can plot both intercepts on the graph:
# +
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
## add axis lines for 0,0
plt.axhline()
plt.axvline()
plt.annotate('x-intercept',(1.333, 0))
plt.annotate('y-intercept',(0,-2))
plt.show()
# -
# The ability to calculate the intercepts for a linear equation is useful, because you can calculate only these two points and then draw a straight line through them to create the entire line for the equation.
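We can verify this numerically: the slope implied by the two intercepts alone reproduces every point of the original equation.

```python
x_int, y_int = 4/3, -2  # x-intercept (4/3, 0) and y-intercept (0, -2)

# Slope through the two intercepts: rise over run.
m = (0 - y_int) / (x_int - 0)
print(round(m, 6))  # 1.5

# Any x then maps back onto the original equation y = (3x - 4)/2.
for x in (-10, 0, 6):
    assert abs((m * x + y_int) - (3 * x - 4) / 2) < 1e-9
```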
# ## Slope
# It's clear from the graph that the line from our linear equation describes a slope in which values increase as we travel up and to the right along the line. It can be useful to quantify the slope in terms of how much **y** increases (or decreases) for a given change in **x**. In the notation for this, we use the greek letter Δ (*delta*) to represent change:
#
# \begin{equation}slope = \frac{\Delta{y}}{\Delta{x}} \end{equation}
#
# Sometimes slope is represented by the variable ***m***, and the equation is written as:
#
# \begin{equation}m = \frac{y_{2} - y_{1}}{x_{2} - x_{1}} \end{equation}
#
# Although this form of the equation is a little more verbose, it gives us a clue as to how we calculate slope. What we need is any two ordered pairs of x,y values for the line - for example, we know that our line passes through the following two points:
# - (0,-2)
# - (6,7)
#
# We can take the x and y values from the first pair, and label them x<sub>1</sub> and y<sub>1</sub>; and then take the x and y values from the second point and label them x<sub>2</sub> and y<sub>2</sub>. Then we can plug those into our slope equation:
#
# \begin{equation}m = \frac{7 - -2}{6 - 0} \end{equation}
#
# This is the same as:
#
# \begin{equation}m = \frac{7 + 2}{6 - 0} \end{equation}
#
# That gives us the result <sup>9</sup>/<sub>6</sub> which is 1<sup>1</sup>/<sub>2</sub> or 1.5.
#
# So what does that actually mean? Well, it tells us that for every change of **1** in x, **y** changes by 1<sup>1</sup>/<sub>2</sub> or 1.5. So if we start from any point on the line and move one unit to the right (along the X axis), we'll need to move 1.5 units up (along the Y axis) to get back to the line.
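The same calculation can be written as a small helper that takes any two points on the line:

```python
def slope(p1, p2):
    # m = (y2 - y1) / (x2 - x1)
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((0, -2), (6, 7)))  # 1.5
```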
#
# You can plot the slope onto the original line with the following Python code to verify it fits:
# +
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
# set the slope
m = 1.5
# get the y-intercept
yInt = -2
# plot the slope from the y-intercept for 1x
mx = [0, 1]
my = [yInt, yInt + m]
plt.plot(mx,my, color='red', lw=5)
plt.show()
# -
# ### Slope-Intercept Form
# One of the great things about algebraic expressions is that you can write the same equation in multiple ways, or *forms*. The *slope-intercept form* is a specific way of writing a 2-variable linear equation so that the equation definition includes the slope and y-intercept. The generalised slope-intercept form looks like this:
#
# \begin{equation}y = mx + b \end{equation}
#
# In this notation, ***m*** is the slope and ***b*** is the y-intercept.
#
# For example, let's look at the solved linear equation we've been working with so far in this section:
#
# \begin{equation}y = \frac{3x - 4}{2} \end{equation}
#
# Now that we know the slope and y-intercept for the line that this equation defines, we can rewrite the equation as:
#
# \begin{equation}y = 1\frac{1}{2}x + -2 \end{equation}
#
# You can see intuitively that this is true. In our original form of the equation, to find y we multiply x by three, subtract 4, and divide by two - in other words, y is half of 3x - 4, which is 1.5x - 2. So these equations are equivalent, but the slope-intercept form has the advantages of being simpler and including two key pieces of information we need to plot the line represented by the equation. We know the y-intercept that the line passes through (0, -2), and we know the slope of the line (for every 1 we add to x, we add 1.5 to y).
#
# Let's recreate our set of test x and y values using the slope-intercept form of the equation, and plot them to prove that this describes the same line:
# +
# %matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
# Create a dataframe with an x column containing values from -10 to 10
df = pd.DataFrame ({'x': range(-10, 11)})
# Define slope and y-intercept
m = 1.5
yInt = -2
# Add a y column by applying the slope-intercept equation to x
df['y'] = m*df['x'] + yInt
# Plot the line
from matplotlib import pyplot as plt
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
# label the y-intercept
plt.annotate('y-intercept',(0,yInt))
# plot the slope from the y-intercept for 1x
mx = [0, 1]
my = [yInt, yInt + m]
plt.plot(mx,my, color='red', lw=5)
plt.show()
Python/Module01/01-02-Linear Equations.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.8 64-bit (''base'': conda)'
# language: python
# name: python36864bitbaseconda8927a157b823400bbd9b5c06398ca37f
# ---
# ## Simulating how the generated routes change as the number of vehicles varies, for a constant demand
#imports
from __future__ import print_function
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
from latlong_vrp import append_list_as_row, update_file, makefile, plotgraph,update_fixed
import sys
import pandas as pd
import time
import math
import random
filepath = r'/Users/adityagoel/Documents/Thesis/Vrp-CurrentProgress/'
update_file(filepath)
sitepositions = pd.read_csv(''.join([filepath, 'site_position.csv']))
count = len(sitepositions)
sapids = sitepositions['SAP_ID'].tolist()
def vehicle_demand_initialiser(count):
vehicle_demands = []
for i in range(count):
vehicle_demands.append(random.randint(1, 10))
with open('data_model_demand.txt', 'w') as f:
for demand in vehicle_demands:
f.write(str(demand) +'\n')
f.close()
# ## Data Model
# +
def create_data_model():
"""Stores the data for the problem."""
matrix_data = pd.read_csv('newdata.csv')
matrix = matrix_data.values.tolist()
data = {}
data['distance_matrix'] = []
for distance_data_row in matrix:
data['distance_matrix'].append(distance_data_row)
data['depot'] = 0
data['num_vehicles'] = 10
data['vehicle_capacities'] = [1] * data['num_vehicles']
# Only needs to be done once
vehicle_demand_initialiser(len(data['distance_matrix'][0]))
data['demands'] = []
with open('data_model_demand.txt', 'r') as f:
for line in f:
data['demands'].append((int(line)))
f.close()
return data
# -
# ## Printer Function
def print_solution(data, manager, routing, solution):
"""Prints solution on console."""
total_distance = 0
total_load = 0
max_route_distance = 0
for vehicle_id in range(data['num_vehicles']):
sap_index = []
index = routing.Start(vehicle_id)
plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
route_distance = 0
route_load = 0
while not routing.IsEnd(index):
node_index = manager.IndexToNode(index)
route_load += data['demands'][node_index]
plan_output += ' {0} -> '.format(node_index)
sap_index.append(manager.IndexToNode(index))
previous_index = index
index = solution.Value(routing.NextVar(index))
route_distance += routing.GetArcCostForVehicle(previous_index, index, vehicle_id)
plan_output += ' {0} \n'.format(manager.IndexToNode(index))
plan_output += 'Distance of the route: {}\n'.format(route_distance)
plan_output += 'Load of the route: {}\n'.format(route_load)
sap_index.append(manager.IndexToNode(index))
print(plan_output)
for z in sap_index:
print(sapids[z],end=" -> ")
print("\n")
total_distance += route_distance
total_load += route_load
with open('output/{}_vehicle_result.txt'.format(data['num_vehicles']), 'a') as f:
print(plan_output, file=f)
for z in sap_index:
print(sapids[z],end=" -> ",file=f)
print("\n",file = f)
max_route_distance = max(route_distance, max_route_distance)
print('Maximum of the route distances: {}'.format(max_route_distance))
print('Total distance of all routes: {}'.format(total_distance))
#print('Total load of all routes: {}'.format(total_load))
# ### Callbacks and finding solution
# +
def vehicle_simulation_wrapper(data, number_of_vehicles, vehicle_capacity):
# Maximum demand = 556 * 10 = 5560
# Average demand = 556 * 5 = 2780
# Capacity of Truck = vehicle_capacity
# Min number of vehicle for solution = 2780/vehicle_capacity + 1
'''UPDATE: DATA MODEL'''
data['num_vehicles'] = number_of_vehicles
data['vehicle_capacities'] = [vehicle_capacity] * number_of_vehicles
'''CREATE MANAGER'''
manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']), data['num_vehicles'], data['depot'])
routing = pywrapcp.RoutingModel(manager)
'''CALLBACK FUNCTIONS -- DISTANCE AND DEMAND'''
def distance_callback(from_index, to_index):
from_node = manager.IndexToNode(from_index)
to_node = manager.IndexToNode(to_index)
return data['distance_matrix'][from_node][to_node]
def demand_callback(from_index):
from_node = manager.IndexToNode(from_index)
return data['demands'][from_node]
'''CALLBACKS -- TRANSIT (DISTANCE) AND DEMAND'''
transit_callback_index = routing.RegisterTransitCallback(distance_callback)
demand_callback_index = routing.RegisterUnaryTransitCallback(demand_callback)
'''ROUTING DIMENSION'''
# Dimensions keep track of quantities that accumulate along a vehicle's route
# Such as travel time or the total weight it is carrying.
# If such a quantity exists in constraint (CVRP) or objective function (distance travelled)
# Then we need to define a dimension for it
routing.AddDimensionWithVehicleCapacity(
demand_callback_index,
0, # null capacity slack
data['vehicle_capacities'], # vehicle maximum capacities - handles general case of varying capacity
True, # start cumulative to zero
'Capacity')
''' OBJECTIVE FUNCTION'''
# Define cost of each arc.
routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
''' SEARCH PARAMETERS'''
# Setting first solution heuristic.
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
search_parameters.local_search_metaheuristic = (routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
search_parameters.time_limit.FromSeconds(1)
# Solve the problem.
solution = routing.SolveWithParameters(search_parameters)
''' PRINT TO FILE '''
print("Solution with {} vehicles\n".format(number_of_vehicles))
    if solution:
        with open('output/{}_vehicle_routes'.format(number_of_vehicles), 'w') as f:
            f.write(" Solution with {} vehicles\n".format(number_of_vehicles))
            f.write('\n' + '-x-'*50 + '\n')
        print_solution(data, manager, routing, solution)
    else:
        print('No solution found for {} vehicles.'.format(number_of_vehicles))
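The fleet-sizing arithmetic from the comments above can be made concrete: the minimum number of vehicles is the total demand divided by a single vehicle's capacity, rounded up. A standalone sketch, independent of the OR-Tools model:

```python
import math

def min_vehicles(total_demand, vehicle_capacity):
    # Each vehicle can carry at most vehicle_capacity units,
    # so round the demand/capacity ratio up to the next integer.
    return math.ceil(total_demand / vehicle_capacity)

print(min_vehicles(2780, 1000))  # 3: average total demand from the comment
print(min_vehicles(5560, 1000))  # 6: maximum total demand from the comment
```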
# +
def main():
# Instantiate the data problem.
data = create_data_model()
number_of_vehicles = [7, 10]
#Capacity can be different for different vehicle,[100, 200, 500, 50, 750].
#TODO: Change function definition and use args for vehicle capacity
vehicle_capacity = 1000
for number in number_of_vehicles:
vehicle_simulation_wrapper(data, number, vehicle_capacity)
# -
if __name__ == '__main__':
main()
# # Testing Things
# +
# def create_data_model(count, max_num_of_vehicles):
def create_data_model():
"""Stores the data for the problem."""
matrix_data = pd.read_csv('newdata.csv')
matrix = matrix_data.values.tolist()
data = {}
# data['distance_matrix'] = []
data['distance_matrix'] = [
[
0, 548, 776, 696, 582, 274, 502, 194, 308, 194, 536, 502, 388, 354,
468, 776, 662
],
[
548, 0, 684, 308, 194, 502, 730, 354, 696, 742, 1084, 594, 480, 674,
1016, 868, 1210
],
[
776, 684, 0, 992, 878, 502, 274, 810, 468, 742, 400, 1278, 1164,
1130, 788, 1552, 754
],
[
696, 308, 992, 0, 114, 650, 878, 502, 844, 890, 1232, 514, 628, 822,
1164, 560, 1358
],
[
582, 194, 878, 114, 0, 536, 764, 388, 730, 776, 1118, 400, 514, 708,
1050, 674, 1244
],
[
274, 502, 502, 650, 536, 0, 228, 308, 194, 240, 582, 776, 662, 628,
514, 1050, 708
],
[
502, 730, 274, 878, 764, 228, 0, 536, 194, 468, 354, 1004, 890, 856,
514, 1278, 480
],
[
194, 354, 810, 502, 388, 308, 536, 0, 342, 388, 730, 468, 354, 320,
662, 742, 856
],
[
308, 696, 468, 844, 730, 194, 194, 342, 0, 274, 388, 810, 696, 662,
320, 1084, 514
],
[
194, 742, 742, 890, 776, 240, 468, 388, 274, 0, 342, 536, 422, 388,
274, 810, 468
],
[
536, 1084, 400, 1232, 1118, 582, 354, 730, 388, 342, 0, 878, 764,
730, 388, 1152, 354
],
[
502, 594, 1278, 514, 400, 776, 1004, 468, 810, 536, 878, 0, 114,
308, 650, 274, 844
],
[
388, 480, 1164, 628, 514, 662, 890, 354, 696, 422, 764, 114, 0, 194,
536, 388, 730
],
[
354, 674, 1130, 822, 708, 628, 856, 320, 662, 388, 730, 308, 194, 0,
342, 422, 536
],
[
468, 1016, 788, 1164, 1050, 514, 514, 662, 320, 274, 388, 650, 536,
342, 0, 764, 194
],
[
776, 868, 1552, 560, 674, 1050, 1278, 742, 1084, 810, 1152, 274,
388, 422, 764, 0, 798
],
[
662, 1210, 754, 1358, 1244, 708, 480, 856, 514, 468, 354, 844, 730,
536, 194, 798, 0
],
]
# for distance_data_row in matrix:
# data['distance_matrix'].append(distance_data_row)
data['depot'] = 0
data['num_vehicles'] = 1
data['vehicle_capacities'] = [100]
# Only needs to be done once
vehicle_demand_initialiser(len(data['distance_matrix']))
data['demands'] = []
with open('data_model_demand.txt', 'r') as f:
for line in f:
            data['demands'].append(int(line))
return data
def update_data_model(data, number_of_vehicles):
data['num_vehicles'] = number_of_vehicles
data['vehicle_capacities'] = [500] * number_of_vehicles
# +
# Instantiate the data problem.
data = create_data_model()
# Create the routing index manager.
manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
data['num_vehicles'], data['depot'])
# Create Routing Model.
routing = pywrapcp.RoutingModel(manager)
# +
# Create and register a transit callback.
def distance_callback(from_index, to_index):
"""Returns the distance between the two nodes."""
# Convert from routing variable Index to distance matrix NodeIndex.
from_node = manager.IndexToNode(from_index)
to_node = manager.IndexToNode(to_index)
return data['distance_matrix'][from_node][to_node]
transit_callback_index = routing.RegisterTransitCallback(distance_callback)
# Define cost of each arc.
routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
def demand_callback(from_index):
"""Returns the demand of the node."""
# Convert from routing variable Index to demands NodeIndex.
from_node = manager.IndexToNode(from_index)
return data['demands'][from_node]
demand_callback_index = routing.RegisterUnaryTransitCallback(
demand_callback)
routing.AddDimensionWithVehicleCapacity(
demand_callback_index,
0, # null capacity slack
data['vehicle_capacities'], # vehicle maximum capacities
True, # start cumul to zero
'Capacity')
# Maximum vehicles in our simulation
number_of_vehicles = 5
update_data_model(data, number_of_vehicles)
# +
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
search_parameters.local_search_metaheuristic = (
routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
search_parameters.time_limit.FromSeconds(1)
# -
# Solve the problem.
solution = routing.SolveWithParameters(search_parameters)
# Print solution on console.
if solution:
print_solution(data, manager, routing, solution)
data['demands']
len(data['distance_matrix'][0])
# ### Finding routes, iterating over number of vehicles
number_of_vehicles = [5, 10, 15, 20]
for vehicles in number_of_vehicles:
data['num_vehicles'] = vehicles
# do routing and figure out total cost
# print solutions to file
#
continue
|
Simulations_v1/Simulation_WRAPPER.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python (sysnet)
# language: python
# name: sysnet
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
import sys
sys.path.append('/Users/mehdi/github/sysnetdev')
from sysnet.sources.models import DNN
from sysnet.sources.io import load_checkpoint
# -
def load_l0weights(pid):
""" load model """
model = DNN(*(5, 20, 18, 1))
path = '../output/mock001_cp2p_adamw/model/'
load_checkpoint(f'{path}model_{pid}_2664485226/best.pth.tar', model)
fc0_weight = model.fc[0].weight.data.numpy()
return fc0_weight
def imshow(f0w):
fig, ax = plt.subplots()
ylabels = ['EBV', 'lnHI', 'nstar']\
+ ['-'.join([s, b]) for s in ['depth', 'seeing', 'skymag', 'exptime', 'mjd'] \
for b in 'rgz']
    map1 = ax.imshow(f0w.T, origin='lower', cmap=plt.cm.bwr, vmin=-.5, vmax=.5)
fig.colorbar(map1)
ax.set_yticks(np.arange(18))
ax.set_yticklabels(ylabels)
return ax
f0w = []
for pid in range(5):
f0w.append(load_l0weights(pid))
# %matplotlib inline
for f0wi in f0w:
imshow(f0wi)
# +
xlabels = ['EBV', 'lnHI', 'nstar']\
+ ['-'.join([s, b]) for s in ['depth', 'seeing', 'skymag', 'exptime', 'mjd'] \
for b in 'rgz']
for f0wi in f0w:
plt.scatter(np.arange(18), abs(f0wi.mean(axis=0)), alpha=0.4)
plt.ylabel('|weight_i|')
plt.ylim(top=0.26)
_=plt.xticks(np.arange(18), labels=xlabels, rotation=90)
# +
xlabels = ['EBV', 'lnHI', 'nstar']\
+ ['-'.join([s, b]) for s in ['depth', 'seeing', 'skymag', 'exptime', 'mjd'] \
for b in 'rgz']
for f0wi in f0w:
plt.scatter(np.arange(18), abs(f0wi.mean(axis=0)), alpha=0.4)
plt.ylabel('|weight_i|')
_=plt.xticks(np.arange(18), labels=xlabels, rotation=90)
# -
def imshow2(fc0w):
ylabels = ['EBV', 'lnHI', 'nstar']\
+ ['-'.join([s, b]) for s in ['depth', 'seeing', 'skymag', 'exptime', 'mjd'] \
for b in 'rgz']
plt.figure(figsize=(6, 10))
plt.imshow(abs(fc0w.mean(axis=0)[:, np.newaxis]),
cmap=plt.cm.Blues, extent=(0, 5, -0.5, 17.5),
origin='lower')
plt.yticks(np.arange(18), labels=ylabels)
plt.xticks([])
plt.colorbar()
for f0wi in f0w:
imshow2(f0wi)
# ## correlation
df = np.load('../input/001/cp2p/cp2p_001.hp.256.5.r.npy', allow_pickle=True).item()
df.keys()
dt = np.concatenate([df['test']['fold%d'%i] for i in range(5)])
dt
from scipy.stats import pearsonr
# f(x, y) = 0.3*x + 0.7*y  # x and y are correlated, e.g., cov(x, y) ~ 1
# f(x, y) = 0.7*x + 0.3*y
fig, ax = plt.subplots()
|
notebooks/FeatureSelectionL1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:sitta]
# language: python
# name: conda-env-sitta-py
# ---
import pandas as pd
import numpy as np
import os
# +
os.listdir('../data')
dateparse = lambda x: pd.to_datetime(x, format='%d %b %Y')
dataf = pd.read_csv('../data/auction_results.csv', parse_dates=['Date'], date_parser=dateparse).sort_values('Date')
# -
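# As a quick check of the `'%d %b %Y'` format string used by the date parser above (the sample date is made up for illustration):

```python
from datetime import datetime

# '%d %b %Y' parses a day number, an abbreviated month name,
# and a four-digit year, e.g. '05 Aug 2017'.
d = datetime.strptime('05 Aug 2017', '%d %b %Y')
print(d.date())  # 2017-08-05
```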
print(dataf.describe())
print(dataf.head())
# %matplotlib inline
dataf.hist()
dataf.plot(x='Date', y=['Clearance Rate'])
dataf.plot(x='Date', y=['Total Scheduled Auctions'])
|
books/Auction results.ipynb
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++17
// language: C++17
// name: xeus-cling-cpp17
// ---
// + [markdown] graffitiCellId="id_dwzadwu"
// ## Experiment with Jupyter Notebooks
// Press the `Run Code` button below to run the code in the terminal. When the terminal commands are executed you'll see the code within the cell has been saved to `./code/main.cpp` and is then compiled and executed.
//
// Try writing and running some code to see how it works!
// + graffitiCellId="id_u5bdxi7" graffitiConfig={"executeCellViaGraffiti": "hsznf5f_v797hj5"}
// function example
#include <iostream>
// Write a simple function to add two integers
int Addition (int a, int b){
int result;
result = a + b;
return result;
}
// Define a main() function to test your addition() function
int main ()
{
int z;
int a = 2;
int b = 2;
z = Addition (a, b);
std::cout << a << " + " << b << " = " << z << " Yay!" << "\n";
}
// + [markdown] graffitiCellId="id_hsznf5f"
// <span class="graffiti-highlight graffiti-id_hsznf5f-id_v797hj5"><i></i><button>Run Code</button></span>
// + [markdown] graffitiCellId="id_rjsor21" graffitiConfig={"rows": 6, "terminalId": "id_rjsor21", "type": "terminal"}
// <i>Loading terminal (id_rjsor21), please wait...</i>
|
home/.ipynb_checkpoints/Example_Notebook-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',
sheet_name = 'Canada by Citizenship',
skiprows = range(20),
                   skipfooter = 2
)
print('Data loading done!')
print('Size:', df.shape)
df.head()
df.drop(['Type', 'Coverage', 'AREA', 'REG', 'DEV'], axis = 1, inplace = True)
df.head()
df.rename(columns={'OdName':'Country', 'AreaName':'Continent', 'RegName':'Region'}, inplace = True)
print('Type', type(df.columns))
df.head()
df['Country']
df.columns = list(map(str, df.columns))
df['Total'] = df.sum(axis=1)
df.head()
#Set Country as Index
df.set_index('Country', inplace = True)
df.head()
# years that we will be using in this lesson - useful for plotting later on
years = list(map(str, range(1980, 2014)))
df_japan = df.loc[['Japan'], years].transpose()
df_japan.head()
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
# +
df_japan.plot(kind='box',figsize=(8,6))
plt.title('Box plot of Japanese Immigrants from 1980 - 2013')
plt.ylabel('Number of Immigrants')
plt.show()
# -
df_japan.describe()
df_ci = df.loc[['China', 'India'], years].transpose()
df_ci.head()
df_ci.describe()
# +
df_ci.plot(kind='box', figsize=(10,7))
plt.title('Box plots of Immigrants from China and India (1980 - 2013)')
plt.xlabel('Number of Immigrants')
plt.show()
# +
df_ci.plot(kind = 'box',
figsize = (10, 7),
color = 'blue',
vert = False
)
plt.title('Box plots of Immigrants from China and India (1980 - 2013)')
plt.xlabel('Number of Immigrants')
plt.show()
# +
fig = plt.figure() # create figure
ax0 = fig.add_subplot(1, 2, 1) # add subplot 1 (1 row, 2 columns, first plot)
ax1 = fig.add_subplot(1, 2, 2) # add subplot 2 (1 row, 2 columns, second plot). See tip below**
# Subplot 1: Box plot
df_ci.plot(kind='box', color='blue', vert=False, figsize=(20, 6), ax=ax0) # add to subplot 1
ax0.set_title('Box Plots of Immigrants from China and India (1980 - 2013)')
ax0.set_xlabel('Number of Immigrants')
ax0.set_ylabel('Countries')
# Subplot 2: Line plot
df_ci.plot(kind='line', figsize=(20, 6), ax=ax1) # add to subplot 2
ax1.set_title ('Line Plots of Immigrants from China and India (1980 - 2013)')
ax1.set_ylabel('Number of Immigrants')
ax1.set_xlabel('Years')
plt.show()
|
Coursera/Data Visualization with Python-IBM/Week-2/Excercise/Box-Plots.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx
import numpy as np
from math import log
from CIoTS import *
from tqdm import trange
import json
from time import time
runs = 20
max_p = 20
dimensions = 3
data_length = 10000
alpha = 0.05
incoming_edges = 3
results = pd.DataFrame(columns=['run', 'p', 'method', 'iteration p', 'f1', 'fpr', 'precision', 'recall', 'bic', 'time'])
data = []
for p in trange(2, max_p+2, 2):
for run in trange(runs):
# generate graph and data
generator = CausalTSGenerator(dimensions=dimensions, max_p=p, data_length=data_length,
incoming_edges=incoming_edges)
ts = generator.generate()
data.append({'graph': generator.graph, 'ts': ts})
# incremental pc
f1 = []
fpr = []
precision = []
recall = []
p_iters = []
time_iters = []
bic_iters = []
_, graphs, times, bics = pc_incremental(partial_corr_test, ts, alpha, 2*max_p,
verbose=True, patiency=2*max_p)
for p_iter, g in graphs.items():
eval_result = evaluate_edges(generator.graph, g)
f1.append(eval_result['f1-score'])
precision.append(eval_result['precision'])
recall.append(eval_result['TPR'])
fpr.append(eval_result['FPR'])
p_iters.append(p_iter)
time_iters.append(times[p_iter])
bic_iters.append(bics[p_iter])
results = results.append(pd.DataFrame({'run': [run]*len(f1), 'p': [p]*len(f1), 'iteration p': p_iters,
'f1': f1, 'fpr': fpr, 'recall': recall, 'precision': precision,
'bic': bic_iters, 'time': time_iters, 'method': ['incremental']*len(f1)}),
ignore_index=True, sort=True)
# incremental pc extensive
f1 = []
fpr = []
precision = []
recall = []
p_iters = []
time_iters = []
bic_iters = []
_, graphs, times, bics = pc_incremental_extensive(partial_corr_test, ts, alpha, 2*max_p,
verbose=True, patiency=2*max_p)
for p_iter, g in graphs.items():
eval_result = evaluate_edges(generator.graph, g)
f1.append(eval_result['f1-score'])
precision.append(eval_result['precision'])
recall.append(eval_result['TPR'])
fpr.append(eval_result['FPR'])
p_iters.append(p_iter)
time_iters.append(times[p_iter])
bic_iters.append(bics[p_iter])
results = results.append(pd.DataFrame({'run': [run]*len(f1), 'p': [p]*len(f1), 'iteration p': p_iters,
'f1': f1, 'fpr': fpr, 'recall': recall, 'precision': precision,
'bic': bic_iters, 'time': time_iters, 'method': ['extensive']*len(f1)}),
ignore_index=True)
# incremental pc subsets
f1 = []
fpr = []
precision = []
recall = []
p_iters = []
time_iters = []
bic_iters = []
_, graphs, times, bics = pc_incremental_subsets(partial_corr_test, ts, alpha, 2*max_p,
verbose=True, patiency=2*max_p)
for p_iter, g in graphs.items():
eval_result = evaluate_edges(generator.graph, g)
f1.append(eval_result['f1-score'])
precision.append(eval_result['precision'])
recall.append(eval_result['TPR'])
fpr.append(eval_result['FPR'])
p_iters.append(p_iter)
time_iters.append(times[p_iter])
bic_iters.append(bics[p_iter])
results = results.append(pd.DataFrame({'run': [run]*len(f1), 'p': [p]*len(f1), 'iteration p': p_iters,
'f1': f1, 'fpr': fpr, 'recall': recall, 'precision': precision,
'bic': bic_iters, 'time': time_iters, 'method': ['subsets']*len(f1)}),
ignore_index=True)
results.to_csv('results/iterations/result.csv', index=False)
# +
def dump_data(data, file):
json_data = []
for d in data:
json_data.append({'graph': nx.to_dict_of_lists(d['graph']), 'ts': d['ts'].to_dict()})
with open(file, 'w+') as fp:
json.dump(json_data, fp)
def load_data(file):
data = []
with open(file, 'r') as fp:
json_data = json.load(fp)
for d in json_data:
graph = nx.from_dict_of_lists(d['graph'], nx.DiGraph())
ts = pd.DataFrame.from_dict(d['ts'])
ts.index = ts.index.astype(int)
ts = ts.sort_index()
data.append({'graph': graph,'ts': ts})
return data
# +
# dump_data(data, 'results/iterations/data.json')
# -
loaded_data = load_data('results/iterations/data.json')
comp_results = pd.DataFrame(columns=['run', 'p', 'method','iteration p', 'f1', 'fpr', 'precision', 'recall', 'time'])
for i in trange(len(loaded_data)):
graph = loaded_data[i]['graph']
ts = loaded_data[i]['ts']
run = i % 20
p = int(len(graph.nodes())/dimensions - 1)
start_time = time()
predicted_graph = pc_chen_modified(partial_corr_test, ts, p, alpha)
runtime = time() - start_time
eval_result = evaluate_edges(graph, predicted_graph)
comp_results = comp_results.append({'run': run, 'p': p, 'iteration p': p, 'method': 'real',
'f1': eval_result['f1-score'],
'precision': eval_result['precision'],
'recall': eval_result['TPR'],
'fpr': eval_result['FPR'],
'time': runtime},
ignore_index=True)
start_time = time()
var_ranking, _ = var_order_select(ts, 2*(max_p-2), ['bic'])
p_est = var_ranking['bic'][0]
predicted_graph = pc_chen_modified(partial_corr_test, ts, p_est, alpha)
runtime = time() - start_time
eval_result = evaluate_edges(graph, predicted_graph)
comp_results = comp_results.append({'run': run, 'p': p, 'iteration p': p_est, 'method': 'bic',
'f1': eval_result['f1-score'],
'precision': eval_result['precision'],
'recall': eval_result['TPR'],
'fpr': eval_result['FPR'],
'time': runtime},
ignore_index=True)
comp_results.to_csv('results/iterations/comp_result.csv', index=False)
|
iterations experiment.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bad posterior geometry and how to deal with it
# HMC and its variant NUTS use gradient information to draw (approximate) samples from a posterior distribution.
# These gradients are computed in a *particular coordinate system*, and different choices of coordinate system can make HMC more or less efficient.
# This is analogous to the situation in constrained optimization problems where, for example, parameterizing a positive quantity via an exponential versus softplus transformation results in distinct optimization dynamics.
#
# Consequently it is important to pay attention to the *geometry* of the posterior distribution.
# Reparameterizing the model (i.e. changing the coordinate system) can make a big practical difference for many complex models.
# For the most complex models it can be absolutely essential. For the same reason it can be important to pay attention to some of the hyperparameters that control HMC/NUTS (in particular the `max_tree_depth` and `dense_mass`).
#
# In this tutorial we explore models with bad posterior geometries---and what one can do to achieve better performance---with a few concrete examples.
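# To make the optimization analogy concrete, here is a minimal NumPy sketch (illustrative only, not part of the tutorial's models) comparing the exponential and softplus parameterizations of a positive quantity:

```python
import numpy as np

theta = np.linspace(-5.0, 5.0, 11)

# Two common ways to parameterize a positive quantity:
exp_param = np.exp(theta)                  # exponential transform
softplus_param = np.log1p(np.exp(theta))   # softplus transform

# Both map the real line to positive values, but their gradients differ:
# d/dtheta exp(theta)      = exp(theta)      -> unbounded
# d/dtheta softplus(theta) = sigmoid(theta)  -> bounded by 1
print(exp_param.min() > 0, softplus_param.min() > 0)
```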
# !pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
# +
from functools import partial
import numpy as np
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import summary
from numpyro.infer import MCMC, NUTS
assert numpyro.__version__.startswith("0.9.0")
# NB: replace cpu by gpu to run this notebook on gpu
numpyro.set_platform("cpu")
# -
# We begin by writing a helper function to do NUTS inference.
def run_inference(
model, num_warmup=1000, num_samples=1000, max_tree_depth=10, dense_mass=False
):
kernel = NUTS(model, max_tree_depth=max_tree_depth, dense_mass=dense_mass)
mcmc = MCMC(
kernel,
num_warmup=num_warmup,
num_samples=num_samples,
num_chains=1,
progress_bar=False,
)
mcmc.run(random.PRNGKey(0))
summary_dict = summary(mcmc.get_samples(), group_by_chain=False)
# print the largest r_hat for each variable
for k, v in summary_dict.items():
spaces = " " * max(12 - len(k), 0)
print("[{}] {} \t max r_hat: {:.4f}".format(k, spaces, np.max(v["r_hat"])))
# ## Evaluating HMC/NUTS
#
# In general it is difficult to assess whether the samples returned from HMC or NUTS represent accurate (approximate) samples from the posterior.
# Two general rules of thumb, however, are to look at the effective sample size (ESS) and `r_hat` diagnostics returned by `mcmc.print_summary()`.
# If we see values of `r_hat` in the range `(1.0, 1.05)` and effective sample sizes that are comparable to the total number of samples `num_samples` (assuming `thinning=1`) then we have good reason to believe that HMC is doing a good job.
# If, however, we see low effective sample sizes or large `r_hat`s for some of the variables (e.g. `r_hat = 1.15`) then HMC is likely struggling with the posterior geometry.
# In the following we will use `r_hat` as our primary diagnostic metric.
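# To make the diagnostic concrete, here is a simplified pure-NumPy sketch of the split-R-hat statistic (NumPyro's own implementation differs in details):

```python
import numpy as np

def split_r_hat(samples):
    """Simplified split-R-hat for one scalar parameter.

    samples: array of shape (num_chains, num_draws).
    Each chain is split in half and the Gelman-Rubin statistic is
    computed over the resulting 2 * num_chains half-chains.
    """
    num_chains, num_draws = samples.shape
    half = num_draws // 2
    chains = np.concatenate(
        [samples[:, :half], samples[:, half:2 * half]], axis=0
    )
    n = half
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()  # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))         # well-mixed chains
bad = good + 2.0 * np.arange(4)[:, None]  # chains stuck at different means
print(split_r_hat(good))  # close to 1.0
print(split_r_hat(bad))   # much larger than 1.0
```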
# ## Model reparameterization
#
# ### Example #1
#
# We begin with an example (horseshoe regression; see [examples/horseshoe_regression.py](https://github.com/pyro-ppl/numpyro/blob/master/examples/horseshoe_regression.py) for a complete example script) where reparameterization helps a lot.
# This particular example demonstrates a general reparameterization strategy that is useful in many models with hierarchical/multi-level structure.
# For more discussion of some of the issues that can arise in hierarchical models see reference [1].
# In this unreparameterized model some of the parameters of the distributions
# explicitly depend on other parameters (in particular beta depends on lambdas and tau).
# This kind of coordinate system can be a challenge for HMC.
def _unrep_hs_model(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
betas = numpyro.sample("betas", dist.Normal(scale=tau * lambdas))
mean_function = jnp.dot(X, betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
# To deal with the bad geometry that results from this coordinate system we change coordinates using the following re-write logic.
# Instead of
#
# $$ \beta \sim {\rm Normal}(0, \lambda \tau) $$
#
# we write
#
# $$ \beta^\prime \sim {\rm Normal}(0, 1) $$
#
# and
#
# $$ \beta \equiv \lambda \tau \beta^\prime $$
#
# where $\beta$ is now defined *deterministically* in terms of $\lambda$, $\tau$,
# and $\beta^\prime$.
# In effect we've changed to a coordinate system where the different
# latent variables are less correlated with one another.
# In this new coordinate system we can expect HMC with a diagonal mass matrix to behave much better than it would in the original coordinate system.
#
# There are basically two ways to implement this kind of reparameterization in NumPyro:
#
# - manually (i.e. by hand)
# - using [numpyro.infer.reparam](http://num.pyro.ai/en/stable/reparam.html), which automates a few common reparameterization strategies
#
# To begin with let's do the reparameterization by hand.
# In this reparameterized model none of the parameters of the distributions
# explicitly depend on other parameters. This model is exactly equivalent
# to _unrep_hs_model but is expressed in a different coordinate system.
def _rep_hs_model1(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
unscaled_betas = numpyro.sample(
"unscaled_betas", dist.Normal(scale=jnp.ones(X.shape[1]))
)
scaled_betas = numpyro.deterministic("betas", tau * lambdas * unscaled_betas)
mean_function = jnp.dot(X, scaled_betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
# Next we do the reparameterization using [numpyro.infer.reparam](http://num.pyro.ai/en/stable/reparam.html).
# There are at least two ways to do this.
# First let's use [LocScaleReparam](http://num.pyro.ai/en/stable/reparam.html#numpyro.infer.reparam.LocScaleReparam).
# +
from numpyro.infer.reparam import LocScaleReparam
# LocScaleReparam with centered=0 fully "decenters" the prior over betas.
config = {"betas": LocScaleReparam(centered=0)}
# The coordinate system of this model is equivalent to that in _rep_hs_model1 above.
_rep_hs_model2 = numpyro.handlers.reparam(_unrep_hs_model, config=config)
# -
# To show the versatility of the [numpyro.infer.reparam](http://num.pyro.ai/en/stable/reparam.html) library let's do the reparameterization using [TransformReparam](http://num.pyro.ai/en/stable/reparam.html#numpyro.infer.reparam.TransformReparam) instead.
# +
from numpyro.distributions.transforms import AffineTransform
from numpyro.infer.reparam import TransformReparam
# In this reparameterized model none of the parameters of the distributions
# explicitly depend on other parameters. This model is exactly equivalent
# to _unrep_hs_model but is expressed in a different coordinate system.
def _rep_hs_model3(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
# instruct NumPyro to do the reparameterization automatically.
reparam_config = {"betas": TransformReparam()}
with numpyro.handlers.reparam(config=reparam_config):
betas_root_variance = tau * lambdas
# in order to use TransformReparam we have to express the prior
# over betas as a TransformedDistribution
betas = numpyro.sample(
"betas",
dist.TransformedDistribution(
dist.Normal(0.0, jnp.ones(X.shape[1])),
AffineTransform(0.0, betas_root_variance),
),
)
mean_function = jnp.dot(X, betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
# -
# Finally we verify that `_rep_hs_model1`, `_rep_hs_model2`, and `_rep_hs_model3` do indeed achieve better `r_hat`s than `_unrep_hs_model`.
# +
# create fake dataset
X = np.random.RandomState(0).randn(100, 500)
Y = X[:, 0]
print("unreparameterized model (very bad r_hats)")
run_inference(partial(_unrep_hs_model, X, Y))
print("\nreparameterized model with manual reparameterization (good r_hats)")
run_inference(partial(_rep_hs_model1, X, Y))
print("\nreparameterized model with LocScaleReparam (good r_hats)")
run_inference(partial(_rep_hs_model2, X, Y))
print("\nreparameterized model with TransformReparam (good r_hats)")
run_inference(partial(_rep_hs_model3, X, Y))
# -
# ### Aside: numpyro.deterministic
#
# In `_rep_hs_model1` above we used [numpyro.deterministic](http://num.pyro.ai/en/stable/primitives.html?highlight=deterministic#numpyro.primitives.deterministic) to define `scaled_betas`.
# We note that using this primitive is not strictly necessary; however, it has the consequence that `scaled_betas` will appear in the trace and will thus appear in the summary reported by `mcmc.print_summary()`.
# In other words we could also have written:
#
# ```
# scaled_betas = tau * lambdas * unscaled_betas
# ```
#
# without invoking the `deterministic` primitive.
# ## Mass matrices
# By default HMC/NUTS use diagonal mass matrices.
# For models with complex geometries it can pay to use a richer set of mass matrices.
#
#
# ### Example #2
# In this first simple example we show that using a full-rank (i.e. dense) mass matrix leads to a better `r_hat`.
# +
# Because rho is very close to 1.0 the posterior geometry
# is extremely skewed and using the "diagonal" coordinate system
# implied by dense_mass=False leads to bad results
rho = 0.9999
cov = jnp.array([[10.0, rho], [rho, 0.1]])
def mvn_model():
numpyro.sample("x", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov))
print("dense_mass = False (bad r_hat)")
run_inference(mvn_model, dense_mass=False, max_tree_depth=3)
print("dense_mass = True (good r_hat)")
run_inference(mvn_model, dense_mass=True, max_tree_depth=3)
# -
# ### Example #3
#
# Using `dense_mass=True` can be very expensive when the dimension of the latent space `D` is very large.
# In addition it can be difficult to estimate a full-rank mass matrix with `D^2` parameters using a moderate number of samples if `D` is large. In these cases `dense_mass=True` can be a poor choice.
# Luckily, the argument `dense_mass` can also be used to specify structured mass matrices that are richer than a diagonal mass matrix but more constrained (i.e. have fewer parameters) than a full-rank mass matrix ([see the docs](http://num.pyro.ai/en/stable/mcmc.html#hmc)).
# In this second example we show how we can use `dense_mass` to specify such a structured mass matrix.
# +
rho = 0.9
cov = jnp.array([[10.0, rho], [rho, 0.1]])
# In this model x1 and x2 are highly correlated with one another
# but not correlated with y at all.
def partially_correlated_model():
x1 = numpyro.sample(
"x1", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov)
)
x2 = numpyro.sample(
"x2", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov)
)
y = numpyro.sample("y", dist.Normal(jnp.zeros(100), 1.0))
    numpyro.sample("obs", dist.Normal(x1 - x2, 0.1), obs=jnp.ones(2))
# -
# Now let's compare two choices of `dense_mass`.
# +
print("dense_mass = False (very bad r_hats)")
run_inference(partially_correlated_model, dense_mass=False, max_tree_depth=3)
print("\ndense_mass = True (bad r_hats)")
run_inference(partially_correlated_model, dense_mass=True, max_tree_depth=3)
# We use dense_mass=[("x1", "x2")] to specify
# a structured mass matrix in which the y-part of the mass matrix is diagonal
# and the (x1, x2) block of the mass matrix is full-rank.
# Graphically:
#
# x1 x2 y
# x1 | * * 0 |
# x2 | * * 0 |
# y | 0 0 * |
print("\nstructured mass matrix (good r_hats)")
run_inference(partially_correlated_model, dense_mass=[("x1", "x2")], max_tree_depth=3)
# -
# ## `max_tree_depth`
#
# The hyperparameter `max_tree_depth` can play an important role in determining the quality of posterior samples generated by NUTS. The default value in NumPyro is `max_tree_depth=10`. In some models, in particular those with especially difficult geometries, it may be necessary to increase `max_tree_depth` above `10`. In other cases where computing the gradient of the model log density is particularly expensive, it may be necessary to decrease `max_tree_depth` below `10` to reduce compute. As an example where large `max_tree_depth` is essential, we return to a variant of example #2. (We note that in this particular case another way to improve performance would be to use `dense_mass=True`).
#
# ### Example #4
# +
# Because rho is very close to 1.0 the posterior geometry is extremely
# skewed and using small max_tree_depth leads to bad results.
rho = 0.999
dim = 200
cov = rho * jnp.ones((dim, dim)) + (1 - rho) * jnp.eye(dim)
def mvn_model():
x = numpyro.sample(
"x", dist.MultivariateNormal(jnp.zeros(dim), covariance_matrix=cov)
)
print("max_tree_depth = 5 (bad r_hat)")
run_inference(mvn_model, max_tree_depth=5)
print("max_tree_depth = 10 (good r_hat)")
run_inference(mvn_model, max_tree_depth=10)
# -
# ## Other strategies
#
# - In some cases it can make sense to use variational inference to *learn* a new coordinate system. For details see [examples/neutra.py](https://github.com/pyro-ppl/numpyro/blob/master/examples/neutra.py) and reference [2].
# ## References
#
# [1] "Hamiltonian Monte Carlo for Hierarchical Models,"
# <NAME>, <NAME>.
#
# [2] "NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport,"
# <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>.
#
# [3] "Reparameterization" in the Stan user's manual.
# https://mc-stan.org/docs/2_27/stan-users-guide/reparameterization-section.html
|
notebooks/source/bad_posterior_geometry.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from scipy.stats import norm
from scipy import stats
from scipy.stats import skew
from scipy.stats.stats import pearsonr
from sklearn.preprocessing import StandardScaler
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold,train_test_split
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
import lightgbm as lgb
import os
import gc
import pickle
import warnings
warnings.filterwarnings('ignore')
# -
#Reduce_memory
def reduce_memory(df):
    print("Reducing memory...")
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype('category')
return df
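# The downcasting rule above can be checked in isolation: pick the smallest integer
# dtype whose bounds contain a column's min/max. A minimal sketch (the helper name
# `smallest_int_dtype` is illustrative, not part of the notebook):

```python
import numpy as np

def smallest_int_dtype(c_min, c_max):
    """Smallest numpy integer dtype whose bounds strictly contain [c_min, c_max]."""
    for dtype in (np.int8, np.int16, np.int32, np.int64):
        info = np.iinfo(dtype)
        # same strict comparison as reduce_memory above
        if c_min > info.min and c_max < info.max:
            return dtype
    return np.int64

print(smallest_int_dtype(0, 100))     # int8 covers 0..100
print(smallest_int_dtype(0, 30000))   # int16 covers 0..30000
```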
# +
def make_day_feature(df, offset=0, tname='TransactionDT'):
"""
Creates a day of the week feature, encoded as 0-6.
Parameters:
-----------
df : pd.DataFrame
df to manipulate.
offset : float (default=0)
offset (in days) to shift the start/end of a day.
tname : str
Name of the time column in df.
"""
    # an offset of 0.58 was found to work well
days = df[tname] / (3600*24)
encoded_days = np.floor(days-1+offset) % 7
return encoded_days
def make_hour_feature(df, tname='TransactionDT'):
"""
Creates an hour of the day feature, encoded as 0-23.
Parameters:
-----------
df : pd.DataFrame
df to manipulate.
tname : str
Name of the time column in df.
"""
hours = df[tname] / (3600)
encoded_hours = np.floor(hours) % 24
return encoded_hours
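# A quick sanity check of the arithmetic in the two helpers above: TransactionDT is in
# seconds, so dividing by 3600 gives hours and by 86400 gives days before the modulo
# wraps them (toy values, not the real dataset):

```python
import numpy as np

seconds = np.array([0, 3600, 90000])                  # 0 h, 1 h and 25 h after the epoch
hours = np.floor(seconds / 3600) % 24                 # same formula as make_hour_feature
days = np.floor(seconds / (3600 * 24) - 1 + 0) % 7    # make_day_feature with offset=0
print(hours)  # [0. 1. 1.]
print(days)   # [6. 6. 0.]
```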
# +
# Load Data
train_identity = pd.read_csv('train_identity.csv',index_col='TransactionID')
train_transaction = pd.read_csv('train_transaction.csv',index_col='TransactionID')
test_identity = pd.read_csv('test_identity.csv',index_col='TransactionID')
test_transaction = pd.read_csv('test_transaction.csv',index_col='TransactionID')
# Create train and test dataset by left outer join
train = train_transaction.merge(train_identity, how='left', left_index=True, right_index=True)
test = test_transaction.merge(test_identity, how='left', left_index=True, right_index=True)
# Delete variables to save memory
del train_identity,train_transaction,test_identity,test_transaction
y=train['isFraud'].astype('uint8')
train.drop(['isFraud'], axis=1, inplace=True)
# The "TransactionDT" column is essentially a timestamp in seconds. The hour of day was found to correlate with fraud.
# An offset of 0.58 days is recommended by a Kaggle kernel to align day boundaries with actual transaction days.
train['hours'] = make_hour_feature(train)
test['hours'] = make_hour_feature(test)
train.drop(['TransactionDT'], axis=1, inplace=True)
test.drop(['TransactionDT'], axis=1, inplace=True)
# +
# Get names of domains and countries from raw email data
emails = {'gmail': 'google', 'att.net': 'att', 'twc.com': 'spectrum',
'scranton.edu': 'other', 'optonline.net': 'other',
'hotmail.co.uk': 'microsoft', 'comcast.net': 'other', 'yahoo.com.mx': 'yahoo',
'yahoo.fr': 'yahoo', 'yahoo.es': 'yahoo', 'charter.net': 'spectrum',
'live.com': 'microsoft', 'aim.com': 'aol', 'hotmail.de': 'microsoft',
'centurylink.net': 'centurylink', 'gmail.com': 'google', 'me.com': 'apple',
'earthlink.net': 'other', 'gmx.de': 'other', 'web.de': 'other', 'cfl.rr.com': 'other',
'hotmail.com': 'microsoft', 'protonmail.com': 'other', 'hotmail.fr': 'microsoft',
'windstream.net': 'other', 'outlook.es': 'microsoft', 'yahoo.co.jp': 'yahoo',
'yahoo.de': 'yahoo', 'servicios-ta.com': 'other', 'netzero.net': 'other', 'suddenlink.net': 'other',
'roadrunner.com': 'other', 'sc.rr.com': 'other', 'live.fr': 'microsoft',
'verizon.net': 'yahoo', 'msn.com': 'microsoft', 'q.com': 'centurylink', 'prodigy.net.mx': 'att',
'frontier.com': 'yahoo', 'anonymous.com': 'other', 'rocketmail.com': 'yahoo', 'sbcglobal.net': 'att',
'frontiernet.net': 'yahoo', 'ymail.com': 'yahoo', 'outlook.com': 'microsoft', 'mail.com': 'other',
'bellsouth.net': 'other', 'embarqmail.com': 'centurylink', 'cableone.net': 'other',
'hotmail.es': 'microsoft', 'mac.com': 'apple', 'yahoo.co.uk': 'yahoo', 'netzero.com': 'other',
'yahoo.com': 'yahoo', 'live.com.mx': 'microsoft', 'ptd.net': 'other', 'cox.net': 'other',
'aol.com': 'aol', 'juno.com': 'other', 'icloud.com': 'apple'}
us_emails = ['gmail', 'net', 'edu']
for c in ['P_emaildomain', 'R_emaildomain']:
# Domain
train[c + '_bin'] = train[c].map(emails)
test[c + '_bin'] = test[c].map(emails)
# Country
train[c + '_suffix'] = train[c].map(lambda x: str(x).split('.')[-1])
test[c + '_suffix'] = test[c].map(lambda x: str(x).split('.')[-1])
train[c + '_suffix'] = train[c + '_suffix'].map(lambda x: x if str(x) not in us_emails else 'us')
test[c + '_suffix'] = test[c + '_suffix'].map(lambda x: x if str(x) not in us_emails else 'us')
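# The suffix logic above takes the last dot-separated token of the email domain and
# collapses a few common US suffixes to 'us'; a standalone sketch (the function name
# `email_suffix` is illustrative):

```python
us_emails = ['gmail', 'net', 'edu']

def email_suffix(addr):
    suffix = str(addr).split('.')[-1]      # last dot-separated token; NaN becomes 'nan'
    return 'us' if suffix in us_emails else suffix

print(email_suffix('yahoo.co.uk'))    # uk
print(email_suffix('sbcglobal.net'))  # us
print(email_suffix(float('nan')))     # nan
```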
# +
labels = {np.nan: 0, 'nan': 0}
for c1, c2 in train.dtypes.reset_index().values:
if c2=='O':
for c in list(set(train[c1].unique())|set(test[c1].unique())):
if c not in labels:
labels[c] = len(labels) - 1
for c1, c2 in train.dtypes.reset_index().values:
if c2=='O':
train[c1] = train[c1].map(lambda x: labels[str(x)])
test[c1] = test[c1].map(lambda x: labels[str(x)])
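# The two passes above build one shared label table from train and test, then map every
# object column through it. On toy data the encoding behaves like this (all NaN variants
# share code 0, real labels start at 1):

```python
import numpy as np

train_col = ['a', 'b', np.nan]
test_col = ['b', 'c']

labels = {np.nan: 0, 'nan': 0}
for c in set(train_col) | set(test_col):
    if c not in labels:
        labels[c] = len(labels) - 1     # first real label gets code 1, not 0

encoded = [labels[str(x)] for x in train_col]   # str(np.nan) == 'nan' -> code 0
print(encoded)
```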
# +
# Kaggle kernels recommend dropping the following near-duplicate columns
# Get duplicate columns
duplicates = []
cols = train.columns
i = 0
for c1 in cols:
i += 1
for c2 in cols[i:]:
if c1 != c2:
if (np.sum((train[c1].values == train[c2].values).astype(int)) / len(train))>0.95:
duplicates.append(c2)
print(c1, c2, np.sum((train[c1].values == train[c2].values).astype(int)) / len(train))
duplicates = list(set(duplicates))
print(duplicates)
drop_col = duplicates
# Explicitly list drop_col to save time
# drop_col = ['V300', 'V309', 'V111', 'C3', 'V124', 'V106',
# 'V125', 'V315', 'V134', 'V102', 'V123', 'V316', 'V113', 'V136',
# 'V305', 'V110', 'V299', 'V289', 'V286', 'V318', 'V103', 'V304',
# 'V116', 'V298', 'V284', 'V293', 'V137', 'V295', 'V301', 'V104',
# 'V311', 'V115', 'V109', 'V119', 'V321', 'V114', 'V133', 'V122',
# 'V319', 'V105', 'V112', 'V118', 'V117', 'V121', 'V108', 'V135',
# 'V320', 'V303', 'V297', 'V120']
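# The 0.95 threshold above is the fraction of rows where two columns agree element-wise;
# on toy arrays the computation looks like this:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
b = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 99])   # differs in one of ten slots

overlap = np.sum((a == b).astype(int)) / len(a)
print(overlap)  # 0.9, below the 0.95 duplicate threshold
```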
# +
train.drop(drop_col , axis=1, inplace=True)
test.drop(drop_col , axis=1, inplace=True)
train_size = train.shape[0]
test_size = test.shape[0]
print('Max NA count in train dataset is',train.isnull().sum().max())
print('Max NA count in test dataset is',test.isnull().sum().max())
# Tree-based methods do not require feature scaling.
# Label-encode qualitative features (using the labels mapping built above for now)
# for c in train.columns:
# if train[c].dtype=='object':
# lbl = preprocessing.LabelEncoder()
# lbl.fit(list(train[c].values)+list(test[c].values))
# train[c] = lbl.transform(list(train[c].values))
# test[c] = lbl.transform(list(test[c].values))
# Fill missing values after label encoding.
# The values in the original datasets are all positive, so fill NA with a large negative number
train = train.fillna(-999)
test = test.fillna(-999)
print('Max NA count in train dataset is now',train.isnull().sum().max())
print('Max NA count in test dataset is now',test.isnull().sum().max())
# -
# Reduce memory by changing the dtypes of some columns
train= reduce_memory(train)
test= reduce_memory(test)
# +
xgb_path = './xgb_models_stack/'
lgb_path = './lgb_models_stack/'
# Create dir for models
# os.mkdir(xgb_path)
# os.mkdir(lgb_path)
#XGBoost Model
def fit_xgb(X_fit, y_fit, X_val, y_val, counter, xgb_path, name):
model = xgb.XGBClassifier(n_estimators=1000, max_depth=9, learning_rate=0.02, subsample=0.7,
colsample_bytree=0.7,missing=-999,tree_method='hist')
model.fit(X_fit, y_fit,eval_set=[(X_val, y_val)],verbose=0,eval_metric="auc",early_stopping_rounds=100)
cv_val = model.predict_proba(X_val)[:,1]
#Save XGBoost Model
save_to = '{}{}_fold{}.dat'.format(xgb_path, name, counter+1)
pickle.dump(model, open(save_to, "wb"))
del X_fit, y_fit, X_val, y_val
return cv_val
#LightGBM Model
def fit_lgb(X_fit, y_fit, X_val, y_val, counter, lgb_path, name):
model = lgb.LGBMClassifier(learning_rate=0.02,max_depth=9, boosting_type='gbdt',
objective= 'binary', metric='auc', seed= 4, num_iterations= 2000,
num_leaves= 64, feature_fraction= 0.4,
bagging_fraction= 0.4, bagging_freq= 5)
model.fit(X_fit, y_fit,eval_set=[(X_val, y_val)],verbose=200,early_stopping_rounds=100)
cv_val = model.predict_proba(X_val)[:,1]
#Save LightGBM Model
save_to = '{}{}_fold{}.txt'.format(lgb_path, name, counter+1)
model.booster_.save_model(save_to)
del X_fit, y_fit, X_val, y_val
return cv_val
# -
# Create train and validation datasets from original train dataset
X_train_, X_val_, y_train_, y_val_ = train_test_split(train, y, test_size=0.1, random_state=42)
NumFold=5
skf = StratifiedKFold(n_splits=NumFold, shuffle=True, random_state=42)
# del train,y
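# The CV loops below fill an out-of-fold (OOF) prediction array: each training sample is
# predicted exactly once, by the model that did not train on it. A minimal numpy sketch
# of that bookkeeping (round-robin folds instead of StratifiedKFold, a fake prediction
# instead of a fitted model):

```python
import numpy as np

n_samples, n_folds = 10, 5
fold_id = np.arange(n_samples) % n_folds      # round-robin fold assignment
oof = np.full(n_samples, np.nan)

for k in range(n_folds):
    val_idx = np.where(fold_id == k)[0]
    # a real model would be fit on the other folds; fake its validation output here
    oof[val_idx] = k / n_folds

print(np.isnan(oof).any())  # False: every sample got exactly one OOF prediction
```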
# +
# %%time
xgb_cv_result = np.zeros(X_train_.shape[0])
print('\nModel Fitting...')
for counter, (tr_idx, val_idx) in enumerate(skf.split(X_train_, y_train_)):
print('\nFold {}'.format(counter+1))
X_fit, y_fit = X_train_.iloc[tr_idx,:], y_train_.iloc[tr_idx]
X_val, y_val = X_train_.iloc[val_idx,:], y_train_.iloc[val_idx]
print('XGBoost')
    xgb_cv_result[val_idx] = fit_xgb(X_fit, y_fit, X_val, y_val, counter, xgb_path, name='xgb')
del X_fit, X_val, y_fit, y_val
    # Free memory by running the garbage collector
gc.collect()
from sklearn.metrics import roc_auc_score
auc_xgb = round(roc_auc_score(y_train_, xgb_cv_result),4)
print('\nXGBoost VAL AUC: {}'.format(auc_xgb))
# +
# %%time
lgb_cv_result = np.zeros(X_train_.shape[0])
for counter, (tr_idx, val_idx) in enumerate(skf.split(X_train_, y_train_)):
print('\nFold {}'.format(counter+1))
X_fit, y_fit = X_train_.iloc[tr_idx,:], y_train_.iloc[tr_idx]
X_val, y_val = X_train_.iloc[val_idx,:], y_train_.iloc[val_idx]
    print('LightGBM')
lgb_cv_result[val_idx] = fit_lgb(X_fit, y_fit, X_val, y_val, counter, lgb_path, name='lgb')
del X_fit, X_val, y_fit, y_val
    # Free memory by running the garbage collector
gc.collect()
from sklearn.metrics import roc_auc_score
auc_lgb = round(roc_auc_score(y_train_, lgb_cv_result),4)
print('\nLightGBM VAL AUC: {}'.format(auc_lgb))
# +
# %%time
xgb_models = sorted(os.listdir(xgb_path))
xgb_result_val = np.zeros(X_val_.shape[0])
xgb_result_test = np.zeros(test.shape[0])
print('With XGBoost...')
for m_name in xgb_models:
#Load Xgboost Model
model = pickle.load(open('{}{}'.format(xgb_path, m_name), "rb"))
xgb_result_val += model.predict_proba(X_val_)[:,1]
xgb_result_test += model.predict_proba(test)[:,1]
del model
xgb_result_val /= len(xgb_models)
xgb_result_test /= len(xgb_models)
auc_xgb = round(roc_auc_score(y_val_, xgb_result_val),4)
print('\nXGBoost VAL AUC: {}'.format(auc_xgb))
# +
# %%time
from sklearn.metrics import roc_auc_score
lgb_models = sorted(os.listdir(lgb_path))
lgb_result_val = np.zeros(X_val_.shape[0])
lgb_result_test = np.zeros(test.shape[0])
print('With LightGBM...')
for m_name in lgb_models:
#Load LightGBM Model
model = lgb.Booster(model_file='{}{}'.format(lgb_path, m_name))
lgb_result_val += model.predict(X_val_)
lgb_result_test += model.predict(test)
del model
lgb_result_val /= len(lgb_models)
lgb_result_test /= len(lgb_models)
auc_lgb = round(roc_auc_score(y_val_, lgb_result_val),4)
print('\nLightGBM VAL AUC: {}'.format(auc_lgb))
# -
# Submitting results
submission = pd.read_csv('sample_submission.csv', index_col='TransactionID')
submission['isFraud'] = lgb_result_test
submission.to_csv('lgb_finer_submission.csv')
|
IEEE Fraud detection.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.12 64-bit (''hpe'': conda)'
# language: python
# name: python3812jvsc74a57bd039dd61696b6e1fc6334e8188df1c4e4efcacf6a9ef4cab895eb1837aea6db278
# ---
# +
# %matplotlib inline
import sys
sys.path.append('/home/wt/py_projects/HPE-3d')
sys.path.append('/home/wt/py_projects/HPE-3d/datasets')
import numpy as np
import matplotlib.pyplot as plt
from datasets.h36m import Human36mDataset
# -
dir_path = '../data/'
data_path = dir_path + 'data_2d_h36m_gt.npz'
data_path_3d = dir_path + 'data_3d_h36m.npz'
keypoints = np.load(data_path, allow_pickle=True)['positions_2d'].item()
dataset = Human36mDataset(data_path_3d,True)
print(dataset._skeleton.num_joints())
# +
subject = 'S1'
action = 'Photo'
camera = 0
kps = keypoints[subject][action][camera]
anim = dataset[subject][action]
jts = dataset[subject][action]['positions']
print(kps.shape)
print(jts.shape)
frame = 1000
coordinates_2d = kps[frame]
coordinates_3d = jts[frame] # global
print(coordinates_2d)
print(coordinates_3d)
# +
joint_pairs = [[0, 1], [1, 2], [2, 3], [0, 4], [4, 5], [5, 6],
[0, 7], [7, 8], [8, 9], [9, 10], [8, 11], [11, 12],
[12, 13], [8, 14], [14, 15], [15, 16]]
colors_kps = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0],
[50, 205, 50], [0, 255, 170], [0, 255, 255], [
0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255],
[170, 0, 255], [255, 0, 255]]
def plot_keypoint(coordinates):
    plt.figure(figsize=(7,7))
    # use the function argument, not the global coordinates_2d, for the axis limits
    x_max = coordinates[:, 0].max()
    x_min = coordinates[:, 0].min()
    y_max = coordinates[:, 1].max()
    y_min = coordinates[:, 1].min()
    # axes.set_xlim([0,h36m_cameras_intrinsic_params['res_w']])
    # axes.set_ylim([h36m_cameras_intrinsic_params['res_h'],0])
    plt.xlim(x_min-20,x_max+20)
    plt.ylim(y_max+20,y_min-20)  # inverted y-axis to match image coordinates
    plt.axis('equal')
    # draw the skeleton once; the original outer per-joint loop redrew it redundantly
    for color_i, jp in zip(colors_kps, joint_pairs):
        color_i = [c / 255 for c in color_i]
        pt_a = coordinates[jp[0]]
        pt_b = coordinates[jp[1]]
        pt_a_x, pt_a_y, pt_b_x, pt_b_y = pt_a[0], pt_a[1], pt_b[0], pt_b[1]
        plt.plot((pt_a_x, pt_b_x), (pt_a_y, pt_b_y), color=color_i, lw=2.0)
        plt.plot(pt_a_x, pt_a_y, marker='o', color=color_i)
        plt.plot(pt_b_x, pt_b_y, marker='o', color=color_i)
plot_keypoint(coordinates_2d)
# +
# from common.camera import *
# cam = anim['cameras'][0]
# print(cam)
# pos_3d = world_to_camera(anim['positions'],R=cam['orientation'],t=cam['translation'])
# print(pos_3d.shape)
# print(pos_3d[:,:1].shape)
from common.camera import *
cam =anim['cameras'][camera]
pos_3d = world_to_camera(anim['positions'],R=cam['orientation'],t=cam['translation'])
# Remove global offset, but keep trajectory in first position
# Joint 0 (the root) is placed at the origin of the camera coordinate system,
# so pos_3d[:, 0] should be all zeros. The reference code zeros it during training
# rather than here, i.e. inputs_3d[:, :, 0] = 0
print(pos_3d[frame])
pos_3d[:, 1:] -= pos_3d[:, :1]
pos_3d[:,0] = 0
coordinates_3d = pos_3d[frame] # camera space
print(coordinates_3d)
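# Root-centering as performed above, on a toy pose array, to make the convention
# explicit (shape frames x joints x 3; joint 0 is the root):

```python
import numpy as np

pos = np.array([[[1.0, 2.0, 3.0],     # joint 0 (root)
                 [2.0, 2.0, 3.0],
                 [1.0, 3.0, 4.0]]])   # one frame, three joints

pos[:, 1:] -= pos[:, :1]   # express non-root joints relative to the root
pos[:, 0] = 0              # place the root at the origin
print(pos[0])
```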
# +
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(projection='3d')
ymin = coordinates_3d.min()
ymax = coordinates_3d.max()
ax.set_zlim([ymax,ymin])
ax.set_xlim([ymin,ymax])
ax.set_ylim([ymin,ymax])
# ax.set_aspect('equal')
for color_i, jp in zip(colors_kps, joint_pairs):
color_i = [c / 255 for c in color_i]
pt_a = coordinates_3d[jp[0]]
pt_b = coordinates_3d[jp[1]]
# print(pt_a,pt_b)
pt_a_x, pt_a_y, pt_a_z, pt_b_x, pt_b_y, pt_b_z = pt_a[
0], pt_a[1], pt_a[2], pt_b[0], pt_b[1], pt_b[2]
# print(pt_a_z,pt_b_z)
ax.plot((pt_a_x, pt_b_x), (pt_a_z, pt_b_z),
(pt_a_y, pt_b_y), lw=2.0, c=color_i)
ax.plot(pt_a_x, pt_a_z, pt_a_y, marker='o', color=color_i)
ax.plot(pt_b_x, pt_b_z, pt_b_y, marker='o', color=color_i)
# -
|
notebooks/visualize.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Investigate the Database
from sqlalchemy import inspect
inspector = inspect(engine)
inspector.get_table_names()
columns = inspector.get_columns('measurement')
for column in columns:
primarykeystr = ""
if column['primary_key'] == 1:
primarykeystr = "Primary Key"
print(column["name"],column["type"],primarykeystr)
columns = inspector.get_columns('station')
for column in columns:
primarykeystr = ""
if column['primary_key'] == 1:
primarykeystr = "Primary Key"
print(column["name"], column["type"], primarykeystr)
# # Exploratory Climate Analysis
# How many dates are recorded?
session.query(func.count(Measurement.date)).all()
earlieststr = session.query(Measurement.date).order_by(Measurement.date).first()
lateststr = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(f"Earliest: {earlieststr[0]} , Latest: {lateststr[0]}")
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
latestdate = dt.datetime.strptime(lateststr[0], '%Y-%m-%d')
querydate = dt.date(latestdate.year -1, latestdate.month, latestdate.day)
querydate
sel = [Measurement.date,Measurement.prcp]
queryresult = session.query(*sel).filter(Measurement.date >= querydate).all()
precipitation = pd.DataFrame(queryresult, columns=['Date','Precipitation'])
precipitation = precipitation.dropna(how='any') # clean up non value entries
precipitation = precipitation.sort_values(["Date"], ascending=True)
precipitation = precipitation.set_index("Date")
precipitation.head()
# Group by was not necessary
# groupbydate = df.groupby(["Date"])
# precipitation = pd.DataFrame({'Precipitation':groupbydate['Precipitation'].mean()})
# precipitation.head()
# +
# Use Pandas Plotting with Matplotlib to plot the data
xx = precipitation.index.tolist()
yy = precipitation['Precipitation'].tolist()
plt.figure(figsize=(10,7))
plt.bar(xx,yy,width = 5 ,color='b', alpha=0.5, align="center",label='Precipitation')
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
major_ticks = np.arange(0,400,80)
plt.xticks(major_ticks)
plt.title(f"Precipitation from {querydate} to {lateststr[0]}")
plt.xlabel("Date")
plt.ylabel("Precipitation")
plt.grid(which='major', axis='both', linestyle='-')
plt.legend()
plt.show()
# -
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation.describe()
# Design a query to show how many stations are available in this dataset
session.query(Station.id).count() # id is the primary key
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
sel = [Measurement.station,func.count(Measurement.id)]
activestations = session.query(*sel).\
group_by(Measurement.station).\
order_by(func.count(Measurement.id).desc()).all()
activestations
# +
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature at the most active station
sel = [func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)]
mostactivestationdata = session.query(*sel).\
group_by(Measurement.station).\
order_by(func.count(Measurement.id).desc()).first()
mostactivestationdata
# +
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
queryresult = session.query(Measurement.tobs).\
filter(Measurement.station == activestations[0][0]).\
filter(Measurement.date >= querydate).all()
temperatures = list(np.ravel(queryresult))
sel = [Station.station,Station.name,Station.latitude,Station.longitude,Station.elevation]
queryresult = session.query(*sel).all()
stations_desc = pd.DataFrame(queryresult, columns=['Station','Name','Latitude','Longitude','Elevation'])
stationname = stations_desc.loc[stations_desc["Station"] == activestations[0][0],"Name"].tolist()[0]
# n, bins, patches = plt.hist(temperatures, bins=12,alpha=0.7, rwidth=1.0,label='tobs')
plt.hist(temperatures, bins=12,rwidth=1.0,label='tobs')
plt.grid(axis='both', alpha=0.75)
plt.ylabel('Frequency')
plt.title(f"Temperature from {querydate} to {lateststr[0]} \nmeasured at {stationname}")
plt.legend()
# maxfreq = n.max()
# plt.ylim(top=np.ceil(maxfreq / 10) * 10 if maxfreq % 10 else maxfreq + 10)
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# -
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
startdate = '2017-01-01'
enddate = '2017-01-07'
tempresult = calc_temps(startdate,enddate)[0]
tempresult
# +
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
x_pos = [0]
y_pos = [tempresult[1]]
error = [(tempresult[2] - tempresult[0])]
w = 3
h = 5
d = 70
plt.figure(figsize=(w, h), dpi=d)
plt.bar(x_pos,y_pos,color='orange', yerr=error)
plt.xlim(-0.75,0.75)
plt.title("Trip Avg Temp")
plt.ylabel("Temp (F)")
plt.ylim(0, 100)
plt.tick_params(axis='x',which='both',bottom=False,top=False,labelbottom=False)
plt.grid(which='major', axis='x', linestyle='')
plt.grid(which='major', axis='y', linestyle='-')
plt.show()
# +
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# test= [('USC00516128', 'MANOA LYON ARBO 785.2, HI US', 21.3331, -157.8025, 152.4, 0.31),
# ('USC00519281', 'WAIHEE 837.5, HI US', 21.45167, -157.84888999999998, 32.9, 0.25),
# ('USC00518838', 'UPPER WAHIAWA 874.3, HI US', 21.4992, -158.0111, 306.6, 0.1),
# ('USC00513117', 'KANEOHE 838.1, HI US', 21.4234, -157.8015, 14.6, 0.060000000000000005),
# ('USC00511918', 'HONOLULU OBSERVATORY 702.2, HI US', 21.3152, -157.9992, 0.9, 0.0),
# ('USC00514830', 'KUALOA RANCH HEADQUARTERS 886.9, HI US', 21.5213, -157.8374, 7.0, 0.0),
# ('USC00517948', 'PEARL CITY, HI US', 21.3934, -157.9751, 11.9, 0.0),
# ('USC00519397', 'WAIKIKI 717.2, HI US', 21.2716, -157.8168, 3.0, 0.0),
# ('USC00519523', 'WAIMANALO EXPERIMENTAL FARM, HI US', 21.33556, -157.71139, 19.5, 0.0)]
# test = pd.DataFrame(test, columns=['Station','Name','Latitude','Longitude','Elevation','Prcp'])
# test
# startdate = '2017-01-01'
# enddate = '2017-01-07'
# sel = [Measurement.station,func.sum(Measurement.prcp)]
# queryresult = session.query(*sel).\
# group_by(Measurement.station).\
# filter(Measurement.date >= startdate).\
# filter(Measurement.date <= enddate).all()
# # order_by(func.sum(Measurement.prcp).desc()).all()
# stations_prec = pd.DataFrame(queryresult,columns=['Station','PrcpSum'])
# sel = [Station.station,Station.name,Station.latitude,Station.longitude,Station.elevation]
# queryresult = session.query(*sel).all()
# stations_desc = pd.DataFrame(queryresult, columns=['Station','Name','Latitude','Longitude','Elevation'])
# stations = pd.merge(stations_desc,stations_prec, on="Station", how="left")
# stations = stations.sort_values("PrcpSum",ascending=False)
# # stations = stations.fillna(value = {'PrcpSum':0})
# stations = stations.reset_index(drop=True)
# stations
startdate = '2017-01-01'
enddate = '2017-01-07'
sel = [Station.station,Station.name,Station.latitude,Station.longitude,Station.elevation,func.sum(Measurement.prcp)]
queryresult = session.query(*sel).\
filter(Station.station == Measurement.station).\
group_by(Measurement.station).\
filter(Measurement.date >= startdate).\
filter(Measurement.date <= enddate).\
order_by(func.sum(Measurement.prcp).desc()).\
all()
stations = pd.DataFrame(queryresult, columns=['Station','Name','Latitude','Longitude','Elevation','PrcpSum'])
stations
# -
# ## Optional Challenge Assignment
# +
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# +
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
startdate = '2018-01-01'
enddate = '2018-01-07'
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
dtobj = dt.datetime.strptime(startdate, '%Y-%m-%d')
enddtobj = dt.datetime.strptime(enddate, '%Y-%m-%d')
tripdates = []
normals =[]
while (dtobj <= enddtobj):
tripdates.append(dt.datetime.strftime(dtobj,'%Y-%m-%d'))
datestr = dt.datetime.strftime(dtobj,'%m-%d')
normals.append(list(np.ravel(daily_normals(datestr))))
dtobj = dtobj + dt.timedelta(days = 1)
normals
# -
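# The date handling in the loop above, isolated: walk day by day from start to end and
# keep only the %m-%d part for the daily-normals lookup (stdlib only):

```python
import datetime as dt

start = dt.datetime.strptime('2018-01-01', '%Y-%m-%d')
end = dt.datetime.strptime('2018-01-07', '%Y-%m-%d')

monthdays = []
d = start
while d <= end:
    monthdays.append(d.strftime('%m-%d'))   # strip the year
    d += dt.timedelta(days=1)
print(monthdays)  # ['01-01', '01-02', '01-03', '01-04', '01-05', '01-06', '01-07']
```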
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
thistory = pd.DataFrame(normals, columns=['tmin','tavg','tmax'])
thistory['Date'] = tripdates
thistory = thistory.set_index("Date")
thistory
# Plot the daily normals as an area plot with `stacked=False`
thistory.plot.area(stacked=False)
plt.xticks(rotation=45)
|
AValdes_climate.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
import sys
sys.path.insert(0, '../')
# + deletable=true editable=true
from libra import (IRTFTemplate, magnitudes,
nirspec_pixel_wavelengths, throughput, trappist1,
background, poisson, spitzer_variability,
inject_flares)
sptype = 'M8V'
delta_teff = -200
u1, u2 = trappist1('b').u
mag = magnitudes['TRAPPIST-1']['J']
exptime = 1*u.s
n_spots = 3
times = np.arange(trappist1('b').t0 - 0.15, trappist1('b').t0 + 3, 1/60/24)
radius_multiplier = 40
spectrum_photo = IRTFTemplate(sptype)
spectrum_spots = spectrum_photo.scale_temperature(delta_teff)
spectrum_photo.plot(label='photosphere', ls='--', color='DodgerBlue', alpha=0.5)
spectrum_spots.plot(label='spots', ls="--", color='r', alpha=0.5)
# + deletable=true editable=true
from libra import Star, Spot, trappist1, transit_model
spots = [Spot.from_sunspot_distribution(radius_multiplier=radius_multiplier)
for i in range(n_spots)]
star = Star(rotation_period=3.3*u.day, spots=spots, contrast=0.8)
transit = transit_model(times, trappist1('b'))
flux = star.flux(times) * transit
area = star.spotted_area(times)
old_area = star.flux_weighted_area(times)
# -
wl = nirspec_pixel_wavelengths()
flares = inject_flares(wl, times)
plt.imshow(flares.T)
flares.max()
# + deletable=true editable=true
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
ax[0].plot(times, old_area, label='old')
ax[0].plot(times, area, label='new')
ax[0].legend()
ax[1].plot(times, flux/flux.mean())
ax[1].plot(times, 1 + np.sum(flares, axis=1))
# + deletable=true editable=true
fluxes = np.zeros((len(times), len(wl)))
spitzer_var = spitzer_variability(times)
for i in range(len(times)):
f_s = area[i]
combined_spectrum = (1 - f_s) * spectrum_photo + f_s * spectrum_spots
fluxes[i, :] = poisson(combined_spectrum.n_photons(wl, exptime, mag) * transit[i] *
throughput(wl) * spitzer_var[i] * (1 + flares[i, :]) +
background(wl, exptime))
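# The combined spectrum above is a convex combination of the photosphere and spot
# templates weighted by the spotted area fraction f_s; with toy arrays standing in for
# the libra spectrum objects:

```python
import numpy as np

photo = np.array([1.0, 1.0, 1.0])   # stand-in photosphere spectrum
spots = np.array([0.5, 0.6, 0.7])   # stand-in (cooler) spot spectrum

def mix(f_s):
    """Area-weighted combination, as in the flux loop above."""
    return (1 - f_s) * photo + f_s * spots

print(mix(0.0))  # pure photosphere
print(mix(0.5))  # halfway mix
```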
# + deletable=true editable=true
spectral_fluxes = np.sum(fluxes, axis=1)
plt.scatter(times, spectral_fluxes/spectral_fluxes.mean(),
marker='.', s=4, label='spectrum model')
plt.legend()
plt.show()
# + deletable=true editable=true
fig, ax = plt.subplots(2, 2, figsize=(6, 6), sharex=True)
spectral_fluxes = np.sum(fluxes, axis=1)
ax[0, 0].scatter(times, spectral_fluxes/spectral_fluxes.mean(),
marker='.', s=4, label='spectrum model')
ax[0, 1].plot(times, area/area.max(), label='spot area')
ax[0, 1].legend()
ax[1, 1].plot(times, flux/flux.max(), label='stellar flux')
ax[1, 1].legend()
ax[1, 0].plot(times, spitzer_var, label='stellar flux')
#ax[0].plot()
fig.tight_layout()
plt.show()
# + deletable=true editable=true
fig, ax = plt.subplots(figsize=(6, 6))
spectral_fluxes = np.sum(fluxes, axis=1)
ax.scatter(times, spectral_fluxes/spectral_fluxes.max(),
marker='.', s=4, label='Simulated obs.', color='k')
ax.plot(times, star.flux(times)/star.flux(times).max(), label='Spot mod')
ax.plot(times, spitzer_var, label='Spitzer var.')
ax.plot(times, transit, label='transit')
ax.plot(times, 1 + np.sum(flares, axis=1), label='flares')
ax.legend()
ax.set_xlabel('Time [d]')
ax.set_ylabel('Flux')
#ax[0].plot()
fig.tight_layout()
fig.savefig('breakdown.png', bbox_inches='tight', dpi=200)
plt.show()
# -
fluxes.shape
# + deletable=true editable=true
fig, ax = plt.subplots(figsize=(6, 6))
#spectral_fluxes = np.sum(fluxes, axis=1)
short_bin = np.sum(fluxes[:, :100], axis=1)
mid_bin = np.sum(fluxes[:, 100:200], axis=1)
long_bin = np.sum(fluxes[:, 200:], axis=1)
ax.scatter(times, long_bin/long_bin.max(),
marker='.', s=4, label=r'Long $\lambda$', color='r')
ax.scatter(times, mid_bin/mid_bin.max(),
marker='.', s=4, label=r'Mid $\lambda$', color='C2')
ax.scatter(times, short_bin/short_bin.max(),
marker='.', s=4, label=r'Short $\lambda$', color='C0')
ax.plot(times, star.flux(times)/star.flux(times).max(), label='Spot mod')
ax.plot(times, spitzer_var, label='Spitzer var.')
ax.plot(times, transit, label='transit')
ax.plot(times, 1 + np.sum(flares, axis=1)/100, label='flares')
ax.legend()
ax.set_xlabel('Time [d]')
ax.set_ylabel('Flux')
#ax[0].plot()
fig.tight_layout()
fig.savefig('breakdown.png', bbox_inches='tight', dpi=200)
plt.show()
# -
1 + flares.min()
wl[100], wl[200],
np.isnan(spitzer_variability(times)).any()
|
notebooks/spot_modulation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Script to demo scikit for tweet popular/unpopular classification.
# %matplotlib inline
# +
from __future__ import division
from __future__ import print_function
import csv
import datetime as dt
import os
import platform
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas
from sklearn import clone
from sklearn import preprocessing
from sklearn import svm
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in modern scikit-learn
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import RandomForestClassifier
import joblib  # sklearn.externals.joblib is deprecated; use the standalone joblib package
from sklearn.feature_extraction import DictVectorizer
from sklearn.metrics import classification_report
from sklearn.tree import DecisionTreeClassifier
# -
def csv_to_dict_cesar(csv_filename):
    # Say we are interested only in the count features
count_features = ['_char_count', '_hashtag_count', '_word_count', '_url_count']
with open(csv_filename) as f:
features = [({k: int(v) for k, v in row.items() if k in count_features}, row['_popular'])
for row in csv.DictReader(f, skipinitialspace=True)]
X = [f[0] for f in features]
Y = [f[1] for f in features]
return (X, Y)
def csv_to_dict(csv_filename):
"""Open feature table with csv library.
Task: Run with '_rt_count'. See the good results!
"""
non_numeric_features = ['', '_text', '_urls', '_mentions', '_hashtags',
'_tweet_datetime', '_popular', '_rt_count']
with open(csv_filename, newline='') as f:  # 'rU' mode was removed in Python 3.11
rows = csv.DictReader(f, skipinitialspace=True, delimiter='|')
labels = [row['_popular'] for row in rows]
features = []
with open(csv_filename, newline='') as f:
rows = csv.DictReader(f, skipinitialspace=True, delimiter='|')
for row in rows:
#print(row)
row_dict = {}
for k, v in row.items():
if k not in non_numeric_features:
try:
row_dict[k] = int(v)
# these tries catch a few junk entries
except TypeError:
row_dict[k] = 0
except ValueError:
row_dict[k] = 0
#row_dict = {k: int(v) for k, v in row.items() if k not in non_numeric_features}
features.append(row_dict)
return features, labels
def csv_to_df(csv_file):
"""Open csv with Pandas DataFrame, then convert to dict
and return.
TODO: Fix this.
"""
dataframe = pandas.read_csv(csv_file,
encoding='utf-8',
engine='python',
sep='|',  # passing both sep and delimiter raises ValueError, so keep only sep
index_col=0)
return dataframe
def load_data(csv_filename):
"""Open csv file and load into Scikit vectorizer.
"""
# Open .csv and load into df
#features = csv_to_dict_cesar(csv_filename)
#vec = DictVectorizer()
#data = features[0] # list of dict: [{'_word_count': 5, '_hashtag_count': 0, '_char_count': 50, '_url_count': 0}
#target = features[1] # list of str: ['TRUE', 'TRUE', 'FALSE', ...]
print('Loading CSV into dict ...')
t0 = dt.datetime.utcnow()
data, target = csv_to_dict(csv_filename)
print('... finished in {} secs.'.format(dt.datetime.utcnow() - t0))
print()
print('Loading dict into vectorizer')
t0 = dt.datetime.utcnow()
vec = DictVectorizer()
X = vec.fit_transform(data).toarray() # change to numpy array
Y = np.array(target) # change to numpy array
print('... finished in {} secs.'.format(dt.datetime.utcnow() - t0))
print()
'''
-In case we need to know the features
'''
feature_names = vec.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2
'''
-Dividing the data into train and test
-random_state is pseudo-random number generator state used for
random sampling
'''
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)
return X_train, X_test, Y_train, Y_test
X_train, X_test, Y_train, Y_test = load_data("feature_tables/basics.csv")
def scale_data(X_train, X_test, Y_train, Y_test):
"""Take Vectors,
"""
# write models dir if not present
models_dir = 'models'
if not os.path.isdir(models_dir):
os.mkdir(models_dir)
'''
-PREPROCESSING
-Here, scaled data has zero mean and unit variance
-We save the scaler for later use with testing/prediction data
'''
print('Scaling data ...')
t0 = dt.datetime.utcnow()
scaler = preprocessing.StandardScaler().fit(X_train)
joblib.dump(scaler, 'models/scaler.pickle')
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print('... finished in {} secs.'.format(dt.datetime.utcnow() - t0))
print()
return X_train_scaled, X_test_scaled, Y_train, Y_test
X_train_scaled, X_test_scaled, Y_train, Y_test = scale_data(X_train, X_test, Y_train, Y_test)
def run_tree(X_train_scaled, X_test_scaled, Y_train, Y_test):
"""Run decision tree with scikit.
Experiment with: 'max_depth'
"""
'''
-This is where we define the models with pre-defined parameters
-We can learn these parameters given our data
'''
print('Defining and fitting models ...')
t0 = dt.datetime.utcnow()
dec_tree = DecisionTreeClassifier()
dec_tree.fit(X_train_scaled, Y_train)
joblib.dump(dec_tree, 'models/tree.pickle')
print('... finished in {} secs.'.format(dt.datetime.utcnow() - t0))
print()
Y_prediction_tree = dec_tree.predict(X_test_scaled)
print('tree_predictions ', Y_prediction_tree)
expected = Y_test
print('actual_values ', expected)
print()
print('----Tree_report--------------------------------')
print(classification_report(expected, Y_prediction_tree))
run_tree(X_train_scaled, X_test_scaled, Y_train, Y_test)
def run_svc(X_train_scaled, X_test_scaled, Y_train, Y_test):
"""Run SVC with scikit."""
# This is where we define the models with pre-defined parameters
# We can learn these parameters given our data
print('Defining and fitting models ...')
t0 = dt.datetime.utcnow()
svc = svm.LinearSVC(C=100.)
svc.fit(X_train_scaled, Y_train)
joblib.dump(svc, 'models/svc.pickle')
print('... finished in {} secs.'.format(dt.datetime.utcnow() - t0))
print()
Y_prediction_svc = svc.predict(X_test_scaled)
print('svc_predictions ', Y_prediction_svc)
expected = Y_test
print('actual_values ', expected)
print()
print('----SVC_report--------------------------------')
print(classification_report(expected, Y_prediction_svc))
run_svc(X_train_scaled, X_test_scaled, Y_train, Y_test)
def run_random_forest(X_train_scaled, X_test_scaled, Y_train, Y_test):
"""Scikit random forest
Experiment with 'n_estimators'
"""
n_estimators = 30
rf_model = RandomForestClassifier(n_estimators=n_estimators)
# Train
clf = clone(rf_model)  # clone so the template model stays unfitted
clf = clf.fit(X_train_scaled, Y_train)
joblib.dump(clf, 'models/random_forest.pickle')
scores = clf.score(X_train_scaled, Y_train)
Y_prediction = clf.predict(X_test_scaled)
print('forest_predictions ', Y_prediction)
expected = Y_test
print('actual_values ', expected)
print()
print('----Random forest report--------------------------------')
print(classification_report(expected, Y_prediction))
run_random_forest(X_train_scaled, X_test_scaled, Y_train, Y_test)
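# The docstring above suggests experimenting with `n_estimators`. A sketch of doing that systematically with `GridSearchCV`, on a synthetic toy problem rather than the real tweet features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy stand-in for the tweet feature matrix
X_toy, y_toy = make_classification(n_samples=200, n_features=8, random_state=0)

# 3-fold cross-validated search over a few n_estimators values
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={'n_estimators': [10, 30, 100]},
                      cv=3)
search.fit(X_toy, y_toy)
print(search.best_params_)
```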
def run_ada_boost(X_train_scaled, X_test_scaled, Y_train, Y_test):
"""Scikit random forest.
For plotting see:
http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_iris.html
Experiment with 'n_estimators'
"""
n_estimators = 30
ada_classifier = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
n_estimators=n_estimators)
# Train
clf = clone(ada_classifier)  # clone so the template model stays unfitted
clf = clf.fit(X_train_scaled, Y_train)
joblib.dump(clf, 'models/ada_boost.pickle')
scores = clf.score(X_train_scaled, Y_train)
Y_prediction = clf.predict(X_test_scaled)
print('ada_predictions ', Y_prediction)
expected = Y_test
print('actual_values ', expected)
print()
print(classification_report(expected, Y_prediction))
run_ada_boost(X_train_scaled, X_test_scaled, Y_train, Y_test)
|
public_talks/2016_02_26_columbia/do_ml_on_feature_tables.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from copy import deepcopy
import json
import os
import re
import sys
test='251315 | 131567 2157 28890 2290931 183963 2235 2236 2239 |'
item = [re.sub(r'\s+\|\s+', '', x) for x in test.rstrip(os.linesep).split('\t|\t')]
item_new=item[1].rstrip(' \t|')
kk=item[0]
nn=item_new.split(' ')
nn.reverse()
nn
# +
def new_read_dmp(dmpfile):
tmp_dict = {}
with open(dmpfile) as rd:
for line in rd:
item = [re.sub(r'\s+\|\s+', '', x) for x in line.rstrip(os.linesep).split('\t|\t')]
key = int(item[0])
value = item[1].rstrip(' \t|')
tmp_dict[key] = value
return tmp_dict
def treat_value(v):
tmpv = v.split(' ')
tmpv.reverse()
tmpv.append('1')
return tmpv
dict1 = new_read_dmp("/home/kechanglin/data/data_conversion/taxidlineage.dmp")
dict2 = {key: treat_value(value) for key, value in dict1.items()}
list1 = sorted(dict2.items(), key=lambda kv: kv[0])
# -
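# The line-parsing logic above can be checked without the real `taxidlineage.dmp` file. A sketch on one made-up line in the taxdump format (`<taxid>\t|\t<space-separated ancestor ids>\t|`):

```python
import os
import re

# Hypothetical lineage line in taxdump format
line = '251315\t|\t131567 2157 28890 2290931 183963 2235 2236 2239\t|'

item = [re.sub(r'\s+\|\s+', '', x) for x in line.rstrip(os.linesep).split('\t|\t')]
key = int(item[0])              # the taxid
value = item[1].rstrip(' \t|')  # ancestor ids, root-most first
lineage = value.split(' ')
lineage.reverse()               # nearest ancestor first
lineage.append('1')             # append the root taxid
print(key, lineage[:3])         # 251315 ['2239', '2236', '2235']
```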
# +
# print(re.sub(r'\D',' ',str(list1[1])))
# +
# str(list1[1])
# -
with open('/home/kechanglin/data/data_conversion/result_taxidlineage','a+') as w:
for i in list1:
result=re.sub(r'\D',' ',str(i))
w.write(result+'\n')
|
file_convert/test_modules_conversion.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import psycopg2
from sql_queries import create_table_queries, drop_table_queries
def create_database():
"""
- Creates and connects to the sparkifydb
- Returns the connection and cursor to sparkifydb
"""
# connect to default database
conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=postgres password=<PASSWORD>")
conn.set_session(autocommit=True)
cur = conn.cursor()
# create sparkify database with UTF8 encoding
cur.execute("DROP DATABASE IF EXISTS sparkifydb")
cur.execute("CREATE DATABASE sparkifydb WITH ENCODING 'utf8' TEMPLATE template0")
# close connection to default database
conn.close()
# connect to sparkify database
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=postgres password=<PASSWORD>")
cur = conn.cursor()
return cur, conn
def drop_tables(cur, conn):
"""
Drops each table using the queries in `drop_table_queries` list.
"""
for query in drop_table_queries:
cur.execute(query)
conn.commit()
def create_tables(cur, conn):
"""
Creates each table using the queries in `create_table_queries` list.
"""
for query in create_table_queries:
cur.execute(query)
conn.commit()
def main():
"""
- Drops (if exists) and Creates the sparkify database.
- Establishes connection with the sparkify database and gets
cursor to it.
- Drops all the tables.
- Creates all tables needed.
- Finally, closes the connection.
"""
cur, conn = create_database()
drop_tables(cur, conn)
create_tables(cur, conn)
conn.close()
if __name__ == "__main__":
main()
# -
|
Data Modelling with postgressql/create_tables.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Objectives" data-toc-modified-id="Objectives-1"><span class="toc-item-num">1 </span>Objectives</a></span></li><li><span><a href="#More-Pandas" data-toc-modified-id="More-Pandas-2"><span class="toc-item-num">2 </span>More Pandas</a></span><ul class="toc-item"><li><span><a href="#Loading-the-Data" data-toc-modified-id="Loading-the-Data-2.1"><span class="toc-item-num">2.1 </span>Loading the Data</a></span></li></ul></li><li><span><a href="#Exploratory-Data-Analysis-(EDA)" data-toc-modified-id="Exploratory-Data-Analysis-(EDA)-3"><span class="toc-item-num">3 </span>Exploratory Data Analysis (EDA)</a></span><ul class="toc-item"><li><span><a href="#Inspecting-the-Data" data-toc-modified-id="Inspecting-the-Data-3.1"><span class="toc-item-num">3.1 </span>Inspecting the Data</a></span></li><li><span><a href="#Question-1:-What-animal-types-are-in-the-dataset?" data-toc-modified-id="Question-1:-What-animal-types-are-in-the-dataset?-3.2"><span class="toc-item-num">3.2 </span>Question 1: What animal types are in the dataset?</a></span></li><li><span><a href="#Question-2:-What-"Other"-animals-are-in-the-dataset?" data-toc-modified-id="Question-2:-What-"Other"-animals-are-in-the-dataset?-3.3"><span class="toc-item-num">3.3 </span>Question 2: What "Other" animals are in the dataset?</a></span></li><li><span><a href="#Question-3:-How-old-are-the-animals-in-our-dataset?" 
data-toc-modified-id="Question-3:-How-old-are-the-animals-in-our-dataset?-3.4"><span class="toc-item-num">3.4 </span>Question 3: How old are the animals in our dataset?</a></span><ul class="toc-item"><li><span><a href="#Series.map()" data-toc-modified-id="Series.map()-3.4.1"><span class="toc-item-num">3.4.1 </span><code>Series.map()</code></a></span></li><li><span><a href="#More-Sophisticated-Mapping" data-toc-modified-id="More-Sophisticated-Mapping-3.4.2"><span class="toc-item-num">3.4.2 </span>More Sophisticated Mapping</a></span></li><li><span><a href="#Lambda-Functions" data-toc-modified-id="Lambda-Functions-3.4.3"><span class="toc-item-num">3.4.3 </span>Lambda Functions</a></span></li></ul></li></ul></li><li><span><a href="#Handling-Missing-Data" data-toc-modified-id="Handling-Missing-Data-4"><span class="toc-item-num">4 </span>Handling Missing Data</a></span><ul class="toc-item"><li><span><a href="#Fill-with-a-Relevant-Value" data-toc-modified-id="Fill-with-a-Relevant-Value-4.1"><span class="toc-item-num">4.1 </span>Fill with a Relevant Value</a></span></li><li><span><a href="#Fill-with-a-Reasonable-Value" data-toc-modified-id="Fill-with-a-Reasonable-Value-4.2"><span class="toc-item-num">4.2 </span>Fill with a Reasonable Value</a></span></li><li><span><a href="#Specify-That-the-Data-Were-Missing" data-toc-modified-id="Specify-That-the-Data-Were-Missing-4.3"><span class="toc-item-num">4.3 </span>Specify That the Data Were Missing</a></span></li><li><span><a href="#Drop-Missing-Data" data-toc-modified-id="Drop-Missing-Data-4.4"><span class="toc-item-num">4.4 </span>Drop Missing Data</a></span></li><li><span><a href="#Comparing-Before-and-After" data-toc-modified-id="Comparing-Before-and-After-4.5"><span class="toc-item-num">4.5 </span>Comparing Before and After</a></span></li></ul></li><li><span><a href="#Level-Up:-.applymap()" data-toc-modified-id="Level-Up:-.applymap()-5"><span class="toc-item-num">5 </span>Level Up: 
<code>.applymap()</code></a></span></li><li><span><a href="#Level-Up:-Faster-NumPy-Methods" data-toc-modified-id="Level-Up:-Faster-NumPy-Methods-6"><span class="toc-item-num">6 </span>Level Up: Faster NumPy Methods</a></span><ul class="toc-item"><li><span><a href="#NumPy's-where()-Method" data-toc-modified-id="NumPy's-where()-Method-6.1"><span class="toc-item-num">6.1 </span>NumPy's <code>where()</code> Method</a></span></li><li><span><a href="#NumPy's-select()-Method" data-toc-modified-id="NumPy's-select()-Method-6.2"><span class="toc-item-num">6.2 </span>NumPy's <code>select()</code> Method</a></span></li></ul></li></ul></div>
# -
# 
# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
# %matplotlib inline
# + [markdown] heading_collapsed=true
# # Objectives
# + [markdown] hidden=true
# - Use lambda functions and DataFrame methods to transform data
# - Handle missing data
# + [markdown] heading_collapsed=true
# # More Pandas
# + [markdown] hidden=true
# Suppose you were interested in opening an animal shelter. To inform your planning, it would be useful to analyze data from other shelters to understand their operations. In this lecture, we'll analyze animal outcome data from the Austin Animal Center.
# + [markdown] heading_collapsed=true hidden=true
# ## Loading the Data
# + [markdown] hidden=true
# Let's take a moment to examine the [Austin Animal Center data set](https://data.austintexas.gov/Health-and-Community-Services/Austin-Animal-Center-Outcomes/9t4d-g238/data).
#
# We can also ingest the data right off the web, as we do below. The code below will load JSON data for the last 1000 animals to leave the center from this [JSON file](https://data.austintexas.gov/resource/9t4d-g238.json).
# + hidden=true
json_url = 'https://data.austintexas.gov/resource/9t4d-g238.json'
animals = pd.read_json(json_url)
# + [markdown] heading_collapsed=true
# # Exploratory Data Analysis (EDA)
# + [markdown] hidden=true
# Exploring a new dataset is essential for understanding what it contains. This will generate ideas for processing the data and questions to try to answer in further analysis.
# + [markdown] heading_collapsed=true hidden=true
# ## Inspecting the Data
# + [markdown] hidden=true
# Let's take a look at a few rows of data.
# + hidden=true
animals.head()
# + [markdown] hidden=true
# The `info()` and `describe()` methods provide a useful overview of the data.
# + hidden=true
animals.info()
# + [markdown] hidden=true
# > We can see we have some missing data. Specifically in the `outcome_type`, `outcome_subtype`, and `name` columns.
# + hidden=true
animals.describe()
# + hidden=true
# Use value counts to check a categorical feature's distribution
animals['color'].value_counts()
# + [markdown] hidden=true
# Now that we have a sense of the data available to us, we can focus in on some more specific questions to dig into. These questions may or may not be directly relevant to your goal (e.g. helping plan a new shelter), but will always help you gain a better understanding of your data.
#
# In your EDA notebooks, **markdown** will be especially helpful in tracking these questions and your methods of answering the questions.
# + [markdown] heading_collapsed=true hidden=true
# ## Question 1: What animal types are in the dataset?
# + [markdown] hidden=true
# We can then begin thinking about what parts of the DataFrame we need to answer the question.
# + [markdown] hidden=true
# * What features do we need?
# - "animal_type"
# * What type of logic and calculation do we perform?
# - Let's use `.value_counts()` to count the different animal types
# * What type of visualization would help us answer the question?
# - A bar chart would be good for this purpose
# + hidden=true
animals['animal_type'].value_counts()
# + hidden=true
fig, ax = plt.subplots()
animal_type_values = animals['animal_type'].value_counts()
ax.barh(
y=animal_type_values.index,
width=animal_type_values.values
)
ax.set_xlabel('count');
# + hidden=true
animals['animal_type'].hist()
# + [markdown] hidden=true
# Questions lead to other questions. For the above example, the visualization raises the question...
# + [markdown] heading_collapsed=true hidden=true
# ## Question 2: What "Other" animals are in the dataset?
# + [markdown] hidden=true
# To find out, we need to know whether the specific animal type is recorded somewhere in our dataset - and if so, where to find it.
# + [markdown] hidden=true
# **Discussion**: Where might we look to find animal types within the Other category?
#
# <details>
# <summary>
# Answer
# </summary>
# The breed column.
# </details>
# + hidden=true
# Your exploration here
# + [markdown] hidden=true
# Let's use that column to answer our question.
# + hidden=true
mask_other_animals = animals['animal_type'] == 'Other'
animals[mask_other_animals]['breed'].value_counts()
# + [markdown] heading_collapsed=true hidden=true
# ## Question 3: How old are the animals in our dataset?
# + [markdown] hidden=true
# Let's try to answer this with the `age_upon_outcome` variable to learn some new `pandas` tools.
# + hidden=true
animals['age_upon_outcome'].value_counts()
# + [markdown] heading_collapsed=true hidden=true
# ### `Series.map()`
# + [markdown] hidden=true
# The `.map()` method applies a transformation to every entry in the Series. This transformation "maps" each value from the Series to a new value. A transformation can be defined by a function, Series, or dictionary - usually we'll use functions.
# + [markdown] hidden=true
# The `.apply()` method is similar to the `.map()` method for Series, but can only use functions. It has more powerful uses when working with DataFrames.
# + hidden=true
def one_year(age):
if age == '1 year':
return '1 years'
else:
return age
# + hidden=true
animals['new_age1'] = animals['age_upon_outcome'].map(one_year)
animals['new_age1'].value_counts()
# + [markdown] heading_collapsed=true hidden=true
# ### More Sophisticated Mapping
# + [markdown] hidden=true
# Let's use `.map()` to turn sex_upon_outcome into a category with three values (called **ternary**): male, female, or unknown.
# + [markdown] hidden=true
# First, explore the unique values:
# + hidden=true
animals['sex_upon_outcome'].unique()
# + hidden=true
def sex_mapper(status):
if status in ['Neutered Male', 'Intact Male']:
return 'Male'
elif status in ['Spayed Female', 'Intact Female']:
return 'Female'
else:
return 'Unknown'
# + hidden=true
animals['new_sex1'] = animals['sex_upon_outcome'].map(sex_mapper)
animals['new_sex1']
# + [markdown] heading_collapsed=true hidden=true
# ### Lambda Functions
# + [markdown] hidden=true
# Simple functions can be defined just when you need them, when you would call the function. These are called **lambda functions**. These functions are **anonymous** and disappear immediately after use.
# + [markdown] hidden=true
# Let's use a lambda function to get rid of 'Other' in the 'animal_type' column.
# + hidden=true
animals[animals['animal_type'] == 'Other']
# + hidden=true
animals['animal_type'].value_counts()
# + hidden=true
animals['animal_type'].map(lambda x: np.nan if x == 'Other' else x).value_counts()
# + [markdown] heading_collapsed=true
# # Handling Missing Data
# + [markdown] hidden=true
# Often our data set will have missing information, which can complicate what we're trying to do.
# + [markdown] hidden=true
# So far, we've been doing some preprocessing/cleaning to answer questions. Now we're going to handle the missing values in our data.
#
# There are a few strategies we can choose from and they each have their special use case.
# + [markdown] hidden=true
# > Before making changes, it's safer to apply them to a copy instead of overwriting the original data. We'll keep all our changes in `animals_clean`, which will be a [copy](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.copy.html) of the original DataFrame.
# + hidden=true
animals_clean = animals.copy()
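# Before touching the real data, the strategies below can be previewed on a small toy frame (the rows are made up, loosely mimicking the shelter data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'name': ['Rex', np.nan, 'Milo'],
                    'age': [2.0, np.nan, 4.0],
                    'outcome': ['Adoption', 'Transfer', np.nan]})

# Fill with a relevant value (name) and a reasonable value (median age)
filled = toy.fillna({'name': 'UNKNOWN', 'age': toy['age'].median()})

# Drop rows where the value we care most about is still missing
dropped = filled.dropna(subset=['outcome'])

print(filled['name'].tolist())  # ['Rex', 'UNKNOWN', 'Milo']
print(len(dropped))             # 2
```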
# + [markdown] heading_collapsed=true hidden=true
# ## Fill with a Relevant Value
# + [markdown] hidden=true
# Often we already know how we want to mark a value as missing, replacing it with something more informative than an "empty" value.
# + [markdown] hidden=true
# For example, it might make sense to fill the value with "MISSING" or "UNKNOWN". This way it's clearer when we do more analysis.
# + [markdown] hidden=true
# > We can use Pandas' [`fillna()` method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html) to replace missing values with something specific
# + hidden=true
# Note this creates a copy of `animals` with the missing values replaced
animals_name_filled = animals.fillna({'name':'UNKNOWN'}) # {col_name:new_value}
animals_name_filled.head()
# + hidden=true
# `animals` DataFrame is left untouched
animals.head()
# + hidden=true
# Alternative way to fill missing values by specifying column(s) first
animals_only_names = animals[['name']].fillna(value='UNKNOWN')
animals_only_names.head()
# + hidden=true
# To keep changes in DataFrame, overwrite the column
animals_clean[['name']] = animals_only_names
animals_clean.head()
# + [markdown] heading_collapsed=true hidden=true
# ## Fill with a Reasonable Value
# + [markdown] hidden=true
# Other times we don't know what the missing value was but we might have a reasonable guess. This allows us to still use the data point (row) in our analysis.
# + [markdown] hidden=true
# > Beware that filling in missing values can lead to you drawing incorrect conclusions. If most of the data from a column are missing, it's going to appear that the value you filled in with is more common than it actually was!
# + [markdown] hidden=true
# A lot of the time we'll use the _mean_ or _median_ for numerical values. Sometimes a value like $0$ makes sense in the context of how the data was collected.
#
# With categorical values, you might choose to fill the missing values with the most common value (the _mode_).
# + [markdown] hidden=true
# > Similar to the previous subsection, we can use the `fillna()` method after specifying the value to fill
# + hidden=true
## Let's find the most common value for `outcome_subtype`
outcome_subtype_counts = animals['outcome_subtype'].value_counts()
outcome_subtype_counts
# + hidden=true
# This gets us just the values in order of most frequent to least frequent
outcome_subtype_ordered = outcome_subtype_counts.index
print(outcome_subtype_ordered)
# Get the first one
most_common_outcome_subtype = outcome_subtype_ordered[0]
# + hidden=true
# Using the built-in mode() method
# Note this is Series so we have to get the first element (which is the value)
most_common_outcome_subtype = animals['outcome_subtype'].mode()[0]
most_common_outcome_subtype
# + hidden=true
# Similar to the previous subsection, we can use fillna() and update the DF
animals_clean['outcome_subtype'] = animals['outcome_subtype'].fillna(most_common_outcome_subtype)
animals_clean.head()
# + [markdown] heading_collapsed=true hidden=true
# ## Specify That the Data Were Missing
# + [markdown] hidden=true
# Even after filling in missing values, it might make sense to specify that there were missing data. You can document that the data was missing by creating a new column that represents whether the data was originally missing or not.
# + [markdown] hidden=true
# This can be helpful when you suspect that the fact the data was missing could be important for an analysis.
# + [markdown] hidden=true
# > Since we already replaced some missing values, we're going to reference back to the original `animals` DataFrame. (Good thing we didn't overwrite it! 😉)
# + hidden=true
# Let's specify which values were originally missing in "outcome_subtype"
missing_outcome_subtypes = animals['outcome_subtype'].isna()
missing_outcome_subtypes
# + hidden=true
# Create new column for missing outcome subtypes matched w/ replaced values
animals_clean['outcome_subtype_missing'] = missing_outcome_subtypes
animals_clean.head()
# + [markdown] heading_collapsed=true hidden=true
# ## Drop Missing Data
# + [markdown] hidden=true
# You should try to keep as much relevant data as possible, but sometimes the other methods don't make as much sense and it's better to remove or **drop** the missing data.
# + [markdown] hidden=true
# We typically drop missing data if very little data would be lost and/or trying to fill in the values wouldn't make sense for our use case. For example, if you're trying to predict the outcome based on the other features/columns it might not make sense to fill in those missing values with something you can't confirm.
# + [markdown] hidden=true
# > We noticed that `outcome_type` had only two missing values. It might not be worth trying to handle those two missing values. We can pretend that the `outcome_type` was an important feature and without it the rest of the row's data is of little importance to us.
# >
# > So we'll decide to drop the row if a value from `outcome_type` is missing. We'll use Pandas' [`dropna()` method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html).
# + hidden=true
# This will drop any row (axis=0) or column (axis=1) that has missing values
animals_clean = animals_clean.dropna( # Note we're overwriting animals_clean
axis=0, # This is the default & will drop rows; axis=1 for cols
subset=['outcome_type'] # Specific labels to consider (defaults to all)
)
animals_clean.head()
# + [markdown] heading_collapsed=true hidden=true
# ## Comparing Before and After
# + [markdown] hidden=true
# We can now see all the work we did!
# + hidden=true
# Original data
animals.info()
# + hidden=true
# Missing data cleaned
animals_clean.info()
# + [markdown] heading_collapsed=true
# # Level Up: `.applymap()`
# + [markdown] hidden=true
# `.applymap()` is used to apply a transformation to each element of a DataFrame.
# + hidden=true
# This line will apply the base `type()` function to
# all entries of the DataFrame.
animals.applymap(type)
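# A self-contained sketch of element-wise application on a toy frame. Note that pandas 2.1 renamed `DataFrame.applymap` to `DataFrame.map`; the `hasattr` check below keeps the sketch working on both older and newer versions.

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})

# Pick the element-wise method available in this pandas version
elementwise = df.map if hasattr(df, 'map') else df.applymap

# Apply type() to every entry of the DataFrame
types = elementwise(type)
print(types.iloc[0, 0])  # <class 'int'>
print(types.iloc[0, 1])  # <class 'str'>
```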
# + [markdown] heading_collapsed=true
# # Level Up: Faster NumPy Methods
# + [markdown] hidden=true
# In general, `np.where()` and `np.select()` are faster than `map()`. This won't matter too much with reasonably-sized data but can be a consideration for ***big data***.
# + [markdown] heading_collapsed=true hidden=true
# ## NumPy's `where()` Method
# + hidden=true
animals['new_age2'] = np.where(animals['age_upon_outcome'] == '1 year',
'1 years', animals['age_upon_outcome'])
animals['new_age2']
# + hidden=true
# Check we got the same results with np.where()
(animals['new_age1'] != animals['new_age2']).sum()
# + hidden=true
# Let's time how long it takes .map() to run by running it multiple times
# %timeit animals['new_age1'] = animals['age_upon_outcome'].map(one_year)
# + hidden=true
# Let's time how long it takes np.where() to run by running it multiple times
# %timeit animals['new_age2'] = np.where(animals['age_upon_outcome'] == '1 year',\
# '1 years',animals['age_upon_outcome'])
# + [markdown] heading_collapsed=true hidden=true
# ## NumPy's `select()` Method
# + [markdown] hidden=true
# Again, `numpy` will be faster:
# + hidden=true
conditions = [animals['sex_upon_outcome'] == 'Neutered Male',
animals['sex_upon_outcome'] == 'Intact Male',
animals['sex_upon_outcome'] == 'Spayed Female',
animals['sex_upon_outcome'] == 'Intact Female',
animals['sex_upon_outcome'] == 'Unknown',
animals['sex_upon_outcome'] == 'NULL']
choices = ['Male', 'Male', 'Female', 'Female', 'Unknown', 'Unknown']
# + hidden=true
animals['new_sex2'] = np.select(conditions, choices)
animals['new_sex2']
# + hidden=true
# Check we got the same results with np.select()
(animals['new_sex1'] != animals['new_sex2']).sum()
# + hidden=true
# Let's time how long it takes .map() to run by running it multiple times
# %timeit animals['new_sex1'] = animals['sex_upon_outcome'].map(sex_mapper)
# + hidden=true
# Let's time how long it takes np.select() to run by running it multiple times
# %timeit animals['new_sex2'] = np.select(conditions, choices)
|
Phase_1/ds-pandas_data_cleaning-main/pandas_data_cleaning.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# # Single Qubit Gates
# -
# In the previous section we looked at all the possible states a qubit could be in. We saw that qubits could be represented by 2D vectors, and that their states are limited to the form:
#
# $$ |q\rangle = \cos{(\tfrac{\theta}{2})}|0\rangle + e^{i\phi}\sin{\tfrac{\theta}{2}}|1\rangle $$
#
# where $\theta$ and $\phi$ are real numbers. In this section we will cover _gates,_ the operations that change a qubit between these states. Due to the number of gates and the similarities between them, this chapter is at risk of becoming a list. To counter this, we have included a few digressions to introduce important ideas at appropriate places throughout the chapter.
#
#
# In _The Atoms of Computation_ we came across some gates and used them to perform a classical computation. An important feature of quantum circuits is that, between initialising the qubits and measuring them, the operations (gates) are _always_ reversible! These reversible gates can be represented as matrices, and as rotations around the Bloch sphere.
# + tags=["thebelab-init"]
from qiskit import *
from math import pi
from qiskit.visualization import plot_bloch_multivector
# -
# ## 1. The Pauli Gates <a id="pauli"></a>
# You should be familiar with the Pauli matrices from the linear algebra section. If any of the maths here is new to you, you should use the linear algebra section to bring yourself up to speed. We will see here that the Pauli matrices can represent some very commonly used quantum gates.
#
# ### 1.1 The X-Gate <a id="xgate"></a>
# The X-gate is represented by the Pauli-X matrix:
#
# $$ X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = |0\rangle\langle1| + |1\rangle\langle0| $$
#
# To see the effect a gate has on a qubit, we simply multiply the qubit’s statevector by the gate. We can see that the X-gate switches the amplitudes of the states $|0\rangle$ and $|1\rangle$:
#
# $$ X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = |1\rangle$$
#
#
#
#
# <!-- ::: q-block.reminder -->
#
# ## Reminders
#
# <details>
# <summary>Multiplying Vectors by Matrices</summary>
# Matrix multiplication is a generalisation of the inner product we saw in the last chapter. In the specific case of multiplying a vector by a matrix (as seen above), we always get a vector back:
#
# $$ M|v\rangle = \begin{bmatrix}a & b \\ c & d \end{bmatrix}\begin{bmatrix}v_0 \\ v_1 \end{bmatrix}
# = \begin{bmatrix}a\cdot v_0 + b \cdot v_1 \\ c \cdot v_0 + d \cdot v_1 \end{bmatrix} $$
#
# In quantum computing, we can write our matrices in terms of basis vectors:
#
# $$X = |0\rangle\langle1| + |1\rangle\langle0|$$
#
# This can sometimes be clearer than using a box matrix as we can see what different multiplications will result in:
#
# $$
# \begin{aligned}
# X|1\rangle & = (|0\rangle\langle1| + |1\rangle\langle0|)|1\rangle \\
# & = |0\rangle\langle1|1\rangle + |1\rangle\langle0|1\rangle \\
# & = |0\rangle \times 1 + |1\rangle \times 0 \\
# & = |0\rangle
# \end{aligned}
# $$
#
# In fact, when we see a ket and a bra multiplied like this:
#
# $$ |a\rangle\langle b| $$
#
# this is called the _outer product_, which follows the rule:
#
# $$
# |a\rangle\langle b| =
# \begin{bmatrix}
# a_0 b_0 & a_0 b_1 & \dots & a_0 b_n\\
# a_1 b_0 & \ddots & & \vdots \\
# \vdots & & \ddots & \vdots \\
# a_n b_0 & \dots & \dots & a_n b_n \\
# \end{bmatrix}
# $$
#
# We can see this does indeed result in the X-matrix as seen above:
#
# $$
# |0\rangle\langle1| + |1\rangle\langle0| =
# \begin{bmatrix}0 & 1 \\ 0 & 0 \end{bmatrix} +
# \begin{bmatrix}0 & 0 \\ 1 & 0 \end{bmatrix} =
# \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix} = X
# $$
# </details>
#
# <!-- ::: -->
#
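# Before trying this in Qiskit, the outer-product construction above can be checked with plain NumPy (a standalone sketch, independent of Qiskit):

```python
import numpy as np

# Basis states as column vectors
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

# Build X from outer products: X = |0><1| + |1><0|
X = ket0 @ ket1.T + ket1 @ ket0.T
print(X)         # [[0 1], [1 0]]

# X swaps the amplitudes: X|0> = |1>
print(X @ ket0)  # [[0], [1]], i.e. |1>
```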
# In Qiskit, we can create a short circuit to verify this:
# Let's do an X-gate on a |0> qubit
qc = QuantumCircuit(1)
qc.x(0)
qc.draw()
# Let's see the result of the above circuit. **Note:** Here we use <code>plot_bloch_multivector()</code> which takes a qubit's statevector instead of the Bloch vector.
# Let's see the result
backend = Aer.get_backend('statevector_simulator')
out = execute(qc,backend).result().get_statevector()
plot_bloch_multivector(out)
# We can indeed see the state of the qubit is $|1\rangle$ as expected. We can think of this as a rotation by $\pi$ radians around the *x-axis* of the Bloch sphere. The X-gate is also often called a NOT-gate, referring to its classical analogue.
#
# ### 1.2 The Y & Z-gates <a id="ynzgatez"></a>
# Similarly to the X-gate, the Y & Z Pauli matrices also act as the Y & Z-gates in our quantum circuits:
#
#
# $$ Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \quad\quad\quad\quad Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $$
#
# $$ Y = -i|0\rangle\langle1| + i|1\rangle\langle0| \quad\quad Z = |0\rangle\langle0| - |1\rangle\langle1| $$
#
# And, unsurprisingly, they also respectively perform rotations by $\pi$ around the y and z-axis of the Bloch sphere.
#
# Below is a widget that displays a qubit’s state on the Bloch sphere, pressing one of the buttons will perform the gate on the qubit:
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli')
# In Qiskit, we can apply the Y and Z-gates to our circuit using:
qc.y(0) # Do Y-gate on qubit 0
qc.z(0) # Do Z-gate on qubit 0
qc.draw()
# ## 2. Digression: The X, Y & Z-Bases <a id="xyzbases"></a>
# <!-- ::: q-block.reminder -->
#
# ## Reminders
#
# <details>
# <summary>Eigenvectors of Matrices</summary>
# We have seen that multiplying a vector by a matrix results in a vector:
#
# $$
# M|v\rangle = |v'\rangle \leftarrow \text{new vector}
# $$
# If we chose the right vectors and matrices, we can find a case in which this matrix multiplication is the same as doing a multiplication by a scalar:
#
# $$
# M|v\rangle = \lambda|v\rangle
# $$
# (Above, $M$ is a matrix, and $\lambda$ is a scalar). For a matrix $M$, any vector that has this property is called an <i>eigenvector</i> of $M$. For example, the eigenvectors of the Z-matrix are the states $|0\rangle$ and $|1\rangle$:
#
# $$
# \begin{aligned}
# Z|0\rangle & = |0\rangle \\
# Z|1\rangle & = -|1\rangle
# \end{aligned}
# $$
# Since we use vectors to describe the state of our qubits, we often call these vectors <i>eigenstates</i> in this context. Eigenvectors are very important in quantum computing, and it is important you have a solid grasp of them.
# </details>
#
# <!-- ::: -->
#
# You may also notice that the Z-gate appears to have no effect on our qubit when it is in either of these two states. This is because the states $|0\rangle$ and $|1\rangle$ are the two _eigenstates_ of the Z-gate. In fact, the _computational basis_ (the basis formed by the states $|0\rangle$ and $|1\rangle$) is often called the Z-basis. This is not the only basis we can use, a popular basis is the X-basis, formed by the eigenstates of the X-gate. We call these two vectors $|+\rangle$ and $|-\rangle$:
#
# $$ |+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
#
# $$ |-\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix} $$
#
# Another less commonly used basis is that formed by the eigenstates of the Y-gate. These are called:
#
# $$ |\circlearrowleft\rangle, \quad |\circlearrowright\rangle$$
#
# We leave it as an exercise to calculate these. There are in fact an infinite number of bases; to form one, we simply need two orthogonal vectors.
#
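# That $|+\rangle$ and $|-\rangle$ really are orthogonal (and normalised), and hence form a valid basis, is quick to check numerically - a standalone NumPy sketch:

```python
import numpy as np

plus  = np.array([1,  1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# <+|-> = 0: the two states are orthogonal, so they form a basis
print(np.dot(plus, minus))  # ~0
# Each state is normalised: <+|+> = 1
print(np.dot(plus, plus))   # ~1.0
```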
# ### Quick Exercises
# 1. Verify that $|+\rangle$ and $|-\rangle$ are in fact eigenstates of the X-gate.
# 2. What eigenvalues do they have?
# 3. Why would we not see these eigenvalues appear on the Bloch sphere?
# 4. Find the eigenstates of the Y-gate, and their co-ordinates on the Bloch sphere.
#
# Using only the Pauli-gates it is impossible to move our initialised qubit to any state other than $|0\rangle$ or $|1\rangle$, i.e. we cannot achieve superposition. This means we can see no behaviour different to that of a classical bit. To create more interesting states we will need more gates!
#
# ## 3. The Hadamard Gate <a id="hgate"></a>
#
# The Hadamard gate (H-gate) is a fundamental quantum gate. It allows us to move away from the poles of the Bloch sphere and create a superposition of $|0\rangle$ and $|1\rangle$. It has the matrix:
#
# $$ H = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $$
#
# We can see that this performs the transformations below:
#
# $$ H|0\rangle = |+\rangle $$
#
# $$ H|1\rangle = |-\rangle $$
#
# This can be thought of as a rotation around the Bloch vector `[1,0,1]` (the line between the x & z-axis), or as transforming the state of the qubit between the X and Z bases.
#
# You can play around with these gates using the widget below:
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli+h')
# ### Quick Exercise
# 1. Write the H-gate as the outer products of vectors $|0\rangle$, $|1\rangle$, $|+\rangle$ and $|-\rangle$.
# 2. Show that applying the sequence of gates: HZH, to any qubit state is equivalent to applying an X-gate.
# 3. Find a combination of X, Z and H-gates that is equivalent to a Y-gate (ignoring global phase).
#
# ## 4. Digression: Measuring in Different Bases <a id="measuring"></a>
# We have seen that the Z-axis is not intrinsically special, and that there are infinitely many other bases. Similarly with measurement, we don’t always have to measure in the computational basis (the Z-basis), we can measure our qubits in any basis.
#
# As an example, let’s try measuring in the X-basis. We can calculate the probability of measuring either $|+\rangle$ or $|-\rangle$:
#
# $$ p(|+\rangle) = |\langle+|q\rangle|^2, \quad p(|-\rangle) = |\langle-|q\rangle|^2 $$
#
# And after measurement, we are guaranteed to have a qubit in one of these two states. Since Qiskit only allows measuring in the Z-basis, we must create our own using Hadamard gates:
# +
# Create the X-measurement function:
def x_measurement(qc,qubit,cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
qc.h(qubit)
return qc
initial_state = [0,1]
# Initialise our qubit and measure it
qc = QuantumCircuit(1,1)
qc.initialize(initial_state, 0)
x_measurement(qc, 0, 0) # measure qubit 0 to classical bit 0
qc.draw()
# -
# In the quick exercises above, we saw you could create an X-gate by sandwiching our Z-gate between two H-gates:
#
# $$ X = HZH $$
#
# Starting in the Z-basis, the H-gate switches our qubit to the X-basis, the Z-gate performs a NOT in the X-basis, and the final H-gate returns our qubit to the Z-basis.
#
# <img src="images/bloch_HZH.svg">
#
# We can verify this always behaves like an X-gate by multiplying the matrices:
#
# $$
# HZH =
# \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
# \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}
# \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
# =
# \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
# =X
# $$
#
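# The matrix identity $HZH = X$ can also be checked numerically (a standalone NumPy sketch):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

# HZH equals X (up to floating-point rounding)
print(np.allclose(H @ Z @ H, X))  # True
```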
# Following the same logic, we have created an X-measurement by sandwiching our Z-measurement between two H-gates.
#
# <img src="images/x-measurement.svg">
#
# Let’s now see the results:
backend = Aer.get_backend('statevector_simulator') # Tell Qiskit how to simulate our circuit
out_state = execute(qc,backend).result().get_statevector() # Do the simulation, returning the state vector
plot_bloch_multivector(out_state) # Display the output state vector
# We initialised our qubit in the state $|1\rangle$, but we can see that, after the measurement, we have collapsed our qubit to the states $|+\rangle$ or $|-\rangle$. If you run the cell again, you will see different results, but the final state of the qubit will always be $|+\rangle$ or $|-\rangle$.
#
# ### Quick Exercises
# 1. If we initialise our qubit in the state $|+\rangle$, what is the probability of measuring it in state $|-\rangle$?
# 2. Use Qiskit to display the probability of measuring a $|0\rangle$ qubit in the states $|+\rangle$ and $|-\rangle$ (**Hint:** you might want to use <code>.get_counts()</code> and <code>plot_histogram()</code>).
# 3. Try to create a function that measures in the Y-basis.
#
# Measuring in different bases allows us to see Heisenberg’s famous uncertainty principle in action. Having certainty of measuring a state in the Z-basis removes all certainty of measuring a specific state in the X-basis, and vice versa. A common misconception is that the uncertainty is due to the limits in our equipment, but here we can see the uncertainty is actually part of the nature of the qubit.
#
# For example, if we put our qubit in the state $|0\rangle$, our measurement in the Z-basis is certain to be $|0\rangle$, but our measurement in the X-basis is completely random! Similarly, if we put our qubit in the state $|-\rangle$, our measurement in the X-basis is certain to be $|-\rangle$, but now any measurement in the Z-basis will be completely random.
#
# More generally: _Whatever state our quantum system is in, there is always a measurement that has a deterministic outcome._
#
# The introduction of the H-gate has allowed us to explore some interesting phenomena, but we are still very limited in our quantum operations. Let us now introduce a new type of gate:
#
# ## 5. The R<sub>ϕ</sub>-gate
#
# The $R_\phi$-gate is _parametrised,_ that is, it needs a number ($\phi$) to tell it exactly what to do. The $R_\phi$-gate performs a rotation of $\phi$ around the Z-axis direction (and as such is sometimes also known as the $R_z$-gate). It has the matrix:
#
# $$
# R_\phi = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\phi} \end{bmatrix}
# $$
#
# Where $\phi$ is a real number.
#
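# A quick numerical check that the Z-gate is the $\phi = \pi$ special case of $R_\phi$ (standalone NumPy sketch):

```python
import numpy as np

def r_phi(phi):
    """Matrix of the R_phi (R_z) gate, as defined above."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

# With phi = pi the gate reduces to Z
Z = np.array([[1, 0], [0, -1]])
print(np.allclose(r_phi(np.pi), Z))  # True
```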
# You can use the widget below to play around with the $R_\phi$-gate, specify $\phi$ using the slider:
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli+h+rz')
# In Qiskit, we specify an $R_\phi$-gate using `rz(phi, qubit)`:
qc = QuantumCircuit(1)
qc.rz(pi/4, 0)
qc.draw()
# You may notice that the Z-gate is a special case of the $R_\phi$-gate, with $\phi = \pi$. In fact there are three more commonly referenced gates we will mention in this chapter, all of which are special cases of the $R_\phi$-gate:
#
# ## 6. The I, S and T-gates <a id="istgates"></a>
#
# ### 6.1 The I-gate <a id="igate"></a>
#
# First comes the I-gate (aka ‘Id-gate’ or ‘Identity gate’). This is simply a gate that does nothing. Its matrix is the identity matrix:
#
# $$
# I = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}
# $$
#
# Applying the identity gate anywhere in your circuit should have no effect on the qubit state, so it’s interesting this is even considered a gate. There are two main reasons behind this, one is that it is often used in calculations, for example: proving the X-gate is its own inverse:
#
# $$ I = XX $$
#
# The second, is that it is often useful when considering real hardware to specify a ‘do-nothing’ or ‘none’ operation.
#
# #### Quick Exercise
# 1. What are the eigenstates of the I-gate?
#
# ### 6.2 The S-gates <a id="sgate"></a>
#
# The next gate to mention is the S-gate (sometimes known as the $\sqrt{Z}$-gate); this is an $R_\phi$-gate with $\phi = \pi/2$. It performs a quarter-turn around the Z-axis of the Bloch sphere. It is important to note that, unlike every gate introduced in this chapter so far, the S-gate is **not** its own inverse! As a result, you will often see the $S^\dagger$-gate (also "S-dagger", "Sdg" or $\sqrt{Z}^\dagger$-gate). The $S^\dagger$-gate is clearly an $R_\phi$-gate with $\phi = -\pi/2$:
#
# $$ S = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{2}} \end{bmatrix}, \quad S^\dagger = \begin{bmatrix} 1 & 0 \\ 0 & e^{-\frac{i\pi}{2}} \end{bmatrix}$$
#
# The name "$\sqrt{Z}$-gate" comes from the fact that two successively applied S-gates have the same effect as one Z-gate:
#
# $$ SS|q\rangle = Z|q\rangle $$
#
# This notation is common throughout quantum computing.
#
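# Both claims - $SS = Z$ and the S-gate not being its own inverse - are easy to verify numerically (standalone NumPy sketch):

```python
import numpy as np

S = np.array([[1, 0], [0, np.exp(1j * np.pi / 2)]])
Z = np.array([[1, 0], [0, -1]])

# Two S-gates in a row act like one Z-gate, hence the name sqrt(Z)
print(np.allclose(S @ S, Z))            # True

# S followed by S-dagger gives the identity; S alone does not
Sdg = S.conj().T
print(np.allclose(S @ Sdg, np.eye(2)))  # True
print(np.allclose(S @ S, np.eye(2)))    # False
```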
# To add an S-gate in Qiskit:
qc = QuantumCircuit(1)
qc.s(0) # Apply S-gate to qubit 0
qc.sdg(0) # Apply Sdg-gate to qubit 0
qc.draw()
# ### 6.3 The T-gate <a id="tgate"></a>
# The T-gate is a very commonly used gate, it is an $R_\phi$-gate with $\phi = \pi/4$:
#
# $$ T = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{4}} \end{bmatrix}, \quad T^\dagger = \begin{bmatrix} 1 & 0 \\ 0 & e^{-\frac{i\pi}{4}} \end{bmatrix}$$
#
# As with the S-gate, the T-gate is sometimes also known as the $\sqrt[4]{Z}$-gate.
#
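# As a quick numerical check: two T-gates give an S-gate and four give a Z-gate, which is where the "$\sqrt[4]{Z}$" name comes from (standalone NumPy sketch):

```python
import numpy as np

T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
S = np.array([[1, 0], [0, 1j]])
Z = np.array([[1, 0], [0, -1]])

print(np.allclose(T @ T, S))          # True: T^2 = S
print(np.allclose(T @ T @ T @ T, Z))  # True: T^4 = Z
```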
# In Qiskit:
qc = QuantumCircuit(1)
qc.t(0) # Apply T-gate to qubit 0
qc.tdg(0) # Apply Tdg-gate to qubit 0
qc.draw()
# You can use the widget below to play around with all the gates introduced in this chapter so far:
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo()
# ## 7. General U-gates <a id="generalU3"></a>
#
# As we saw earlier, the I, Z, S & T-gates were all special cases of the more general $R_\phi$-gate. In the same way, the $U_3$-gate is the most general of all single-qubit quantum gates. It is a parametrised gate of the form:
#
# $$
# U_3(\theta, \phi, \lambda) = \begin{bmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
# e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
# \end{bmatrix}
# $$
#
# Every gate in this chapter could be specified as $U_3(\theta,\phi,\lambda)$, but it is unusual to see this in a circuit diagram, possibly due to the difficulty in reading this.
#
# Qiskit provides $U_2$ and $U_1$-gates, which are specific cases of the $U_3$ gate in which $\theta = \tfrac{\pi}{2}$, and $\theta = \phi = 0$ respectively. You will notice that the $U_1$-gate is equivalent to the $R_\phi$-gate.
#
# $$
# \begin{aligned}
# U_3(\tfrac{\pi}{2}, \phi, \lambda) = U_2 = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & -e^{i\lambda} \\
# e^{i\phi} & e^{i\lambda+i\phi}
# \end{bmatrix}
# & \quad &
# U_3(0, 0, \lambda) = U_1 = \begin{bmatrix} 1 & 0 \\
# 0 & e^{i\lambda}\\
# \end{bmatrix}
# \end{aligned}
# $$
#
# Before running on real IBM quantum hardware, all single-qubit operations are compiled down to $U_1$, $U_2$ and $U_3$. For this reason they are sometimes called the _physical gates_.
#
# It should be obvious from this that there are an infinite number of possible gates, and that this also includes $R_x$ and $R_y$-gates, although they are not mentioned here. It must also be noted that there is nothing special about the Z-basis, except that it has been selected as the standard computational basis. That is why we have names for the S and T-gates, but not their X and Y equivalents (e.g. $\sqrt{X}$ and $\sqrt[4]{Y}$).
#
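# As a sanity check, the $U_3$ matrix above reproduces the earlier gates for suitable parameters - for example $U_3(\pi, 0, \pi) = X$ and $U_3(\pi/2, 0, \pi) = H$ (standalone NumPy sketch):

```python
import numpy as np

def u3(theta, phi, lam):
    """General single-qubit gate, as given above."""
    return np.array([
        [np.cos(theta / 2),                   -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2), np.exp(1j * (lam + phi)) * np.cos(theta / 2)],
    ])

X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(np.allclose(u3(np.pi, 0, np.pi), X))      # True
print(np.allclose(u3(np.pi / 2, 0, np.pi), H))  # True
```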
import qiskit
qiskit.__qiskit_version__
|
notebooks/ch-states/single-qubit-gates.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## <u>What is reduce()?</u>
# ### The reduce() method in Python is a useful tool when we want to apply a function cumulatively across all elements of an iterable - for example, adding up all the elements of a list, or finding the maximum or minimum of a list.
# ### There is currently more than one way to perform these operations in Python, so reduce() might seem redundant, but in this example we will use reduce() for something really interesting.
# ### The first parameter of reduce() is a function and the second is an iterable, such as a list. reduce() runs until all elements of the iterable are exhausted and produces one final result.
# ### reduce() is defined in the functools library and has an optional third parameter named initializer that can be used to provide a seed value for the operation.
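# ### For instance, summing a list (optionally from a seed value given by the initializer) or finding its maximum are one-liners with reduce():

```python
from functools import reduce

nums = [3, 1, 4, 1, 5]

# Apply the function cumulatively: ((((3+1)+4)+1)+5)
total = reduce(lambda acc, n: acc + n, nums)
print(total)  # 14

# The optional third argument acts as a seed value
total_from_100 = reduce(lambda acc, n: acc + n, nums, 100)
print(total_from_100)  # 114

# Maximum of the list
largest = reduce(lambda a, b: a if a > b else b, nums)
print(largest)  # 5
```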
# ## <u>Our Use Case<u>
# ### Consider that we have a complex JSON structure that we have parsed in to a python dictionary. Refer to <a href="https://github.com/chatterjeerajdeep/python-everyday/blob/main/load_json_from_file.ipynb"> this </a> to know how you can do so.
# +
import json
with open("./sample_data/sample_world_data.json") as json_file:
data_dict = json.load(json_file)
# -
data_dict
# ### Now, we want to extract the Population of India. The obvious way of doing this is this:
data_dict["Continent"]["Asia"]["Country"]["India"]["Population"]
# ### But what if we have this hierarchy in a list and we want to find the exact same thing using that list and the dictionary object without explicitly iterating through the list
hierarchy_list = ["Continent","Asia","Country","India","Population"]
# ### That's where we can use the reduce() method
# ### The seed value here will be the dictionary object and, since accessing any value from a nested dictionary amounts to cumulative indexing, we can simply use getitem() from the operator library as the function and the hierarchy list as the iterable
# +
import operator
from functools import reduce

population_india = reduce(operator.getitem, hierarchy_list, data_dict)
# -
population_india
|
reduce_in_python.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Manish-code/DivineAI/blob/main/7_Dictionary.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Ip9whW8Sdp7R"
# #Dictionary
# + colab={"base_uri": "https://localhost:8080/"} id="lGt-FpgZbzkw" outputId="2fb94930-74a8-4add-a236-fa59a8ac8665"
person = {
'first_name': "Humayun",
'last_name': "Kabir",
'age': 25,
'sex': "Male",
'married': False
}
print(person['first_name'])
print(person['sex'])
for key in person.keys():
print(key)
for value in person.values():
print(value)
for key, value in person.items():
print(f"{key} : {value}")
if 'sex' in person:  # membership tests check keys, not values
    print("Yes, the key 'sex' is in the dictionary!")
# + [markdown] id="vJdCQvWkdqRp"
# #Dictionary Method
# + colab={"base_uri": "https://localhost:8080/"} id="-mmJ5Dyqdm4Q" outputId="78d70737-8157-4b05-82ab-109ecdc96bce"
person = {
'first_name': "Humayun",
'last_name': "Kabir",
'age': 25,
'sex': "Male",
'married': False
}
# copy: copy a dictionary
individual = person.copy()
print(individual)
print(person is individual)
print(person == individual)
# clear: delete everything
individual.clear()
print(individual)
# fromkeys
new_user = {}.fromkeys(['name', 'score', 'email'], 'unknown')
print(new_user)
# get
name = person.get('first_name') # returns value of the key
email = person.get('email') # doesn't exist: None
print(email)
# pop
person.pop('married') # pop out the k-v pair of the given key
print(person)
# popitem
print(person.popitem()) # pop the last-inserted pair (LIFO order in Python 3.7+)
print(person)
# update
individual.update(person)
print(individual)
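# As a side note to `popitem` above: on Python 3.7+ it removes the *last-inserted* pair rather than a random one, and `get` accepts a default for missing keys - a quick standalone check:

```python
d = {'a': 1, 'b': 2}
d['c'] = 3           # 'c' is the most recently inserted key

# popitem() removes the last-inserted pair (LIFO) on Python 3.7+
print(d.popitem())   # ('c', 3)
print(d)             # {'a': 1, 'b': 2}

# get() with a default avoids a KeyError for missing keys
print(d.get('z', 'missing'))  # missing
```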
# + [markdown] id="BgJI4x9wdqh_"
# # Data Modeling
# + colab={"base_uri": "https://localhost:8080/"} id="EN7rZaQPdnIe" outputId="1b85f40a-4428-4373-9012-c03ca7e5ce8f"
playlist = {
'title': "Patagonia Bus",
'author': "Humayun",
'songs': [
{
'song_title': "Song One",
'artist': ['Artist 1', 'Artist 2'],
'duration': 4.31,
},
{
'song_title': "Song Two",
'artist': ['Artist 1'],
'duration': 2.53,
},
{
'song_title': "Song Three",
'artist': ['Artist 1', 'Artist 2', 'Artist 3'],
'duration': 3.43,
}
]
}
total_duration = 0.0
for song in playlist['songs']:
total_duration += song['duration']
print(song['song_title'])
print(total_duration)
# + [markdown] id="jus__Z0TdqxR"
# #Dictionary Comprehension
# + colab={"base_uri": "https://localhost:8080/"} id="1xu3jvJJdnYw" outputId="2c3ced19-f03c-4f2a-c443-fefaf6e5ed6f"
nums = {
'a': 1,
'b': 2,
'c': 3
}
sq_nums = {key: value **2 for key, value in nums.items()}
print(sq_nums)
sq_numbers = {num: num**2 for num in [1, 2, 3, 4, 5]}
print(sq_numbers)
str1 = 'ABC'
str2 = '123'
combo = {str1[i]:str2[i] for i in range(0, len(str1))}
print(combo)
num_list = [1, 2, 3, 4, 5, 6]
parity = {num:("even" if num % 2 == 0 else "odd") for num in num_list}
print(parity)
|
7_Dictionary.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys, time, math
import cv2, dlib
import numpy as np
import matplotlib.pyplot as plt
# http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
predictorPath = r"../../dep/shape_predictor_68_face_landmarks.dat"
predictorRef = [[1,3,31],[13,15,35]]
# predictorPath = r"../../dep/shape_predictor_5_face_landmarks.dat"
# predictorRef = [[0,1,4],[2,3,4]]
videoPath = r"../../data/video/C0033.MP4"
saveFile = open(r'output_detect.txt', 'w')
cv2.destroyAllWindows()
plt.close('all')
# +
def rect_to_bb(rect):
""" Transform a rectangle into a bounding box
Args:
rect: an instance of dlib.rectangle
Returns:
[x, y, w, h]: coordinates of the upper-left corner
and the width and height of the box
"""
x = rect.left()
y = rect.top()
w = rect.right() - x
h = rect.bottom() - y
return [x, y, w, h]
def shape_to_np(shape, dtype="int"):
""" Transform the detection results into points
Args:
shape: an instance of dlib.full_object_detection
Returns:
coords: an array of point coordinates
columns - x; y
"""
num = shape.num_parts
coords = np.zeros((num, 2), dtype=dtype)
for i in range(0, num):
coords[i] = (shape.part(i).x, shape.part(i).y)
return coords
def np_to_bb(coords, ratio=5, dtype="int"):
""" Choose ROI based on points and ratio
Args:
coords: an array of point coordinates
columns - x; y
ratio: the ratio of the length of the bounding box in each direction
to the distance between ROI and the bounding box
dtype: optional variable, type of the coordinates
Returns:
coordinates of the upper-left and bottom-right corner
"""
x = [xi for (xi,yi) in coords]
y = [yi for (xi,yi) in coords]
minx, maxx = min(x), max(x)
miny, maxy = min(y), max(y)
p, q = ratio - 1, ratio
roi = [minx * p / q + maxx / q, miny * p / q + maxy / q,
maxx * p / q + minx / q, maxy * p / q + miny / q]
return [int(i) for i in roi]
def resize(image, width=1200):
""" Resize the image with width
Args:
image: an instance of numpy.ndarray, the image
width: the width of the resized image
Returns:
resized: the resized image
size: size of the resized image
"""
r = width * 1.0 / image.shape[1]
size = (width, int(image.shape[0] * r))
resized = cv2.resize(image, size, interpolation=cv2.INTER_AREA)
return resized, size
def clip(img, size, rect):
""" Clip the frame and return the face region
Args:
img: an instance of numpy.ndarray, the image
size: size of the image when performing detection
rect: an instance of dlib.rectangle, the face region
Returns:
numpy.ndarray, the face region
"""
left = int(rect.left() / size[0] * img.shape[1])
right = int(rect.right() / size[0] * img.shape[1])
top = int(rect.top() / size[1] * img.shape[0])
bottom = int(rect.bottom() / size[1] * img.shape[0])
return img[top:bottom, left:right]
def meanOfChannels(image, bb):
return np.mean(np.mean(image[bb[1]:bb[3],bb[0]:bb[2]],0),0)
def dist(p1, p2):
return np.sqrt((p1.x-p2.x)**2+(p1.y-p2.y)**2)
# -
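# A quick standalone check of the ROI helper on synthetic landmarks (`np_to_bb` is restated here so this cell runs on its own; NumPy only):

```python
import numpy as np

def np_to_bb(coords, ratio=5):
    # Same logic as above: shrink the landmark bounding box toward its centre,
    # moving each edge inward by 1/ratio of the box span
    x = [xi for (xi, yi) in coords]
    y = [yi for (xi, yi) in coords]
    minx, maxx = min(x), max(x)
    miny, maxy = min(y), max(y)
    p, q = ratio - 1, ratio
    roi = [minx * p / q + maxx / q, miny * p / q + maxy / q,
           maxx * p / q + minx / q, maxy * p / q + miny / q]
    return [int(i) for i in roi]

# A 10x10 landmark square: with ratio=5 each edge moves in by 10/5 = 2
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
print(np_to_bb(pts, ratio=5))  # [2, 2, 8, 8]
```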
class Detector:
""" Detect and calculate ppg signal
roiRatio: a positive number, the roi gets bigger as it increases
smoothRatio: a real number between 0 and 1,
the landmarks get stabler as it increases
"""
roiRatio = 5
smoothRatio = 0.8
detectSize = 400
clipSize = 540
def __init__(self, detectorPath = None, predictorPath = None, predictorRef = None):
""" Initialize the instance of Detector
detector: dlib.fhog_object_detector
predictor: dlib.shape_predictor
rect: dlib.rectangle, face region in the last frame
landmarks: numpy.ndarray, coordinates of face landmarks in the last frame
columns - x; y
Args:
detectorPath: path of the face detector
predictorPath: path of the shape predictor
"""
self.detector = dlib.get_frontal_face_detector()
self.predictor = dlib.shape_predictor(predictorPath)
self.refs = predictorRef
self.face = None
self.landmarks = []
def __call__(self, image):
""" Detect the face region and returns the ROI value
Face detection is the slowest part.
Args:
image: an instance of numpy.ndarray, the image
Return:
val: an array of ROI value in each color channel
"""
val = [0., 0., 0.]
# Resize the image to limit the calculation
imageSize = image.shape
resized, detectionSize = resize(image, self.detectSize)
# Perform face detection on a grayscale image
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
# No need for upsample, because its effect is the same as resize
if self.face is None:
faces = self.detector(gray, upsample_num_times = 0)
num = len(faces) # there should be one face
if num == 0:
print("No face in the frame!")
return val
if num >= 2:
print("More than one face!")
return val
face = faces[0]
else:
face = self.face
faceRect = dlib.rectangle(
int(face.left()*imageSize[1]/detectionSize[0]),
int(face.top()*imageSize[1]/detectionSize[0]),
int(face.right()*imageSize[1]/detectionSize[0]),
int(face.bottom()*imageSize[1]/detectionSize[0]))
self.face = face
# Perform landmark prediction on the face region
shape = self.predictor(image, faceRect)
landmarks = shape_to_np(shape)
landmarks = self.update(np.array(landmarks))
rects = [np_to_bb(landmarks[ref], self.roiRatio) for ref in self.refs]
vals = [meanOfChannels(image, bb) for bb in rects]
val = np.mean(vals, 0)
# Show detection results
if '-s' in sys.argv:
# Draw sample rectangles
for bb in rects:
cv2.rectangle(image, (bb[0], bb[1]), (bb[2], bb[3]), (0, 0, 255), 2)
# Draw feature points
for (i, (x, y)) in enumerate(landmarks):
cv2.putText(image, "{}".format(i), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
cv2.imshow("Face Detect #{}".format(i + 1), resize(image, self.detectSize)[0])
return val
def update(self, landmarks):
if len(self.landmarks):
landmarks = self.smoothRatio*self.landmarks+(1-self.smoothRatio)*landmarks
landmarks = landmarks.astype(int)
self.landmarks = landmarks
return landmarks
# +
# Initialization
detect = Detector(predictorPath = predictorPath, predictorRef = predictorRef)
times = []
data = []
video = cv2.VideoCapture(videoPath)
fps = video.get(cv2.CAP_PROP_FPS)
video.set(cv2.CAP_PROP_POS_FRAMES, 0*fps) # jump to certain frame
# Handle frame one by one
t = 0.0
ret, frame = video.read()
calcTime = time.time()
# +
plt.ion()
fig = plt.figure()
ax = plt.subplot()
ax.plot([0,0,0],'b')
while(video.isOpened() and t < 10):
t += 1.0/fps
# detect
v = detect(frame)
# show result
times.append(t)
data.append(v)
ax.lines.pop(0)
ax.plot(data,'b')
plt.draw()
plt.pause(1e-17)
print("%.2f\t%.3f\t%.3f\t%.3f\t%.1f\t%.1f"%(t, v[0], v[1], v[2], fps, 1/(time.time() - calcTime)) ) #, file=saveFile)
calcTime = time.time()
# check stop or quit
ret, frame = video.read()
if cv2.waitKey(1) & 0xFF == ord('q') or not ret:
break
# release memory and destroy windows
video.release()
cv2.destroyAllWindows()
saveFile.close()
# -
|
src/test/.ipynb_checkpoints/lightTest-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import tensorflow as tf
from keras.layers import Input, Dense, Lambda, Reshape
from keras.models import Model
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
# -
batch_size = 100
original_dim = 784 # Height X Width
latent_dim = 2
intermediate_dim = 256
epochs = 50
epsilon_std = 1
def sampling(args: tuple):
# we grab the variables from the tuple
z_mean, z_log_var = args
epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
mean=0.,
stddev=epsilon_std)
return z_mean + K.exp(z_log_var / 2) * epsilon
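# The reparameterisation trick used in `sampling` above can be illustrated with plain NumPy: drawing $z = \mu + e^{\log\sigma^2/2}\,\epsilon$ yields samples with the requested mean and variance while keeping $\mu$ and $\log\sigma^2$ differentiable (standalone sketch; the target values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

z_mean, z_log_var = 2.0, np.log(0.25)  # target: mean 2, variance 0.25
epsilon = rng.standard_normal(100_000)

# z = mu + exp(log_var / 2) * eps, exactly as in sampling() above
z = z_mean + np.exp(z_log_var / 2) * epsilon

print(z.mean())  # ~2.0
print(z.var())   # ~0.25
```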
# ### Defining the encoder
# +
# input to our encoder
x = Input(shape=(original_dim, ), name="input")
# intermediate layer
h = Dense(intermediate_dim, activation='relu', name="encoding")(x)
# defining the mean of the latent space
z_mean = Dense(latent_dim, name="mean")(h)
# defining the log variance of the latent space
z_log_var = Dense(latent_dim, name="log-variance")(h)
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim, ))([z_mean, z_log_var])
# defining the encoder as a keras model
encoder = Model(x, [z_mean, z_log_var, z], name="encoder")
# print out summary of what we just did
encoder.summary()
# -
# ### Defining the decoder
# +
# Input to the decoder
input_decoder = Input(shape=(latent_dim, ), name="decoder_input")
# taking the latent space to intermediate dimension
decoder_h = Dense(intermediate_dim, activation='relu',
name="decoder_h")(input_decoder)
# getting the mean from the original dimension
x_decoded = Dense(original_dim, activation='sigmoid',
name="flat_decoded")(decoder_h)
# defining the decoder as a keras model
decoder = Model(input_decoder, x_decoded, name="decoder")
decoder.summary()
# -
# ### Defining the Variational Autoencoder (VAE)
# grab the output. Recall, that we need to grab the 3rd element our sampling z
output_combined = decoder(encoder(x)[2])
# link the input and the overall output
vae = Model(x, output_combined)
# print out what the overall model looks like
vae.summary()
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var),
axis=-1)
vae.add_loss(K.mean(kl_loss) / 784.)
vae.compile(optimizer='rmsprop', loss="binary_crossentropy")
vae.summary()
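# The KL term added above has a simple closed form for a diagonal Gaussian against $\mathcal{N}(0, I)$. A standalone NumPy sanity check of the same expression:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Same expression as kl_loss above, summed over the latent dimensions
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

# Identical distributions (mu=0, var=1) give zero divergence
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))           # zero
# Shifting one mean by 1 adds a penalty of mu^2 / 2 = 0.5
print(kl_to_standard_normal(np.array([1.0, 0.0]), np.zeros(2)))  # 0.5
```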
# +
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
# -
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size)
# display a 2D plot of the digit classes in the latent space
x_test_encoded = encoder.predict(x_test, batch_size=batch_size)[0]
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0],
x_test_encoded[:, 1],
c=y_test,
cmap='viridis')
plt.colorbar()
plt.show()
# +
# display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian
# to produce values of the latent variables z, since the prior of the latent space is Gaussian
grid_x = norm.ppf(np.linspace(0.05, 0.95, n))
grid_y = norm.ppf(np.linspace(0.05, 0.95, n))
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z_sample = np.array([[xi, yi]])
x_decoded = decoder.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size:(i + 1) * digit_size,
j * digit_size:(j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure, cmap='Greys_r')
plt.show()
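The manifold grid above uses the inverse CDF to turn evenly spaced probabilities into latent coordinates distributed according to the Gaussian prior. A minimal illustration using only the standard library (`statistics.NormalDist.inv_cdf` computes the same quantiles as `scipy.stats.norm.ppf`):

```python
from statistics import NormalDist

# Evenly spaced probabilities, avoiding the unbounded tails at 0 and 1
probs = [0.05, 0.25, 0.5, 0.75, 0.95]

# Map each probability to the latent value holding that cumulative mass
grid = [NormalDist().inv_cdf(p) for p in probs]
print([round(g, 3) for g in grid])
# The median maps to 0 and the grid is symmetric about it,
# so samples concentrate near the mode of the prior
```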
GAN_AtoZ/01-Generating_MNIST.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# generate a toy classification dataset: 2 classes, 2 features
from matplotlib.pyplot import figure
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
# generate 2d classification dataset
# Area 1: f1 in 0-5, f2 in 0-10, class 1 (red), 300 points
f11=np.random.uniform(0,5,300)
f21=np.random.uniform(0,10,300)
y1=np.full((300, 1), 1,dtype = int)
# Area 2: f1 in 5-10, f2 in 0-10, class 2 (green), 200 points
f12=np.random.uniform(5,10,200)
f22=np.random.uniform(0,10,200)
y2=np.full((200, 1), 2,dtype = int)
# Disjunct 1: f1 in 2-3.5, f2 in 6-7.5, class 2 (green), 50 points
f13=np.random.uniform(2,3.5,50)
f23=np.random.uniform(6,7.5,50)
y3=np.full((50, 1), 2,dtype = int)
# Disjunct 2: f1 in 6-7, f2 in 2.6-5, class 1 (red), 50 points
f14=np.random.uniform(6,7,50)
f24=np.random.uniform(2.6,5,50)
y4=np.full((50, 1), 1,dtype = int)
f1=np.concatenate((f11,f12,f13,f14))
f2=np.concatenate((f21,f22,f23,f24))
X=np.vstack((f1, f2)).T
y=np.concatenate((y1,y2,y3,y4))
print(y)
#print("y's shape is: " + str(y.shape))
#print("X's shape is: " + str(X.shape))
data = np.concatenate([X, y], axis=1)
print(data)
df = pd.DataFrame(data,columns=['feature1','feature2','label'])
colors = {1:'red', 2:'green'}
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 10), dpi=80)
ax.scatter(df['feature1'], df['feature2'], c=df['label'].apply(lambda x: colors[x]))
#adds a title and axes labels
ax.set_title('classification data')
ax.set_xlabel('feature1')
ax.set_ylabel('feature2')
ax.set_xticks([0, 5, 10])
ax.set_yticks([0, 5, 10])
#adds major gridlines
ax.grid(color='orange', linestyle='-', linewidth=1, alpha=0.5)
plt.show()
# -
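The four sampling regions above imply a ground-truth labelling rule: class 2 on the right half and inside the first small disjunct, class 1 elsewhere. A hypothetical helper (`region_label` is my own name, not part of the notebook) that reproduces that rule for any point:

```python
def region_label(f1, f2):
    # Small disjunct 1: a class-2 pocket inside the class-1 area
    if 2 <= f1 <= 3.5 and 6 <= f2 <= 7.5:
        return 2
    # Small disjunct 2: a class-1 pocket inside the class-2 area
    if 6 <= f1 <= 7 and 2.6 <= f2 <= 5:
        return 1
    # Otherwise the left half (f1 < 5) is class 1, the right half class 2
    return 1 if f1 < 5 else 2

print(region_label(1.0, 1.0))   # 1 (Area 1)
print(region_label(7.0, 8.0))   # 2 (Area 2)
print(region_label(2.5, 7.0))   # 2 (disjunct 1)
print(region_label(6.5, 3.0))   # 1 (disjunct 2)
```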
# generate configuration file
str_config = (
    'algorithm = Chi Fuzzy Weighted Rule Learning Model\n'
    'inputData = "../few_disjuncts_1/clean-data-1tra.dat" '
    '"../few_disjuncts_1/few_disjuncts_1tra.dat" '
    '"../few_disjuncts_1/few_disjuncts_1tst.dat"\n'
    'outputData = "../few_disjuncts_1/results/result0.tra" '
    '"../few_disjuncts_1/results/result0.tst" '
    '"../few_disjuncts_1/results/result0e0.txt" '
    '"../few_disjuncts_1/results/result0e1.txt"\n'
    'Number of Labels = 2\n'
    'T-norm for the Computation of the Compatibility Degree = Product\n'
    'Rule Weight = Penalized_Certainty_Factor\n'
    'Fuzzy Reasoning Method = Winning_Rule\n'
)
print(str_config)
cwd = os.getcwd()
config_file = os.path.join(cwd, "few_disjuncts_1", "few_disjuncts_1_config0.txt")
with open(config_file, 'w') as configuration_file:
    configuration_file.write(str_config)
# +
# generate train
# get config_max_min_array
data_num = len(y)
test_num = int(data_num/5)
train_num = data_num - test_num
print("data_num is :"+str(data_num))
print("test_num is :"+str(test_num))
print("train_num is :"+str(train_num))
data_train = np.empty((train_num, 3))
print("data.shape:")
print(data.shape)
print(data_train.dtype)
print("data_train.shape:")
print(data_train.shape)
data_test = np.empty((test_num, 3))
k_tra=0
k_tst=0
for i in range(0, len(y)):
    # every 5th row (i % 5 == 0) goes to the test set
    if i % 5 == 0:
data_test[k_tst]=data[i]
k_tst = k_tst + 1
else:
data_train[k_tra]=data[i]
k_tra = k_tra + 1
print(" test data number is :" +str(k_tst))
print(" train data number is :" +str(k_tra))
#df_train = pd.DataFrame.from_records(data_train,columns=['feature1','feature2','label'])
config_max_min_array_train = [[None for _ in range(3)] for _ in range(2)]
feature_names=['feature1','feature2']
print("data_train.shape: ")
print(data_train.shape)
column_feature1_array = np.array(data_train[:, 0])
print(column_feature1_array)
config_min_array_train = np.amin(data_train, axis=0)
config_max_array_train = np.amax(data_train, axis=0)
print(config_min_array_train)
print(config_max_array_train)
config_max_min_array_train[0][1]=config_min_array_train[0]
config_max_min_array_train[0][2]=config_max_array_train[0]
config_max_min_array_train[1][1]=config_min_array_train[1]
config_max_min_array_train[1][2]=config_max_array_train[1]
for i in range(0,2):
# store each feature name, min, max values
config_max_min_array_train[i][0]=feature_names[i]
print("feature name [" + str(i) +"]"+ " is: " + config_max_min_array_train[i][0])
print("feature min [" + str(i) +"]"+ " is: " + str(config_max_min_array_train[i][1]))
print("feature max [" + str(i) +"]"+ " is: " + str(config_max_min_array_train[i][2]))
# data detail
data_str = "@relation clean_data_1" + "\n"
for i in range(0,2):
data_str = data_str + "@attribute" + " " + str(config_max_min_array_train[i][0])+ " "
data_str = data_str + "real"+" "+"["+str(config_max_min_array_train[i][1])+","+str(config_max_min_array_train[i][2])+"]"
data_str = data_str + "\n"
data_str = data_str + "@attribute class {1, 2}" + "\n"
data_str = data_str + "@inputs" + " "
for i in range(0,2):
data_str = data_str + str(config_max_min_array_train[i][0])+ ","
data_str = data_str[:-1]  # delete the trailing comma
data_str = data_str + "\n"
data_str = data_str + "@outputs class" + "\n"
data_str = data_str + "@data" + "\n"
for i in range(0,k_tra):
for j in range(0,3):
if j==2:
data_str = data_str + str(int(data_train[i][j])) + ","
else:
data_str = data_str + str(data_train[i][j]) + ","
data_str = data_str[:-1]
data_str = data_str + "\n"
#print(data_str)
cwd = os.getcwd()
train_file = os.path.join(cwd, "clean_data_1", "clean-data-1tra.dat")
with open(train_file, 'w') as trafile:
    trafile.write(data_str)
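The cell above reserves every fifth row for the test set. The same modulo rule can be expressed compactly (a sketch with stand-in data, not the notebook's arrays):

```python
rows = list(range(600))  # stand-in for the 600 generated samples

# i % 5 == 0 -> test set; everything else -> training set
test_rows = [r for i, r in enumerate(rows) if i % 5 == 0]
train_rows = [r for i, r in enumerate(rows) if i % 5 != 0]

# 600 samples -> 120 test / 480 train, a fixed 20/80 split
print(len(test_rows), len(train_rows))  # 120 480
```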
# +
#draw test data
print(data_test.shape)
df = pd.DataFrame(data_test,columns=['feature1','feature2','label'])
colors = {1:'red', 2:'green'}
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 10), dpi=80)
ax.scatter(df['feature1'], df['feature2'], c=df['label'].apply(lambda x: colors[x]))
#adds a title and axes labels
ax.set_title('classification test data')
ax.set_xlabel('feature1')
ax.set_ylabel('feature2')
ax.set_xticks([0, 5, 10])
ax.set_yticks([0, 5, 10])
#adds major gridlines
ax.grid(color='orange', linestyle='-', linewidth=1, alpha=0.5)
plt.show()
# +
# draw train data
print(data_train.shape)
df = pd.DataFrame(data_train,columns=['feature1','feature2','label'])
colors = {1:'red', 2:'green'}
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 10), dpi=80)
ax.scatter(df['feature1'], df['feature2'], c=df['label'].apply(lambda x: colors[x]))
#adds a title and axes labels
ax.set_title('classification train data')
ax.set_xlabel('feature1')
ax.set_ylabel('feature2')
ax.set_xticks([0, 5, 10])
ax.set_yticks([0, 5, 10])
#adds major gridlines
ax.grid(color='orange', linestyle='-', linewidth=1, alpha=0.5)
plt.show()
# +
# generate Test file data
config_max_min_array_test = [[None for _ in range(3)] for _ in range(2)]
feature_names=['feature1','feature2']
print("data_test.shape: ")
print(data_test.shape)
column_feature1_array = np.array(data_test[:, 0])
print(column_feature1_array)
config_min_array_test = np.amin(data_test, axis=0)
config_max_array_test = np.amax(data_test, axis=0)
print(config_min_array_test)
print(config_max_array_test)
config_max_min_array_test[0][1]=config_min_array_test[0]
config_max_min_array_test[0][2]=config_max_array_test[0]
config_max_min_array_test[1][1]=config_min_array_test[1]
config_max_min_array_test[1][2]=config_max_array_test[1]
for i in range(0,2):
# store each feature name, min, max values
config_max_min_array_test[i][0]=feature_names[i]
print("feature name [" + str(i) +"]"+ " is: " + config_max_min_array_test[i][0])
print("feature min [" + str(i) +"]"+ " is: " + str(config_max_min_array_test[i][1]))
print("feature max [" + str(i) +"]"+ " is: " + str(config_max_min_array_test[i][2]))
# data detail
data_str = "@relation clean_data_1" + "\n"
for i in range(0,2):
data_str = data_str + "@attribute" + " " + str(config_max_min_array_test[i][0])+ " "
data_str = data_str + "real"+" "+"["+str(config_max_min_array_test[i][1])+","+str(config_max_min_array_test[i][2])+"]"
data_str = data_str + "\n"
data_str = data_str + "@attribute class {1, 2}" + "\n"
data_str = data_str + "@inputs" + " "
for i in range(0,2):
data_str = data_str + str(config_max_min_array_test[i][0])+ ","
data_str = data_str[:-1]
data_str = data_str + "\n"
data_str = data_str + "@outputs class" + "\n"
data_str = data_str + "@data" + "\n"
for i in range(0,k_tst):
for j in range(0,3):
if j==2:
data_str = data_str + str(int(data_test[i][j])) + ","
else:
data_str = data_str + str(data_test[i][j]) + ","
data_str = data_str[:-1]
data_str = data_str + "\n"
#print(data_str)
cwd = os.getcwd()
test_file = os.path.join(cwd, "clean_data_1", "clean-data-1tst.dat")
with open(test_file, 'w') as tstfile:
    tstfile.write(data_str)
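The train- and test-file cells duplicate the same header-plus-rows logic, so it could be factored into one helper. A sketch under my own naming (`keel_dat_string` is hypothetical and returns the string instead of writing a file):

```python
def keel_dat_string(relation, feature_names, mins, maxs, rows):
    # KEEL-style header: relation name, one @attribute line per real-valued feature
    lines = ["@relation " + relation]
    for name, lo, hi in zip(feature_names, mins, maxs):
        lines.append("@attribute %s real [%s,%s]" % (name, lo, hi))
    lines.append("@attribute class {1, 2}")
    lines.append("@inputs " + ",".join(feature_names))
    lines.append("@outputs class")
    lines.append("@data")
    # Each data row: feature values, then the integer class label
    for row in rows:
        lines.append(",".join(str(v) for v in row[:-1]) + "," + str(int(row[-1])))
    return "\n".join(lines) + "\n"

example = keel_dat_string("clean_data_1", ["feature1", "feature2"],
                          [0.0, 0.0], [10.0, 10.0],
                          [[1.2, 3.4, 1.0], [6.5, 2.8, 2.0]])
print(example)
```

Both the train and test cells would then reduce to one call each, with only the rows and the output path changing.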
.ipynb_checkpoints/generate_few_disjunct-checkpoint.ipynb