# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''pyenv38'': conda)'
# language: python
# name: python3
# ---
# # Python Bootcamp (Day 3)
#
# Let's learn Python together. :-)
temp_variable = 5
# ## Lists
#
# An array-like structure: a number of elements stored in a single variable. A `list` is a native Python type, so built-in functions such as `len` and `type` work on it. You can index elements directly with the `[index]` operator, and add elements with the `append` method. Other methods let you count elements, reverse the list, and slice it into a new list.
#
# Python supports list comprehensions, a flexible way to build lists from existing data. You can sort data with the `sorted` function. To get the index while iterating in a for loop, use the `enumerate` function, which yields the index and the value together.
# +
literal_list = [ "Python", "Bootcamp" ]
print(literal_list)
# print(type(literal_list))
# list_style_list = list()
# print(literal_list[2])
element_index = 1
print(literal_list[element_index])
# print(type(literal_list[2]))
print(len(literal_list))
literal_list.append("Microsoft")
literal_list.append("Python")
print(literal_list)
for element in sorted(literal_list):
    print(element)
for index, element in enumerate(literal_list):
    print(index, element)
# -
# ## Sets
#
# Sets are Python's basic hash-based collection type. A set stores distinct elements only: any duplicate is silently ignored (*ask a question if you want me to elaborate on how this works*). Sets use `{}` brackets instead of `[]`. You can also use the `set` constructor, which accepts a single iterable argument.
#
# The same comprehension works for a set too. Try it out.
#
# **Tip**: To quickly remove duplicates from a list, convert it to a set and then back to a list.
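A quick sketch of that tip (note the set round-trip does not preserve the original order):

```python
names = ["Python", "Bootcamp", "Python", "Microsoft"]
unique_names = list(set(names))      # duplicates gone, original order lost
print(sorted(unique_names))          # ['Bootcamp', 'Microsoft', 'Python']
```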
# +
literal_set = { "Python", "Bootcamp" }
print(literal_set)
print("Python" in literal_set)
if "Python" in literal_set:
print("Yes, Python will be in the Bootcamp")
literal_set.add("Microsoft")
literal_set.add("Python")
print(literal_set) # Come back to this point: why did "Python" move? (Sets are unordered.)
# +
list_comp = [i for i in range(1, 20)]
print(list_comp)
list_even = [number for number in range(1, 20) if number % 2 == 0]
print(list_even)
set_comp = {i for i in range(1, 20)}
print(set_comp)
# -
# ## Dictionaries
#
# Dictionaries are a more advanced hash-based type. A dictionary is like a set (also created with `{}` brackets), but every key stores an associated value. You can traverse a dictionary using the `for key, value in dictionary.items():` structure; each `key, value` pair is a tuple. For more on tuples, read on.
# +
literal_dict = { "A": 1234, "B": 4321 }
language_dict = { "C++": "A low-level language", "Python": "A high-level language" }
my_todos = { "tasks_per_day": 6, "tasks_for_home": [ "buy eggs", "pay bills" ], "work": { "code" } }
if "tasks_for_home" in my_todos:
for task in my_todos["tasks_for_home"]:
print("I need to {}".format(task))
for key, value in my_todos.items():
print("{}: {}".format(key, value))
# -
# ## Tuples
#
# Tuples can hold any type of data (including other data structures). The tuple type is **immutable**: you cannot add, remove, or replace its elements, but you can modify mutable objects stored inside it.
# +
literal_tuple = ("Python", "Bootcamp")
print(literal_tuple)
print(literal_tuple[0])
tuple_with_list = (["Python"], "Bootcamp")
print(tuple_with_list)
tuple_with_list[0].append("Microsoft")
print(tuple_with_list)
# -
# ## Take-home assignment
#
# * Create a list of strings using the `list` constructor.
# * Reverse a list using Python's standard library.
# * Create a dictionary for student details and input:
# - Student ID
# - Name
# - Batch Number
# - Major Subject
# * In the dictionary created above: print the details of the student.
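One possible sketch of the dictionary task (the field names and example values here are my own choice):

```python
student = {
    "student_id": "S-001",
    "name": "Ada Lovelace",
    "batch_number": 3,
    "major_subject": "Mathematics",
}
# print the details of the student
for field, value in student.items():
    print("{}: {}".format(field, value))
```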
# ## Resources
#
# * [collections](https://docs.python.org/3/library/collections.html) in Python
# * [Data Structures](https://docs.python.org/3/tutorial/datastructures.html) in Python
| day 3/data-structures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/creamcheesesteak/test_visualization/blob/master/autompg_linearregression_service.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="Kr38yvjDmO_M"
# + id="1szwTxYBmp9n"
# + colab={"base_uri": "https://localhost:8080/"} id="dY1dAvGPmqOc" outputId="2c8e3384-fe1e-4bdd-9b84-d895380fa0d2"
# !ls -l ./autompg_lr.pkl
# + [markdown] id="eNt47XfVnflL"
# # load pickle with linear regression
# + id="1G1wUJN4mqeE"
import pickle
# + colab={"base_uri": "https://localhost:8080/"} id="rch0Q-Bumqvl" outputId="a85840f2-c9be-49a3-d245-ba2d77a87862"
pickle.load(open('./autompg_lr.pkl', 'rb'))
# binary files must be opened with the 'rb' option
# + colab={"base_uri": "https://localhost:8080/"} id="Q_DDf_oGmrA9" outputId="fa78528c-58b9-4b7f-9a18-4d36c367e2c0"
lr = pickle.load(open('./autompg_lr.pkl', 'rb'))
type(lr)
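As an aside, the reason `'rb'` matters: pickle files are binary. A self-contained round-trip sketch with a stand-in object (not the real regression model):

```python
import os
import pickle
import tempfile

# a toy stand-in for the pickled model
model_stub = {"coef": [1.5, -0.2], "intercept": 30.0}
path = os.path.join(tempfile.gettempdir(), "stub.pkl")
with open(path, "wb") as f:      # write binary
    pickle.dump(model_stub, f)
with open(path, "rb") as f:      # read binary
    restored = pickle.load(f)
print(restored == model_stub)    # True
```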
# + id="1pJTSVapmrTH"
# + colab={"base_uri": "https://localhost:8080/"} id="WM72uOA6mrqv" outputId="f6a49547-e543-41cd-d0f7-938dd69a02d7"
scaler = pickle.load(open('./autompg_standardscaler.pkl', 'rb'))
type(scaler)
# + [markdown] id="XujVRSlN_5sK"
# # predict with linear regression
# + id="9aM7Fa2Lmr-z"
# read the input features
displacement = 307.0
horsepower = 130.0
weight = 3504.0
accel = 12.0
x_customer = [[displacement, horsepower, weight, accel]]
# [[307.0, 130.0, 3504.0, 12.0]] -> passed to predict -> output the result
# + colab={"base_uri": "https://localhost:8080/"} id="aemBnhT3CAZ9" outputId="b7c39d0e-05df-4af4-b6e7-ed380e4ecb71"
scaler.transform(x_customer)
# + id="hSzQlct8CFQb"
x_customer = scaler.transform(x_customer)
# + id="QmPGr11umsQV" colab={"base_uri": "https://localhost:8080/"} outputId="8457c235-b538-4d8d-f037-72cbb2686c56"
lr.predict(x_customer)
# + id="d_r930HcmsfX" colab={"base_uri": "https://localhost:8080/"} outputId="26271dab-dc12-40dc-84d1-e0bddb16e0ce"
result = lr.predict(x_customer)
result
# predict returns an array, so print result[0] to get the scalar value
# + id="bbPcKU7zmsuW" colab={"base_uri": "https://localhost:8080/"} outputId="51a1cf28-fa45-4445-b51c-40f5c2c2780a"
result[0]
# + id="g94YYYUZCxIz"
| autompg_linearregression_service.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # STFT Model #
#
# An STFT analysis and synthesis notebook.
#
# First we set up the environment.
# +
# %matplotlib inline
import math, copy, sys, os
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
import IPython.display as ipd
import glob
from scipy.fftpack import fft, ifft, fftshift
from scipy.signal import blackmanharris, triang, get_window
from scipy.io.wavfile import write, read
from sys import platform
from ipywidgets import interact, interact_manual, interactive
tol = 1e-14 # threshold used to compute phase
INT16_FAC = (2**15)-1
INT32_FAC = (2**31)-1
INT64_FAC = (2**63)-1
norm_fact = {'int16':INT16_FAC, 'int32':INT32_FAC, 'int64':INT64_FAC,'float32':1.0,'float64':1.0}
global iF # The input file name
global xR # The raw input samples
global x # The input samples normalized
global fs # The input sample rate
global N # The FFT size
global w # The window
global wN # The window name
global M # The window size
global H # The hop size
global mX # The magnitude spectrum of the input
global pX # The phase spectrum of the input
global y # The re-synthesized output
global yR # The raw re-synthesized output
# -
# Now we define some methods to perform the different steps of the model.
# ***dft_analysis***
#
# Analysis of a signal using the discrete Fourier transform
#
# Params
#
# * x: input signal
# * w: analysis window,
# * N: FFT size
#
# Returns
#
# * mX: magnitude spectrum
# * pX: phase spectrum
def dft_analysis(x, w, N):
    if (w.size > N):                                        # raise error if window size bigger than fft size
        raise ValueError("Window size (M) is bigger than FFT size")
    hN = (N//2)+1                                           # size of positive spectrum, it includes sample 0
    hM1 = (w.size+1)//2                                     # half analysis window size by rounding
    hM2 = w.size//2                                         # half analysis window size by floor
    fftbuffer = np.zeros(N)                                 # initialize buffer for FFT
    w = w / sum(w)                                          # normalize analysis window
    xw = x*w                                                # window the input sound
    fftbuffer[:hM1] = xw[hM2:]                              # zero-phase window in fftbuffer
    fftbuffer[-hM2:] = xw[:hM2]
    X = fft(fftbuffer)                                      # compute FFT
    absX = abs(X[:hN])                                      # compute absolute value of positive side
    absX[absX<np.finfo(float).eps] = np.finfo(float).eps    # if zeros add epsilon to handle log
    mX = 20 * np.log10(absX)                                # magnitude spectrum of positive frequencies in dB
    X[:hN].real[np.abs(X[:hN].real) < tol] = 0.0            # for phase calculation set to 0 the small values
    X[:hN].imag[np.abs(X[:hN].imag) < tol] = 0.0            # for phase calculation set to 0 the small values
    pX = np.unwrap(np.angle(X[:hN]))                        # unwrapped phase spectrum of positive frequencies
    return mX, pX
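The zero-phase windowing step (the two `fftbuffer` assignments) can be illustrated with a tiny hand-checkable example, using toy sizes and plain lists:

```python
xw = [1, 2, 3, 4, 5]                          # a windowed frame, M = 5 (toy values)
N = 8                                         # FFT size
hM1, hM2 = (len(xw) + 1) // 2, len(xw) // 2   # 3 and 2, as in dft_analysis
fftbuffer = [0] * N
fftbuffer[:hM1] = xw[hM2:]                    # second half of frame -> start of buffer
fftbuffer[-hM2:] = xw[:hM2]                   # first half of frame -> end of buffer
print(fftbuffer)                              # [3, 4, 5, 0, 0, 0, 1, 2]
```

Centering the window on sample 0 this way removes the linear phase trend that a window starting at sample 0 would otherwise introduce.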
# ***stft_analysis***
#
# Analysis of a sound using the short-time Fourier transform
#
# Params
#
# * x: input array sound
# * w: analysis window
# * N: FFT size
# * H: hop size
#
# Returns
#
# * xmX: magnitude spectra
# * xpX: phase spectra
def stft_analysis(x, w, N, H):
    if (H <= 0):                                 # raise error if hop size 0 or negative
        raise ValueError("Hop size (H) smaller or equal to 0")
    M = w.size                                   # size of analysis window
    hM1 = (M+1)//2                               # half analysis window size by rounding
    hM2 = M//2                                   # half analysis window size by floor
    x = np.append(np.zeros(hM2),x)               # add zeros at beginning to center first window at sample 0
    x = np.append(x,np.zeros(hM2))               # add zeros at the end to analyze last sample
    pin = hM1                                    # initialize sound pointer in middle of analysis window
    pend = x.size-hM1                            # last sample to start a frame
    w = w / sum(w)                               # normalize analysis window
    xmX = []                                     # initialise empty list for mX
    xpX = []                                     # initialise empty list for pX
    while pin<=pend:                             # while sound pointer is smaller than last sample
        x1 = x[pin-hM1:pin+hM2]                  # select one frame of input sound
        mX, pX = dft_analysis(x1, w, N)          # compute dft
        xmX.append(np.array(mX))                 # append output to list
        xpX.append(np.array(pX))
        pin += H                                 # advance sound pointer
    xmX = np.array(xmX)                          # convert to numpy array
    xpX = np.array(xpX)
    return xmX, xpX
# ***dft_synthesis***
#
# Synthesis of a signal using the discrete Fourier transform
#
# Params
#
# * mX: magnitude spectrum
# * pX: phase spectrum
# * M: window size
#
# Returns
#
# * y: output signal
def dft_synthesis(mX, pX, M):
    hN = mX.size                                             # size of positive spectrum, it includes sample 0
    N = (hN-1)*2                                             # FFT size
    hM1 = int(math.floor((M+1)/2))                           # half analysis window size by rounding
    hM2 = int(math.floor(M/2))                               # half analysis window size by floor
    fftbuffer = np.zeros(N)                                  # initialize buffer for FFT
    y = np.zeros(M)                                          # initialize output array
    Y = np.zeros(N, dtype=complex)                           # clean output spectrum
    Y[:hN] = 10**(mX/20) * np.exp(1j*pX)                     # generate positive frequencies
    Y[hN:] = 10**(mX[-2:0:-1]/20) * np.exp(-1j*pX[-2:0:-1])  # generate negative frequencies
    fftbuffer = np.real(ifft(Y))                             # compute inverse FFT
    y[:hM2] = fftbuffer[-hM2:]                               # undo zero-phase window
    y[hM2:] = fftbuffer[:hM1]
    return y
# ***stft_synthesis***
#
# Synthesis of a sound using the short-time Fourier transform
#
# * mY: magnitude spectra
# * pY: phase spectra
# * M: window size
# * H: hop-size
#
# Returns
#
# * y: output sound
def stft_synthesis(mY, pY, M, H):
    hM1 = (M+1)//2                               # half analysis window size by rounding
    hM2 = M//2                                   # half analysis window size by floor
    nFrames = mY[:,0].size                       # number of frames
    y = np.zeros(nFrames*H + hM1 + hM2)          # initialize output array
    pin = hM1
    for i in range(nFrames):                     # iterate over all frames
        y1 = dft_synthesis(mY[i,:], pY[i,:], M)  # compute idft
        y[pin-hM1:pin+hM2] += H*y1               # overlap-add to generate output sound
        pin += H                                 # advance sound pointer
    y = np.delete(y, range(hM2))                 # delete half of first window which was added in stft_analysis
    y = np.delete(y, range(y.size-hM1, y.size))  # delete the end of the sound that was added in stft_analysis
    return y
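The `H*y1` scaling in the overlap-add step works because the hop-shifted copies of the normalized window sum to a constant for COLA-compatible window/hop choices. A minimal pure-Python check with a rectangular window (toy sizes of my own choosing):

```python
M, H = 8, 4                       # window size and hop size (toy values)
w = [1.0] * M                     # rectangular analysis window
w = [v / sum(w) for v in w]       # normalized, as in stft_analysis
n_frames = 6
length = (n_frames - 1) * H + M
acc = [0.0] * length
for i in range(n_frames):
    for n in range(M):
        acc[i * H + n] += H * w[n]   # mirrors y[pin-hM1:pin+hM2] += H*y1
# away from the first/last frames, every sample sums to exactly 1.0
print(acc[M:-M])
```

The partially covered edge samples are exactly what the two `np.delete` calls trim away.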
# ***stft_system***
#
# STFT analysis and re-synthesis system. Performs an STFT analysis of a signal and then re-synthesizes it
#
# Params
#
# * p_N: The FFT size
# * p_M: The window size
# * p_H: The hop size
# * p_wN: The name of the window function to use
#
# Returns void
#
# Plots the input waveform, the magnitude and phase spectra, and the re-synthesized output waveform and allows the output to be played back
#
def stft_system(p_N, p_M, p_H, p_wN):
    global N, M, H, wN, w, mX, pX, y, yR
    # Set the analysis parameters
    N = p_N
    M = p_M if p_M <= N else N
    H = p_H if p_H <= M//2 else M//2
    wN = p_wN
    w = get_window(wN, M)
    # Do the analysis step
    mX, pX = stft_analysis(x, w, N, H)
    # Do the synthesis step
    y = stft_synthesis(mX, pX, M, H)
    yR = copy.deepcopy(y)  # copy array
    yR *= INT16_FAC        # scaling floating point -1 to 1 range signal to int16 range
    yR = np.int16(yR)
    # create figure to plot
    plt.figure(figsize=(17, 20))
    # frequency range to plot
    maxplotfreq = 5000.0
    # plot the input sound
    plt.subplot(4,1,1)
    plt.plot(np.arange(x.size)/float(fs), x)
    plt.axis([0, x.size/float(fs), min(x), max(x)])
    plt.ylabel('amplitude')
    plt.xlabel('time (sec)')
    plt.title('input sound: x')
    # plot magnitude spectrogram
    plt.subplot(4,1,2)
    numFrames = int(mX[:,0].size)
    frmTime = H*np.arange(numFrames)/float(fs)
    binFreq = fs*np.arange(N*maxplotfreq/fs)/N
    plt.pcolormesh(frmTime, binFreq, np.transpose(mX[:,:int(N*maxplotfreq/fs+1)]))
    plt.xlabel('time (sec)')
    plt.ylabel('frequency (Hz)')
    plt.title('magnitude spectrogram')
    plt.autoscale(tight=True)
    # plot the phase spectrogram
    plt.subplot(4,1,3)
    numFrames = int(pX[:,0].size)
    frmTime = H*np.arange(numFrames)/float(fs)
    binFreq = fs*np.arange(N*maxplotfreq/fs)/N
    plt.pcolormesh(frmTime, binFreq, np.transpose(np.diff(pX[:,:int(N*maxplotfreq/fs+1)],axis=1)))
    plt.xlabel('time (sec)')
    plt.ylabel('frequency (Hz)')
    plt.title('phase spectrogram (derivative)')
    plt.autoscale(tight=True)
    # plot the output sound
    plt.subplot(4,1,4)
    plt.plot(np.arange(y.size)/float(fs), y)
    plt.axis([0, y.size/float(fs), min(y), max(y)])
    plt.ylabel('amplitude')
    plt.xlabel('time (sec)')
    plt.title('output sound: y')
    plt.tight_layout()
    plt.ion()
    plt.show()
    display(ipd.Audio(yR, rate=fs))
# # Playground
#
# Here you can play with a few different inputs, change some parameters and listen to the results
# +
def read_input_file(p_iF):
    global iF, fs, xR, x
    iF = p_iF
    # Read the input file now
    fs, xR = read(iF)
    x = np.float32(xR)/norm_fact[xR.dtype.name]
    display(ipd.Audio(xR, rate=fs))
files = glob.glob('audio/*.wav')
interact(read_input_file, p_iF = widgets.Dropdown(options=files,description='Audio File:'))
interact_manual(stft_system,
p_wN = widgets.Dropdown(options=['blackmanharris', 'blackman', 'hamming', 'hanning', 'rectangular' ],description='Window Type'),
p_M=widgets.SelectionSlider(options=[2**i for i in range(4,13)],value=512,description='Window Size'),
p_N=widgets.SelectionSlider(options=[2**i for i in range(4,13)],value=1024,description='FFT Size'),
p_H=widgets.SelectionSlider(options=[2**i for i in range(4,13)],value=128,description='Hop Size'))
# -
| notebooks/kxmx_dsp_stft_float.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from scipy.stats import kendalltau as kTau
import matplotlib.pyplot as plt
# from sklearn.externals.joblib import Memory
# memory = Memory(cachedir='/tmp',verbose=0)
import jupyternotify
ip = get_ipython()
ip.register_magics(jupyternotify.JupyterNotifyMagics)
# # %autonotify -a 30
# -
# This is probably unnecessary ¯\_(ツ)_/¯
def ODF2DF(GP_ODF):
    GP_ODF = GP_ODF[['Rank','Feature']]
    GP_ODF.sort_values('Rank', inplace=True)
    GP_ODF.set_index('Rank', inplace=True)
    return GP_ODF
# # Data
# * GCT file: [all_aml_test.gct](https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.gct).
# * CLS file: [all_aml_test.cls](https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.cls).
#
# # Using CMS: Gold Standard
# + genepattern={"server": "https://genepattern.broadinstitute.org/gp", "type": "auth"}
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.GPAuthWidget(genepattern.register_session("https://genepattern.broadinstitute.org/gp", "", ""))
# + genepattern={"type": "task"}
comparativemarkerselection_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00044')
comparativemarkerselection_job_spec = comparativemarkerselection_task.make_job_spec()
comparativemarkerselection_job_spec.set_parameter("input.file", "https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.gct")
comparativemarkerselection_job_spec.set_parameter("cls.file", "https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.cls")
comparativemarkerselection_job_spec.set_parameter("confounding.variable.cls.file", "")
comparativemarkerselection_job_spec.set_parameter("test.direction", "2")
comparativemarkerselection_job_spec.set_parameter("test.statistic", "0")
comparativemarkerselection_job_spec.set_parameter("min.std", "")
comparativemarkerselection_job_spec.set_parameter("number.of.permutations", "10000")
comparativemarkerselection_job_spec.set_parameter("log.transformed.data", "false")
comparativemarkerselection_job_spec.set_parameter("complete", "false")
comparativemarkerselection_job_spec.set_parameter("balanced", "false")
comparativemarkerselection_job_spec.set_parameter("random.seed", "779948241")
comparativemarkerselection_job_spec.set_parameter("smooth.p.values", "true")
comparativemarkerselection_job_spec.set_parameter("phenotype.test", "one versus all")
comparativemarkerselection_job_spec.set_parameter("output.filename", "<input.file_basename>.comp.marker.odf")
genepattern.GPTaskWidget(comparativemarkerselection_task)
# + genepattern={"type": "job"}
job1587350 = gp.GPJob(genepattern.get_session(0), 1587350)
genepattern.GPJobWidget(job1587350)
# -
# The code below will only run if pandas is installed: http://pandas.pydata.org
from gp.data import ODF
all_aml_test_comp_marker_odf_1587350 = ODF(job1587350.get_file("all_aml_test.comp.marker.odf"))
all_aml_test_comp_marker_odf_1587350
cms_scores = all_aml_test_comp_marker_odf_1587350.dataframe
cms_scores.sort_values(by='Rank',inplace=True)
cms_scores
# ---
# # Using CCALnoir
import cuzcatlan as cusca
import pandas as pd
import numpy as np
from cuzcatlan import differential_gene_expression
import urllib.request
# +
# %%time
TOP = 500
permutations = 1000
RUN = True
data_url = "https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.gct"
pheno_url = "https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.cls"
data_df = pd.read_table(data_url, header=2, index_col=0)
data_df.drop('Description', axis=1, inplace=True)
url_file, __ = urllib.request.urlretrieve(pheno_url)
temp = open(url_file)
temp.readline()
temp.readline()
classes = [int(i) for i in temp.readline().strip('\n').split(' ')]
classes = pd.Series(classes, index=data_df.columns)
# -
# %%notify
# %%time
raw_scores = differential_gene_expression(phenotypes=pheno_url, gene_expression=data_url,
output_filename='DE_test', ranking_method=cusca.custom_pearson_corr,
number_of_permutations=10)
ccal_scores = raw_scores.copy()
ccal_scores['abs_score'] = abs(ccal_scores['Score'])
ccal_scores['Feature'] = ccal_scores.index
ccal_scores.sort_values('abs_score', ascending=False, inplace=True)
ccal_scores.reset_index(inplace=True)
ccal_scores['Rank'] = ccal_scores.index +1
print(ccal_scores)
# %%time
# %%notify
raw_ic_scores = differential_gene_expression(phenotypes=pheno_url, gene_expression=data_url,
output_filename='DE_test', ranking_method=cusca.compute_information_coefficient,
number_of_permutations=10)
ccal_ic_scores = raw_ic_scores.copy()
ccal_ic_scores['abs_score'] = abs(ccal_ic_scores['Score'])
ccal_ic_scores['Feature'] = ccal_ic_scores.index
ccal_ic_scores.sort_values('abs_score', ascending=False, inplace=True)
ccal_ic_scores.reset_index(inplace=True)
ccal_ic_scores['Rank'] = ccal_ic_scores.index +1
print(ccal_ic_scores)
# # Comparing results
# ### CMS vs CCAL_correlation
# @memory.cache
def custom_metric(list_1, list_2):
    temp = list_1 - list_2
    temp.fillna(len(temp), inplace=True)
    # Metric is 0 for perfect overlap, 1 if the lists are reversed. It can be larger than one!
    return sum(abs(temp)) / np.floor(list_1.shape[0]**2/2)
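A plain-Python sanity check of this metric's two extremes (the `rank_metric` below is a hypothetical mirror of `custom_metric` on lists, without the pandas NaN handling):

```python
def rank_metric(ranks_a, ranks_b):
    # 0 for identical rankings, 1 for a fully reversed ranking
    n = len(ranks_a)
    return sum(abs(a - b) for a, b in zip(ranks_a, ranks_b)) / (n * n // 2)

print(rank_metric([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0
print(rank_metric([1, 2, 3, 4], [4, 3, 2, 1]))  # 1.0
```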
# @memory.cache
def map_df1_to_df2(df_1, df_2):
    to_return = df_1.copy()
    df_2_copy = df_2.copy()
    to_return.sort_values(by='Rank', inplace=True)
    to_return.set_index('Feature', inplace=True)
    df_2_copy.sort_values(by='Rank', inplace=True)
    df_2_copy.set_index('Feature', inplace=True)
    df_2_copy.rename(columns={'Rank': 'new_Rank'}, inplace=True)
    to_return_2 = to_return.join(df_2_copy)
    return to_return_2
def compute_overlap(reference_df, new_df, col='index'):
    if col == 'index':
        common = list(set(reference_df.index) & set(new_df.index))
    else:
        common = list(set(reference_df[col]) & set(new_df[col]))
    overlap = 100*len(common)/len(reference_df)
    return overlap
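The overlap computation reduces to a set intersection; a hypothetical mini-example on plain lists standing in for the two ranked DataFrames:

```python
# made-up gene symbols, purely for illustration
reference = ["TP53", "BRCA1", "EGFR", "MYC"]
candidate = ["EGFR", "KRAS", "TP53", "ALK"]
common = set(reference) & set(candidate)
overlap = 100 * len(common) / len(reference)
print(overlap)  # 50.0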
# @memory.cache
def compare_ranks(df_a, df_b, number_of_genes=5, verbose=False):
    # Not assuming both df's are ranked already!
    subset_a = df_a.head(number_of_genes)[['Feature', 'Rank']]
    subset_b = df_b.head(number_of_genes)[['Feature', 'Rank']]
    a_in_b = map_df1_to_df2(subset_a, df_b[['Feature','Rank']])
    b_in_a = map_df1_to_df2(subset_b, df_a[['Feature','Rank']])
    metric_1 = custom_metric(a_in_b['Rank'], a_in_b['new_Rank'])
    metric_2 = custom_metric(b_in_a['Rank'], b_in_a['new_Rank'])
    overlap = compute_overlap(subset_a, subset_b, col='Feature')
    if verbose:
        print(a_in_b)
        print(b_in_a)
    return ((metric_1 + metric_2)/2, overlap)
# @memory.cache
def compare_multiple_ranks(df_a, df_b, max_number_of_genes=10, verbose=False):
    # This is the largest subset we will consider
    subset_a = df_a.head(max_number_of_genes)[['Feature', 'Rank']]
    subset_b = df_b.head(max_number_of_genes)[['Feature', 'Rank']]
    df_a_to_use = df_a[['Feature','Rank']]
    df_b_to_use = df_b[['Feature','Rank']]
    indexes = []
    metrics = []
    overlap = []
    for i in range(max_number_of_genes, 0, -1):
        if i == max_number_of_genes:
            subset_a_to_use = subset_a
            subset_b_to_use = subset_b
        else:
            subset_a_to_use = subset_a_to_use.drop(subset_a_to_use.index[i])
            subset_b_to_use = subset_b_to_use.drop(subset_b_to_use.index[i])
        a_in_b = map_df1_to_df2(subset_a_to_use, df_b_to_use)
        b_in_a = map_df1_to_df2(subset_b_to_use, df_a_to_use)
        overlap.append(compute_overlap(subset_a_to_use, subset_b_to_use, col='Feature'))
        metric_1 = custom_metric(a_in_b['Rank'], a_in_b['new_Rank'])
        metric_2 = custom_metric(b_in_a['Rank'], b_in_a['new_Rank'])
        indexes.append(i)
        # print(i, metric_1, metric_2)
        metrics.append((metric_1 + metric_2)/2)
    if verbose:
        print('Deprecated!')
    return indexes, metrics, overlap
# %%time
ixs, mets, over = compare_multiple_ranks(cms_scores, ccal_scores, max_number_of_genes=5, verbose=False)
print(ixs)
print(mets)
print(over)
# %%time
m1, ov = compare_ranks(cms_scores, ccal_ic_scores, number_of_genes=10, verbose=True)
print("\nMetric =",m1, "Overlap=", ov)
# ### CMS vs CCAL_ic
# ### CCAL_correlation vs CCAL_ic
# ## Plotting trends
# ### CMS vs CCAL_ic
# +
# %%time
plt.clf()
fig, axs = plt.subplots(2,1,dpi=150)
# for i in range(int(len(scores)/2)):
# for i in range(1000):
# if i ==0:
# continue
# metric = compare_ranks(cms_scores, ccal_ic_scores, number_of_genes=i)
# fig.gca().scatter(i,metric,color='k')
# fig.gca().set_ylim(-0.1,8)
ixs, mets,over = compare_multiple_ranks(cms_scores, ccal_ic_scores, max_number_of_genes=500, verbose=False)
axs[0].scatter(ixs,mets,color='k')
axs[0].set_ylim(-0.1,8)
axs[0].set_ylabel('Custom metric')
axs[1].scatter(ixs,over,color='k')
axs[1].set_ylabel('% Overlap')
axs[1].set_xlabel('Top n genes')
axs[0].set_title("CMS vs CCAL_IC")
fig
# -
fig
# ### CMS vs CCAL_correlation
# +
# %%time
plt.close('all')
plt.clf()
fig2, axs2 = plt.subplots(2,1,dpi=150)
# for i in range(int(len(scores)/2)):
# for i in range(1000):
# if i ==0:
# continue
# metric = compare_ranks(cms_scores, ccal_scores, number_of_genes=i)
# fig2.gca().scatter(i,metric,color='k')
ixs, mets, over = compare_multiple_ranks(cms_scores, ccal_scores, max_number_of_genes=100, verbose=False)
axs2[0].scatter(ixs,mets,color='k')
axs2[0].set_ylim(-0.1,8)
axs2[0].set_ylabel('Custom metric')
axs2[1].scatter(ixs,over,color='k')
axs2[1].set_ylabel('% Overlap')
axs2[1].set_xlabel('Top n genes')
axs2[0].set_title("CMS vs CCAL_PC")
# -
fig2
# ### CCAL_correlation vs CCAL_ic
# %%time
fig3, axs3 = plt.subplots(2,1,dpi=150)
# for i in range(int(len(scores)/2)):
# for i in range(1000):
# if i ==0:
# continue
# metric = compare_ranks(ccal_ic_scores, ccal_scores, number_of_genes=i)
# fig2.gca().scatter(i,metric,color='k')
# fig2.gca().set_ylim(-0.1,8)
ixs, mets, over = compare_multiple_ranks(ccal_ic_scores, ccal_scores, max_number_of_genes=100, verbose=False)
axs3[0].scatter(ixs,mets,color='k')
axs3[0].set_ylim(-0.1,8)
axs3[0].set_ylabel('Custom metric')
axs3[1].scatter(ixs,over,color='k')
axs3[1].set_ylabel('% Overlap')
axs3[1].set_xlabel('Top n genes')
fig3
| tests/benchmarking_CCALnoir-v2-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#data manipulation
import numpy as np
import pandas as pd
#visualization
import matplotlib.pyplot as plt
import seaborn as sns
#machine learning
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, make_scorer
from sklearn.dummy import DummyClassifier
#train test pipeline
import train_test_pipeline
#settings
from IPython.display import display
plt.style.use('bmh')
# %matplotlib inline
plt.rcParams['figure.figsize'] = (14, 10)
plt.rcParams['axes.titlepad'] = 25
sns.set_color_codes('pastel')
# -
orders = pd.read_csv('orders.csv', header = 0)
departments = pd.read_csv('departments.csv', header = 0)
products = pd.read_csv('products.csv', header = 0)
order_products_train = pd.read_csv('order_products__train.csv', header = 0)
order_products_prior = pd.read_csv('order_products__prior.csv', header = 0)
aisles = pd.read_csv('aisles.csv', header = 0)
orders.head()
plt.title('Distribution of the Max Number of Orders')
plt.ylabel('count')
sns.distplot(orders.groupby('user_id').order_number.max(), kde = False, norm_hist = False)
# +
to_plot = pd.pivot_table(data = orders,
index = 'order_dow',
columns = 'order_hour_of_day',
values = 'order_id',
aggfunc = 'count')
plt.title('Orders Per Day Per Hour')
sns.heatmap(to_plot)
# -
to_plot = orders.days_since_prior_order.value_counts(normalize = True, dropna = False, sort = False)
x_ticks = range(len(to_plot))
plt.xticks(x_ticks, [i if np.isnan(i) else int(i) for i in to_plot.index])
plt.title('Days Since Prior Order')
plt.xlabel('days since prior order capped 30')
plt.ylabel('percent of all orders')
plt.bar(x_ticks, to_plot)
to_plot = order_products_train.groupby('order_id')\
.add_to_cart_order.max()\
.value_counts(normalize = True, dropna = False, sort = False)
plt.title('Distribution of Number of Products Per Order')
plt.xlabel('number of products per order capped 80')
plt.ylabel('percent of all orders')
plt.bar(to_plot.index, to_plot)
order_products_train_pivot = pd.pivot_table(data = order_products_train[order_products_train.add_to_cart_order <= 5],
index = 'order_id',
columns = 'add_to_cart_order',
values = 'product_id',
fill_value = 0)
order_products_train_pivot.head()
order_products_prior_pivot = pd.pivot_table(data = order_products_prior[order_products_prior.add_to_cart_order <= 5],
index = 'order_id',
columns = 'add_to_cart_order',
values = 'product_id',
fill_value = 0)
order_products_prior_pivot.head()
products.head()
products_by_department = order_products_train.\
merge(right = products,
how = 'left',
left_on = 'product_id',
right_on = 'product_id').\
merge(right = departments,
how = 'left',
left_on = 'department_id',
right_on = 'department_id').\
merge(right = aisles,
how = 'left',
left_on = 'aisle_id',
right_on = 'aisle_id')#.\
#drop(['order_id',
# 'add_to_cart_order',
# 'reordered',
# 'aisle_id',
# 'department_id',
# 'aisle_id'],
# axis = 1)
products_by_department.head()
to_plot = products_by_department.department.value_counts(ascending = True)
plt.title('Product Orders by Department')
plt.xlabel('ordered products count')
plt.barh(to_plot.index, to_plot)
top_n = 20
to_plot = products_by_department.aisle.value_counts(ascending = True)
plt.title('Top %s Product Orders by Aisle' % top_n)
plt.xlabel('ordered products count')
plt.barh(to_plot.index[-top_n:], to_plot[-top_n:])
top_n = 40
to_plot = products_by_department.product_name.value_counts(ascending = True)
plt.title('Top %s Product Orders by Product Name' % top_n)
plt.xlabel('ordered products count')
plt.barh(to_plot.index[-top_n:], to_plot[-top_n:])
plt.title('Distribution of Number of Orders Per Product')
sns.distplot(products_by_department.product_id, kde = False, norm_hist = False)
top_n_reordered = 20
plt.title('Top Reordered Products')
plt.xlabel('number of reorders')
to_plot = products_by_department.groupby('product_name').sum().reordered.sort_values(ascending = True)
plt.yticks(range(top_n_reordered), to_plot.index[-top_n_reordered:])
plt.barh(range(top_n_reordered), to_plot[-top_n_reordered:])
#there are no common orders between order_products_train and order_products_prior
set(order_products_train.order_id.unique()) & set(order_products_prior.order_id.unique())
orders_products = orders.\
merge(right = order_products_train_pivot.\
append(order_products_prior_pivot),
how = 'left',
left_on = 'order_id',
right_on = 'order_id')
orders_products.head()
orders_products.info(verbose = True, memory_usage = True, null_counts = True)
# +
#append orders_train to orders_prior to get better statistics on the products usage
#drop eval_set and order_id
#convert the product columns (1, 2, 3...) to categories
#make pipeline over several models
# -
#0 is a meaningful value for days_since_prior_order
#we can try using -1 for no prior orders
orders_products.days_since_prior_order.unique()
orders_products['days_since_prior_order'] = orders_products['days_since_prior_order'].fillna(-1)
X_train, X_test, y_train, y_test = train_test_split(orders_products[['order_dow',
'order_hour_of_day',
'days_since_prior_order']],
orders_products[1].fillna(0),
test_size = 0.3, random_state = 42)
# +
most_frequent_product = products_by_department.product_id.value_counts().index[0]
models = {'bayes': GaussianNB(),
          'adaboost': AdaBoostClassifier(),
          'dummy': DummyClassifier(strategy='constant', constant=most_frequent_product)}
# -
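For intuition, the constant/most-frequent baseline that the dummy model provides can be sketched without sklearn (toy product-id labels of my own choosing):

```python
from collections import Counter

# toy product-id labels; the baseline always predicts the most frequent one
y_train_toy = [196, 196, 24852, 196, 13176]
baseline = Counter(y_train_toy).most_common(1)[0][0]
predictions = [baseline] * 3
print(predictions)  # [196, 196, 196]
```

Any real model should beat this score; if it doesn't, the features carry little signal for the task.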
model_explorer = train_test_pipeline.MultiEstimator(models, make_scorer(f1_score, average = 'weighted'))
train_limit = int(10e3)
model_explorer.fit(X_train[:train_limit], y_train[:train_limit]).get_score_dataframe()
model_explorer.draw_learning_curves()
| instacart/instacart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Python Module Overview
#
# The `Polylidar3D` and `MatrixDouble` classes should be imported from the `polylidar` Python module. `Polylidar3D` provides the routines to work with 2D point sets. `MatrixDouble` is simply a wrapper around a numpy array.
# +
import time
import math
import numpy as np
from polylidar import MatrixDouble, Polylidar3D
# -
# Next, polylidar has a submodule named `polylidarutil` that provides several useful utilities, such as generating random 2D point sets and 2D/3D plotting helpers.
# +
from polylidar.polylidarutil import (generate_test_points, plot_triangles, get_estimated_lmax,
plot_triangle_meshes, get_triangles_from_list, get_colored_planar_segments, plot_polygons)
from polylidar.polylidarutil import (plot_polygons_3d, generate_3d_plane, set_axes_equal, plot_planes_3d,
scale_points, rotation_matrix, apply_rotation)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
# -
# ## Unorganized 2D Point Sets
# First we generate two clusters of 2D points. Each cluster has around 1000 points, and the clusters are about 100 units away from each other.
np.random.seed(1)
kwargs = dict(num_groups=2, group_size=1000, dist=100.0, seed=1)
points = generate_test_points(**kwargs)
fig, ax = plt.subplots(figsize=(10, 10), nrows=1, ncols=1)
ax.scatter(points[:, 0], points[:, 1], c='k')
# Polylidar3D works by triangulating the point set and then removing triangles that violate edge-length constraints (and possibly other constraints) specified by the user. The remaining triangles that are spatially connected form a *region*/*planar segment*. The polygonal representation of each segment is then extracted.
#
# First we configure the arguments for `Polylidar3D`. The parameter `lmax=1.8` indicates that any triangle with an edge longer than 1.8 will be removed. The `min_triangles` parameter removes any small *regions* that have too few triangles.
polylidar_kwargs = dict(lmax=1.8, min_triangles=20)
polylidar = Polylidar3D(**polylidar_kwargs)
# Next we need to "convert" our numpy array to a format Polylidar3D understands. This is simply a wrapper around the numpy array and can be configured to perform no copy. In other words, both data structures point to the same memory buffer. Afterwards we simply call `extract_planes_and_polygons` with `points_mat`.
# Convert points to matrix format (no copy) and make Polylidar3D Object
points_mat = MatrixDouble(points, copy=False)
# Extract the mesh, planes, polygons, and time
t1 = time.perf_counter()
mesh, planes, polygons = polylidar.extract_planes_and_polygons(points_mat)
t2 = time.perf_counter()
print("Took {:.2f} milliseconds".format((t2 - t1) * 1000))
# So what are `mesh`, `planes`, and `polygons`? `mesh` contains the 2D Delaunay triangulation of the point set. `planes` is a *list* of the triangle *regions* discussed above. Finally, `polygons` is a list of polygon data structures, where each polygon contains a linear ring denoting the exterior hull and a (possibly empty) list of linear rings representing interior holes.
# +
vertices = np.asarray(mesh.vertices) # same as points! 2000 X 2 numpy array
triangles = np.asarray(mesh.triangles) # K X 3 numpy array. Each row contains three point indices representing a triangle.
halfedges = np.asarray(mesh.halfedges) # K X 3 numpy array. Each row contains the three twin/opposite/shared half-edge ids of the triangle.
print(points[:2, :])
print(vertices[:2, :])
print("")
print("First two triangles point indices")
print(triangles[:2, :])
print("")
print("First two triangles (actual points)")
print(vertices[triangles[:2, :], :])
print("")
print("A plane is just a collection of triangle indices")
assert len(planes) >= 1
plane1 = planes[0]
print("The first 10 triangle indices for the first plane:")
print(np.asarray(plane1[:10]), np.asarray(plane1).dtype)
print("")
print("A polygon has a shell and holes")
assert len(polygons) >= 1
polygon1 = polygons[0]
print("The exterior shell of the first polygon, which is associated with the first plane (point indices):")
print(np.asarray(polygon1.shell))
print("The first hole of the first polygon, which is associated with the first plane (point indices):")
print(np.asarray(polygon1.holes[0]))
# -
# Here we finally color code the triangle segments and plot the polygons
fig, ax = plt.subplots(figsize=(10, 10), nrows=1, ncols=1)
# plot points
ax.scatter(points[:, 0], points[:, 1], c='k')
# plot all triangles
plot_triangles(get_triangles_from_list(triangles, points), ax)
# plot separated planar triangular segments
triangle_meshes = get_colored_planar_segments(planes, triangles, points)
plot_triangle_meshes(triangle_meshes, ax)
# plot polygons
plot_polygons(polygons, points, ax)
plt.axis('equal')
plt.show()
# ## Unorganized 3D Point Clouds
# Polylidar3D can also be applied to unorganized 3D point clouds. Note this method is only suitable if you know the dominant surface normal of the plane you desire to extract.
# The 3D point cloud must be rotated such that this surface normal is [0,0,1], i.e., the plane is aligned with the XY plane. The method relies upon 3D->2D projection and performs 2.5D Delaunay triangulation.
# First we generate a simulated rooftop building with noise. Notice there are two flat surfaces, the blue points and the yellow points.
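# The alignment step described above can be sketched with Rodrigues' rotation formula. Note this is an illustrative helper written for this tutorial, not part of the polylidar API:

```python
import numpy as np

def align_normal_to_z(points, normal):
    """Rotate a point cloud so that `normal` maps onto [0, 0, 1]."""
    normal = normal / np.linalg.norm(normal)
    z = np.array([0.0, 0.0, 1.0])
    c = float(np.dot(normal, z))
    if np.isclose(c, 1.0):   # already aligned
        return points.copy()
    if np.isclose(c, -1.0):  # anti-parallel: rotate 180 degrees about the x-axis
        return points * np.array([1.0, -1.0, -1.0])
    v = np.cross(normal, z)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' formula: R = I + [v]x + [v]x^2 / (1 + c)
    rm = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return points @ rm.T

# A plane tilted 45 degrees about the y-axis has normal [1, 0, 1]/sqrt(2);
# after alignment that normal points straight up.
tilted_normal = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
aligned = align_normal_to_z(tilted_normal.reshape(1, 3), tilted_normal)
```

# After this rotation, `aligned` is (up to floating-point error) `[[0, 0, 1]]`, so the plane of interest lies in the XY plane as the method requires.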
# +
np.random.seed(1)
# generate random plane with hole
plane = generate_3d_plane(bounds_x=[0, 10, 0.5], bounds_y=[0, 10, 0.5], holes=[
[[3, 5], [3, 5]]], height_noise=0.02, planar_noise=0.02)
# Generate top of box (causing the hole that we see)
box_top = generate_3d_plane(bounds_x=[3, 5, 0.2], bounds_y=[3, 5, 0.2], holes=[
], height_noise=0.02, height=2, planar_noise=0.02)
# Generate side of box (causing the hole that we see)
box_side = generate_3d_plane(bounds_x=[0, 2, 0.2], bounds_y=[
0, 2, 0.2], holes=[], height_noise=0.02, planar_noise=0.02)
rm = rotation_matrix([0, 1, 0], -math.pi / 2.0)
box_side = apply_rotation(rm, box_side) + [5, 3, 0]
# All points joined together
points = np.concatenate((plane, box_side, box_top))
fig, ax = plt.subplots(figsize=(10, 10), nrows=1, ncols=1,
subplot_kw=dict(projection='3d'))
# plot points
ax.scatter(*scale_points(points), s=2.5, c=points[:, 2], cmap=plt.cm.plasma)
set_axes_equal(ax)
ax.view_init(elev=15., azim=-35)
plt.show()
# -
# Now that we are working with 3D data we have a few more options. We configure `lmax` to be 1.0, but also set `z_thresh` and `norm_thresh_min`:
#
# * `z_thresh` - Maximum point to plane distance during region growing (3D only). Forces planarity constraints. A value of 0.0 ignores this constraint.
# * `norm_thresh_min` - Minimum value of the dot product between a triangle and surface normal being extracted. Forces planar constraints.
#
# The first only allows triangles to be grouped together if their *vertices* fit well to a geometric plane. The latter only allows triangles to be grouped together if they share a common *surface normal* to some degree.
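# To make the `norm_thresh_min` constraint concrete, here is a small illustrative check (not polylidar code) of the dot product between a triangle's unit normal and the desired surface normal:

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of the triangle with vertices p0, p1, p2."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# A nearly horizontal triangle: its normal is close to [0, 0, 1], so the dot
# product exceeds norm_thresh_min=0.94 and the triangle is kept.
flat = triangle_normal(np.array([0.0, 0.0, 0.0]),
                       np.array([1.0, 0.0, 0.01]),
                       np.array([0.0, 1.0, 0.0]))
# A vertical triangle (like the box side): dot product near 0, so it is
# rejected when extracting the horizontal surfaces.
wall = triangle_normal(np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 1.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]))
print(np.dot(flat, [0.0, 0.0, 1.0]))       # close to 1.0
print(abs(np.dot(wall, [0.0, 0.0, 1.0])))  # close to 0.0
```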
polylidar_kwargs = dict(alpha=0.0, lmax=1.0, min_triangles=20, z_thresh=0.1, norm_thresh_min=0.94)
polylidar = Polylidar3D(**polylidar_kwargs)
# Next we extract the data and plot the results
# +
# Extracts planes and polygons, time
points_mat = MatrixDouble(points, copy=False)
t1 = time.time()
mesh, planes, polygons = polylidar.extract_planes_and_polygons(points_mat)
t2 = time.time()
print("Took {:.2f} milliseconds".format((t2 - t1) * 1000))
print("Should see two planes extracted, please rotate.")
triangles = np.asarray(mesh.triangles)
fig, ax = plt.subplots(figsize=(10, 10), nrows=1, ncols=1,
subplot_kw=dict(projection='3d'))
# plot all triangles
plot_planes_3d(points, triangles, planes, ax)
plot_polygons_3d(points, polygons, ax)
# plot points
ax.scatter(*scale_points(points), c='k', s=0.1)
set_axes_equal(ax)
ax.view_init(elev=15., azim=-35)
plt.show()
print("")
| docs/tutorial/Python/basicdemo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="LGowmkzpy8jf"
# # CHEM2XXX Intro to Python
# ## Command line execution
# These notebooks use what is called a "command line interface". Command lines function similarly to the address bar at the top of web browsers:<br>
#
#
# 1. You enter some text into the bar (e.g. "weber.edu")<br>
# 2. The browser interprets that text and performs the action you asked for (e.g. loading the WSU website)
#
# To run code in the notebook's command line interface, click on the cell and either click the triangular "run" button or hit Shift-Enter on your keyboard
#
# + id="baPTR4iR0EIz" outputId="9486754c-3697-4a23-9ac7-2d59316f3522" colab={"base_uri": "https://localhost:8080/"}
x = 1
y = 2
x+y
# + id="gEnbkliH0zLq"
| PythonDemos/PythonIntro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ai]
# language: python
# name: conda-env-ai-py
# ---
# +
import sys
import os
sys.path.append('../../')
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# +
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from skimage.io import imread
from gen.load_data import load_data
# +
from sklearn.utils import shuffle
train_df, valid_df, test_df = load_data('../../data')
print(train_df.head())
# +
from models.segnet import model_segnetVGG16
model = model_segnetVGG16(3, image_shape=(320, 416, 3))
# +
from gen.datagen import oversample_generator_from_df, balanced_generator_from_df
BATCH_SIZE = 16
model_dir = '../../saved_models/segnet/segnet_v3/'
if not os.path.exists(model_dir):
os.mkdir(model_dir)
train_gen = oversample_generator_from_df(train_df, BATCH_SIZE, (320, 416), samples=2000)
valid_gen = balanced_generator_from_df(valid_df, BATCH_SIZE, (320, 416))
# -
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy', 'mse'])
# +
from train import train_nn
m = train_df.shape[0]
history = train_nn(model,
train_gen,
valid_gen,
training_size=2000,
batch_size=BATCH_SIZE,
validation_size=valid_df.shape[0],
output_path=model_dir,
epochs=500,
gpus = 1)
# -
# # summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
model.load_weights('../../saved_models/segnet/segnet_v3/model.hdf5')
model.save('../../saved_models/segnet/segnet_v3/model_saved.h5')
| notebooks/segnet/segnet_v3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Train and explain models remotely via Azure Machine Learning Compute
#
#
# _**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to train and explain a regression model remotely on an Azure Machine Learning Compute target (AmlCompute).**_
#
#
#
#
# ## Table of Contents
#
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. Initialize a Workspace
# 1. Create an Experiment
# 1. Introduction to AmlCompute
# 1. Submit an AmlCompute run in a few different ways
# 1. Option 1: Provision as a run based compute target
# 1. Option 2: Provision as a persistent compute target (Basic)
# 1. Option 3: Provision as a persistent compute target (Advanced)
# 1. Additional operations to perform on AmlCompute
# 1. [Download model explanations from Azure Machine Learning Run History](#Download)
# 1. [Visualize explanations](#Visualize)
# 1. [Next steps](#Next)
# ## Introduction
#
# This notebook showcases how to train and explain a regression model remotely via Azure Machine Learning Compute (AMLCompute), and download the calculated explanations locally for visualization.
# It demonstrates the API calls needed to submit a run that trains and explains a model on AMLCompute, download the computed explanations, and visualize the global and local explanations via a dashboard that provides an interactive way of discovering patterns in model predictions and downloaded explanations.
#
# We will showcase one of the tabular data explainers: TabularExplainer (SHAP).
#
# Problem: Boston Housing Price Prediction with scikit-learn (train a model and run an explainer remotely via AMLCompute, and download and visualize the remotely-calculated explanations.)
#
# |  |
# |:--:|
#
# ## Setup
# If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't.
#
#
# If you are using Jupyter notebooks, the extensions should be installed automatically with the package.
# If you are using Jupyter Labs run the following command:
# ```
# (myenv) $ jupyter labextension install @jupyter-widgets/jupyterlab-manager
# ```
#
# +
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
# -
# ## Initialize a Workspace
#
# Initialize a workspace object from persisted configuration
# + tags=["create workspace"]
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
# -
# ## Create An Experiment
#
# **Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
from azureml.core import Experiment
experiment_name = 'explainer-remote-run-on-amlcompute'
experiment = Experiment(workspace=ws, name=experiment_name)
# ## Introduction to AmlCompute
#
# Azure Machine Learning Compute is managed compute infrastructure that allows the user to easily create single- to multi-node compute of the appropriate VM family. It is created **within your workspace region** and is a resource that can be used by other users in your workspace. It autoscales by default to the max_nodes when a job is submitted, and executes in a containerized environment, packaging the dependencies as specified by the user.
#
# Since it is managed compute, job scheduling and cluster management are handled internally by Azure Machine Learning service.
#
# For more information on Azure Machine Learning Compute, please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)
#
# If you are an existing BatchAI customer who is migrating to Azure Machine Learning, please read [this article](https://aka.ms/batchai-retirement)
#
# **Note**: As with other Azure services, there are limits on certain resources (e.g., AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
#
#
# The training script `train_explain.py` is already created for you. Let's have a look.
# ## Submit an AmlCompute run in a few different ways
#
# First let's check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the `supported_vmsizes()` function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.
#
# You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)
# +
from azureml.core.compute import ComputeTarget, AmlCompute
AmlCompute.supported_vmsizes(workspace=ws)
# AmlCompute.supported_vmsizes(workspace=ws, location='southcentralus')
# -
# ### Create project directory
#
# Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on
# +
import os
import shutil
project_folder = './explainer-remote-run-on-amlcompute'
os.makedirs(project_folder, exist_ok=True)
shutil.copy('train_explain.py', project_folder)
# -
# ### Option 1: Provision as a run based compute target
#
# You can provision AmlCompute as a compute target at run-time. In this case, the compute is auto-created for your run, scales up to max_nodes that you specify, and then **deleted automatically** after the run completes.
# +
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
# create a new runconfig object
run_config = RunConfiguration()
# signal that you want to use AmlCompute to execute script.
run_config.target = "amlcompute"
# AmlCompute will be created in the same region as workspace
# Set vm size for AmlCompute
run_config.amlcompute.vm_size = 'STANDARD_D2_V2'
# enable Docker
run_config.environment.docker.enabled = True
# set Docker base image to the default CPU-based image
run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE
# use conda_dependencies.yml to create a conda environment in the Docker image for execution
run_config.environment.python.user_managed_dependencies = False
azureml_pip_packages = [
'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret', 'sklearn-pandas', 'azureml-dataprep'
]
# specify CondaDependencies obj
run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],
pip_packages=azureml_pip_packages)
# Now submit a run on AmlCompute
from azureml.core.script_run_config import ScriptRunConfig
script_run_config = ScriptRunConfig(source_directory=project_folder,
script='train_explain.py',
run_config=run_config)
run = experiment.submit(script_run_config)
# Show run details
run
# -
# Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run).
# %%time
# Shows output of the run on stdout.
run.wait_for_completion(show_output=True)
# ### Option 2: Provision as a persistent compute target (Basic)
#
# You can provision a persistent AmlCompute resource by simply defining two parameters, thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continuously re-use the same target, debug it between jobs, or simply share the resource with other users of your workspace.
#
# * `vm_size`: VM family of the nodes provisioned by AmlCompute. Simply choose from the supported_vmsizes() above
# * `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute
# +
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
# -
# ### Configure & Run
# +
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
run_config = RunConfiguration(framework="python")
# Set compute target to AmlCompute target created in previous step
run_config.target = cpu_cluster.name
# enable Docker
run_config.environment.docker.enabled = True
azureml_pip_packages = [
'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret', 'azureml-dataprep'
]
# specify CondaDependencies obj
run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],
pip_packages=azureml_pip_packages)
from azureml.core import Run
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=project_folder,
script='train_explain.py',
run_config=run_config)
run = experiment.submit(config=src)
run
# -
# %%time
# Shows output of the run on stdout.
run.wait_for_completion(show_output=True)
run.get_metrics()
# ### Option 3: Provision as a persistent compute target (Advanced)
#
# You can also specify additional properties or change defaults while provisioning AmlCompute using a more advanced configuration. This is useful when you want a dedicated cluster of 4 nodes (for example you can set the min_nodes and max_nodes to 4), or want the compute to be within an existing VNet in your subscription.
#
# In addition to `vm_size` and `max_nodes`, you can specify:
# * `min_nodes`: Minimum nodes (default 0 nodes) to downscale to while running a job on AmlCompute
# * `vm_priority`: Choose between 'dedicated' (default) and 'lowpriority' VMs when provisioning AmlCompute. Low Priority VMs use Azure's excess capacity and are thus cheaper but risk your run being pre-empted
# * `idle_seconds_before_scaledown`: Idle time (default 120 seconds) to wait after run completion before auto-scaling to min_nodes
# * `vnet_resourcegroup_name`: Resource group of the **existing** VNet within which AmlCompute should be provisioned
# * `vnet_name`: Name of VNet
# * `subnet_name`: Name of SubNet within the VNet
# +
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
vm_priority='lowpriority',
min_nodes=2,
max_nodes=4,
idle_seconds_before_scaledown=300,
vnet_resourcegroup_name='<my-resource-group>',
vnet_name='<my-vnet-name>',
subnet_name='<my-subnet-name>')
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
# -
# ### Configure & Run
# +
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
run_config = RunConfiguration(framework="python")
# Set compute target to AmlCompute target created in previous step
run_config.target = cpu_cluster.name
# enable Docker
run_config.environment.docker.enabled = True
azureml_pip_packages = [
'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret', 'azureml-dataprep'
]
# specify CondaDependencies obj
run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],
pip_packages=azureml_pip_packages)
from azureml.core import Run
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=project_folder,
script='train_explain.py',
run_config=run_config)
run = experiment.submit(config=src)
run
# -
# %%time
# Shows output of the run on stdout.
run.wait_for_completion(show_output=True)
run.get_metrics()
# +
from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
client = ExplanationClient.from_run(run)
# Get the top k (e.g., 4) most important features with their importance values
explanation = client.download_model_explanation(top_k=4)
# -
# ## Additional operations to perform on AmlCompute
#
# You can perform more operations on AmlCompute such as updating the node counts or deleting the compute.
# get_status() gets the latest status of the AmlCompute target
cpu_cluster.get_status().serialize()
# update() takes in the min_nodes, max_nodes and idle_seconds_before_scaledown and updates the AmlCompute target
# cpu_cluster.update(min_nodes=1)
# cpu_cluster.update(max_nodes=10)
cpu_cluster.update(idle_seconds_before_scaledown=300)
# cpu_cluster.update(min_nodes=2, max_nodes=4, idle_seconds_before_scaledown=600)
# +
# delete() is used to deprovision and delete the AmlCompute target. Useful if you want to re-use the compute name
# 'cpu-cluster' in this case but use a different VM family for instance.
# cpu_cluster.delete()
# -
# ## Download
# 1. Download model explanation data.
# +
from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
# Get model explanation data
client = ExplanationClient.from_run(run)
global_explanation = client.download_model_explanation()
local_importance_values = global_explanation.local_importance_values
expected_values = global_explanation.expected_values
# -
# Or you can use the saved run.id to retrieve the feature importance values
client = ExplanationClient.from_run_id(ws, experiment_name, run.id)
global_explanation = client.download_model_explanation()
local_importance_values = global_explanation.local_importance_values
expected_values = global_explanation.expected_values
# Get the top k (e.g., 4) most important features with their importance values
global_explanation_topk = client.download_model_explanation(top_k=4)
global_importance_values = global_explanation_topk.get_ranked_global_values()
global_importance_names = global_explanation_topk.get_ranked_global_names()
print('global importance values: {}'.format(global_importance_values))
print('global importance names: {}'.format(global_importance_names))
# 2. Download model file.
# retrieve model for visualization and deployment
from azureml.core.model import Model
from sklearn.externals import joblib
original_model = Model(ws, 'model_explain_model_on_amlcomp')
model_path = original_model.download(exist_ok=True)
original_model = joblib.load(model_path)
# 3. Download test dataset.
# retrieve x_test for visualization
from sklearn.externals import joblib
x_test_path = './x_test_boston_housing.pkl'
run.download_file('x_test_boston_housing.pkl', output_file_path=x_test_path)
x_test = joblib.load('x_test_boston_housing.pkl')
# ## Visualize
# Load the visualization dashboard
from azureml.contrib.interpret.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, original_model, x_test)
# ## Next
# Learn about other use cases of the explain package on a:
# 1. [Training time: regression problem](../../tabular-data/explain-binary-classification-local.ipynb)
# 1. [Training time: binary classification problem](../../tabular-data/explain-binary-classification-local.ipynb)
# 1. [Training time: multiclass classification problem](../../tabular-data/explain-multiclass-classification-local.ipynb)
# 1. Explain models with engineered features:
# 1. [Simple feature transformations](../../tabular-data/simple-feature-transformations-explain-local.ipynb)
# 1. [Advanced feature transformations](../../tabular-data/advanced-feature-transformations-explain-local.ipynb)
# 1. [Save model explanations via Azure Machine Learning Run History](../run-history/save-retrieve-explanations-run-history.ipynb)
# 1. Inferencing time: deploy a classification model and explainer:
# 1. [Deploy a locally-trained model and explainer](../scoring-time/train-explain-model-locally-and-deploy.ipynb)
# 1. [Deploy a remotely-trained model and explainer](../scoring-time/train-explain-model-on-amlcompute-and-deploy.ipynb)
| how-to-use-azureml/explain-model/azure-integration/remote-explanation/explain-model-on-amlcompute.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="10OhbibAVvtr"
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
# + colab={"base_uri": "https://localhost:8080/", "height": 397} id="8hJUkVQOV8AM" outputId="3746b22f-9879-4cbc-c046-74dba81d6e02"
dataset = pd.read_csv('online_shoppers_intention.csv')
dataset.head()
# + id="FO_AdTjd9b9h"
lb = LabelEncoder()
dataset['Month'] = lb.fit_transform(dataset['Month'])
dataset.dropna(inplace=True)
print(dataset['Month'])
# + id="wYHc3-geWeUB"
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
# + id="kGgL0uxda47-"
le = LabelEncoder()
X[:, 16] = le.fit_transform(X[:, 16])
print(X[:,16])
# + id="b7gTvdvCBaC1"
le = LabelEncoder()
y = le.fit_transform(y)
print(y)
# + id="TEz-IzDpcOss"
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [15])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X[:,15])
# + id="YZmR-o6celDJ"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# + id="hlcdO39IhkMY"
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# -
import keras
from keras.models import Sequential #used to initialize the NN
from keras.layers import Dense #used to build the hidden Layers
from keras.layers import Dropout
np.shape(X_train)
# Creating the ANN layers
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=8, activation='selu'))
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
ann.compile(optimizer = 'adamax', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Print the training X and y
y_train
X_train
ann.fit(X_train, y_train, batch_size = 32, epochs = 150)
y_train.size
X_train.size
# probability that the visitor makes a purchase on the site or not,
# i.e., the probability of generating revenue or not
y_pred = ann.predict(X_test)
y_pred
# 0.5 is the threshold
y_pred = (y_pred > 0.50)
y_pred
# correctly classified: 1985 and 232
# misclassified: 103 and 144
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
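# As a sanity check, overall accuracy can be computed straight from the confusion matrix, using the counts noted in the comments above (your own run may give slightly different numbers):

```python
import numpy as np

# The diagonal (trace) counts correct predictions: true negatives + true positives.
cm_example = np.array([[1985, 103],
                       [144, 232]])
accuracy = np.trace(cm_example) / cm_example.sum()
print(round(accuracy, 3))  # -> 0.9
```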
import seaborn as sns
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ["{0:0.0f}".format(value) for value in
cm.flatten()]
group_percentages = ["{0:.2%}".format(value) for value in
cm.flatten()/np.sum(cm)]
labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in
zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(cm, annot=labels, fmt='', cmap='Blues')
| neural_network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic Regression
#
#
# Researchers are often interested in setting up a model to analyze the relationship between predictors (i.e., independent variables) and their corresponding response (i.e., dependent variable). Linear regression is commonly used when the response variable is continuous. One assumption of linear models is that the residual errors follow a normal distribution. This assumption fails when the response variable is categorical, so an ordinary linear model is not appropriate. This newsletter presents a regression model for a response variable that is dichotomous, i.e., having two categories. Examples are common: whether a plant lives or dies, whether a survey respondent agrees or disagrees with a statement, or whether an at-risk child graduates or drops out from high school.
#
# In ordinary linear regression, the response variable (Y) is a linear function of the coefficients (B0, B1, etc.) that correspond to the predictor variables (X1, X2, etc.,). A typical model would look like:
#
# Y = B0 + B1*X1 + B2*X2 + B3*X3 + … + E
#
# For a dichotomous response variable, we could set up a similar linear model to predict individual category memberships if numerical values are used to represent the two categories. Arbitrary values of 1 and 0 are chosen for mathematical convenience. Using the first example, we would assign Y = 1 if a plant lives and Y = 0 if a plant dies.
#
# This linear model does not work well for a few reasons. First, the response values, 0 and 1, are arbitrary, so modeling the actual values of Y is not exactly of interest. Second, it is the probability that each individual in the population responds with 0 or 1 that we are interested in modeling. For example, we may find that plants with a high level of a fungal infection (X1) fall into the category “the plant lives” (Y) less often than those plants with low level of infection. Thus, as the level of infection rises, the probability of plant living decreases.
#
# Thus, we might consider modeling P, the probability, as the response variable. Again, there are problems. Although the general decrease in probability is accompanied by a general increase in infection level, we know that P, like all probabilities, can only fall within the boundaries of 0 and 1. Consequently, it is better to assume that the relationship between X1 and P is sigmoidal (S-shaped), rather than a straight line.
#
# It is possible, however, to find a linear relationship between X1 and function of P. Although a number of functions work, one of the most useful is the logit function. It is the natural log of the odds that Y is equal to 1, which is simply the ratio of the probability that Y is 1 divided by the probability that Y is 0. The relationship between the logit of P and P itself is sigmoidal in shape. The regression equation that results is:
#
# ln[P/(1-P)] = B0 + B1*X1 + B2*X2 + …
#
# Although the left side of this equation looks intimidating, this way of expressing the probability results in the right side of the equation being linear and looking familiar to us. This helps us understand the meaning of the regression coefficients. The coefficients can easily be transformed so that their interpretation makes sense.
#
# The logistic regression equation can be extended beyond the case of a dichotomous response variable to the cases of ordered categories and polytomous categories (more than two categories).
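# To make the logit concrete, here is a small illustrative numpy sketch of the logit
# and its inverse, the sigmoid (the helper names below are ours, not from any library):

```python
import numpy as np

def logit(p):
    # natural log of the odds: ln(P / (1 - P))
    return np.log(p / (1 - p))

def sigmoid(z):
    # inverse of the logit: maps any real number back into (0, 1)
    return 1 / (1 + np.exp(-z))

p = np.array([0.1, 0.5, 0.9])
z = logit(p)          # unbounded, symmetric around 0
p_back = sigmoid(z)   # recovers the original probabilities
```

# Note how probabilities near 0 or 1 map to large negative or positive logits, which is
# what makes the right-hand side of the regression equation unbounded and linear.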
# # Mathematics behind Logistic Regression
# ## Notation
# The problem structure is the classic classification problem. Our data set $\mathcal{D}$ is composed of $N$ samples. Each sample is a tuple containing a feature vector and a label. For any sample $n$ the feature vector is a $d+1$ dimensional column vector denoted by ${\bf x}_n$ with $d$ real-valued components known as features. Samples are represented in homogeneous form with the first component equal to $1$: $x_0=1$. Vectors are bold-faced. The associated label is denoted $y_n$ and can take only two values: $+1$ or $-1$.
#
# $$
# \mathcal{D} = \lbrace ({\bf x}_1, y_1), ({\bf x}_2, y_2), ..., ({\bf x}_N, y_N) \rbrace \\
# {\bf x}_n = \begin{bmatrix} 1 & x_1 & ... & x_d \end{bmatrix}^T
# $$
# ## Learning Algorithm
# The learning algorithm is how we search the set of possible hypotheses (hypothesis space $\mathcal{H}$) for the best parameterization (in this case the weight vector ${\bf w}$). This search is an optimization problem looking for the hypothesis that optimizes an error measure.
# There is no closed-form solution as there is for least-squares linear regression, so we will use gradient descent instead. Specifically, we will use batch gradient descent, which calculates the gradient from all data points in the data set.
# Luckily, our "cross-entropy" error measure is convex so there is only one minimum. Thus the minimum we arrive at is the global minimum.
# Gradient descent is a general method that requires a differentiable (smooth) error surface. It iteratively updates the parameters using a first-order approximation of the error surface.
#
# $$
# {\bf w}_{i+1} = {\bf w}_i - \eta \; \nabla E_\text{in}({\bf w}_i)
# $$
# To learn we're going to minimize the following error measure using batch gradient descent.
#
# $$
# e(h({\bf x}_n), y_n) = \ln \left( 1+e^{-y_n \; {\bf w}^T {\bf x}_n} \right) \\
# E_\text{in}({\bf w}) = \frac{1}{N} \sum_{n=1}^{N} e(h({\bf x}_n), y_n) = \frac{1}{N} \sum_{n=1}^{N} \ln \left( 1+e^{-y_n \; {\bf w}^T {\bf x}_n} \right)
# $$
# We'll need the derivative of the point loss function and possibly some abuse of notation.
#
# $$
# \frac{d}{d{\bf w}} e(h({\bf x}_n), y_n)
# = \frac{-y_n \; {\bf x}_n \; e^{-y_n {\bf w}^T {\bf x}_n}}{1 + e^{-y_n {\bf w}^T {\bf x}_n}}
# = -\frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}^T {\bf x}_n}}
# $$
# With the point loss derivative we can determine the gradient of the in-sample error:
#
# $$
# \begin{align}
# \nabla E_\text{in}({\bf w})
# &= \frac{d}{d{\bf w}} \left[ \frac{1}{N} \sum_{n=1}^N e(h({\bf x}_n), y_n) \right] \\
# &= \frac{1}{N} \sum_{n=1}^N \frac{d}{d{\bf w}} e(h({\bf x}_n), y_n) \\
# &= \frac{1}{N} \sum_{n=1}^N \left( - \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}^T {\bf x}_n}} \right) \\
# &= - \frac{1}{N} \sum_{n=1}^N \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}^T {\bf x}_n}} \\
# \end{align}
# $$
# Our weight update rule per batch gradient descent becomes
#
# $$
# \begin{align}
# {\bf w}_{i+1} &= {\bf w}_i - \eta \; \nabla E_\text{in}({\bf w}_i) \\
# &= {\bf w}_i - \eta \; \left( - \frac{1}{N} \sum_{n=1}^N \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}_i^T {\bf x}_n}} \right) \\
# &= {\bf w}_i + \eta \; \left( \frac{1}{N} \sum_{n=1}^N \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}_i^T {\bf x}_n}} \right) \\
# \end{align}
# $$
#
# where $\eta$ is our learning rate.
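# The derivation above translates almost line-for-line into code. A minimal numpy sketch
# (the helper names are ours; labels are +1/-1 and the first column of X is the constant 1):

```python
import numpy as np

def in_sample_error(w, X, y):
    # E_in(w) = (1/N) * sum ln(1 + exp(-y_n w^T x_n))
    return np.mean(np.log(1 + np.exp(-y * (X @ w))))

def gradient(w, X, y):
    # grad E_in(w) = -(1/N) * sum y_n x_n / (1 + exp(y_n w^T x_n))
    return -np.mean((y / (1 + np.exp(y * (X @ w))))[:, None] * X, axis=0)

def batch_gradient_descent(X, y, eta=0.5, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = w - eta * gradient(w, X, y)   # the weight update rule above
    return w

# toy separable data: x0 = 1 (homogeneous coordinate) plus one real feature
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, -2.0], [1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = batch_gradient_descent(X, y)
```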
# ### Enough with the theory; now let's jump to the implementation. We will look at two libraries for this.
# ## Logistic Regression with statsmodel
# We'll be using the same dataset as UCLA's Logit Regression tutorial to explore logistic regression in Python. Our goal will be to identify the various factors that may influence admission into graduate school.
#
# The dataset contains several columns which we can use as predictor variables:
#
# * gpa
# * gre score
# * rank or prestige of an applicant's undergraduate alma mater
# * The fourth column, admit, is our binary target variable. It indicates whether or not a candidate was admitted.
# +
import numpy as np
import pandas as pd
import pylab as pl
import statsmodels.api as sm
# -
df = pd.read_csv("binary.csv")
#df = pd.read_csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
df.head()
#RENAMING THE RANK COLUMN
df.columns = ["admit", "gre", "gpa", "prestige"]
df.head()
#df.shape
# ### Summary Statistics & Looking at the data
# Now that we've got everything loaded into Python and named appropriately, let's take a look at the data. We can use the pandas `describe` function to get a summarized view of everything. There's also a function for calculating the standard deviation, `std`.
#
# A feature I really like in pandas is the pivot_table/crosstab aggregations. crosstab makes it really easy to do multidimensional frequency tables. You might want to play around with this to look at different cuts of the data.
pd.crosstab(df["admit"],df["prestige"],rownames = ["admit"])
df.hist()
pl.show()
# ### dummy variables
# pandas gives you a great deal of control over how categorical variables can be represented. We're going to dummify the "prestige" column using get_dummies.
#
# get_dummies creates a new DataFrame with binary indicator variables for each category/option in the column specified. In this case, prestige has four levels: 1, 2, 3 and 4 (1 being most prestigious). When we call get_dummies, we get a dataframe with four columns, each of which describes one of those levels.
dummy_ranks = pd.get_dummies(df["prestige"], prefix = "prestige")
dummy_ranks.head()
# CREATING A CLEAN DATA FRAME
cols_to_keep = ["admit", "gre", "gpa"]
data = df[cols_to_keep].join(dummy_ranks.loc[:,"prestige_2":])
data.head()
# Once that's done, we merge the new dummy columns with the original dataset and get rid of the prestige column which we no longer need.
#
# Lastly we're going to add a constant term for our logistic regression. The statsmodels function we would use requires intercepts/constants to be specified explicitly.
#
# ### Performing the regression
# Actually doing the logistic regression is quite simple. Specify the column containing the variable you're trying to predict followed by the columns that the model should use to make the prediction.
#
# In our case we'll be predicting the admit column using gre, gpa, and the prestige dummy variables prestige_2, prestige_3 and prestige_4. We're going to treat prestige_1 as our baseline and exclude it from our fit. This is done to prevent multicollinearity, or the dummy variable trap caused by including a dummy variable for every single category.
#ADDING THE INTERCEPT MANUALLY
data["intercept"] = 1.0
data.head()
# +
train_cols = data.columns[1:]
logit = sm.Logit(data["admit"], data[train_cols])
# -
results = logit.fit()
# Since we're doing a logistic regression, we're going to use the statsmodels Logit function. For details on other models available in statsmodels, check out the statsmodels documentation.
#
# ### Interpreting the results
# One of my favorite parts about statsmodels is the summary output it gives. If you're coming from R, I think you'll like the output and find it very familiar too.
# predict admission probability for a hypothetical applicant; the order matches
# train_cols: [gre, gpa, prestige_2, prestige_3, prestige_4, intercept]
ironman = results.predict([800,4,0,0,0,1.0])
print(ironman)
results.summary()
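# The coefficients in the summary are on the log-odds scale; exponentiating them gives
# odds ratios, which are easier to interpret: a one-unit increase in a predictor
# multiplies the odds by exp(coef). A sketch with hypothetical coefficient values:

```python
import numpy as np

# hypothetical coefficients for gre, gpa, prestige_2 (not the fitted values above)
coefs = np.array([0.0023, 0.8040, -0.6754])
odds_ratios = np.exp(coefs)
# a negative coefficient (prestige_2 here) gives an odds ratio below 1, i.e.
# lower odds of admission than the prestige_1 baseline
```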
# File: CODE_FILES/AiRobosoft/Machine Learning/Project_2/logistics_regression.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Mahalanobis distance for anomaly detection
#
# The Mahalanobis distance is a standard method for measuring the distance of a datapoint from a multivariate Gaussian distribution, so it is worth checking that the base distribution is in fact approximately normal. In the case of anomaly detection, the base (training) distribution is built only from known non-anomalous datapoints.
#
# $$ MD = \sqrt{(x-\mu)^T S^{-1} (x-\mu)} $$
import numpy as np
import matplotlib.pyplot as plt
# generating the base data
normal_x = np.random.multivariate_normal([0,0],[[1,0.7],[0.7,1]],size=1100)
normal_x_test = normal_x[1000:,:]
normal_x = normal_x[:1000,:]
print(normal_x.shape,normal_x_test.shape)
anomaly_x = np.array([[0,2.8],[1,3.2],[2,3.6],[0,-2.9],[-1.5,-3.5],[-1.6,1.7],[2.9,0],[3.1,1],[-3,-0.5],[2.3,-1.5],[-1.5,1.9],[3.5,3.1],[2.3,-1],[3.2,3.25],[-3.5,-3.4],[1,-2.5],[-2.5,0.75],[-1,-3.4],[0.5,3.65],[-3.9,-3.4]])
print(anomaly_x.shape)
plt.scatter(normal_x[:,0],normal_x[:,1],c="c",alpha=0.8,label='train')
plt.scatter(normal_x_test[:,0],normal_x_test[:,1],c="k",label='normal')
plt.scatter(anomaly_x[:,0],anomaly_x[:,1],c="m",label='anomaly')
plt.legend()
plt.show()
mu = np.mean(normal_x,axis=0)
mu = np.expand_dims(mu,axis=-1)
cov = np.cov(normal_x.T)
print(mu)
print("---------------------")
print(cov)
def get_md(all_x,mu,cov):
    """ returns the Mahalanobis distance of each point in all_x
    """
    cov_inv = np.linalg.inv(cov)  # the formula uses the inverse covariance S^-1
    all_distances = []
    for x in all_x:
        x = np.expand_dims(x,axis=-1)
        distance = np.sqrt(((x-mu).T.dot(cov_inv)).dot(x-mu))
        all_distances.append(float(distance))
    return all_distances
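# The same quantity can also be computed without a Python loop. A vectorized sketch
# with `np.einsum`, applying the inverse covariance and the square root as in the formula:

```python
import numpy as np

def get_md_vectorized(all_x, mu, cov):
    # Mahalanobis distance of every row of all_x, without a Python loop
    diff = all_x - np.ravel(mu)        # shape (N, d)
    cov_inv = np.linalg.inv(cov)       # S^-1 in the formula
    md_sq = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return np.sqrt(md_sq)

# with identity covariance the Mahalanobis distance reduces to the Euclidean norm
d = get_md_vectorized(np.array([[3.0, 4.0]]), np.zeros((2, 1)), np.eye(2))
```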
anomaly_distances = get_md(anomaly_x,mu,cov)
normal_distances = get_md(normal_x_test,mu,cov)
plt.hist(normal_distances,bins=15,color="c",label="normal dist.")
plt.hist(anomaly_distances,bins=20,color="m",label="anomaly dist.")
plt.legend()
plt.show()
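# The histograms suggest the two groups separate well. A simple way to turn distances
# into an anomaly decision is to threshold at a high quantile of the normal (training)
# distances; the 97.5% cutoff below is an illustrative choice, not a universal rule:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in distances: sqrt of chi-square(2) mimics 2-D Gaussian Mahalanobis distances
normal_d = np.sqrt(rng.chisquare(df=2, size=1000))
anomaly_d = np.array([4.5, 5.2, 6.0])              # illustrative anomaly distances

threshold = np.quantile(normal_d, 0.975)           # flag the top 2.5% as anomalous
flags = anomaly_d > threshold
```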
# File: anomaly_detection/Mahalanobis_distance.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
mushroom=pd.read_csv('mushrooms.csv')
mushroom.head(5)
encoder=LabelEncoder()
mushroom_encode=mushroom.apply(encoder.fit_transform,axis=0)
mushroom_encode.head(5)
mushroom_encode.values
mushroom_encode.values.shape
mushroom_np=mushroom_encode.values # Turning everything into a numpy array
X=mushroom_np[:,1:]
y=mushroom_np[:,0]
y.shape
X.shape
X
y
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.20, random_state=42)
len(X_train), len(X_test)
len(y_train), len(y_test)
len(X_test)
len(y_test)
# $$p(A|B)=\frac{p(B|A)p(A)}{p(B)}$$
# + P(A) = the prior, is the initial degree of belief in A.
# + P(A|B) = the posterior, is the degree of belief after incorporating news
# that B is true.
# + P(B|A) = the likelihood, that can be estimated from the training data.
#
def prior(y_train,label):# p(y_train == label)=probability that y_train has label 0/1
total=y_train.shape[0]
true_labels=np.sum(y_train == label)
return true_labels/total
y_train
np.sum(y_train == 1)
prior(y_train,0)
def likelihood(X_train, y_train,col_num,col_val,label): # p(X|y_train == label)=p(x intersect y_train == label)/p(y_train == label)
X_filter=X_train[y_train == label]
num=np.sum(X_filter[:,col_num]==col_val)
denum=X_filter.shape[0]
return num/denum
likelihood(X_train,y_train,2,4,1)
X_filter=X_train[y_train == 1]
X_filter[:,1]==2
def predict(X_train,y_train, x_test):
classes=np.unique(y_train)
number_feat=X_train.shape[1]
post_prob=[]
for l in classes:
like=1.0
for i in range(number_feat):
cond=likelihood(X_train,y_train,i,x_test[i],l)#/(X_train.shape[0])
like=like*cond
pro=prior(y_train,l)
post=like*pro
post_prob.append(post)
pred=np.argmax(post_prob)
return pred
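# Multiplying raw likelihoods can underflow with many features, and a single zero count
# wipes out a whole class. A common remedy (not used in predict above) is to work in log
# space with Laplace smoothing; a sketch with our own helper name:

```python
import numpy as np

def predict_log(X_train, y_train, x_test, alpha=1.0):
    # same decision rule as predict(), but summing logs instead of multiplying,
    # with add-alpha (Laplace) smoothing so no likelihood is ever exactly zero
    classes = np.unique(y_train)
    scores = []
    for l in classes:
        Xc = X_train[y_train == l]
        log_post = np.log(Xc.shape[0] / y_train.shape[0])      # log prior
        for i in range(X_train.shape[1]):
            count = np.sum(Xc[:, i] == x_test[i])
            n_vals = len(np.unique(X_train[:, i]))
            log_post += np.log((count + alpha) / (Xc.shape[0] + alpha * n_vals))
        scores.append(log_post)
    return classes[np.argmax(scores)]
```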
X_test[2]
X_test[2][3]
predict(X_train,y_train, X_test[4])
def accuracy(X_train,y_train,X_test,y_test):
pred=[]
for i in range(X_test.shape[0]):
p=predict(X_train,y_train,X_test[i])
pred.append(p)
y_pred=np.array(pred)
acc=np.sum(y_pred == y_test)/(y_test.shape[0])
return acc
accuracy(X_train,y_train,X_test,y_test)
# File: Twitter Sentiment Analysis/Mushrooms-Copy1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text Processing Exercise
#
# In this exercise, you will learn some building blocks for text processing. You will learn how to normalize, tokenize, stem, and lemmatize tweets from Twitter.
# ### Fetch Data from the online resource
#
# First, we will use the `get_tweets()` function from the `exercise_helper` module to get all the tweets from the following Twitter page https://twitter.com/AIForTrading1. This website corresponds to a Twitter account created especially for this course. This website contains 28 tweets, and our goal will be to get all these 28 tweets. The `get_tweets()` function uses the `requests` library and BeautifulSoup to get all the tweets from our website. In a later lesson we will learn how to use the `requests` library and BeautifulSoup to get data from websites. For now, we will just use this function to help us get the tweets we want.
# +
import exercise_helper
all_tweets = exercise_helper.get_tweets()
print(all_tweets)
# -
# ### Normalization
# Text normalization is the process of transforming text into a single canonical form.
#
# There are many normalization techniques; in this exercise we focus on two. First, we'll convert the text into lowercase, and second, we'll remove all the punctuation characters from the text.
#
# #### TODO: Part 1
#
# Convert text to lowercase.
#
# Use the Python built-in method `.lower()` for converting each tweet in `all_tweets` into the lower case.
# your code goes here
all_tweets = [tweet.lower() for tweet in all_tweets]
all_tweets
# #### Part 2
#
# Here, we are using `Regular Expression` library to remove punctuation characters.
#
# The easiest way to remove specific punctuation characters is with regex, the `re` module. You can sub out specific patterns with a space:
#
# ```python
# re.sub(pattern, ' ', text)
# ```
#
# This will substitute a space anywhere the pattern matches in the text.
#
# Pattern for punctuation is the following `[^a-zA-Z0-9]`.
# +
import re
for i, tweet in enumerate(all_tweets):
    all_tweets[i] = re.sub(r'[^a-zA-Z0-9]', ' ', tweet)
print(all_tweets)
# -
# ### NLTK: Natural Language ToolKit
#
# NLTK is a leading platform for building Python programs to work with human language data. It has a suite of tools for classification, tokenization, stemming, tagging, parsing, and semantic reasoning.
#
# Let's import NLTK.
import os
import nltk
nltk.data.path.append(os.path.join(os.getcwd(), "nltk_data"))
# #### TODO: Part 1
#
# NLTK has `TweetTokenizer` method that splits tweets into tokens.
#
# This makes tokenizing tweets much easier and faster.
#
# For `TweetTokenizer`, you can pass the following argument `(preserve_case= False)` to make your tokens in lower case. In the cell below tokenize each tweet in `all_tweets`
# +
from nltk.tokenize import TweetTokenizer
# your code goes here
tknzr = TweetTokenizer(preserve_case= False)
for tweet in all_tweets:
print(tknzr.tokenize(tweet))
# -
# #### Part 2
#
# NLTK adds more modularity for tokenization.
#
# For example, stop words are words which do not carry significant meaning for text analysis. They are repetitive words such as "the", "and", "if", etc. Ideally, we want to remove these words from our tokenized lists.
#
# NLTK has a list of these words, `nltk.corpus.stopwords`, which you actually need to download through `nltk.download`.
#
# Let's print out stopwords in English to see what these words are.
from nltk.corpus import stopwords
nltk.download("stopwords")
# ### TODO:
#
# print stop words in English
# your code is here
print(stopwords.words('english'))
# #### TODO: Part 3
#
# In the cell below use the `.split()` method to split each tweet into a list of words and remove the stop words from all the tweets.
## your code is here
for tweet in all_tweets:
words = tweet.split()
print([w for w in words if w not in stopwords.words("english")])
# ### Stemming
# Stemming is the process of reducing words to their word stem, base or root form.
#
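# For a quick illustration of what a stem looks like (`PorterStemmer` needs no extra
# NLTK data downloads):

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["running", "studies", "trading"]])
```

# Note that a stem such as "studi" need not be a dictionary word; stemming is a crude
# rule-based chop, which is what lemmatization (below) improves on.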
# ### TODO:
#
# In the cell below, use the `PorterStemmer` method from the ntlk library to perform stemming on all the tweets
# +
from nltk.stem.porter import PorterStemmer
# your code goes here
for tweet in all_tweets:
words = tweet.split()
new_words = [w for w in words if w not in stopwords.words("english")]
print([PorterStemmer().stem(w) for w in new_words])
# -
# ### Lemmatizing
# #### Part 1
#
# Lemmatization is the process of grouping together the inflected forms of a word so they can be analyzed as a single item.
#
# For reducing the words into their root form, you can use `WordNetLemmatizer()` method.
#
# For more information about lemmatzing in NLTK, please take a look at NLTK documentation https://www.nltk.org/api/nltk.stem.html
#
# If you like to understand more about Stemming and Lemmatizing, take a look at the following source:
# https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html
nltk.download('wordnet') ### download this part
# ### TODO:
#
# In the cell below, use the `WordNetLemmatizer()` method to lemmatize all the tweets
# +
from nltk.stem.wordnet import WordNetLemmatizer
# your code goes here
for tweet in all_tweets:
words = tweet.split()
new_words = [w for w in words if w not in stopwords.words("english")]
print([WordNetLemmatizer().lemmatize(w) for w in new_words])
# -
# #### TODO: Part 2
#
# In the cell below, lemmatize verbs by specifying `pos`. For `WordNetLemmatizer().lemmatize` add `pos` as an argument.
# +
from nltk.stem.wordnet import WordNetLemmatizer
# your code goes here
for tweet in all_tweets:
words = tweet.split()
new_words = [w for w in words if w not in stopwords.words("english")]
print([WordNetLemmatizer().lemmatize(w, pos='v') for w in new_words])
# -
# # Solution
#
# [Solution notebook](process_tweets_solution.ipynb)
# File: Text Processing/process_tweets.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Read and take a look at the the datafile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
df_calendar = pd.read_csv('./calendar.csv')
df_listing = pd.read_csv('./listings.csv')
df_reviews = pd.read_csv('./reviews.csv')
df_calendar.head()
df_listing.head()
df_reviews.head()
# # How many Airbnb listings and hosts are in the Seattle area
list_num = df_calendar.listing_id.nunique()
host_num = df_listing['host_id'].nunique()
print(list_num, host_num)
# # Clean the data from df_calendar
# +
#for the price column, drop NaN values
df_calendar = df_calendar.dropna(subset=['price'])
# drop the $ sign and comma from the price column and then convert the data into numerical values
def clean_price_data (df, col_price):
'''
INPUT
df - pandas DataFrame
col_price - the column that contains price information
OUTPUT
df - cleaned dataset with data in col_price column changed to numerical values
'''
df[col_price] = pd.to_numeric(df[col_price].apply(lambda x: str(x).replace('$','').replace(',','')),errors='coerce')
return df
clean_price_data(df_calendar, 'price')
# -
df_calendar.head()
# # Average price by date
df_calendar['date'] = pd.to_datetime(df_calendar['date'])
plt.plot(df_calendar.groupby('date')['price'].mean())
plt.ylabel('Average price, $')
plt.xlabel('Dates')
# # Average price by month
df_calendar['month'] = df_calendar['date'].dt.month
plt.plot(df_calendar.groupby(['month'])['price'].mean())
plt.ylabel('Average price, $')
plt.xlabel('Month')
# # Average price by weekdays
import calendar
df_calendar['weekday'] = df_calendar['date'].dt.weekday
plt.plot(df_calendar.groupby(['weekday'])['price'].mean())
weekday_map= ['MON','TUE', 'WED','THU','FRI','SAT','SUN']
plt.ylabel('Average price, $')
plt.xlabel('Weekday')
plt.xticks(np.arange(7),weekday_map);
# # Number of available listings by month
# +
plt.plot(df_calendar.groupby('month')['listing_id'].nunique())
plt.ylabel('Number of listings')
plt.xlabel('Month')
# -
#Convert t in column 'available' to 1
df_calendar['available'] = df_calendar['available'].apply(lambda x: 1 if x == 't' else x)
# # Average listing prices
plt.hist(df_calendar.groupby('listing_id')['price'].mean(),bins=20)
plt.ylabel('Number of listings')
plt.xlabel('Price, $')
# what percentage list price lower than $150/night
(df_calendar.groupby('listing_id')['price'].mean()<150).mean()
# # Take a look at the listing descriptions word cloud
from wordcloud import WordCloud, STOPWORDS
description = ' '.join(df_listing['description'])
name_wordcloud = WordCloud(stopwords = STOPWORDS, background_color = 'white', height = 2000, width = 4000).generate(description)
plt.figure(figsize = (16,8))
plt.imshow(name_wordcloud)
plt.axis('off')
plt.show()
# word cloud for most used words in guest comments
df_reviews = df_reviews.dropna(subset=['comments'])
reviews = ' '.join(df_reviews['comments'])
name_wordcloud = WordCloud(stopwords = STOPWORDS, background_color = 'white', height = 2000, width = 4000).generate(reviews)
plt.figure(figsize = (16,8))
plt.imshow(name_wordcloud)
plt.axis('off')
plt.show()
# # Clean the data from df_listing
# column= [['id', 'listing_url', 'scrape_id', 'last_scraped', 'name', 'summary',
# 'space', 'description', 'experiences_offered', 'neighborhood_overview',
# 'notes', 'transit', 'thumbnail_url', 'medium_url', 'picture_url',
# 'xl_picture_url', 'host_id', 'host_url', 'host_name', 'host_since',
# 'host_location', 'host_about', 'host_response_time',
# 'host_response_rate', 'host_acceptance_rate', 'host_is_superhost',
# 'host_thumbnail_url', 'host_picture_url', 'host_neighbourhood',
# 'host_listings_count', 'host_total_listings_count',
# 'host_verifications', 'host_has_profile_pic', 'host_identity_verified',
# 'street', 'neighbourhood', 'neighbourhood_cleansed',
# 'neighbourhood_group_cleansed', 'city', 'state', 'zipcode', 'market',
# 'smart_location', 'country_code', 'country', 'latitude', 'longitude',
# 'is_location_exact', 'property_type', 'room_type', 'accommodates',
# 'bathrooms', 'bedrooms', 'beds', 'bed_type', 'amenities', 'square_feet',
# 'price', 'weekly_price', 'monthly_price', 'security_deposit',
# 'cleaning_fee', 'guests_included', 'extra_people', 'minimum_nights',
# 'maximum_nights', 'calendar_updated', 'has_availability',
# 'availability_30', 'availability_60', 'availability_90',
# 'availability_365', 'calendar_last_scraped', 'number_of_reviews',
# 'first_review', 'last_review', 'review_scores_rating',
# 'review_scores_accuracy', 'review_scores_cleanliness',
# 'review_scores_checkin', 'review_scores_communication',
# 'review_scores_location', 'review_scores_value', 'requires_license',
# 'license', 'jurisdiction_names', 'instant_bookable',
# 'cancellation_policy', 'require_guest_profile_picture',
# 'require_guest_phone_verification', 'calculated_host_listings_count',
# 'reviews_per_month']
df_listing.info()
# +
#remove rows where price is NaN
df_listing = df_listing.dropna(subset=['price'])
# Rename the column id to listing_id to keep consistent with df_calendar
df_listing = df_listing.rename(columns = {'id':'listing_id'})
# drop the $ sign and comma from all columns associated with price and then convert the data into numerical values
price_col = ['price','weekly_price','monthly_price','security_deposit','cleaning_fee','extra_people']
for col in price_col:
clean_price_data(df_listing, col)
# Fill in some missing values #
def fillna_with_mode (df, col):
'''
INPUT
dataframe
column
OUTPUT
column - with NaN filled by mode
'''
df[col] = df[col].fillna(df[col].mode().iloc[0])
return df[col]
#fill in missing values for bathrooms, bedrooms and beds with mode
col_fill_mode = ['bathrooms','bedrooms','beds','host_listings_count']
for col in col_fill_mode:
fillna_with_mode(df_listing, col)
def fillna_with_mean (df, col):
'''
INPUT
dataframe
column
OUTPUT
column - with NaN filled by mean
'''
df[col] = df[col].fillna(df[col].mean())
return df[col]
col_reviews = ['review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness',
'review_scores_checkin', 'review_scores_communication', 'review_scores_location',
'review_scores_value','reviews_per_month']
for col in col_reviews:
fillna_with_mean(df_listing,col)
# +
# Change property type to major types or other
major_property = ['House','Apartment','Townhouse','Condominium','Loft','Bed & Breakfast']
def encode_property (property_type):
if property_type not in major_property:
return 'Other'
return property_type
df_listing['property_type'] = df_listing['property_type'].apply(encode_property)
#replace t and f in columns to True and False:
def t_to_true (var):
if var == 't':
return True
if var == 'f':
return False
t_or_f_col = ['host_has_profile_pic','host_identity_verified','instant_bookable',
'is_location_exact','require_guest_profile_picture','require_guest_phone_verification']
for col in t_or_f_col:
df_listing[col] = df_listing[col].apply(t_to_true)
# Change 'host_since' column dtype from object to datetime, and only save years
df_listing['host_since'] = pd.to_datetime(df_listing['host_since'])
df_listing['host_since'] = df_listing['host_since'].dt.year
fillna_with_mean(df_listing, 'host_since')
#convert 'host_response_rate' into number
df_listing['host_response_rate'] = pd.to_numeric(df_listing['host_response_rate'].apply(lambda x: str(x).replace('%','')),errors='coerce')
fillna_with_mean(df_listing, 'host_response_rate')
# change'extra_people' column to True if charged and False otherwise
def if_charge (extra_charge):
if extra_charge == 0:
return False
else:
return True
df_listing['extra_people'] = df_listing['extra_people'].apply(if_charge)
# +
# Preprocess 'amenities' column: extract features from 'amenities' column, and replace True and False for each feature
#replace empty {} with ''
df_listing.loc[df_listing['amenities'] == '{}','amenities']=''
#Remove the symbols and split the list by ','
df_listing['amenities'] = df_listing['amenities'].apply(lambda x: set(x.replace('[', '').replace("'", '').replace("]", '').replace('"', '').replace('{', '').replace('}', '').split(',')))
# Get a set of all items in amentities
all_amenities = set()
for i in range(len(df_listing)):
items = df_listing.loc[i, 'amenities']
all_amenities = all_amenities.union(set(items))
# -
# all_amenities = {'',
# '24-Hour Check-in',
# 'Air Conditioning',
# 'Breakfast',
# 'Buzzer/Wireless Intercom',
# 'Cable TV',
# 'Carbon Monoxide Detector',
# 'Cat(s)',
# 'Dog(s)',
# 'Doorman',
# 'Dryer',
# 'Elevator in Building',
# 'Essentials',
# 'Family/Kid Friendly',
# 'Fire Extinguisher',
# 'First Aid Kit',
# 'Free Parking on Premises',
# 'Gym',
# 'Hair Dryer',
# 'Hangers',
# 'Heating',
# 'Hot Tub',
# 'Indoor Fireplace',
# 'Internet',
# 'Iron',
# 'Kitchen',
# 'Laptop Friendly Workspace',
# 'Lock on Bedroom Door',
# 'Other pet(s)',
# 'Pets Allowed',
# 'Pets live on this property',
# 'Pool',
# 'Safety Card',
# 'Shampoo',
# 'Smoke Detector',
# 'Smoking Allowed',
# 'Suitable for Events',
# 'TV',
# 'Washer',
# 'Washer / Dryer',
# 'Wheelchair Accessible',
# 'Wireless Internet'}
# ### For items in the amenities list, since there are so many potential predictors, I chose a few amenities such as internet that I think are more vital than amenities such as having an iron. Of course, customers have different preferences. This is a demo to show that I can convert the amenities list into individual feature columns and use them for predictions.
# +
# Choose some items in amenities that are used to predict prices
amenities_pred = ['Internet','Kitchen','Free Parking on Premises','Family/Kid Friendly','Washer / Dryer','Wheelchair Accessible']
#Add new boolean columns for amenities features that are used to predict prices
for item in amenities_pred:
df_listing[item] = df_listing['amenities'].apply(lambda x: item in x)
# -
# # Price correlation with some selected numerical columns
# ### For categorical features: 1) I removed all the features that are absolutely irrelevant for predicting the prices, such as id and the various url columns. 2) For similar features related to each other, I chose one among them. For example, I chose 'neighbourhood_group_cleansed' over other features relating to geographical information. 3) I also removed features that have more than 80% missing values, such as square_feet.
# +
#Select factors for predicting prices
pred_cols = ['host_since', 'host_is_superhost','host_identity_verified',
'host_has_profile_pic','market', 'property_type', 'neighbourhood_group_cleansed',
'room_type', 'accommodates', 'bathrooms', 'bedrooms',
'beds', 'bed_type', 'price', 'guests_included','minimum_nights',
'extra_people', 'number_of_reviews', 'require_guest_profile_picture', 'require_guest_phone_verification',
'review_scores_rating', 'instant_bookable', 'cancellation_policy','reviews_per_month'] + amenities_pred
df_listing = df_listing[pred_cols]
# +
#Select numerical columns to find out correlations
num_col = df_listing.select_dtypes(include=['int64','float64'])
#Plot heatmap
df_map = num_col
sns.heatmap(df_map.corr(), square=True,annot=True, fmt = '.2f');
# -
# # Use machine learning to predict price correlations
# +
# Dummy the categorical variables
cat_vars = df_listing.select_dtypes(include=['object']).copy().columns
for var in cat_vars:
df_listing = pd.concat([df_listing.drop(var, axis=1), pd.get_dummies(df_listing[var], prefix=var, prefix_sep='_', drop_first=True)], axis=1)
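# A tiny illustration of what `get_dummies` with `drop_first=True` does (hypothetical
# values): one of the k category levels is dropped, leaving k-1 indicator columns and
# avoiding perfect multicollinearity among the dummies.

```python
import pandas as pd

toy = pd.DataFrame({"room_type": ["Entire home", "Private room", "Shared room"]})
dummies = pd.get_dummies(toy["room_type"], prefix="room_type", prefix_sep="_", drop_first=True)
print(dummies.columns.tolist())
```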
# +
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
#Test and Train split data
y = df_listing['price']
X = df_listing.drop(columns = 'price', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y , test_size=0.1, random_state=42)
# +
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=100,
random_state=42,
n_jobs=-1)
forest.fit(X_train, y_train)
#calculate scores for the model
y_train_preds = forest.predict(X_train)
y_test_preds = forest.predict(X_test)
print('Random Forest MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_preds),
mean_squared_error(y_test, y_test_preds)))
print('Random Forest R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_preds),
r2_score(y_test, y_test_preds)))
# +
#get feature importances from the model
headers = ["name", "score"]
values = sorted(zip(X_train.columns, forest.feature_importances_), key=lambda x: x[1],reverse=True)
forest_feature_importances = pd.DataFrame(values, columns = headers)
features = forest_feature_importances['name'][:15]
y_pos = np.arange(len(features))
scores = forest_feature_importances['score'][:15]
#plot feature importances
plt.barh(y_pos,scores)
plt.yticks(y_pos, features)
plt.xlabel('Score')
plt.title('Feature importances (Random Forest)')
plt.show()
# +
fig, ax = plt.subplots()
ax.scatter(y_test, y_test_preds, edgecolors=(0, 0, 0))
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'k--', lw=4)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')
ax.set_title("Ground Truth vs Predicted")
plt.show()
# -
# # Removing redundant features
# +
import scipy
from scipy.cluster import hierarchy as hc
corr = np.round(scipy.stats.spearmanr(df_listing).correlation,4)
corr_condensed = hc.distance.squareform(1-corr)
z=hc.linkage(corr_condensed, method = 'average')
fig = plt.figure(figsize=(16,10))
dendrogram = hc.dendrogram(z,labels=df_listing.columns,orientation='left')
plt.show()
# +
# beds and accommodates are highly correlated
# remove the feature 'beds', keep only the most important features, and rerun the ML model
pred_cols = ['host_since', 'host_is_superhost',
'property_type', 'neighbourhood_group_cleansed',
'room_type', 'bathrooms', 'bedrooms',
'accommodates', 'price', 'guests_included','minimum_nights',
'extra_people', 'number_of_reviews',
'review_scores_rating', 'cancellation_policy','reviews_per_month'] + amenities_pred
df_listing = df_listing[pred_cols]
# -
df_listing.head()
| AirbnbView.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/XavierCarrera/Blood-Stream-App/blob/master/Limpeza_datos_Bloodstream.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="LUDmONFyreue" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="154d41dd-a6b8-44b1-bad9-ecaffb8bb178"
# Libraries
import pandas as pd
import numpy as np
import seaborn as sns
# + id="C8J6w20CsqWX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4331480e-6b40-4a48-816d-23195acb380d"
# Mount Google Drive to access the file
from google.colab import drive
drive.mount('/content/drive/')
# + id="ET3HA3YiszZZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="49b98977-e1f5-462c-ffc9-c040f63961d5"
# %cd '/content/drive/My Drive/Colab Notebooks/db'
# !ls
# + id="gXmQxrYXtBzH" colab_type="code" colab={}
# Create the dataframe
df = pd.read_csv('appstore_games.csv')
# + id="C3Qj6z-nte-s" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="6bbd3949-133f-4774-f378-4c382b298b30"
# Check the columns
df.columns
# + id="h2aHX-sXt7lK" colab_type="code" colab={}
# Drop unnecessary columns
df = df.drop(['Price', 'In-app Purchases'], axis = 1)
# + id="0KISIZlzu5V9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="f74e6b59-1fc0-42ca-ff52-1a65eccd836f"
df.columns
# + id="5pKJhCrovBpP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="f4aba5a4-d7a9-454b-d5c7-48c9fe4eb2c6"
# Check NA counts and NA percentages
df.isna().sum(axis=0)/len(df)*100
# + id="FUmP6LeXwk0y" colab_type="code" colab={}
# Drop columns with more than 50% null values
df = df.drop(['Subtitle'], axis = 1)
# + id="TrasUmnhxb18" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="9a1f3d70-935b-4f30-9973-b0f437e9f885"
df.columns
# + id="uWWhDT2VyWk3" colab_type="code" colab={}
df = df.drop_duplicates()
# + id="8JNABHq80C7b" colab_type="code" colab={}
# Drop empty rows, using the column index
df = df.drop(df[df.Languages == ''].index)
# + id="z6v-d9mh0j_9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="c6200282-157f-4e6e-e9e7-2726a4a47121"
df = df.drop(df[df.Size == ''].index)
# + id="YWhWrk0X08R6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="46b4f5e9-762a-403f-b275-8e5d202a4c12"
df
# + id="ZlrblZp61O6J" colab_type="code" colab={}
# + colab_type="code" id="c-88oW9V13Uk" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="f6358567-c834-4b31-92a6-6238cd95a0bd"
# Check NA counts and NA percentages
df.isna().sum(axis=0)/len(df)*100
# + id="1zwlE8x4qm8Q" colab_type="code" colab={}
df = df.drop(['Average User Rating', 'User Rating Count'], axis = 1)
# + id="lKb2TfJiEBWA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="dc542e8b-2c1e-4c69-f414-2d0a4d726bc3"
df.columns
# + id="6BI2rPsYESiw" colab_type="code" colab={}
spec_chars = ['"']
for char in spec_chars:
df['Name'] = df['Name'].str.replace(char, '')
df['Description'] = df['Description'].str.replace(char, '')
# + id="yn7rdJaRGbG3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="33d1e20f-62cf-4630-9aa0-7faf6198ea75"
df
# + id="n6Mh_697Harv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="826aece5-73fe-4c93-cce2-695e053c240e"
df.isna().sum(axis=0)
# + id="hCX_fNSFJha2" colab_type="code" colab={}
df = df.dropna(subset=['Languages'])
# + id="zxBEoY5dJrqL" colab_type="code" colab={}
df = df.dropna(subset=['Size'])
# + id="L1hRbPYxHaSv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="9527be59-b760-4549-f70e-fe9fb1282155"
df.isna().sum(axis=0)
# + id="FrHaGPCyK2II" colab_type="code" colab={}
df.head()
# + id="rjFqY7ZBOfCP" colab_type="code" colab={}
# + id="2CdDXqK4K19R" colab_type="code" colab={}
# + id="UE7JbkucJxNK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="f5561ee9-0de0-4230-9fb6-becf4301bbe9"
sns.heatmap(df.isnull(), cmap='viridis')
# + id="A-YihDLtGuYW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="0f2a8db7-0950-4759-b3e1-a9cf6faecb27"
from google.colab import files
df.to_csv('clean-dataset.csv')
files.download('clean-dataset.csv')
| Limpeza_datos_Bloodstream.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ToolkitEnv2
# language: python
# name: toolkitenv2
# ---
# # Contact
#
# Feel free to contact individual contributors, the authors of related papers, or the primary Toolkit maintainers:
#
# * <NAME>, [<EMAIL>], IBM Research - Zurich
# * <NAME>, [<EMAIL>], IBM Research - Zurich
#
| Website/Contact.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of South African Covid-19 Datasets - Updated as of 05-Apr-2020
#
# The outbreak of the Covid-19 pandemic has been an unprecedented event in our lives. While this event is first and foremost a human tragedy, humanity's collective response to this pandemic will shape our lives in the months and years to come. In order to gain deeper insight into the factors that are shaping our response, be it in the medical, government or social spheres, it is vital to analyse the data that is being gathered as this crisis evolves.
# The objective of this data analysis should not be limited to guiding policy decisions but should also strengthen the spirit of enquiry, curiosity and exploration in our society so that all of us are empowered to ask our own questions and seek answers.
#
# To aid this endeavour, this Jupyter notebook makes use of publicly available data sets and provides a programmatic framework for data analysts and data scientists to explore and visualize the data. As this notebook evolves, we will also explore further trends and patterns in these data sets that could be used to guide decision making.
#
#
# Contributors:
# 1. <NAME>, Wits University and Accenture
# 2. <NAME>, Accenture
# 3. <NAME>, Accenture
# # Source of data
# Data Science for Social Impact Research Group @ University of Pretoria, Coronavirus COVID-19 (2019-nCoV) Data Repository for South Africa. Available on: https://github.com/dsfsi/covid19za.
#
# # Part 1: Preliminary Data Exploration and Data Wrangling
# # Some early questions to be considered
# 1. How have the number of infections evolved on a daily basis? How are the infections distributed across the different provinces in SA?
# 2. What were the sources of these infections? Global, local travel or community infections?
# 3. What do the demographics of the infected persons tell us? Does age or gender influence the possibility of getting infected?
# 4. What is the trend of deaths that have occurred thus far? How does this trend compare to global trends? Are there other factors that increase the chances of death when infected, such as co-morbidity (or having a pre-existing medical condition)?
import pandas as pd
import numpy as np
import matplotlib as plt
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
# # How has the number of infections grown over time?
df = pd.read_csv('../data/covid19za_provincial_cumulative_timeline_confirmed.csv')
df.head()
fig = plt.figure(figsize=[15,8]);
cov_date = pd.Series(df.total.values, index=df.date)
plt.xlabel('Date')
plt.tick_params(axis='x', rotation=70)
plt.ylabel('Number of infections')
plt.title('Cumulative total infections', fontsize=20)
for index, row in df.iterrows():
plt.annotate('{}'.format(row['total']), xy=(row['date'], row['total']), xytext=(-5, 5), ha='right', textcoords='offset points', arrowprops=dict(arrowstyle='->', shrinkA=0))
#plt.text(row['date'], row['total'], row['total'])
plt.plot(cov_date, 'o-')
# ## Analysis
# ### 05-Apr-2020
# While it seems that there is a slowdown in the rate of infected cases post 27-Mar, i.e. no longer following an exponential increase, is that really the case, or are the detected infections a function of the number of tests being conducted? See the daily testing charts below.
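One way to probe this question is to look at the test-positivity rate (daily new cases divided by daily tests) rather than raw counts: if positivity holds steady while testing grows, the slowdown in detected cases is more likely real. A sketch with hypothetical daily numbers:

```python
# Hypothetical daily counts, for illustration only
daily_cases = [40, 55, 60, 58]
daily_tests = [800, 1000, 1300, 1350]
positivity = [c / t for c, t in zip(daily_cases, daily_tests)]
print([round(p, 3) for p in positivity])  # [0.05, 0.055, 0.046, 0.043]
```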
# # How many people have been tested for presence of the infection so far?
df = pd.read_csv('../data/covid19za_timeline_testing.csv')
df.head()
# # How quickly are we testing for potential infections?
fig = plt.figure(figsize=[20,10]);
cov_date = pd.Series(df.cumulative_tests.values, index=df.date)
plt.xlabel('Date')
plt.tick_params(axis='x', rotation=70)
plt.ylabel('Number of tests')
plt.title('Cumulative total tests', fontsize=20)
for index, row in df.iterrows():
if (not pd.isna(row['cumulative_tests'])):
#print(row['date'], row['total'])
plt.annotate('{:d}'.format(int(row['cumulative_tests'])), xy=(row['date'], row['cumulative_tests']),
xytext=(-5, 5), ha='right', textcoords='offset points', arrowprops=dict(arrowstyle='->', shrinkA=1))
#plt.text(row['date'], row['total'], row['total'])
plt.plot(cov_date, 'o-')
# The gap in the graph is due to missing data for that particular day.
# # Daily Testing trend for Covid-19
df3 = df.assign(daily_test_count=np.zeros(df.shape[0]))
df4 = df3.shift(periods=-1, axis='rows')
df3.daily_test_count = np.log(df4.cumulative_tests - df3.cumulative_tests)
df3.plot(x ='date', y='daily_test_count', kind = 'line', figsize=(15,7), title='Log Number of Tests done daily')
# The plot above shows the log of the daily test count over time, since testing was first initiated. It can be seen that the number of tests conducted daily has remained roughly the same in recent days.
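The differencing done above with `shift` (subtract each day's cumulative total from the next day's) can be sketched without pandas; the cumulative values here are made up.

```python
import math

cumulative = [100, 180, 300, 460]
# Daily counts are first differences of the cumulative series
daily = [b - a for a, b in zip(cumulative, cumulative[1:])]
log_daily = [round(math.log(d), 2) for d in daily]
print(daily)  # [80, 120, 160]
print(log_daily)
```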
# +
#df3.date = pd.to_datetime(df3.date, format="%d-%m-%Y")
#df.date = pd.to_datetime(df.date, format="%d-%m-%Y")
#dt = pd.to_datetime(df3.date)
# +
ax = plt.gca()
df = pd.read_csv('../data/covid19za_provincial_cumulative_timeline_confirmed.csv')
#df3.date = pd.to_datetime(df3.date, format="%d-%m-%Y")
#df.date = pd.to_datetime(df.date, format="%d-%m-%Y")
df3.plot(kind='line',x='date',y='daily_test_count', figsize=(15,7), ax=ax)
#df.plot(kind='line',x='date',y='total', logy=True, figsize=(15,7), ax=ax)
#plt.plot(df3.date, df3.daily_test_count)
plt.plot(df.date,np.log(df.total), color='red')
plt.xlabel("date")
plt.ylabel("Log of counts (tested, infected)")
plt.tick_params(axis='x', rotation=70)
plt.title('Joint plot of Total Infections with Total Tested - on a Log scale', fontsize=20)
# -
# The plot above indicates a certain degree of correlation between the number of people tested and the number of people found to be infected. As the number of tests conducted daily increases, it can be expected to pick up a higher number of infected people. Therefore, one cannot rest easy thinking that the number of infected people is not increasing; it may simply be that we are not testing in higher numbers on a daily basis.
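The co-movement between the two series can be quantified with a Pearson correlation coefficient; here is a dependency-free sketch (the series values are made up for illustration):

```python
def pearson(xs, ys):
    # Pearson correlation: covariance over the product of standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

tests = [800, 1000, 1300, 1350]
cases = [40, 55, 60, 58]
print(round(pearson(tests, cases), 3))  # ≈ 0.898
```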
# # What is the distribution of age among the infected people?
# +
fig, ax = plt.subplots(1,1,figsize=(15,8))
# read csv
df = pd.read_csv('../data/covid19za_timeline_confirmed.csv')
# replace missing age values with -5
values = {'age':-5}
df = df.fillna(value=values)
# create age bins [0,5,10....] and labels [0-4,5-9,10-14 etc]
bins = [-5]
labels = ['Unknown']
for i in range(101):
if i%5 == 0:
bins.append(int(i))
if(len(bins)>2):
labels.append(f"{str(bins[-2])}-{str(bins[-1])}")
else:
continue
# Set the ticks to be at the edges of the bins.
ax.set_xticks(bins)
# add column to AgeGroups column to df and group all entries into the age bins [0-4, 5-9, 10-14 etc]
df['AgeGroup'] = pd.cut(df['age'], bins=bins, labels=labels, right=False)
#plt.rcParams['font.family'] = 'Graphik' #Specify font
# draw histogram
plt.title('Covid-19 cases by age groups - Plot 1', fontsize= 20)
plt.xlabel('Age Group', fontsize=17)
plt.ylabel('Number of cases', fontsize=17)
counts, bins, patches = plt.hist(df['age'], bins, color = "skyblue", ec='black')
# Label the raw counts and age groups below the x-axis...
bin_centers = 0.5 * np.diff(bins) + bins[:-1]
for count, x, label in zip(counts, bin_centers, labels):
# Label the raw counts
#ax.annotate(str(label) + '= ' + str(int(count)), xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -3), textcoords='offset points', va='top', ha='center', rotation=270)
ax.annotate(str(label), xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -3), textcoords='offset points', va='top', ha='center')
ax.annotate(str(int(count)), xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -13), textcoords='offset points', va='top', ha='center')
ax.annotate('Age Range', xy=(-8, 0), xycoords=('data', 'axes fraction'), xytext=(0, -3), textcoords='offset points', va='top', ha='center')
ax.annotate('Count', xy=(-8, 0), xycoords=('data', 'axes fraction'), xytext=(0, -13), textcoords='offset points', va='top', ha='center')
ax.tick_params(axis='x', labelsize=12, labelcolor='#6F6F6F', rotation=0 ) #Specify font )
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
#ax.axes.get_xaxis().set_visible(False)
plt.xlabel('Age', fontsize=17)
ax.tick_params(axis='y', labelsize=13, labelcolor='black')
ax.xaxis.labelpad = 40
#plt.tight_layout()
#plt.show()
# -
# The leftmost bar indicates the cases where the age was not available.
# +
from collections import Counter
fig = plt.figure(figsize=[15,8]);
#Read in CSV and Group Data
countAG= { #dictionary to store the number of people per age group
"0-9": 0,
"10-19":0,
"20-29":0,
"30-39":0,
"40-49":0,
"50-59":0,
"60-69":0,
"70-79":0,
"80+":0
}
df= pd.read_csv('../data/covid19za_timeline_confirmed.csv') #read in csv file
ageCount= Counter(df['age']) #count the number of people per age (who are positive)
for i in ageCount: #group each age into an age range and sum it
if (i in range(0,10)):
countAG["0-9"]= ageCount[i]+ countAG["0-9"]
elif (i in range(10,20)):
countAG["10-19"]= ageCount[i]+ countAG["10-19"]
elif (i in range(20,30)):
countAG["20-29"]= ageCount[i]+ countAG["20-29"]
elif (i in range(30,40)):
countAG["30-39"]= ageCount[i]+ countAG["30-39"]
elif (i in range(40,50)):
countAG["40-49"]= ageCount[i]+ countAG["40-49"]
elif (i in range(50,60)):
countAG["50-59"]= ageCount[i]+ countAG["50-59"]
elif (i in range(60,70)):
countAG["60-69"]= ageCount[i]+ countAG["60-69"]
elif (i in range(70,80)):
countAG["70-79"]= ageCount[i]+ countAG["70-79"]
elif (i >= 80):
countAG["80+"]= ageCount[i]+ countAG["80+"]
#Start With Graph Plotting
plt.rcParams['font.family'] = 'Graphik' #Specify font
plt.bar(range(len(countAG)), list(countAG.values()), color= ["#3b0060","#590090","#af26ff","#7600c0","#7c1fff","#cb6fff" ,"#a76aff","#3b0060","#590090"], align='center') #Accenture colours added
plt.xticks(range(len(countAG)), list(countAG.keys()),fontweight='black')
plt.yticks(fontweight='black')
plt.xlabel('Age Groups', fontsize= 15,fontweight='black', color= "#2e005c")
plt.ylabel('Number of Infected People',fontsize= 15,fontweight='black', color= "#2e005c")
plt.title('Covid-19 cases by age groups - Plot 2',fontsize= 20,fontweight='black')
plt.rcParams['axes.linewidth']=0.8
#plt.grid(b=True, color='k', linestyle=':', lw=.5, zorder=1) #Optional grid lines
xs=[0,1,2,3,4,5,6,7,8] #Necessary to form x,y pair
for x,y in zip(xs,list(countAG.values())):
label = y
plt.annotate(label, # this is the text
(x,y), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(0,2), # distance from text to points (x,y)
ha='center', # horizontal alignment can be left, right or center
fontweight='black') # make labels bold
# -
# This does not include positive cases whose age is not known - which is shown in the bar graph above this one.
# # Further areas to explore
#
from IPython.display import Image, display
display(Image(filename='images/forecast_cases.png', embed=True))
# ## Calculating a Daily Growth Rate:
# +
df_growth = pd.read_csv('../data/covid19za_provincial_cumulative_timeline_confirmed.csv')
df_growth.head()
df_growth["Previous cases"] = df_growth["total"].shift(1)
df_growth["Daily Growth Rate"] = (df_growth["total"] - df_growth["Previous cases"]) / df_growth["Previous cases"] * 100
print("Average Daily Growth Rate:")
print(df_growth["Daily Growth Rate"].mean())
print( " ")
recent_growth_rate = df_growth.iloc[len(df_growth)-3:len(df_growth),-1].mean()
print("3 Day Growth Rate:")
print(recent_growth_rate)
print( " ")
print("Number of Cases Double Every:")
print(str(np.log(2) / np.log(1 + recent_growth_rate / 100)) + " days")
# -
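For a quantity growing at r percent per day compounded, the doubling time is ln(2)/ln(1 + r/100); the rule of 70 (about 70/r days) is a quick mental approximation. A standalone sketch:

```python
import math

def doubling_time(rate_percent):
    # Days for a quantity growing at rate_percent per day (compounded) to double
    return math.log(2) / math.log(1 + rate_percent / 100)

print(round(doubling_time(10), 2))  # ≈ 7.27 days
print(70 / 10)                      # rule-of-70 approximation: 7.0
```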
from IPython.display import Image, display
display(Image(filename='images/forecast_hospitalizations.png', embed=True))
from IPython.display import Image, display
display(Image(filename='images/forecast_deaths.png', embed=True))
| notebooks/COVID-19 South Africa - Data Analysis and Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="svbaEInHvuk7"
# # 1 - Preparing the scraping
# + [markdown] id="tnRnoYwgx9-Q"
# ## 1.1 - Importing the relevant modules
# -
# %pip install snscrape
# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="3m8ouQ2vvtzX" outputId="8b8f6f02-e0da-464b-ad94-139490971f10"
import time
import snscrape.modules.twitter as sntwitter
import pandas as pd
import os
# + [markdown] id="PgjkxilGyBsV"
# ## 1.2 - Creating the list that will hold the scraped data
# + id="LQQ-e2y5vtzY" outputId="2395ee12-764d-47cf-c4f8-de2af93d1f78"
tweets_list = []
# + [markdown] id="lcw3qBYsyI6x"
# ## 1.3 - Choosing the relevant scraping periods
# + [markdown] id="OUZbjHTWyNig"
# Before starting to scrape, we need to determine the relevant dates for each film. To answer our research question, we want to compare a sample of tweets written before each film's release with a sample written after. To make sure the scraped tweets really concern the film's content, we decided to focus on the periods following the publication of the films' trailers.\
# \
# For Dune, the first trailer was published on YouTube on 09 September 2020 and the second on 22 July 2021. We therefore scraped over the two-week periods following the release of each trailer, and over a five-week period following the film's theatrical release on 08 October 2021 (i.e. until 13 November 2021).
# \
# \
# For Space Jam 2, the first trailer was published on YouTube on 03 April 2021 and the second on 09 June of the same year. We therefore scraped over the two-week periods following the release of each trailer, and over a five-week period following the film's theatrical release on 03 July 2021 (i.e. until 08 August 2021).
#
# + id="LWR0INyjv-dQ"
periods_dune = [pd.date_range(start='2020-09-09', end='2020-09-23').strftime("%Y-%m-%d").tolist(),
pd.date_range(start='2021-07-22', end='2021-08-05').strftime("%Y-%m-%d").tolist(),
pd.date_range(start='2021-10-08', end='2021-11-13').strftime("%Y-%m-%d").tolist()]
periods_space= [pd.date_range(start='2021-04-03', end='2021-04-17').strftime("%Y-%m-%d").tolist(),
pd.date_range(start='2021-06-09', end='2021-06-23').strftime("%Y-%m-%d").tolist(),
pd.date_range(start='2021-07-03', end='2021-08-08').strftime("%Y-%m-%d").tolist()]
# + [markdown] id="0JdZYVnwwF6t"
# # 2 - Scraping
# + [markdown] id="cIEoZTNYxt9j"
# ## 2.1 - Scraping
# + [markdown] id="lK1rTI74wNtg"
# For scraping we do not use an API but a module called *TwitterSearchScraper*, whose features are sufficient for our needs. We use TwitterSearchScraper to collect all tweets containing a chosen string during a chosen period.\
# \
# Because of the quantitative limits Twitter imposes on scraping that does not go through an API, we scrape at most 500 tweets per day.
# \
# \
# Among all the data the module can collect, we decided to keep only the film concerned, the period in which the tweet was written, its content, its date, its id (for good measure) and its language (here English).
# + id="LKQMLRGEv-RG"
t = time.time()
for film , periods in [('dune',periods_dune),('space jam',periods_space)]:
for time_range in periods :
for k in range(len(time_range)-1):
for i,tweet in enumerate(sntwitter.TwitterSearchScraper(f"{film} movie since:{time_range[k]} until:{time_range[k+1]} lang:en").get_items()):
if i>=500: # scrape at most 500 tweets per day
break
tweets_list.append([film, time_range, tweet.content, tweet.date, tweet.id, tweet.lang])
print(f"Execution time: {time.strftime('%H:%M:%S', time.gmtime(time.time()-t))}")
# + [markdown] id="nBxTB8Vjxw4Z"
# ## 2.2 - Formatting the data
# + [markdown] id="dXZ_iQlexlv_"
# We create a dataframe from the scraped data
# + id="yZFnUxRtv-EX"
start = time.time()
tweets_df = pd.DataFrame(tweets_list, columns=['Film', 'Time Range', 'Text', 'Datetime', 'Tweet Id', 'Language'])
time.time()-start
# + id="xETcnlSmvtza"
pd.set_option('display.max_colwidth', 550)
# + id="wUaFGzSJvtza" outputId="4910c95c-926b-4426-bc94-5a01dd24fffe"
tweets_df
# + [markdown] id="3o8HacsFx0L4"
# ## 2.3 - Saving the data
# + id="Ak_7ZmVmvtzb"
file = os.path.join("data", "web", "web.csv")
if not os.path.exists(os.path.dirname(file)):
os.makedirs(os.path.dirname(file))
tweets_df.to_csv(file)
| Scraping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print("Hello Jupyter Notebooks!")
"Just test directly"
2+2, 5**2, True and False, False or True
name = "Valdis"
print(f"My name is {name}")
name = "Val"
print(f"My name is still {name}")
print(name)
# as soon as cell is run, the contents are available in the notebook
| Diena_13_Visualization/Jupyter Introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %matplotlib inline
# 3D interpolation
# =============
#
# Interpolation of a three-dimensional regular grid.
#
# Trivariate
# -----------
#
# The
# [trivariate](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.trivariate.html#pyinterp.trivariate)
# interpolation allows obtaining values at arbitrary points in a 3D space of a
# function defined on a grid.
#
# The distribution contains a 3D field `tcw.nc` that will be used in this help.
# This file is located in the `src/pyinterp/tests/dataset` directory at the root
# of the project.
#
# This method performs a bilinear interpolation in 2D space by considering the
# axes of longitude and latitude of the grid, then performs a linear interpolation
# in the third dimension. Its interface is similar to the
# [bivariate](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.bivariate.html#pyinterp.bivariate)
# class except for a third axis, which is handled by this object.
#
# ---
# **Note**
#
# When using a time axis, care must be taken to use the same unit of dates, between the axis defined and the dates supplied during interpolation. The function [pyinterp.TemporalAxis.safe_cast()](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.TemporalAxis.safe_cast.html#pyinterp.TemporalAxis.safe_cast) automates this task and will warn you if there is an inconsistency during the date conversion.
#
# ---
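As a library-free illustration of the scheme described above (bilinear in the horizontal plane, then linear along the third axis), here is a sketch on a unit cube. It is not pyinterp's implementation, just the underlying idea.

```python
def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1]
    return a + (b - a) * t

def trilinear(cube, x, y, z):
    # cube[i][j][k] holds values at the 8 corners of a unit cube;
    # x, y, z are fractional coordinates in [0, 1]
    c00 = lerp(cube[0][0][0], cube[1][0][0], x)
    c10 = lerp(cube[0][1][0], cube[1][1][0], x)
    c01 = lerp(cube[0][0][1], cube[1][0][1], x)
    c11 = lerp(cube[0][1][1], cube[1][1][1], x)
    c0 = lerp(c00, c10, y)   # bilinear result on the z=0 face
    c1 = lerp(c01, c11, y)   # bilinear result on the z=1 face
    return lerp(c0, c1, z)   # linear step along the third axis

cube = [[[0, 1], [0, 1]], [[0, 1], [0, 1]]]  # corner value equals the z coordinate
print(trilinear(cube, 0.3, 0.7, 0.25))  # 0.25
```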
import cartopy.crs
import matplotlib
import matplotlib.pyplot
import numpy
import pyinterp
import pyinterp.backends.xarray
import pyinterp.tests
import xarray
# The first step is to load the data into memory and create the
# interpolator object:
ds = xarray.open_dataset(pyinterp.tests.grid3d_path())
interpolator = pyinterp.backends.xarray.Grid3D(ds.tcw)
# We will build a new grid that will be used to build a new interpolated
# grid.
mx, my, mz = numpy.meshgrid(numpy.arange(-180, 180, 0.25) + 1 / 3.0,
numpy.arange(-80, 80, 0.25) + 1 / 3.0,
numpy.array(["2002-07-02T15:00:00"],
dtype="datetime64"),
indexing='ij')
# We interpolate our grid using a
# [classical](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.backends.xarray.Grid3D.trivariate.html#pyinterp.backends.xarray.Grid3D.trivariate):
trivariate = interpolator.trivariate(
dict(longitude=mx.ravel(), latitude=my.ravel(), time=mz.ravel()))
# Bicubic on 3D grid
# ----------------------
#
# The grid used organizes the latitudes in descending order. We ask our
# constructor to flip this axis in order to correctly evaluate the bicubic
# interpolation from this 3D cube (only necessary to perform a bicubic
# interpolation).
interpolator = pyinterp.backends.xarray.Grid3D(ds.data_vars["tcw"],
increasing_axes=True)
# We interpolate our grid using a
# [bicubic](https://pangeo-pyinterp.readthedocs.io/en/latest/generated/pyinterp.backends.xarray.Grid3D.bicubic.html#pyinterp.backends.xarray.Grid3D.bicubic)
# interpolation in space followed by a linear interpolation
# in the temporal axis:
bicubic = interpolator.bicubic(
dict(longitude=mx.ravel(), latitude=my.ravel(), time=mz.ravel()))
# We transform our result cubes into a matrix.
trivariate = trivariate.reshape(mx.shape).squeeze(axis=2)
bicubic = bicubic.reshape(mx.shape).squeeze(axis=2)
lons = mx[:, 0].squeeze()
lats = my[0, :].squeeze()
# Let's visualize our results.
# +
fig = matplotlib.pyplot.figure(figsize=(5, 8))
ax1 = fig.add_subplot(
211, projection=cartopy.crs.PlateCarree(central_longitude=180))
pcm = ax1.pcolormesh(lons,
lats,
trivariate.T,
cmap='jet',
shading='auto',
transform=cartopy.crs.PlateCarree(),
vmin=0,
vmax=80)
ax1.coastlines()
ax1.set_extent([80, 170, -45, 30], crs=cartopy.crs.PlateCarree())
ax1.set_title("Trilinear")
ax2 = fig.add_subplot(
212, projection=cartopy.crs.PlateCarree(central_longitude=180))
pcm = ax2.pcolormesh(lons,
lats,
bicubic.T,
cmap='jet',
shading='auto',
transform=cartopy.crs.PlateCarree(),
vmin=0,
vmax=80)
ax2.coastlines()
ax2.set_extent([80, 170, -45, 30], crs=cartopy.crs.PlateCarree())
ax2.set_title("Spline & Linear in time")
fig.colorbar(pcm, ax=[ax1, ax2], shrink=0.8)
fig.show()
| notebooks/auto_examples/ex_3d.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple Linear Regression (Gradient Descent)
# ### 0. Import dependencies
import numpy as np
import matplotlib.pyplot as plt
# ### 1. Load the data (data.csv)
# +
points = np.genfromtxt('data.csv', delimiter=',')
points[0,0]
# Extract the two columns of points as x and y
x = points[:, 0]
y = points[:, 1]
# Use plt to draw a scatter plot
plt.scatter(x, y)
plt.show()
# -
# ### 2. Define the loss function
# The loss function is a function of the coefficients; the data x, y is also passed in
def compute_cost(w, b, points):
total_cost = 0
M = len(points)
# Compute the squared loss for each point, then take the mean
for i in range(M):
x = points[i, 0]
y = points[i, 1]
total_cost += ( y - w * x - b ) ** 2
return total_cost/M
# ### 3. Define the model hyperparameters
alpha = 0.0001
initial_w = 0
initial_b = 0
num_iter = 10
# ### 4. Define the core gradient descent functions
# +
def grad_desc(points, initial_w, initial_b, alpha, num_iter):
w = initial_w
b = initial_b
# Keep a list of all loss values to show the descent progress
cost_list = []
for i in range(num_iter):
cost_list.append( compute_cost(w, b, points) )
w, b = step_grad_desc( w, b, alpha, points )
return [w, b, cost_list]
def step_grad_desc( current_w, current_b, alpha, points ):
sum_grad_w = 0
sum_grad_b = 0
M = len(points)
# For each point, plug into the gradient formula and sum
for i in range(M):
x = points[i, 0]
y = points[i, 1]
sum_grad_w += ( current_w * x + current_b - y ) * x
sum_grad_b += current_w * x + current_b - y
# Compute the current gradients from the formulas
grad_w = 2/M * sum_grad_w
grad_b = 2/M * sum_grad_b
# Gradient descent: update the current w and b
updated_w = current_w - alpha * grad_w
updated_b = current_b - alpha * grad_b
return updated_w, updated_b
# -
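The analytic gradients used in `step_grad_desc` (2/M · Σ(wx+b−y)·x for w, and 2/M · Σ(wx+b−y) for b) can be sanity-checked against a central finite-difference approximation. This standalone sketch uses toy data, not the notebook's data.csv:

```python
def cost(w, b, pts):
    # Mean squared error over a list of (x, y) pairs
    return sum((y - w * x - b) ** 2 for x, y in pts) / len(pts)

def analytic_grad(w, b, pts):
    m = len(pts)
    gw = 2 / m * sum((w * x + b - y) * x for x, y in pts)
    gb = 2 / m * sum((w * x + b - y) for x, y in pts)
    return gw, gb

pts = [(1.0, 2.0), (2.0, 3.1), (3.0, 3.9)]
w, b, eps = 0.5, 0.2, 1e-6
gw, gb = analytic_grad(w, b, pts)
# Central finite differences should match the analytic gradients closely
fd_w = (cost(w + eps, b, pts) - cost(w - eps, b, pts)) / (2 * eps)
fd_b = (cost(w, b + eps, pts) - cost(w, b - eps, pts)) / (2 * eps)
print(abs(gw - fd_w) < 1e-6, abs(gb - fd_b) < 1e-6)  # True True
```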
# ### 5. Test: run gradient descent to compute the optimal w and b
# +
w, b, cost_list = grad_desc( points, initial_w, initial_b, alpha, num_iter )
print("w is: ", w)
print("b is: ", b)
cost = compute_cost(w, b, points)
print("cost is: ", cost)
plt.plot(cost_list)
plt.show()
# -
# ### 6. Plot the fitted line
# +
plt.scatter(x, y)
# For each x, compute the predicted y
pred_y = w * x + b
plt.plot(x, pred_y, c='r')
plt.show()
# -
| rs/2_线性回归梯度下降法.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# Lecture 05
# -
from __future__ import print_function
import numpy as np
a = np.random.randint(0,10,5)
b = np.random.randint(0,10,(5,1))
a
b
a.ndim
b.ndim
a.T
b.T
a == a.T
b == b.T
a*a
np.dot(a,a)
np.save('atest',a)
np.savetxt('atest',a)
z = np.concatenate((a,a))
z
v = np.vstack((a,a))
h = np.hstack((a,a))
print(v)
print(h)
temp = np.dot(b,b.T)
print(temp)
z =temp.flatten()
print(z, z.ndim)
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 25)
y = np.cos(x)
plt.figure()
plt.plot(x, y)
plt.show()
plt.figure()
plt.plot(x,y,':')
plt.show()
plt.figure()
plt.plot(x,y,'X')
plt.show()
plt.figure()
plt.plot(x,y,'--o')
plt.show()
plt.figure()
plt.plot(x,y,'g-o')
plt.show()
plt.figure()
plt.plot(x,y)
plt.xlim(-2,10)
plt.ylim(-2,2)
plt.show()
plt.figure()
plt.plot(x,y)
plt.xscale('log')
plt.show()
plt.figure()
plt.plot(x,y)
plt.grid(True)
plt.show()
# +
z = np.sin(x)
plt.figure()
plt.plot(x,y, label='cos')
plt.plot(x,z, label='sin')
plt.grid(True)
plt.legend(loc=1)
plt.show()
# -
plt.figure()
plt.plot(x,y, label='cos')
plt.plot(x,z, label='sin')
plt.grid(True)
plt.legend(loc=1)
plt.title('Trig is hard', fontsize=16)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
| lectures/lecture05/lecture05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''PythonData'': conda)'
# language: python
# name: python38364bitpythondataconda5a1651f5006c42e28f55073acdf8b1f4
# ---
# +
# #!pip install gmaps
# #!pip install -U jupyter
# #!jupyter nbextension enable --py --sys-prefix widgetsnbextension
# #!jupyter nbextension enable --py gmaps
# #!pip install -e
# -
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
from pprint import pprint
# Import API key
from api_keys import g_key
#Import the CSV and read it into a data frame
city_csv = "../WeatherPy/city_info.csv"
city_df = pd.read_csv(city_csv, encoding="utf-8")
# +
#Designate key to be used
gmaps.configure(api_key=g_key)
#Set variables for heat map
locations = city_df[["City Latitude", "City Longitude"]]
humidity = city_df["Humidity"].astype(float)
# Plot Heatmap
fig = gmaps.figure()
# Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=10,
point_radius=1)
# Add layer
fig.add_layer(heat_layer)
# Display figure
fig
# -
#Creating ideal temperature
ideal_temp_df = city_df.loc[(city_df["Max Temperature"] <= 80) & (city_df["Humidity"] <= 30) & (city_df["Cloud Coverage"] <= 0)]
#Showing data frame
ideal_temp_df
#Create copy of ideal_temp_df
hotel_df = ideal_temp_df.copy()
#Create empty column for "Hotel Name"
hotel_df["Hotel Name"]=" "
#Setting variables for params
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
target_type = "lodging"
radius = 5000
#Loop to find the first hotel within a 5000-meter radius of each lat/lng in the data frame
for index, row in ideal_temp_df.iterrows():
lat_loc = row["City Latitude"]
lng_loc = row["City Longitude"]
location = f'{lat_loc}, {lng_loc}'
params = {
"location": location,
"types": target_type,
"radius": radius,
"key": g_key
}
response = requests.get(base_url, params).json()
    #Try/except so the code doesn't crash
try:
hotel_df.loc[index,"Hotel Name"]=response["results"][0]["name"]
except:
pass
#Showing data frame
hotel_df
# +
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{Cities}</dd>
<dt>Country</dt><dd>{City Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["City Latitude", "City Longitude"]]
# -
#Create hotel layer
hotel_layer = gmaps.symbol_layer(
locations, fill_color='rgba(0, 150, 0, 0.4)',
stroke_color='rgba(0, 0, 150, 0.4)', scale=2,
info_box_content= hotel_info
)
#Plot hotel layer
fig = gmaps.figure()
#Add layer
fig.add_layer(hotel_layer)
#Display figure
fig
#Plot layers
fig = gmaps.figure()
#Add layers
fig.add_layer(heat_layer)
fig.add_layer(hotel_layer)
#Display figures
fig
| VacationPy/VacaionPy_HW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''tf'': conda)'
# name: python3
# ---
# +
import tensorflow.compat.v1 as tf
import os
import tensorflow_addons as tfa
tf.disable_v2_behavior()
tf.disable_eager_execution()
# +
#parameters
lfsize = [372, 540, 8, 8] #dimensions of Lytro light fields
batchsize = 4 #modify based on user's GPU memory
patchsize = [192, 192] #spatial dimensions of training light fields
disp_mult = 4.0 #max disparity between adjacent views
num_crops = 4 #number of random spatial crops per light field for each input queue thread to push
learning_rate = 0.001
train_iters = 12000
# +
#functions for CNN layers
def weight_variable(w_shape):
return tf.get_variable('weights', w_shape, initializer=tf.keras.initializers.glorot_normal())
def bias_variable(b_shape, init_bias=0.0):
return tf.get_variable('bias', b_shape, initializer=tf.constant_initializer(init_bias))
def cnn_layer(input_tensor, w_shape, b_shape, layer_name, rate=1, ds=1):
with tf.variable_scope(layer_name):
W = weight_variable(w_shape)
pad_amt_0 = rate * (w_shape[0] - 1)//2
pad_amt_1 = rate * (w_shape[1] - 1)//2
input_tensor = tf.pad(input_tensor, [[0,0],[pad_amt_0,pad_amt_0],[pad_amt_1,pad_amt_1],[0,0]], mode='SYMMETRIC')
h = tf.nn.convolution(input_tensor, W, strides=[ds, ds], padding='VALID', dilation_rate=[rate, rate], name=layer_name + '_conv')
h = tfa.layers.InstanceNormalization()(h + bias_variable(b_shape))
h = tf.nn.leaky_relu(h)
return h
def cnn_layer_plain(input_tensor, w_shape, b_shape, layer_name, rate=1, ds=1):
with tf.variable_scope(layer_name):
W = weight_variable(w_shape)
pad_amt_0 = rate * (w_shape[0] - 1)//2
pad_amt_1 = rate * (w_shape[1] - 1)//2
input_tensor = tf.pad(input_tensor, [[0,0],[pad_amt_0,pad_amt_0],[pad_amt_1,pad_amt_1],[0,0]], mode='SYMMETRIC')
h = tf.nn.convolution(input_tensor, W, strides=[ds, ds], padding='VALID', dilation_rate=[rate, rate], name=layer_name + '_conv')
h = h + bias_variable(b_shape)
return h
def cnn_layer_3D(input_tensor, w_shape, b_shape, layer_name, rate=1, ds=1):
with tf.variable_scope(layer_name):
W = weight_variable(w_shape)
pad_amt_0 = rate * (w_shape[0] - 1)//2
pad_amt_1 = rate * (w_shape[1] - 1)//2
pad_amt_2 = rate * (w_shape[2] - 1)//2
input_tensor = tf.pad(input_tensor, [[0,0],[pad_amt_0,pad_amt_0],[pad_amt_1,pad_amt_1],[pad_amt_2,pad_amt_2],[0,0]], mode='SYMMETRIC')
h = tf.nn.convolution(input_tensor, W, strides=[ds, ds, ds], padding='VALID', dilation_rate=[rate, rate, rate], name=layer_name + '_conv')
h = tfa.layers.InstanceNormalization()(h + bias_variable(b_shape))
h = tf.nn.leaky_relu(h)
return h
def cnn_layer_3D_plain(input_tensor, w_shape, b_shape, layer_name, rate=1, ds=1):
with tf.variable_scope(layer_name):
W = weight_variable(w_shape)
pad_amt_0 = rate * (w_shape[0] - 1)//2
pad_amt_1 = rate * (w_shape[1] - 1)//2
pad_amt_2 = rate * (w_shape[2] - 1)//2
input_tensor = tf.pad(input_tensor, [[0,0],[pad_amt_0,pad_amt_0],[pad_amt_1,pad_amt_1],[pad_amt_2,pad_amt_2],[0,0]], mode='SYMMETRIC')
h = tf.nn.convolution(input_tensor, W, strides=[ds, ds, ds], padding='VALID', dilation_rate=[rate, rate, rate], name=layer_name + '_conv')
h = h + bias_variable(b_shape)
return h
# +
#network to predict ray depths from input image
def depth_network(x, lfsize, disp_mult, name):
with tf.variable_scope(name):
b_sz = tf.shape(x)[0]
y_sz = tf.shape(x)[1]
x_sz = tf.shape(x)[2]
v_sz = lfsize[2]
u_sz = lfsize[3]
c1 = cnn_layer(x, [3, 3, 3, 16], [16], 'c1')
c2 = cnn_layer(c1, [3, 3, 16, 64], [64], 'c2')
c3 = cnn_layer(c2, [3, 3, 64, 128], [128], 'c3')
c4 = cnn_layer(c3, [3, 3, 128, 128], [128], 'c4', rate=2)
c5 = cnn_layer(c4, [3, 3, 128, 128], [128], 'c5', rate=4)
c6 = cnn_layer(c5, [3, 3, 128, 128], [128], 'c6', rate=8)
c7 = cnn_layer(c6, [3, 3, 128, 128], [128], 'c7', rate=16)
c8 = cnn_layer(c7, [3, 3, 128, 128], [128], 'c8')
c9 = cnn_layer(c8, [3, 3, 128, lfsize[2]*lfsize[3]], [lfsize[2]*lfsize[3]], 'c9')
c10 = disp_mult*tf.tanh(cnn_layer_plain(c9, [3, 3, lfsize[2]*lfsize[3], lfsize[2]*lfsize[3]], \
[lfsize[2]*lfsize[3]], 'c10'))
return tf.reshape(c10, [b_sz, y_sz, x_sz, v_sz, u_sz])
# +
#network for refining Lambertian light field (predict occluded rays and non-Lambertian effects)
def occlusions_network(x, shear, lfsize, name):
with tf.variable_scope(name):
b_sz = tf.shape(x)[0]
y_sz = tf.shape(x)[1]
x_sz = tf.shape(x)[2]
v_sz = lfsize[2]
u_sz = lfsize[3]
x = tf.transpose(tf.reshape(tf.transpose(x, perm=[0, 5, 1, 2, 3, 4]), \
[b_sz, 4, y_sz, x_sz, u_sz*v_sz]), perm=[0, 4, 2, 3, 1])
c1 = cnn_layer_3D(x, [3, 3, 3, 4, 8], [8], 'c1')
c2 = cnn_layer_3D(c1, [3, 3, 3, 8, 8], [8], 'c2')
c3 = cnn_layer_3D(c2, [3, 3, 3, 8, 8], [8], 'c3')
c4 = cnn_layer_3D(c3, [3, 3, 3, 8, 8], [8], 'c4')
c5 = tf.tanh(cnn_layer_3D_plain(c4, [3, 3, 3, 8, 3], [3], 'c5'))
output = tf.transpose(tf.reshape(tf.transpose(c5, perm=[0, 4, 2, 3, 1]), \
[b_sz, 3, y_sz, x_sz, v_sz, u_sz]), perm=[0, 2, 3, 4, 5, 1]) + shear
return output
# +
#full forward model
def forward_model(x, lfsize, disp_mult):
with tf.variable_scope('forward_model') as scope:
#predict ray depths from input image
ray_depths = depth_network(x, lfsize, disp_mult, 'ray_depths')
#shear input image by predicted ray depths to render Lambertian light field
lf_shear_r = depth_rendering(x[:, :, :, 0], ray_depths, lfsize)
lf_shear_g = depth_rendering(x[:, :, :, 1], ray_depths, lfsize)
lf_shear_b = depth_rendering(x[:, :, :, 2], ray_depths, lfsize)
lf_shear = tf.stack([lf_shear_r, lf_shear_g, lf_shear_b], axis=5)
#occlusion/non-Lambertian prediction network
shear_and_depth = tf.stack([lf_shear_r, lf_shear_g, lf_shear_b, tf.stop_gradient(ray_depths)], axis=5)
y = occlusions_network(shear_and_depth, lf_shear, lfsize, 'occlusions')
return ray_depths, lf_shear, y
# +
#render light field from input image and ray depths
def depth_rendering(central, ray_depths, lfsize):
with tf.variable_scope('depth_rendering') as scope:
b_sz = tf.shape(central)[0]
y_sz = tf.shape(central)[1]
x_sz = tf.shape(central)[2]
u_sz = lfsize[2]
v_sz = lfsize[3]
central = tf.expand_dims(tf.expand_dims(central, 3), 4)
#create and reparameterize light field grid
b_vals = tf.to_float(tf.range(b_sz))
v_vals = tf.to_float(tf.range(v_sz)) - tf.to_float(v_sz)/2.0
u_vals = tf.to_float(tf.range(u_sz)) - tf.to_float(u_sz)/2.0
y_vals = tf.to_float(tf.range(y_sz))
x_vals = tf.to_float(tf.range(x_sz))
b, y, x, v, u = tf.meshgrid(b_vals, y_vals, x_vals, v_vals, u_vals, indexing='ij')
#warp coordinates by ray depths
y_t = y + v * ray_depths
x_t = x + u * ray_depths
v_r = tf.zeros_like(b)
u_r = tf.zeros_like(b)
#indices for linear interpolation
b_1 = tf.to_int32(b)
y_1 = tf.to_int32(tf.floor(y_t))
y_2 = y_1 + 1
x_1 = tf.to_int32(tf.floor(x_t))
x_2 = x_1 + 1
v_1 = tf.to_int32(v_r)
u_1 = tf.to_int32(u_r)
y_1 = tf.clip_by_value(y_1, 0, y_sz-1)
y_2 = tf.clip_by_value(y_2, 0, y_sz-1)
x_1 = tf.clip_by_value(x_1, 0, x_sz-1)
x_2 = tf.clip_by_value(x_2, 0, x_sz-1)
#assemble interpolation indices
interp_pts_1 = tf.stack([b_1, y_1, x_1, v_1, u_1], -1)
interp_pts_2 = tf.stack([b_1, y_2, x_1, v_1, u_1], -1)
interp_pts_3 = tf.stack([b_1, y_1, x_2, v_1, u_1], -1)
interp_pts_4 = tf.stack([b_1, y_2, x_2, v_1, u_1], -1)
#gather light fields to be interpolated
lf_1 = tf.gather_nd(central, interp_pts_1)
lf_2 = tf.gather_nd(central, interp_pts_2)
lf_3 = tf.gather_nd(central, interp_pts_3)
lf_4 = tf.gather_nd(central, interp_pts_4)
#calculate interpolation weights
y_1_f = tf.to_float(y_1)
x_1_f = tf.to_float(x_1)
d_y_1 = 1.0 - (y_t - y_1_f)
d_y_2 = 1.0 - d_y_1
d_x_1 = 1.0 - (x_t - x_1_f)
d_x_2 = 1.0 - d_x_1
w1 = d_y_1 * d_x_1
w2 = d_y_2 * d_x_1
w3 = d_y_1 * d_x_2
w4 = d_y_2 * d_x_2
lf = tf.add_n([w1*lf_1, w2*lf_2, w3*lf_3, w4*lf_4])
return lf
# +
#resample ray depths for depth consistency regularization
def transform_ray_depths(ray_depths, u_step, v_step, lfsize):
with tf.variable_scope('transform_ray_depths') as scope:
b_sz = tf.shape(ray_depths)[0]
y_sz = tf.shape(ray_depths)[1]
x_sz = tf.shape(ray_depths)[2]
u_sz = lfsize[2]
v_sz = lfsize[3]
#create and reparameterize light field grid
b_vals = tf.to_float(tf.range(b_sz))
v_vals = tf.to_float(tf.range(v_sz)) - tf.to_float(v_sz)/2.0
u_vals = tf.to_float(tf.range(u_sz)) - tf.to_float(u_sz)/2.0
y_vals = tf.to_float(tf.range(y_sz))
x_vals = tf.to_float(tf.range(x_sz))
b, y, x, v, u = tf.meshgrid(b_vals, y_vals, x_vals, v_vals, u_vals, indexing='ij')
#warp coordinates by ray depths
y_t = y + v_step * ray_depths
x_t = x + u_step * ray_depths
v_t = v - v_step + tf.to_float(v_sz)/2.0
u_t = u - u_step + tf.to_float(u_sz)/2.0
#indices for linear interpolation
b_1 = tf.to_int32(b)
y_1 = tf.to_int32(tf.floor(y_t))
y_2 = y_1 + 1
x_1 = tf.to_int32(tf.floor(x_t))
x_2 = x_1 + 1
v_1 = tf.to_int32(v_t)
u_1 = tf.to_int32(u_t)
y_1 = tf.clip_by_value(y_1, 0, y_sz-1)
y_2 = tf.clip_by_value(y_2, 0, y_sz-1)
x_1 = tf.clip_by_value(x_1, 0, x_sz-1)
x_2 = tf.clip_by_value(x_2, 0, x_sz-1)
v_1 = tf.clip_by_value(v_1, 0, v_sz-1)
u_1 = tf.clip_by_value(u_1, 0, u_sz-1)
#assemble interpolation indices
interp_pts_1 = tf.stack([b_1, y_1, x_1, v_1, u_1], -1)
interp_pts_2 = tf.stack([b_1, y_2, x_1, v_1, u_1], -1)
interp_pts_3 = tf.stack([b_1, y_1, x_2, v_1, u_1], -1)
interp_pts_4 = tf.stack([b_1, y_2, x_2, v_1, u_1], -1)
#gather light fields to be interpolated
lf_1 = tf.gather_nd(ray_depths, interp_pts_1)
lf_2 = tf.gather_nd(ray_depths, interp_pts_2)
lf_3 = tf.gather_nd(ray_depths, interp_pts_3)
lf_4 = tf.gather_nd(ray_depths, interp_pts_4)
#calculate interpolation weights
y_1_f = tf.to_float(y_1)
x_1_f = tf.to_float(x_1)
d_y_1 = 1.0 - (y_t - y_1_f)
d_y_2 = 1.0 - d_y_1
d_x_1 = 1.0 - (x_t - x_1_f)
d_x_2 = 1.0 - d_x_1
w1 = d_y_1 * d_x_1
w2 = d_y_2 * d_x_1
w3 = d_y_1 * d_x_2
w4 = d_y_2 * d_x_2
lf = tf.add_n([w1*lf_1, w2*lf_2, w3*lf_3, w4*lf_4])
return lf
# +
#loss to encourage consistency of ray depths corresponding to the same scene point
def depth_consistency_loss(x, lfsize):
x_u = transform_ray_depths(x, 1.0, 0.0, lfsize)
x_v = transform_ray_depths(x, 0.0, 1.0, lfsize)
x_uv = transform_ray_depths(x, 1.0, 1.0, lfsize)
d1 = (x[:,:,:,1:,1:]-x_u[:,:,:,1:,1:])
d2 = (x[:,:,:,1:,1:]-x_v[:,:,:,1:,1:])
d3 = (x[:,:,:,1:,1:]-x_uv[:,:,:,1:,1:])
l1 = tf.reduce_mean(tf.abs(d1)+tf.abs(d2)+tf.abs(d3))
return l1
# +
#spatial TV loss (l1 of spatial derivatives)
def image_derivs(x, nc):
dy = tf.nn.depthwise_conv2d(x, tf.tile(tf.expand_dims(tf.expand_dims([[1.0, 2.0, 1.0], [0.0, 0.0, 0.0], [-1.0, -2.0, -1.0]], 2), 3), [1, 1, nc, 1]), strides=[1, 1, 1, 1], padding='VALID')
dx = tf.nn.depthwise_conv2d(x, tf.tile(tf.expand_dims(tf.expand_dims([[1.0, 0.0, -1.0], [2.0, 0.0, -2.0], [1.0, 0.0, -1.0]], 2), 3), [1, 1, nc, 1]), strides=[1, 1, 1, 1], padding='VALID')
return dy, dx
def tv_loss(x):
b_sz = tf.shape(x)[0]
y_sz = tf.shape(x)[1]
x_sz = tf.shape(x)[2]
u_sz = lfsize[2]
v_sz = lfsize[3]
temp = tf.reshape(x, [b_sz, y_sz, x_sz, u_sz*v_sz])
dy, dx = image_derivs(temp, u_sz*v_sz)
l1 = tf.reduce_mean(tf.abs(dy)+tf.abs(dx))
return l1
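# The TV penalty above applies Sobel filters inside the TensorFlow graph; the idea can be sketched with plain finite differences in numpy (a simplified stand-in for illustration, not the notebook's exact filters):

```python
import numpy as np

def tv_l1(img):
    # L1 total variation: mean absolute difference between neighboring pixels
    dy = img[1:, :] - img[:-1, :]
    dx = img[:, 1:] - img[:, :-1]
    return np.abs(dy).mean() + np.abs(dx).mean()

flat = np.ones((4, 4))                # constant image: no variation
ramp = np.arange(16.0).reshape(4, 4)  # varies by 4 per row, 1 per column
print(tv_l1(flat))  # → 0.0
print(tv_l1(ramp))  # → 5.0 (mean |dy| = 4 plus mean |dx| = 1)
```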
# +
#normalize to between -1 and 1, given input between 0 and 1
def normalize_lf(lf):
return 2.0*(lf-0.5)
# +
#input pipeline
def process_lf(lf, num_crops, lfsize, patchsize):
gamma_val = tf.random_uniform(shape=[], minval=0.4, maxval=1.0) #random gamma for data augmentation (change at test time, I suggest 0.4-0.5)
lf = normalize_lf(tf.image.adjust_gamma(tf.to_float(lf[:lfsize[0]*14, :lfsize[1]*14, :])/255.0, gamma=gamma_val))
lf = tf.transpose(tf.reshape(lf, [lfsize[0], 14, lfsize[1], 14, 3]), [0, 2, 1, 3, 4])
lf = lf[:, :, (14//2)-(lfsize[2]//2):(14//2)+(lfsize[2]//2), (14//2)-(lfsize[3]//2):(14//2)+(lfsize[3]//2), :]
aif = lf[:, :, lfsize[2]//2, lfsize[3]//2, :]
aif_list = []
lf_list = []
for i in range(num_crops):
r = tf.random_uniform(shape=[], minval=0, maxval=tf.shape(lf)[0]-patchsize[0], dtype=tf.int32)
c = tf.random_uniform(shape=[], minval=0, maxval=tf.shape(lf)[1]-patchsize[1], dtype=tf.int32)
aif_list.append(aif[r:r+patchsize[0], c:c+patchsize[1], :])
lf_list.append(lf[r:r+patchsize[0], c:c+patchsize[1], :, :, :])
return aif_list, lf_list
def read_lf(filename_queue, num_crops, lfsize, patchsize):
value = tf.read_file(filename_queue[0])
lf = tf.image.decode_image(value, channels=3)
aif_list, lf_list = process_lf(lf, num_crops, lfsize, patchsize)
return aif_list, lf_list
def input_pipeline(filenames, lfsize, patchsize, batchsize, num_crops):
filename_queue = tf.train.slice_input_producer([filenames], shuffle=True)
example_list = [read_lf(filename_queue, num_crops, lfsize, patchsize) for _ in range(4)] #number of threads for populating queue
min_after_dequeue = 0
capacity = 8
aif_batch, lf_batch = tf.train.shuffle_batch_join(example_list, batch_size=batchsize, capacity=capacity,
min_after_dequeue=min_after_dequeue, enqueue_many=True,
shapes=[[patchsize[0], patchsize[1], 3],
[patchsize[0], patchsize[1], lfsize[2], lfsize[3], 3]])
return aif_batch, lf_batch
# +
train_path = '/home/teddy/Desktop/CP-HW/Flowers_8bit' #path to training examples
train_filenames = [os.path.join(train_path, f) for f in os.listdir(train_path) if not f.startswith('.')]
aif_batch, lf_batch = input_pipeline(train_filenames, lfsize, patchsize, batchsize, num_crops)
# +
#forward model
ray_depths, lf_shear, y = forward_model(aif_batch, lfsize, disp_mult)
#training losses to minimize
lam_tv = 0.01
lam_dc = 0.005
with tf.name_scope('loss'):
shear_loss = tf.reduce_mean(tf.abs(lf_shear-lf_batch))
output_loss = tf.reduce_mean(tf.abs(y-lf_batch))
tv_loss = lam_tv * tv_loss(ray_depths)
depth_consistency_loss = lam_dc * depth_consistency_loss(ray_depths, lfsize)
train_loss = shear_loss + output_loss + tv_loss + depth_consistency_loss
with tf.name_scope('train'):
train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(train_loss)
# +
#tensorboard summaries
tf.summary.scalar('shear_loss', shear_loss)
tf.summary.scalar('output_loss', output_loss)
tf.summary.scalar('tv_loss', tv_loss)
tf.summary.scalar('depth_consistency_loss', depth_consistency_loss)
tf.summary.scalar('train_loss', train_loss)
tf.summary.histogram('ray_depths', ray_depths)
tf.summary.image('input_image', aif_batch)
tf.summary.image('lf_shear', tf.reshape(tf.transpose(lf_shear, perm=[0, 3, 1, 4, 2, 5]),
[batchsize, patchsize[0]*lfsize[2], patchsize[1]*lfsize[3], 3]))
tf.summary.image('lf_output', tf.reshape(tf.transpose(y, perm=[0, 3, 1, 4, 2, 5]),
[batchsize, patchsize[0]*lfsize[2], patchsize[1]*lfsize[3], 3]))
tf.summary.image('ray_depths', tf.reshape(tf.transpose(ray_depths, perm=[0, 3, 1, 4, 2]),
[batchsize, patchsize[0]*lfsize[2], patchsize[1]*lfsize[3], 1]))
merged = tf.summary.merge_all()
# +
logdir = 'logs/train/' #path to store logs
checkpointdir = 'checkpoints/' #path to store checkpoints
with tf.Session() as sess:
train_writer = tf.summary.FileWriter(logdir, sess.graph)
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer()) #initialize variables
coord = tf.train.Coordinator() #coordinator for input queue threads
threads = tf.train.start_queue_runners(sess=sess, coord=coord) #start input queue threads
for i in range(train_iters):
        #training step
_ = sess.run(train_step)
#save training summaries
if (i+1) % 1 == 0: #can change the frequency of writing summaries if desired
print('training step: ', i)
trainsummary = sess.run(merged)
train_writer.add_summary(trainsummary, i)
#save checkpoint
if (i+1) % 100 == 0:
saver.save(sess, checkpointdir + 'model.ckpt', global_step=i)
#cleanup
train_writer.close()
coord.request_stop()
coord.join(threads)
| Local_Light_Field_Synthesis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# # Claims classification with Keras: The Python Deep Learning Library
#
# In this notebook, you will train a classification model for claim text that will predict `1` if the claim is an auto insurance claim or `0` if it is a home insurance claim. The model will be built using a type of DNN called the Long Short-Term Memory (LSTM) recurrent neural network using TensorFlow via the Keras library.
#
# This notebook will walk you through the text analytic process that consists of:
#
# - Example word analogy with Glove word embeddings
# - Vectorizing training data using GloVe word embeddings
# - Creating and training an LSTM-based classifier model
# - Using the model to predict classifications
# ## Prepare modules
#
# This notebook will use the Keras library to build and train the classifier.
# +
import string
import re
import os
import numpy as np
import pandas as pd
import urllib.request
import tensorflow as tf
import keras
from keras import models, layers, optimizers, regularizers
from keras.models import Sequential
from keras.layers import Dense, Activation, Embedding, LSTM
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
print('Keras version: ', keras.__version__)
print('Tensorflow version: ', tf.__version__)
# -
# **Let's download the pretrained GloVe word embeddings and load them in this notebook.**
#
# This will create a `dictionary` of size **400,000** words, and the corresponding `GloVe word vectors` for words in the dictionary. Each word vector is of size: 50, thus the dimensionality of the word embeddings used here is **50**.
#
# *The next cell might take a couple of minutes to run*
# +
words_list_url = ('https://quickstartsws9073123377.blob.core.windows.net/'
'azureml-blobstore-0d1c4218-a5f9-418b-bf55-902b65277b85/glove50d/wordsList.npy')
word_vectors_url = ('https://quickstartsws9073123377.blob.core.windows.net/'
'azureml-blobstore-0d1c4218-a5f9-418b-bf55-902b65277b85/glove50d/wordVectors.npy')
word_vectors_dir = './word_vectors'
os.makedirs(word_vectors_dir, exist_ok=True)
urllib.request.urlretrieve(words_list_url, os.path.join(word_vectors_dir, 'wordsList.npy'))
urllib.request.urlretrieve(word_vectors_url, os.path.join(word_vectors_dir, 'wordVectors.npy'))
dictionary = np.load(os.path.join(word_vectors_dir, 'wordsList.npy'))
dictionary = dictionary.tolist()
dictionary = [word.decode('UTF-8') for word in dictionary]
print('Loaded the dictionary! Dictionary size: ', len(dictionary))
word_vectors = np.load(os.path.join(word_vectors_dir, 'wordVectors.npy'))
print ('Loaded the word vectors! Shape of the word vectors: ', word_vectors.shape)
# -
# **Create the word contractions map. The map is going to be used to expand contractions in our corpus (for example "can't" becomes "cannot").**
contractions_url = ('https://quickstartsws9073123377.blob.core.windows.net/'
'azureml-blobstore-0d1c4218-a5f9-418b-bf55-902b65277b85/glove50d/contractions.xlsx')
contractions_df = pd.read_excel(contractions_url)
contractions = dict(zip(contractions_df.original, contractions_df.expanded))
print('Review first 10 entries from the contractions map')
print(contractions_df.head(10))
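# The idea behind the contractions map can be illustrated with a toy dictionary (a hypothetical mini-map for illustration only, not the downloaded `contractions.xlsx`):

```python
# Hypothetical mini-map for illustration only
toy_contractions = {"can't": "cannot", "won't": "will not", "it's": "it is"}

def expand(text, c_map):
    # Replace any token found in the map with its expansion
    return ' '.join(c_map.get(tok, tok) for tok in text.lower().split())

print(expand("I can't believe it's broken", toy_contractions))
# → i cannot believe it is broken
```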
# ## Word analogy example with GloVe word embeddings
# GloVe represents each word in the dictionary as a vector. We can use word vectors for predicting word analogies.
#
# See example below that solves the following analogy: **father->mother :: king->?**
# Cosine similarity is a measure of how similar two words are. This helper function takes the vectors of two words and returns their cosine similarity, which ranges from -1 to 1. For synonyms the cosine similarity will be close to 1, and for antonyms it will be close to -1.
def cosine_similarity(u, v):
dot = u.dot(v)
norm_u = np.linalg.norm(u)
norm_v = np.linalg.norm(v)
cosine_similarity = dot/norm_u/norm_v
return cosine_similarity
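# A quick numeric check of the helper with toy 2-D vectors (a self-contained sketch, restating the function above): identical vectors score 1, opposite vectors score -1, and orthogonal vectors score 0.

```python
import numpy as np

def cos_sim(u, v):
    # Cosine of the angle between u and v
    return u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 0.0])
print(cos_sim(u, u))                      # → 1.0
print(cos_sim(u, np.array([-1.0, 0.0])))  # → -1.0
print(cos_sim(u, np.array([0.0, 1.0])))   # → 0.0
```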
# Let’s review the vector for the words **father**, **mother**, and **king**
father = word_vectors[dictionary.index('father')]
mother = word_vectors[dictionary.index('mother')]
king = word_vectors[dictionary.index('king')]
print(father)
print('')
print(mother)
print('')
print(king)
# To solve for the analogy, we need to solve for x in the following equation:
#
# **mother – father = x - king**
#
# Thus, **x = mother - father + king**
x = mother - father + king
# **Next, we will find the word whose word vector is closest to the vector x computed above**
#
# To limit the computation cost, we will identify the best word from a list of possible answers instead of searching the entire dictionary.
# +
answers = ['women', 'prince', 'princess', 'england', 'goddess', 'diva', 'empress',
'female', 'lady', 'monarch', 'title', 'queen', 'sovereign', 'ruler',
'male', 'crown', 'majesty', 'royal', 'cleopatra', 'elizabeth', 'victoria',
'throne', 'internet', 'sky', 'machine', 'learning', 'fairy']
df = pd.DataFrame(columns = ['word', 'cosine_similarity'])
# Find the similarity of each word in answers with x
for w in answers:
sim = cosine_similarity(word_vectors[dictionary.index(w)], x)
df = df.append({'word': w, 'cosine_similarity': sim}, ignore_index=True)
df.sort_values(['cosine_similarity'], ascending=False, inplace=True)
print(df)
# -
# **From the results above, you can observe the vector for the word `queen` is most similar to the vector `x`.**
# ## Prepare the training data
#
# Contoso Ltd has provided a small document containing examples of the text they receive as claim text. They have provided this in a text file with one line per sample claim.
#
# Run the following cell to download and examine the contents of the file. Take a moment to read the claims (you may find some of them rather comical!).
# +
data_location = './data'
base_data_url = 'https://databricksdemostore.blob.core.windows.net/data/05.03/'
filesToDownload = ['claims_text.txt', 'claims_labels.txt']
os.makedirs(data_location, exist_ok=True)
for file in filesToDownload:
data_url = os.path.join(base_data_url, file)
local_file_path = os.path.join(data_location, file)
urllib.request.urlretrieve(data_url, local_file_path)
print('Downloaded file: ', file)
claims_corpus = [claim for claim in open(os.path.join(data_location, 'claims_text.txt'))]
claims_corpus
# -
# In addition to the claims sample, Contoso Ltd has also provided a document that labels each of the sample claims as either 0 ("home insurance claim") or 1 ("auto insurance claim"). This too is a text file with one row per sample, in the same order as the claim text.
#
# Run the following cell to examine the contents of the supplied claims_labels.txt file:
labels = [int(re.sub("\n", "", label)) for label in open(os.path.join(data_location, 'claims_labels.txt'))]
print(len(labels))
print(labels[0:5]) # first 5 labels
print(labels[-5:]) # last 5 labels
# As you can see from the above output, the values are integers 0 or 1. In order to use these as labels with which to train our model, we need to convert these integer values to categorical values (think of them like enums in other programming languages).
#
# We can use the `to_categorical` method from `keras.utils` to convert these values into binary categorical values. Run the following cell:
labels = to_categorical(labels, 2)
print(labels.shape)
print()
print(labels[0:2]) # first 2 categorical labels
print()
print(labels[-2:]) # last 2 categorical labels
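# What `to_categorical` does here can be sketched in plain numpy (a minimal one-hot encoder for illustration, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Row i gets a 1 in column labels[i], zeros elsewhere
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 1, 1], 2))
# → [[1. 0.]
#    [0. 1.]
#    [0. 1.]]
```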
# Now that we have our claims text and labels loaded, we are ready to begin our first step in the text analytics process, which is to normalize the text.
# ### Process the claims corpus
#
# - Lowercase all words
# - Expand contractions (for example "can't" becomes "cannot")
# - Remove special characters (like punctuation)
# - Convert the list of words in the claims text to a list of corresponding indices of those words in the dictionary. Note that the order of the words as they appear in the written claims is maintained.
#
# Run the next cell to process the claims corpus.
# +
def remove_special_characters(token):
pattern = re.compile('[{}]'.format(re.escape(string.punctuation)))
filtered_token = pattern.sub('', token)
return filtered_token
def convert_to_indices(corpus, dictionary, c_map, unk_word_index = 399999):
sequences = []
for i in range(len(corpus)):
tokens = corpus[i].split()
sequence = []
for word in tokens:
word = word.lower()
if word in c_map:
resolved_words = c_map[word].split()
for resolved_word in resolved_words:
try:
word_index = dictionary.index(resolved_word)
sequence.append(word_index)
except ValueError:
                        sequence.append(unk_word_index) #Index for unknown words
else:
try:
clean_word = remove_special_characters(word)
if len(clean_word) > 0:
word_index = dictionary.index(clean_word)
sequence.append(word_index)
except ValueError:
                    sequence.append(unk_word_index) #Index for unknown words
sequences.append(sequence)
return sequences
claims_corpus_indices = convert_to_indices(claims_corpus, dictionary, contractions)
# -
# **Review the indices of one sample claim**
print(remove_special_characters(claims_corpus[5]).split())
print()
print('Ordered list of indices for the above claim')
print(claims_corpus_indices[5])
print('')
print('For example, the index of second word in the claims text \"pedestrian\" is: ', dictionary.index('pedestrian'))
# **Create fixed length vectors**
#
# The number of words in a claim varies from claim to claim, but we need input vectors of a fixed size. We will use the utility function `pad_sequences` from `keras.preprocessing.sequence` to help us create fixed-size vectors (size = 125) of word indices.
# +
maxSeqLength = 125
X = pad_sequences(claims_corpus_indices, maxlen=maxSeqLength, padding='pre', truncating='post')
print('Review the new fixed size vector for a sample claim')
print(remove_special_characters(claims_corpus[5]).split())
print()
print(X[5])
print('')
print('Length of the vector: ', len(X[5]))
# -
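# The padding and truncation behavior used above can be sketched in pure Python (a simplified stand-in for `pad_sequences` with `padding='pre'` and `truncating='post'`, not the Keras implementation):

```python
def pad_pre_truncate_post(seq, maxlen, value=0):
    # Keep the first maxlen items ('post' truncation),
    # otherwise pad with zeros at the front ('pre' padding)
    if len(seq) >= maxlen:
        return seq[:maxlen]
    return [value] * (maxlen - len(seq)) + seq

print(pad_pre_truncate_post([7, 8, 9], 5))           # → [0, 0, 7, 8, 9]
print(pad_pre_truncate_post([1, 2, 3, 4, 5, 6], 5))  # → [1, 2, 3, 4, 5]
```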
# ## Build the LSTM recurrent neural network
#
# Now that you have preprocessed the input features from training text data, you are ready to build the classifier. In this case, we will build a LSTM recurrent neural network. The network will have a word embedding layer that will convert the word indices to GloVe word vectors. The GloVe word vectors are then passed to the LSTM layer, followed by a binary classifier output layer.
#
# Run the following cell to build the structure for your neural network:
embedding_layer = Embedding(word_vectors.shape[0],
word_vectors.shape[1],
weights=[word_vectors],
input_length=maxSeqLength,
trainable=False)
model = Sequential()
model.add(embedding_layer)
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(2, activation='sigmoid'))
model.summary()
# ## Train the neural network
# First, we will split the data into two sets: (1) training set and (2) validation or test set. The validation set accuracy will be used to measure the performance of the model.
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
# -
# We will use the `Adam` optimization algorithm to train the model. Also, given that the problem is of type `Binary Classification`, we are using the `Sigmoid` activation function for the output layer and the `Binary Crossentropy` as the loss function.
opt = keras.optimizers.Adam(lr=0.001)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# Now we are ready to let the DNN learn by fitting it against our training data and labels. We have defined the batch size and the number of epochs for our training.
#
# Run the following cell to fit your model against the data:
epochs = 100
batch_size = 16
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=(X_test, y_test))
# Take a look at the final output for the value "val_accuracy". This stands for validation set accuracy. If you think of random chance as having a 50% accuracy, is your model better than random?
#
# It's OK if it's not much better than random at this point; this is only your first model! The typical data science process would continue with many more iterations, taking different actions to improve the model accuracy, including:
# - Acquiring more labeled documents for training
# - Regularization to prevent overfitting
# - Adjusting the model hyperparameters, such as the number of layers, number of nodes per layer, and learning rate
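# To make one of these concrete, here is a minimal sketch of the dropout idea in plain Python (illustrative only; in Keras you would use a `Dropout` layer or the `dropout`/`recurrent_dropout` arguments already passed to the LSTM above):

```python
import random

def dropout(activations, rate, rng=random.Random(0), train=True):
    # inverted dropout: drop each unit with probability `rate` and
    # scale the survivors so the expected activation is unchanged
    if not train or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

# Randomly zeroing units during training prevents the network from relying too heavily on any single feature, which is one common way to reduce overfitting.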
# ## Test classifying claims
#
# Now that you have constructed a model, try it out against a set of claims. Recall that we need to first preprocess the text.
#
# Run the following cell to prepare our test data:
# +
test_claim = ['I crashed my car into a pole.',
'The flood ruined my house.',
'I lost control of my car and fell in the river.']
test_claim_indices = convert_to_indices(test_claim, dictionary, contractions)
test_data = pad_sequences(test_claim_indices, maxlen=maxSeqLength, padding='pre', truncating='post')
# -
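# The `pad_sequences` call above left-pads short index sequences with zeros and truncates long ones to `maxSeqLength` tokens. A pure-Python sketch of that behavior for a single sequence (illustrative; the real Keras utility handles whole batches and array dtypes):

```python
def pad_sequence(seq, maxlen, pad_value=0):
    # mimic pad_sequences(padding='pre', truncating='post') for one sequence:
    # long sequences keep their first `maxlen` tokens,
    # short ones are left-padded with `pad_value`
    if len(seq) >= maxlen:
        return list(seq[:maxlen])
    return [pad_value] * (maxlen - len(seq)) + list(seq)
```

# Pre-padding keeps the meaningful tokens at the end of the sequence, closest to where the LSTM produces its final state.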
# Now use the model to predict the classification:
pred = model.predict(test_data)
pred_label = pred.argmax(axis=1)
pred_df = pd.DataFrame(np.column_stack((pred,pred_label)), columns=['class_0', 'class_1', 'label'])
pred_df.label = pred_df.label.astype(int)
print('Predictions')
pred_df
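# The `argmax(axis=1)` step above maps each row of class scores to the index of its largest entry, i.e. the predicted label. In plain Python the same mapping looks like this (a throwaway illustration, not part of the pipeline):

```python
def argmax_labels(score_rows):
    # index of the largest score in each row = predicted class label
    return [max(range(len(row)), key=row.__getitem__) for row in score_rows]
```

# Like NumPy's `argmax`, ties resolve to the first (lowest) index.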
# ## Model exporting and importing
#
# Now that you have a working model, you need to export the trained model to a file so that it can be used downstream by the deployed web service.
#
# *The next two cells might take a couple of minutes to run*
#
# To export the model run the following cell:
# +
import joblib
output_folder = './output'
model_filename = 'final_model.hdf5'
os.makedirs(output_folder, exist_ok=True)
model.save(os.path.join(output_folder, model_filename))
# -
# To test re-loading the model into the same Notebook instance, run the following cell:
from keras.models import load_model
loaded_model = load_model(os.path.join(output_folder, model_filename))
loaded_model.summary()
# As before, you can use the model to run predictions.
#
# Run the following cells to try the prediction with the re-loaded model:
pred = loaded_model.predict(test_data)
pred_label = pred.argmax(axis=1)
pred_df = pd.DataFrame(np.column_stack((pred,pred_label)), columns=['class_0', 'class_1', 'label'])
pred_df.label = pred_df.label.astype(int)
print('Predictions')
pred_df
| Hands-on lab/notebooks/03 Claim Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://www.microsoft.com/en-us/research/uploads/prod/2020/05/Segmentation.png" width="400"/>
# # Customer Segmentation: Estimate Individualized Responses to Incentives
#
# Nowadays, business decision makers rely on estimating the causal effect of interventions to answer what-if questions about shifts in strategy, such as promoting a specific product with a discount, adding new features to a website, or increasing investment from a sales team. However, rather than deciding whether to take a single action for all users, people are increasingly interested in understanding how different users respond differently to the alternatives. Identifying the characteristics of users with the strongest response to an intervention helps define rules for segmenting future users into groups. This can help optimize the policy to use the least resources and get the most profit.
#
# In this case study, we will use a personalized pricing example to explain how the [EconML](https://aka.ms/econml) library could fit into this problem and provide robust and reliable causal solutions.
#
# ### Summary
#
# 1. [Background](#background)
# 2. [Data](#data)
# 3. [Get Causal Effects with EconML](#estimate)
# 4. [Understand Treatment Effects with EconML](#interpret)
# 5. [Make Policy Decisions with EconML](#policy)
# 6. [Conclusions](#conclusion)
#
# # Background <a id="background"></a>
#
# <img src="https://cdn.pixabay.com/photo/2018/08/16/11/59/radio-3610287_960_720.png" width="400" />
#
# The global online media market has been growing fast over the years. Media companies are always interested in attracting more users into the market and encouraging them to buy more songs or become members. In this example, we'll consider a scenario where a media company runs an experiment: giving a small discount (10%, 20%, or none) to current users, based on their income level, in order to boost the likelihood of purchase. The goal is to understand the **heterogeneous price elasticity of demand** for people with different income levels, learning which users would respond most strongly to a small discount. Furthermore, the end goal is to make sure that, despite decreasing the price for some consumers, demand rises enough to boost overall revenue.
#
# EconML’s `DML` based estimators can be used to take the discount variation in existing data, along with a rich set of user features, to estimate heterogeneous price sensitivities that vary with multiple customer features. Then, the `SingleTreeCateInterpreter` provides a presentation-ready summary of the key features that explain the biggest differences in responsiveness to a discount, and the `SingleTreePolicyInterpreter` recommends a policy on who should receive a discount in order to increase revenue (not only demand), which could help the company to set an optimal price for those users in the future.
# +
# Some imports to get us started
# Utilities
import os
import urllib.request
import numpy as np
import pandas as pd
# Generic ML imports
from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import GradientBoostingRegressor
# EconML imports
from econml.dml import LinearDML, ForestDML
from econml.cate_interpreter import SingleTreeCateInterpreter, SingleTreePolicyInterpreter
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# # Data <a id="data"></a>
#
#
# The dataset* has ~10,000 observations and includes 9 continuous and categorical variables that represent user's characteristics and online behaviour history such as age, log income, previous purchase, previous online time per week, etc.
#
# We define the following variables:
#
# Feature Name|Type|Details
# :--- |:---|:---
# **account_age** |W| user's account age
# **age** |W|user's age
# **avg_hours** |W| the average hours user was online per week in the past
# **days_visited** |W| the average number of days user visited the website per week in the past
# **friend_count** |W| number of friends user connected in the account
# **has_membership** |W| whether the user had membership
# **is_US** |W| whether the user accesses the website from the US
# **songs_purchased** |W| the average songs user purchased per week in the past
# **income** |X| user's income
# **price** |T| the price user was exposed during the discount season (baseline price * small discount)
# **demand** |Y| songs user purchased during the discount season
#
# **To protect the privacy of the company, we use simulated data as an example here. The data is synthetically generated and the feature distributions don't correspond to real distributions. However, the features have kept their names and meanings.*
#
#
# The treatment and outcome are generated using the following functions:
# $$
# T =
# \begin{cases}
# 1 & \text{with } p=0.2, \\
# 0.9 & \text{with }p=0.3, & \text{if income}<1 \\
# 0.8 & \text{with }p=0.5, \\
# \\
# 1 & \text{with }p=0.7, \\
# 0.9 & \text{with }p=0.2, & \text{if income}\ge1 \\
# 0.8 & \text{with }p=0.1, \\
# \end{cases}
# $$
#
#
# \begin{align}
# \gamma(X) & = -3 - 14 \cdot \{\text{income}<1\} \\
# \beta(X,W) & = 20 + 0.5 \cdot \text{avg_hours} + 5 \cdot \{\text{days_visited}>4\} \\
# Y &= \gamma(X) \cdot T + \beta(X,W)
# \end{align}
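# The treatment assignment above can be simulated directly. A small sketch of that sampling rule (the function name is ours, for illustration only; it is not part of the dataset generation code):

```python
import random

def sample_price(income, rng):
    # discount distribution depends on income, per the DGP above:
    # low-income users are more likely to receive a discount
    prices = [1.0, 0.9, 0.8]
    weights = [0.2, 0.3, 0.5] if income < 1 else [0.7, 0.2, 0.1]
    return rng.choices(prices, weights=weights)[0]
```

# Note that because treatment probabilities depend on income, income is a confounder that any causal estimate must control for.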
#
#
# Import the sample pricing data
file_url = "https://msalicedatapublic.blob.core.windows.net/datasets/Pricing/pricing_sample.csv"
train_data = pd.read_csv(file_url)
# Data sample
train_data.head()
# Define estimator inputs
Y = train_data["demand"] # outcome of interest
T = train_data["price"] # intervention, or treatment
X = train_data[["income"]] # features
W = train_data.drop(columns=["demand", "price", "income"]) # confounders
# Get test data
X_test = np.linspace(0, 5, 100).reshape(-1, 1)
X_test_data = pd.DataFrame(X_test, columns=["income"])
# # Get Causal Effects with EconML <a id="estimate"></a>
# To learn the price elasticity on demand as a function of income, we fit the model as follows:
#
#
# \begin{align}
# log(Y) & = \theta(X) \cdot log(T) + f(X,W) + \epsilon \\
# log(T) & = g(X,W) + \eta
# \end{align}
#
#
# where $\epsilon, \eta$ are uncorrelated error terms.
#
# The models we fit here aren't an exact match for the data generation function above, but if they are a good approximation, they will allow us to create a good discount policy. Although the model is misspecified, we hope to see that our `DML` based estimators can still capture the right trend of $\theta(X)$ and that the recommended policy beats other baseline policies (such as always giving a discount) on revenue. Because of the mismatch between the data generating process and the model we're fitting, there isn't a single true $\theta(X)$ (the true elasticity varies with not only X but also T and W), but given how we generate the data above, we can still calculate the range of true $\theta(X)$ to compare against.
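# The estimation strategy above is the classic DML "partialling-out" recipe: residualize both $log(Y)$ and $log(T)$ on the controls, then regress residual on residual to recover $\theta$. A deliberately tiny pure-Python sketch with a single control and plain OLS (illustrative only; EconML uses cross-fitting and flexible ML models for the nuisance functions):

```python
def ols(x, y):
    # least-squares slope and intercept for a single regressor
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def dml_theta(log_y, log_t, w):
    # partial out the control w from both outcome and treatment,
    # then regress the outcome residuals on the treatment residuals
    sy, iy = ols(w, log_y)
    st, it = ols(w, log_t)
    res_y = [yi - (sy * wi + iy) for yi, wi in zip(log_y, w)]
    res_t = [ti - (st * wi + it) for ti, wi in zip(log_t, w)]
    return ols(res_t, res_y)[0]
```

# The residual-on-residual regression is what removes the confounding influence of the controls from the elasticity estimate.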
# +
# Define underlying treatment effect function given DGP
def gamma_fn(X):
return -3 - 14 * (X["income"] < 1)
def beta_fn(X):
return 20 + 0.5 * (X["avg_hours"]) + 5 * (X["days_visited"] > 4)
def demand_fn(data, T):
Y = gamma_fn(data) * T + beta_fn(data)
return Y
def true_te(x, n, stats):
if x < 1:
subdata = train_data[train_data["income"] < 1].sample(n=n, replace=True)
else:
subdata = train_data[train_data["income"] >= 1].sample(n=n, replace=True)
te_array = subdata["price"] * gamma_fn(subdata) / (subdata["demand"])
if stats == "mean":
return np.mean(te_array)
elif stats == "median":
return np.median(te_array)
elif isinstance(stats, int):
return np.percentile(te_array, stats)
# -
# Get the estimate and range of true treatment effect
truth_te_estimate = np.apply_along_axis(true_te, 1, X_test, 1000, "mean") # estimate
truth_te_upper = np.apply_along_axis(true_te, 1, X_test, 1000, 95) # upper level
truth_te_lower = np.apply_along_axis(true_te, 1, X_test, 1000, 5) # lower level
# ## Parametric heterogeneity
# First of all, we can try to learn a **linear projection of the treatment effect**, assuming a polynomial form of $\theta(X)$. We use the `LinearDML` estimator. Since we don't have any priors on these models, we use generic gradient boosting estimators to learn the expected price and demand from the data.
# Get log_T and log_Y
log_T = np.log(T)
log_Y = np.log(Y)
# Train EconML model
est = LinearDML(
model_y=GradientBoostingRegressor(),
model_t=GradientBoostingRegressor(),
featurizer=PolynomialFeatures(degree=2, include_bias=False),
)
est.fit(log_Y, log_T, X=X, W=W, inference="statsmodels")
# Get treatment effect and its confidence interval
te_pred = est.effect(X_test)
te_pred_interval = est.effect_interval(X_test)
# Compare the estimate and the truth
plt.figure(figsize=(10, 6))
plt.plot(X_test.flatten(), te_pred, label="Sales Elasticity Prediction")
plt.plot(X_test.flatten(), truth_te_estimate, "--", label="True Elasticity")
plt.fill_between(
X_test.flatten(),
te_pred_interval[0],
te_pred_interval[1],
alpha=0.2,
label="90% Confidence Interval",
)
plt.fill_between(
X_test.flatten(),
truth_te_lower,
truth_te_upper,
alpha=0.2,
label="True Elasticity Range",
)
plt.xlabel("Income")
plt.ylabel("Songs Sales Elasticity")
plt.title("Songs Sales Elasticity vs Income")
plt.legend(loc="lower right")
# From the plot above, it's clear that the true treatment effect is a **nonlinear** function of income, with elasticity around -1.75 when income is smaller than 1 and a small negative value when income is larger than 1. The model fits a quadratic treatment effect, which is not a great fit, but it still captures the overall trend: the elasticity is negative, and people are less sensitive to price changes if they have higher income.
# Get the final coefficient and intercept summary
est.summary(feat_name=X.columns)
# The `LinearDML` estimator can also return a summary of the coefficients and intercept for the final model, including point estimates, p-values, and confidence intervals. From the table above, we notice that $income$ has a positive effect and ${income}^2$ has a negative effect, and both are statistically significant.
# ## Nonparametric Heterogeneity
# Since we already know the true treatment effect function is nonlinear, let us fit another model using `ForestDML`, which assumes a fully **nonparametric estimation of the treatment effect**.
# Train EconML model
est = ForestDML(
model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor()
)
est.fit(log_Y, log_T, X=X, W=W, inference="blb")
# Get treatment effect and its confidence interval
te_pred = est.effect(X_test)
te_pred_interval = est.effect_interval(X_test)
# Compare the estimate and the truth
plt.figure(figsize=(10, 6))
plt.plot(X_test.flatten(), te_pred, label="Sales Elasticity Prediction")
plt.plot(X_test.flatten(), truth_te_estimate, "--", label="True Elasticity")
plt.fill_between(
X_test.flatten(),
te_pred_interval[0],
te_pred_interval[1],
alpha=0.2,
label="90% Confidence Interval",
)
plt.fill_between(
X_test.flatten(),
truth_te_lower,
truth_te_upper,
alpha=0.2,
label="True Elasticity Range",
)
plt.xlabel("Income")
plt.ylabel("Songs Sales Elasticity")
plt.title("Songs Sales Elasticity vs Income")
plt.legend(loc="lower right")
# We notice that this model fits much better than `LinearDML`: the 90% confidence interval correctly covers the true treatment effect estimate and captures the variation when income is around 1. Overall, the model shows that people with low income are much more sensitive to price changes than higher-income people.
# # Understand Treatment Effects with EconML <a id="interpret"></a>
# EconML includes interpretability tools to better understand treatment effects. Treatment effects can be complex, but oftentimes we are interested in simple rules that can differentiate between users who respond positively, users who remain neutral and users who respond negatively to the proposed changes.
#
# The EconML `SingleTreeCateInterpreter` provides interpretability by training a single decision tree on the treatment effects output by any of the EconML estimators. In the figure below, users shown in dark red respond strongly to the discount, while users shown in white respond only lightly.
intrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)
intrp.interpret(est, X_test)
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=X.columns, fontsize=12)
# # Make Policy Decisions with EconML <a id="policy"></a>
# We want to make policy decisions that maximize **revenue** rather than demand. In this scenario,
#
#
# \begin{align}
# Rev & = Y \cdot T \\
# & = e^{log(Y)} \cdot T \\
# & = e^{\theta(X) \cdot log(T) + f(X,W) + \epsilon} \cdot T \\
# & = e^{f(X,W) + \epsilon} \cdot T^{\theta(X)+1}
# \end{align}
#
#
# With a decrease in price, revenue will increase only if $\theta(X)+1<0$. Thus, we set `sample_treatment_costs=-1` here to learn **which kinds of customers we should give a small discount to in order to maximize revenue**.
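# We can sanity-check that condition numerically: under the log-log model above, revenue is proportional to $T^{\theta(X)+1}$, so a price cut raises revenue only when $\theta(X) < -1$ (a throwaway check, not part of the EconML workflow):

```python
def relative_revenue(price, theta):
    # revenue up to the user-specific constant e^{f(X,W)+eps}: T^(theta+1)
    return price ** (theta + 1.0)

# for elastic users (theta around -1.75), a 10% discount raises revenue;
# for inelastic users (theta closer to 0), the same discount lowers it
```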
#
# The EconML library includes policy interpretability tools such as `SingleTreePolicyInterpreter`, which take in a treatment cost and the treatment effects to learn simple rules about which customers to target profitably. In the figure below, we can see that the model recommends giving a discount to people with income less than $0.985$ and keeping the original price for the others.
intrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2, min_samples_leaf=1, min_impurity_decrease=0.001)
intrp.interpret(est, X_test, sample_treatment_costs=-1, treatment_names=["Discount", "No-Discount"])
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=X.columns, fontsize=12)
# Now, let us compare our policy with other baseline policies! Our model tells us which customers to give a small discount to, and for this experiment we will set a discount level of 10% for those users. Because the model is misspecified, we would not expect good results with large discounts. Here, because we know the ground truth, we can evaluate the value of this policy.
# define function to compute revenue
def revenue_fn(data, discount_level1, discount_level2, baseline_T, policy):
policy_price = baseline_T * (1 - discount_level1) * policy + baseline_T * (1 - discount_level2) * (1 - policy)
demand = demand_fn(data, policy_price)
rev = demand * policy_price
return rev
# +
policy_dic = {}
# our policy above
policy = intrp.treat(X)
policy_dic["Our Policy"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, policy))
## previous strategy
policy_dic["Previous Strategy"] = np.mean(train_data["price"] * train_data["demand"])
## give everyone discount
policy_dic["Give Everyone Discount"] = np.mean(revenue_fn(train_data, 0.1, 0, 1, np.ones(len(X))))
## don't give discount
policy_dic["Give No One Discount"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, np.ones(len(X))))
## follow our policy, but give a -10% discount to the group we don't recommend discounting
policy_dic["Our Policy + Give Negative Discount for No-Discount Group"] = np.mean(revenue_fn(train_data, -0.1, 0.1, 1, policy))
## give everyone -10% discount
policy_dic["Give Everyone Negative Discount"] = np.mean(revenue_fn(train_data, -0.1, 0, 1, np.ones(len(X))))
# -
# get policy summary table
res = pd.DataFrame.from_dict(policy_dic, orient="index", columns=["Revenue"])
res["Rank"] = res["Revenue"].rank(ascending=False)
res
# **We beat the baseline policies!** Our policy achieves the highest revenue except for the one that raises the price for the No-Discount group. That means our current baseline price is low, but the way we segment the users does help increase revenue!
# # Conclusions <a id="conclusion"></a>
#
# In this notebook, we have demonstrated the power of using EconML to:
#
# * Estimate the treatment effect correctly even when the model is misspecified
# * Interpret the resulting individual-level treatment effects
# * Make policy decisions that beat the previous and baseline policies
#
# To learn more about what EconML can do for you, visit our [website](https://aka.ms/econml), our [GitHub page](https://github.com/microsoft/EconML) or our [documentation](https://econml.azurewebsites.net/).
| notebooks/CustomerScenarios/Case Study - Customer Segmentation at An Online Media Company.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Neural Networks - Part 1
#
# 2016-06-17, <NAME>
#
# Motivation, a little history, a naive implementation, and a discussion of neural networks.
#
#
# ## Logistic regression
#
# Recap of the structural pillars of logistic regression for classification ([previous RST](https://github.com/DrSkippy/Data-Science-45min-Intros/blob/master/logistic-regression-101/Logistic%20Regression.ipynb)).
#
# <img src="img/NN-1.jpeg">
# Let's see an example where logistic regression works. Consider some two-dimensional data that we'd like to classify.
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from mlxtend.evaluate import plot_decision_regions
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
# +
samples = 20
X, y = make_blobs(n_samples=samples, n_features=2, cluster_std=0.25,
centers=[(0, 0.5), (1.5, 0.5)], shuffle=False, random_state=1)
# fit the LR model
clf = LogisticRegression().fit(X,y)
# plotting decision regions
plot_decision_regions(X, y, clf=clf, res=0.02)
plt.xlabel('x1'); plt.ylabel('x2'); plt.title('LR (linearly separable)')
# -
print('The model features are weighted according to: {}'.format(clf.coef_))
# ## A different view of logistic regression
#
# Consider a schematic reframing of the LR model above. This time we'll treat the inputs as nodes, and they connect to other nodes via vertices that represent the weight coefficients.
#
# <img src="img/NN-2.jpeg">
#
# The diagram above is a (simplified form of a) single-neuron model in biology.
#
# <img src="img/neuron.gif">
#
# In fact, this is the same model that is used to demonstrate a computational neural network.
#
# So that's great. Logistic regression works, why do we need something like a neural network? To start, consider an example where the LR model breaks down:
# +
rng = np.random.RandomState(1)
X = rng.randn(samples, 2)
y = np.array(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), dtype=int)
clf = LogisticRegression().fit(X,y)
plot_decision_regions(X=X, y=y, clf=clf, res=0.02, legend=2)
plt.xlabel('x1'); plt.ylabel('x2'); plt.title('LR (XOR)')
# -
# Why does this matter? Well...
#
#
# ## Neural Networks
#
# ### Some history
#
# In the 1960s, when the concept of neural networks was first gaining steam, this type of data was a show-stopper. In particular, the reason our model fails to be effective with this data is that it's not linearly separable; it has interaction terms.
#
# This is a specific type of data that is representative of an XOR logic gate. It's not magic, just well-known, and a fundamental type of logic in computing. We can say it in words, approximately: "the label is 1 if either x1 or x2 is 1, but not if both are 1."
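# That rule is just the XOR truth table; a quick sanity check in code (a throwaway snippet, not part of the network):

```python
def xor_label(x1, x2):
    # 1 when exactly one of the inputs is "on"
    return int(bool(x1) != bool(x2))

truth_table = [xor_label(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```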
#
# At the time, this led to an interesting split in computational work in the field: on the one hand, some people set off on efforts to **design very custom data and feature engineering tactics so that existing models would still work.** On the other hand, people set out to solve the challenge of **designing new algorithms**; for example, this is approximately the era when the support vector machine was developed. Since progress on neural network models slowed significantly in this era (remember that computers were entire rooms!), this is often referred to as the first "AI winter." Even though the multi-layer network was designed a few years later and solved the XOR problem, attention on the field of AI and neural networks had faded.
#
# Today, you might (sensibly) suggest something like an 'rbf-kernel SVM' to solve this problem, and that would totally work! But that's not where we're going today.
#
# With the acceleration of computational power in the last decade, there has been a resurgence in the interest (and capability) of neural network computation.
#
# ### So what does a neural network look like?
#
# What is a multi-layer model, and how does it help solve this problem? **Non-linearity and feature mixing lead to *new* features that we don't have to encode by hand.** In particular, we no longer depend just on combinations of input features. We combine input features, apply non-linearities, then combine all of those as *new* features, apply *additional* non-linearities, and so on until basically forever.
#
# It sounds like a mess, and it pretty much can be. But first, we'll start simply. Imagine that we put just a single layer of "neurons" between our input data and output. How would that change the evaluation approach we looked at earlier?
#
# <img src="img/NN-3.jpeg">
#
#
# ### DIY neural network!
#
# **Reminder:** manually writing out algorithms is a terrible idea for using them, but a great idea for learning how they work.
#
# To get a sense for how the diagram above works, let's first write out the "single-layer" version (which we saw above is equivalent to logistic regression and doesn't work!). We just want to see how it looks in the form of forward- and backward-propagation.
#
# Remember, we have a (``samples x 2``) input matrix, so we need a ``(2x1)`` matrix of weights. And to save space, we won't use the fully-accurate and correct implementation of backprop and SGD; instead, we'll use a simplified version that's easier to read but has very similar results.
# +
# make the same data as above (just a little closer so it's easier to find)
rng = np.random.RandomState(1)
X = rng.randn(samples, 2)
y = np.array(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), dtype=int)
# -
def activate(x, deriv=False):
    """sigmoid activation function and its derivative

    Note: with deriv=True, x is assumed to already be a sigmoid *output*,
    so the derivative sigmoid(z)*(1-sigmoid(z)) reduces to x*(1-x).
    """
    if deriv is True:
        return x*(1-x)
    return 1/(1+np.exp(-x))
# +
# initialize synapse0 weights randomly with mean 0
syn0 = 2*np.random.random((2,1)) - 1
# nothing to see here... just some numpy vector hijinks for the next code
y = y[None].T
# -
# This is the iterative phase. We propagate the input data forward through the synapse (weights), calculate the errors, and then back-propagate those errors through the synapses (weights) according to the proper gradients. Note that the number of iterations is arbitrary at this point. We'll come back to that.
for i in range(10000):
# first "layer" is the input data
l0 = X
# forward propagation
l1 = activate(np.dot(l0, syn0))
###
# this is an oversimplified version of backprop + gradient descent
#
# how much did we miss?
l1_error = y - l1
#
# how much should we scale the adjustments?
# (how much we missed by) * (gradient at l1 value)
# ~an "error-weighted derivative"
l1_delta = l1_error * activate(l1,True)
###
# how much should we update the weight matrix (synapse)?
syn0 += np.dot(l0.T,l1_delta)
# some insight into the update progress
if (i% 2000) == 0:
print("Mean error @ iteration {}: {}".format(i, np.mean(np.abs(l1_error))))
# As expected, this basically didn't work at all!
#
# Even though we aren't looking at the actual output data, we can use the error to gauge the accuracy; it never got much better than random guessing, even after thousands of iterations! But remember, we knew that would be the case, because this single-layer network is functionally the same as vanilla logistic regression, which we saw fail on the XOR data above!
#
# But, now that we have the framework and understanding for how to optimize backprogation, we can **add an additional layer to the network (a so-called "hidden" layer of neurons),** which will introduce the kind of mixing we need to represent this data.
#
# As we saw above in the diagram (and talked about), introduction of a new layer means that we get an extra step in both the forward- and backward-propagation steps. This new step means we need an additional weight (synapse) matrix, and an additional derivative calculation. Other than that, the code looks pretty much the same.
# + active=""
# #### fall-back data ####
# # convert to/from raw nbconvert
# X = np.array([[0,0],
# [0,1],
# [1,0],
# [1,1]])
#
#
# y = np.array([[0, 1, 1, 0]]).T
# +
# hold tight, we'll come back to choosing this number
hidden_layer_width = 3
# initialize synapse (weight) matrices randomly with mean 0
syn0 = 2*np.random.random((2,hidden_layer_width)) - 1
syn1 = 2*np.random.random((hidden_layer_width,1)) - 1
# -
for i in range(60000):
# forward propagation through layers 0, 1, and 2
l0 = X
l1 = activate(np.dot(l0,syn0))
l2 = activate(np.dot(l1,syn1))
# how much did we miss the final target value?
l2_error = y - l2
# how much should we scale the adjustments?
l2_delta = l2_error*activate(l2,deriv=True)
# project l2 error back onto l1 values according to weights
l1_error = l2_delta.dot(syn1.T)
# how much should we scale the adjustments?
l1_delta = l1_error * activate(l1,deriv=True)
# how much should we update the weight matrices (synapses)?
syn1 += l1.T.dot(l2_delta)
syn0 += l0.T.dot(l1_delta)
if (i % 10000) == 0:
print("Error @ iteration {}: {}".format(i, np.mean(np.abs(l2_error))))
# Ok, this time we started at random guessing (sensible), but notice that we quickly reduced our overall error! That's excellent!
#
# **Note:** I didn't have time to debug the case where the full XOR data only trained to label one quadrant correctly. To get a sense for how it can look with a smaller set, change the "fall-back data" cell to code, and run the cells starting there!
#
# Knowing that the error is lower is great, but we can also inspect the results of the fit network by looking at the forward propagation results from the trained synapses (weights).
def forward_prop(X):
"""forward-propagate data X through the pre-fit network"""
l1 = activate(np.dot(X,syn0))
l2 = activate(np.dot(l1,syn1))
return l2
# +
# numpy and plotting shenanigans come from:
# http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html
# mesh step size
h = .02
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# calculate the surface (by forward-propagating)
Z = forward_prop(np.c_[xx.ravel(), yy.ravel()])
# reshape the result into a grid
Z = Z.reshape(xx.shape)
# +
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# we can use this to inspect the smaller dataset
#plt.plot(X[:, 0], X[:, 1], 'o')
# -
# Success! (Possibly... depending on whether Josh debugged the larger network problem :) ). If only one quadrant was trained correctly, go use the smaller dataset!
#
#
# # Wrap-up
#
# The stuff in this session is just a very basic start! The limits to the increasing complexity are now at the hardware level! Networks can be amazingly complicated, too. Below is an example from a talk I saw - note how interestingly the layers are *building* on each other to represent increasingly complicated structure in the context of facial recognition.
#
# <img src="img/3l-face.png">
#
# It's not clear how you'd encode "this is a face," but once you see how the first layer's "atomic" components are assembled into abstract *parts* of a face, and how those *parts* are combined into representations of kinds of faces, it seems more believable!
#
# ## Don't actually do it like this
#
# And, as you probably guessed, what we've done above isn't how you use these in practice. There are many Python libraries for building and using various neural network models. And, as you might expect, many are built with an object-oriented expressiveness:
#
# ```python
# # pseudo-code (that is actually very nearly valid)
# nn = Network(optimizer='sgd')
# nn.add_layer('fully_connected', name='l0', nodes=4)
# nn.add_layer('fully_connected', name='l1', nodes=5)
# nn.add_layer('fully_connected', name='l2', nodes=2)
# nn.compile()
# nn.fit(X,y)
# ```
#
# In Neural Networks - Part 2, we'll look at some of these libraries and use them for some learning tasks! (*hold me to it!*)
#
# In addition to using optimized libraries, there are many other issues and topics that go into developing and using neural networks for practical purposes. Below is a bag-of-words approach to some terms and phrases that you'll invariably see when reading about neural networks.
#
#
# ## Neural Network Word Salad
#
# - GPU (graphical processing unit)
# - The matrix manipulations needed for large network training are typically bottlenecked by the compute throughput of a CPU. Starting in ~2013, people figured out that computer graphics chips were much faster at computing these steps, and GPUs are now the go-to hardware for training networks. CPUs still work! They just tend to be an order of magnitude slower.
#
# - architecture
# - We only looked at so-called "fully-connected" networks - that is, every node was connected to every other node downstream. This is not the only way to design the layout!
# - Among many others, so-called "convolution networks" are very common in image recognition tasks; each layer combines a *region* of the previous layer's outputs into a single node in the subsequent layer.
# - There are still other choices to be made in designing a network: the number of nodes in a hidden layer, the activation function, and more.
#
# - batching
# - If you're training a network on the entirety of the internet's search queries, you can't exactly feed it all forward and backward through the network at once. The concept of batching is deciding how much of the input data to feed forward (and backward) before updating your weight matrices.
#
# - training epochs
#     - The magic numbers in our ``for`` loops above were chosen arbitrarily. A lot of work has gone into deciding how many passes over the training data are needed for network training to converge.
#
# - regularization
#     - Neural networks, too, can suffer from overfitting. There are tactics to mitigate this, including:
# - "dropout"
# - "pooling"
#
# - "deep learning"
# - lots of layers
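# Of the terms above, batching is the easiest to make concrete. A minimal sketch of a minibatch loop (hypothetical helper; the weight-update step that would run once per chunk is elided):

```python
import numpy as np

def minibatches(X, y, batch_size, seed=0):
    """Yield shuffled (X, y) chunks; weights would be updated once per chunk."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        chunk = idx[start:start + batch_size]
        yield X[chunk], y[chunk]

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
batches = list(minibatches(X, y, batch_size=4))
# 10 samples with batch_size=4 -> chunks of 4, 4, and 2 samples
```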
#
#
# ## Links
#
# To save you some time if you want to learn more, here are some of the references that I found the most helpful while researching for this RST:
#
# - [Hacker's guide to Neural Networks](http://karpathy.github.io/neuralnets/)
# - [Deep Learning Basics: Neural Networks, Backpropagation and Stochastic Gradient Descent](http://alexminnaar.com/deep-learning-basics-neural-networks-backpropagation-and-stochastic-gradient-descent.html)
# - [A Neural Network in 11 lines of Python](http://iamtrask.github.io/2015/07/12/basic-python-network/)
# - [A Neural Network in 13 lines of Python](http://iamtrask.github.io/2015/07/27/python-network-part2/)
# - [Intro to Neural Networks](http://www.slideshare.net/DeanWyatte/intro-to-neural-networks)
# - [Single-Layer Neural Networks and Gradient Descent](http://sebastianraschka.com/Articles/2015_singlelayer_neurons.html)
# - [Tensorflow Playground](http://playground.tensorflow.org)
#
| neural-networks-101/Neural Networks - Part 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## FDP inference within selected clusters
# Import the necessary packages
import numpy as np
import numpy.matlib as npm
import matplotlib.pyplot as plt
import sys
#sys.path.insert(0, 'C:\\Users\\12SDa\\davenpor\\davenpor\\Toolboxes\\pyrft' )
import pyrft as pr
import sanssouci as sa
# +
# Set the dimension of the example and the number of subjects
Dim = (50,50)
N = 30
m = np.prod(Dim)
# Generate the category vector and obtain the corresponding design matrix
from sklearn.utils import check_random_state
rng = check_random_state(101)
categ = rng.choice(3, N, replace = True)
X = pr.group_design(categ);
# Specify the contrast matrix (here 2 contrasts are chosen)
C = np.array([[1, -1, 0], [0, 1, -1]])
# Calculate the number of contrasts
L = C.shape[0]
# Calculate the number of p-values generated (L for each voxel)
npvals = m * L
# Generate a stationary random field with given FWHM
FWHM = 4
lat_data = pr.statnoise(Dim, N, FWHM)
# Plot a sample realization of the noise
plt.imshow(lat_data.field[:, :, 1])
# -
# Calculate the t-statistics
tstat_image, residuals = pr.contrast_tstats(lat_data, X, C)
plt.imshow(tstat_image.field[:, :, 1])
# +
# Specify the number of bootstraps to use
B = 100
alpha = 0.1
# Run the bootstrapped algorithm
minPperm, orig_pvalues, pivotal_stats, bs = pr.boot_contrasts(lat_data, X, C, B, 'linear', True, 1)
# Calculate the post-hoc bound
lambda_quant = np.quantile(pivotal_stats, alpha)
print('Lambda Quantile:', lambda_quant)
# Calculate the number of voxels in the mask
m = np.sum(lat_data.mask)
# Gives t_k^L(lambda) = lambda*k/m for k = 1, ..., m
thr = sa.t_linear(lambda_quant, np.arange(1, m + 1), m)
# -
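# Per the comment above, `sa.t_linear` presumably implements the linear (Simes-type) reference family t_k(lambda) = lambda*k/m; a self-contained check of that formula without sanssouci:

```python
import numpy as np

def t_linear_demo(lamb, k, m):
    """Linear reference family: t_k(lambda) = lambda * k / m."""
    return lamb * np.asarray(k) / m

m_demo = 5
thr_demo = t_linear_demo(0.1, np.arange(1, m_demo + 1), m_demo)
# thr_demo -> [0.02, 0.04, 0.06, 0.08, 0.1]
```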
clusters = pr.find_clusters(orig_pvalues.field[:, :, 1], cdt=0.01, below=1)
# plt.imshow(clusters)
print(clusters)
n_clusters = len(clusters[1])
for I in np.arange(n_clusters):
voxelsincluster = np.array(clusters[0] == (I + 1), dtype = bool)
cluster_pvalues = orig_pvalues.field[:, :, 1][voxelsincluster]
bound = sa.max_fp(cluster_pvalues, thr)
print(bound)
print(np.sum(voxelsincluster == 1))
# ## Add some spatial signal
# +
signal = np.zeros(Dim)
signal[10:20,10:20] = 0.4
signal[30:35,30:35] = 0.4
w2 = np.where(categ == 2)[0]
# Add the signal to the field
for I in np.arange(len(w2)):
lat_data.field[:, :, w2[I]] = lat_data.field[:, :, w2[I]] + signal
# +
# Specify the number of bootstraps to use
B = 100
alpha = 0.1
# Run the bootstrapped algorithm
minPperm, orig_pvalues, pivotal_stats, bs = pr.boot_contrasts(lat_data, X, C, B, 'linear', True, 1)
# Calculate the post-hoc bound
lambda_quant = np.quantile(pivotal_stats, alpha)
print('Lambda Quantile:', lambda_quant)
# Calculate the number of voxels in the mask
m = np.sum(lat_data.mask)
# Gives t_k^L(lambda) = lambda*k/m for k = 1, ..., m
thr = sa.t_linear(lambda_quant, np.arange(1, m + 1), m)
# -
clusters = pr.find_clusters(orig_pvalues.field[:, :, 1], cdt=0.01, below=1)
#plt.imshow(clusters)
n_clusters = len(clusters[1])
for I in np.arange(n_clusters):
voxelsincluster = np.array(clusters[0] == (I + 1), dtype = bool)
cluster_pvalues = orig_pvalues.field[:, :, 1][voxelsincluster]
bound = len(cluster_pvalues) - sa.max_fp(cluster_pvalues, thr)
print(bound)
print(np.sum(voxelsincluster == 1))
| examples/JER_control/cluster_inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import turtle
# +
def up():
turtle.goto(turtle.xcor(), turtle.ycor()+50)
def down():
turtle.goto(turtle.xcor(), turtle.ycor()-50)
def right():
turtle.goto(turtle.xcor()+50, turtle.ycor())
def left():
    turtle.goto(turtle.xcor()-50, turtle.ycor())
# -
turtle.onkeypress(up, "w")
turtle.onkeypress(left, "a")
turtle.onkeypress(down, "s")
turtle.onkeypress(right, "d")
turtle.listen()
turtle.mainloop()
| tlab_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="HbbGwEA47kDj"
# Importing Libraries
# + id="roYk-Q3s6yVd" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="b8676cc9-bad4-4aae-faf1-b2d77076f0bc"
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# + [markdown] id="7v8I08N58jcA"
# Data Wrangling
# + id="Ttq0PCrZ8i_d" colab={"base_uri": "https://localhost:8080/", "height": 197} outputId="9d02d850-01ce-443f-e093-96dc6516e825"
data = pd.read_csv('dataset.csv')
data.head(5)
# + id="8PJwi9rc8tp-" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="162e9962-f103-495f-dadb-b99ca69d4724"
print("(Rows, columns): " + str(data.shape))
data.columns
# + id="PxY79ivL80gB" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="bd6ed805-91a5-449f-d065-2d71b4b53924"
data.nunique(axis=0)# returns the number of unique values for each variable.
# + id="wn63NFgG88dd" colab={"base_uri": "https://localhost:8080/", "height": 467} outputId="80187cdd-c809-4a94-96a1-2822be8c7844"
#summarizes the count, mean, standard deviation, min, and max for numeric variables.
data.describe().transpose()
# + id="a6ushlSP9EJP" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="b75d5826-d26f-4ffa-a377-bd300bb3866e"
# Display the Missing Values
print(data.isna().sum())
# + id="IXOMCsIl9JpJ" colab={"base_uri": "https://localhost:8080/", "height": 612} outputId="f43d391b-a702-4d0e-9d42-98170d63d722"
# calculate correlation matrix
corr = data.corr()
plt.subplots(figsize=(15,10))
sns.heatmap(corr, xticklabels=corr.columns,
            yticklabels=corr.columns,
            annot=True,
            cmap=sns.diverging_palette(220, 20, as_cmap=True))
# + [markdown] id="qqZ2RHNI9gFK"
# Filtering data by positive & negative Heart Disease patient
# + id="n9lROx7X9T2W" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="1e46959c-7449-4721-a272-7a218c221764"
# Filtering data by POSITIVE Heart Disease patient
pos_data = data[data['target']==1]
pos_data.describe()
# + id="iNdi8cwN9wmU" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="b67f565d-7e05-4f7e-ac88-bac6c15ca709"
# Filtering data by NEGATIVE Heart Disease patient
neg_data = data[data['target']==0]
neg_data.describe()
# + [markdown] id="DbzJPYQV9lPu"
# Assign the 13 features to X, & the last column to our classification predictor, y
# + id="FjMSVs94960R"
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
# + [markdown] id="kYwABWTb-AJx"
# Split: the data set into the Training set and Test set
# + id="DlgSSiYE99tk"
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X,y,test_size = 0.2, random_state = 1)
# + [markdown] id="MuoZklrK-FJY"
# Normalize: standardizing the data to a mean of 0 and a standard deviation of 1.
# + id="rGfWqZir-LLs"
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# + [markdown] id="CwBqTSrI-WVv"
# Modeling /Training
# + [markdown] id="Pxb_vKN2-dmc"
# Model 1: K-NN (K-Nearest Neighbors)
# + id="WwTHdnS2-Ysh" colab={"base_uri": "https://localhost:8080/", "height": 176} outputId="80e3157c-dddc-44ab-fb76-8508ec3ae969"
from sklearn.metrics import classification_report
from sklearn.neighbors import KNeighborsClassifier
model1 = KNeighborsClassifier() # get instance of model
model1.fit(x_train, y_train) # Train/Fit model
y_pred1 = model1.predict(x_test) # get y predictions
print(classification_report(y_test, y_pred1)) # output accuracy
# + [markdown] id="7HeMf82L_l2M"
# Making the Confusion Matrix
# + id="5Nrcd6au_dw0" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="bdb6872f-218c-486f-b581-4614bdbfbac5"
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred1)
print(cm)
accuracy_score(y_test, y_pred1)
# + id="NUwd6OzI2BNa" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5236745f-7860-4ef6-b548-77172b9affc2"
TN = cm[0][0]
FN = cm[1][0]
TP = cm[1][1]
FP = cm[0][1]
specificity = TN / (TN + FP)
print('Specificity = ', specificity)
# + id="v7q04OW45jVn" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b91a4392-c89b-435a-dc11-6c086b1ea39d"
sensitivity = TP / (TP + FN)
print('Sensitivity = ', sensitivity)
| Data Science/DSA_Project_on_KNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import sys
curr_dir=os.getcwd()
p=curr_dir.find('dev')
root=curr_dir[:p]
sys.path.append(root+'lib')
import psi4
import numpy as np
from P4toC4_aux import *
from SO_aux import SymOrbs
#BASIS='STO-3G'
#BASIS='def2-SV'
#BASIS='def2-SVP'
BASIS='CC-PVDZ'
# +
psi4.set_memory('500 MB')
psi4.core.set_global_option("BASIS", BASIS)
psi4.core.set_global_option("SCF_TYPE", "pk")
psi4.core.set_global_option("REFERENCE", "RHF")
psi4.core.set_global_option("D_CONVERGENCE", 1e-8)
psi4.core.set_global_option("PUREAM", "True")
psi4.core.set_output_file('output.dat', False)
mol = psi4.geometry("""
C
C 1 R
H 1 R2 2 A
H 1 R2 2 A 3 D18
H 2 R2 1 A 3 D0
H 2 R2 1 A 4 D0
R=1.327
R2=1.085
A=121.8
D18=180.0
D0=0.0
""")
#units Bohr
#no_reorient
E, wf = psi4.energy('scf', return_wfn=True, molecule=mol)
E
# -
basisset=wf.basisset()
#
# C4_MO[p2c[i]] = P4_MO[i] P4_MO[c2p[i]] = C4_MO[i]
#
p2c_map, p2c_scale = basis_mapping(basisset, verbose=0)
#c2p_map = invert_mapping(p2c_map)
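# For a permutation array like `p2c_map`, inverting the mapping is equivalent to `np.argsort` (a sketch; `invert_mapping` from `P4toC4_aux` is assumed to behave the same way):

```python
import numpy as np

p2c_demo = np.array([2, 0, 3, 1])    # toy Psi4 -> Cfour index map
c2p_demo = np.argsort(p2c_demo)      # inverse permutation
roundtrip = p2c_demo[c2p_demo]       # applying the map after its inverse gives the identity
```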
# Print a new basis set in GENBAS format
print(basisset.genbas())
mol=wf.molecule()
ptgr=mol.point_group()
print(f'{ptgr.symbol()}: order = {ptgr.order()}')
n_irrep=wf.nirrep()
g=wf.nmopi()
n_mo_pi=np.array(g.to_tuple())
print('MOs per irrep', n_mo_pi)
Ls=wf.aotoso()
print(Ls.shape)
# Psi4 MOs in SO basis
C_SO=wf.Ca()
#Cb=np.array(wf.Cb())
C_SO.shape
# +
irrep_lst = []
for isym in range(ptgr.order()):
SOs=SymOrbs(Ls.nph[isym], order=wf.nirrep())
#SOs.print()
p4_first_AOs = SOs.first_AOs()
cfour_first_AOs = p2c_map[SOs.first_AOs()]
ao_scale = p2c_scale[SOs.first_AOs()]
so_c2p = np.argsort(cfour_first_AOs)
nsos=len(so_c2p)
so_p2c = invert_mapping(so_c2p)
so_scale=SOs.inv_coef()
scale = so_scale*ao_scale
C=psi4_to_c4(C_SO.nph[isym], so_p2c, scale)
irrep_lst.append(C)
print(f'\nIrrep {isym}')
print('AO-order AO-order Cfour argsort AO SO')
print(' Psi4 Cfour argsort inverted scale scale')
for i in range(SOs.nsos):
print(f'{p4_first_AOs[i]:4d}{cfour_first_AOs[i]:9d}', end='')
print(f'{so_c2p[i]:11d}{so_p2c[i]:10d}', end='')
print(f'{ao_scale[i]:11.3f}{so_scale[i]:7.3f}')
C_SOr = psi4.core.Matrix.from_array(irrep_lst)
C_SOr.shape
# -
p2c_irrep_map=np.array([0,2,3,1])
c2p_irrep_map=invert_mapping(p2c_irrep_map)
print(p2c_irrep_map)
C4_cs = read_oldmos('OLDMOS.'+BASIS, n_mo_pi[p2c_irrep_map])
cfour_sym=3
psi4_sym=p2c_irrep_map[cfour_sym]
Corg=C_SO.nph[psi4_sym]
Creo=C_SOr.nph[psi4_sym]
Cc4=C4_cs[cfour_sym]
naos=n_mo_pi[psi4_sym]
mo=1
print(' Psi4 reordered Cfour')
for k in range(naos):
print(f'{k:3d} {Corg[k,mo]:10.6f} {Creo[k,mo]:10.6f} {Cc4[k,mo]:10.6f}')
print(np.max(Creo[:,mo]-Cc4[:,mo]))
#
# comparison Psi4-MOs and Cfour-MOs in their SO representation
#
for cfour_irrep in range(wf.nirrep()):
psi4_irrep=p2c_irrep_map[cfour_irrep]
print(cfour_irrep, psi4_irrep, C4_cs[cfour_irrep].shape, C_SOr.nph[psi4_irrep].shape )
print(np.max(abs(C_SOr.nph[psi4_irrep])-abs(C4_cs[cfour_irrep])))
| dev/D2h/C2H4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
import time
import math
import torch
import torch.nn as nn
import torch.nn.init as init
# -
def getMeanAndStd(dataset):
    # Compute the dataset mean and std to normalize
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True, num_workers=2)
mean = torch.zeros(3)
std = torch.zeros(3)
print('Computing mean and std')
for inputs, targets in dataloader:
for i in range(3):
mean[i] += inputs[:,i,:,:].mean()
            std[i] += inputs[:,i,:,:].std()
mean.div_(len(dataset))
std.div_(len(dataset))
return mean, std
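# The same per-channel statistics can be computed vectorized; a NumPy sketch on a fake channels-first batch. (Note this takes statistics over the whole batch at once, whereas the loop above averages per-image values, which is a slightly different estimate of the std.)

```python
import numpy as np

batch = np.random.default_rng(0).normal(size=(8, 3, 4, 4))  # N x C x H x W
channel_mean = batch.mean(axis=(0, 2, 3))   # one value per channel
channel_std = batch.std(axis=(0, 2, 3))
```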
def init_params(net):
    for m in net.modules():
        if isinstance(m, nn.Conv2d):
            init.kaiming_normal_(m.weight, mode='fan_out')
            if m.bias is not None:
                init.constant_(m.bias, 0.1)
        elif isinstance(m, nn.Linear):
            init.normal_(m.weight, std=1e-3)
            if m.bias is not None:
                init.constant_(m.bias, 0)
        else:
            pass
# +
# _, term_width = os.popen('stty size', 'r').read().split()
term_width = 80
TOTAL_BAR_LENGTH = 40
last_time = time.time()
begin_time = last_time
def progressBar(current, total, msg=None):
global last_time, begin_time
if current == 0:
begin_time = time.time()
cur_len = int(TOTAL_BAR_LENGTH * current / total)
    rest_len = int(TOTAL_BAR_LENGTH - cur_len) - 1
sys.stdout.write('[')
for i in range(cur_len):
sys.stdout.write('=')
sys.stdout.write('>')
for i in range(rest_len):
sys.stdout.write('.')
sys.stdout.write(']')
cur_time = time.time()
step_time = cur_time - last_time
last_time = cur_time
tot_time = cur_time - begin_time
L = []
L.append('Step: %s' % format_time(step_time))
L.append('|Total: %s' % format_time(tot_time))
if msg:
L.append('|' + msg)
msg = ''.join(L)
sys.stdout.write(msg)
# for i in range(term_width-int(TOTAL_BAR_LENGTH)-len(msg)-3):
# sys.stdout.write(' ')
# # Go back to the center of the bar.
# for i in range(term_width-int(TOTAL_BAR_LENGTH/2)+2):
# sys.stdout.write('\b')
# sys.stdout.write(' %d/%d ' % (current+1, total))
if current < total-1:
sys.stdout.write('\r')
else:
sys.stdout.write('\n')
sys.stdout.flush()
def format_time(seconds):
days = int(seconds / 3600/24)
seconds = seconds - days*3600*24
hours = int(seconds / 3600)
seconds = seconds - hours*3600
minutes = int(seconds / 60)
seconds = seconds - minutes*60
secondsf = int(seconds)
seconds = seconds - secondsf
millis = int(seconds*1000)
f = ''
i = 1
if days > 0:
f += str(days) + 'D'
i += 1
if hours > 0 and i <= 2:
f += str(hours) + 'h'
i += 1
if minutes > 0 and i <= 2:
f += str(minutes) + 'm'
i += 1
if secondsf > 0 and i <= 2:
f += str(secondsf) + 's'
i += 1
if millis > 0 and i <= 2:
f += str(millis) + 'ms'
i += 1
if f == '':
f = '0ms'
return f
# -
| utils.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import math
from scipy.misc import imread, imresize
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
import cv2
from random import shuffle
import glob
import re
from sklearn.svm import SVC,LinearSVC
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier,ExtraTreesClassifier
from sklearn import preprocessing
# %matplotlib inline
# -
DATA_DIR = 'D:/datasets/caltech/101_ObjectCategories'
INPUT_SIZE = 224
VALID_IMAGE_FORMATS = frozenset(['jpg', 'jpeg'])
def create_image_lists(image_dir, train_percent):
if not os.path.isdir(image_dir):
raise ValueError("Image directory {} not found.".format(image_dir))
image_lists = {}
sub_dirs = [x[0] for x in os.walk(image_dir)]
sub_dirs_without_root = sub_dirs[1:] # first element is root directory
num_classes=0
for sub_dir in sub_dirs_without_root:
file_list = []
dir_name = os.path.basename(sub_dir)
if dir_name == image_dir:
continue
#print("Looking for images in '{}'".format(dir_name))
for extension in VALID_IMAGE_FORMATS:
file_glob = os.path.join(image_dir, dir_name, '*.' + extension)
file_list.extend(glob.glob(file_glob))
if not file_list:
continue
num_classes+=1
label_name = re.sub(r'[^a-z0-9]+', ' ', dir_name.lower())
training_images = []
validation_images = []
shuffle(file_list)
if train_percent<1:
train_cnt=int(math.ceil(train_percent*len(file_list)))
#print(label_name,train_percent,len(file_list),train_cnt)
else:
train_cnt=train_percent
for i,file_name in enumerate(file_list):
base_name = os.path.basename(file_name)
if i < train_cnt:
training_images.append(base_name)
#elif i<train_cnt+15:
else:
validation_images.append(base_name)
image_lists[label_name] = {
'dir': dir_name,
'training': training_images,
'validation': validation_images,
}
return image_lists,num_classes
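# The split rule above treats `train_percent` either as a fraction (below 1) or as an absolute per-class count; the rule in isolation (hypothetical helper name):

```python
import math

def train_count(train_percent, n_files):
    """Fraction below 1 means a proportion of files; otherwise an absolute count per class."""
    if train_percent < 1:
        return int(math.ceil(train_percent * n_files))
    return train_percent

# e.g. a quarter of 10 files rounds up to 3 training images; train_percent=30 means 30 per class
```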
image_lists,num_classes = create_image_lists(DATA_DIR, train_percent=30)
print(num_classes)
# +
from keras.models import Model,Sequential, load_model,model_from_json
from keras.applications import mobilenet,mobilenet_v2,densenet,inception_resnet_v2,inception_v3,resnet_v2
from keras.utils.generic_utils import CustomObjectScope
from keras.layers import Flatten, Dense, Dropout,GlobalAveragePooling2D,Activation, Conv2D, Reshape,DepthwiseConv2D,Input
from keras.optimizers import SGD, Adam
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint, TensorBoard, Callback, CSVLogger, EarlyStopping
from keras.metrics import top_k_categorical_accuracy
import numpy as np
from sklearn.metrics import confusion_matrix
import keras.applications
from keras.preprocessing.image import (ImageDataGenerator, Iterator,
array_to_img, img_to_array, load_img)
from keras import backend as K
#from myimage import ImageDataGenerator
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
# +
input_shape=(INPUT_SIZE,INPUT_SIZE,3)
if True:
net_model=inception_v3
net_description='inception_v3'
base_model = inception_v3.InceptionV3(input_shape=input_shape, include_top=True, weights='imagenet')
else:
net_model=mobilenet
net_description='mobilenet'
base_model = mobilenet.MobileNet(input_shape=input_shape, include_top=True, weights='imagenet', pooling='avg')
#base_model = mobilenet_v2.MobileNetV2(alpha=1.4, input_shape=input_shape, include_top=True, weights='imagenet', pooling='avg')
#base_model = densenet.DenseNet121(input_shape=input_shape, include_top=True, weights='imagenet', pooling='avg')
#base_model = inception_resnet_v2.InceptionResNetV2(input_shape=input_shape, include_top=True, weights='imagenet', pooling='avg')
preprocessing_function=net_model.preprocess_input
x=base_model.layers[-2].output
#base_model.summary()
#x = Dense(1024, activation='relu')(x)
#x = Dropout(0.5)(x)
x = Dense(num_classes, activation='softmax', use_bias=True,name='preds')(x)
model=Model(base_model.inputs, x)
# +
import efficientnet.keras as enet
base_model = enet.EfficientNetB5(weights=None)
base_model.load_weights('enet_pretrained/efficientnet-b5-weights.h5') #-train
x=base_model.layers[-2].output
#base_model.summary()
#x = Dense(1024, activation='relu')(x)
#x = Dropout(0.5)(x)
x = Dense(num_classes, activation='softmax', use_bias=True,name='preds')(x)
model=Model(base_model.inputs, x)
INPUT_SIZE = model.input_shape[1]
input_shape=(INPUT_SIZE,INPUT_SIZE,3)
preprocessing_function=enet.preprocess_input
net_description='enet5_train'
model.summary()
print(INPUT_SIZE)
# +
class CustomImageDataGenerator(ImageDataGenerator):
def flow_from_image_lists(self, image_lists,
category, image_dir,
target_size=(256, 256), color_mode='rgb',
class_mode='categorical',
batch_size=32, shuffle=True, seed=None,
save_to_dir=None,
save_prefix='',
save_format='jpeg'):
return ImageListIterator(
image_lists, self,
category, image_dir,
target_size=target_size, color_mode=color_mode,
class_mode=class_mode,
data_format=self.data_format,
batch_size=batch_size, shuffle=shuffle, seed=seed,
save_to_dir=save_to_dir,
save_prefix=save_prefix,
save_format=save_format)
class ImageListIterator(Iterator):
"""Iterator capable of reading images from a directory on disk.
# Arguments
image_lists: Dictionary of training images for each label.
image_data_generator: Instance of `ImageDataGenerator`
to use for random transformations and normalization.
target_size: tuple of integers, dimensions to resize input images to.
color_mode: One of `"rgb"`, `"grayscale"`. Color mode to read images.
classes: Optional list of strings, names of sudirectories
containing images from each class (e.g. `["dogs", "cats"]`).
It will be computed automatically if not set.
class_mode: Mode for yielding the targets:
`"binary"`: binary targets (if there are only two classes),
`"categorical"`: categorical targets,
`"sparse"`: integer targets,
`None`: no targets get yielded (only input images are yielded).
batch_size: Integer, size of a batch.
shuffle: Boolean, whether to shuffle the data between epochs.
seed: Random seed for data shuffling.
data_format: String, one of `channels_first`, `channels_last`.
save_to_dir: Optional directory where to save the pictures
being yielded, in a viewable format. This is useful
for visualizing the random transformations being
applied, for debugging purposes.
save_prefix: String prefix to use for saving sample
images (if `save_to_dir` is set).
save_format: Format to use for saving sample images
(if `save_to_dir` is set).
"""
def __init__(self, image_lists, image_data_generator,
category, image_dir,
target_size=(256, 256), color_mode='rgb',
class_mode='categorical',
batch_size=32, shuffle=True, seed=None,
data_format=None,
save_to_dir=None, save_prefix='', save_format='jpeg'):
if data_format is None:
data_format = K.image_data_format()
classes = list(image_lists.keys())
self.category = category
self.num_classes = len(classes)
self.image_lists = image_lists
self.image_dir = image_dir
how_many_files = 0
for label_name in classes:
for _ in self.image_lists[label_name][category]:
how_many_files += 1
self.samples = how_many_files
self.class_indices = dict(zip(classes, range(len(classes))))
self.id2class = dict((v, k) for k, v in self.class_indices.items())
self.classes = np.zeros((self.samples,), dtype='int32')
self.image_data_generator = image_data_generator
self.target_size = tuple(target_size)
if color_mode not in {'rgb', 'grayscale'}:
raise ValueError('Invalid color mode:', color_mode,
'; expected "rgb" or "grayscale".')
self.color_mode = color_mode
self.data_format = data_format
if self.color_mode == 'rgb':
if self.data_format == 'channels_last':
self.image_shape = self.target_size + (3,)
else:
self.image_shape = (3,) + self.target_size
else:
if self.data_format == 'channels_last':
self.image_shape = self.target_size + (1,)
else:
self.image_shape = (1,) + self.target_size
if class_mode not in {'categorical', 'binary', 'sparse', None}:
raise ValueError('Invalid class_mode:', class_mode,
'; expected one of "categorical", '
'"binary", "sparse", or None.')
self.class_mode = class_mode
self.save_to_dir = save_to_dir
self.save_prefix = save_prefix
self.save_format = save_format
i = 0
self.filenames = []
for label_name in classes:
for j, _ in enumerate(self.image_lists[label_name][category]):
self.classes[i] = self.class_indices[label_name]
img_path = get_image_path(self.image_lists,
label_name,
j,
self.image_dir,
self.category)
self.filenames.append(img_path)
i += 1
print("Found {} {} files".format(len(self.filenames), category))
super(ImageListIterator, self).__init__(self.samples, batch_size, shuffle,
seed)
def next(self):
"""For python 2.x.
# Returns
The next batch.
"""
# Keeps under lock only the mechanism which advances
# the indexing of each batch.
with self.lock:
index_array = next(self.index_generator)
# The transformation of images is not under thread lock
# so it can be done in parallel
return self._get_batches_of_transformed_samples(index_array)
def _get_batches_of_transformed_samples(self, index_array):
current_batch_size=len(index_array)
batch_x = np.zeros((current_batch_size,) + self.image_shape,
dtype=K.floatx())
grayscale = self.color_mode == 'grayscale'
# build batch of image data
for i, j in enumerate(index_array):
img = load_img(self.filenames[j],
grayscale=grayscale,
target_size=self.target_size)
x = img_to_array(img, data_format=self.data_format)
x = self.image_data_generator.random_transform(x)
x = self.image_data_generator.standardize(x)
batch_x[i] = x
# optionally save augmented images to disk for debugging purposes
if self.save_to_dir:
for i, j in enumerate(index_array):
img = array_to_img(batch_x[i], self.data_format, scale=True)
fname = '{prefix}_{index}_{hash}.{format}'.format(
prefix=self.save_prefix,
index=j,
hash=np.random.randint(10000),
format=self.save_format)
img.save(os.path.join(self.save_to_dir, fname))
# build batch of labels
if self.class_mode == 'sparse':
batch_y = self.classes[index_array]
elif self.class_mode == 'binary':
batch_y = self.classes[index_array].astype(K.floatx())
elif self.class_mode == 'categorical':
batch_y = np.zeros((len(batch_x), self.num_classes),
dtype=K.floatx())
for i, label in enumerate(self.classes[index_array]):
batch_y[i, label] = 1.
else:
return batch_x
return batch_x, batch_y
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
def get_image_path(image_lists, label_name, index, image_dir, category):
""""Returns a path to an image for a label at the given index.
# Arguments
image_lists: Dictionary of training images for each label.
label_name: Label string we want to get an image for.
index: Int offset of the image we want. This will be moduloed by the
available number of images for the label, so it can be arbitrarily large.
image_dir: Root folder string of the subfolders containing the training
images.
category: Name string of set to pull images from - training, testing, or
validation.
# Returns
File system path string to an image that meets the requested parameters.
"""
if label_name not in image_lists:
raise ValueError('Label does not exist ', label_name)
label_lists = image_lists[label_name]
if category not in label_lists:
raise ValueError('Category does not exist ', category)
category_list = label_lists[category]
if not category_list:
raise ValueError('Label %s has no images in the category %s.',
label_name, category)
mod_index = index % len(category_list)
base_name = category_list[mod_index]
sub_dir = label_lists['dir']
full_path = os.path.join(image_dir, sub_dir, base_name)
return full_path
# +
BATCH_SIZE=32
RANDOM_SEED=123
train_datagen = CustomImageDataGenerator(rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest',
preprocessing_function=preprocessing_function)
test_datagen = CustomImageDataGenerator(preprocessing_function=preprocessing_function)
train_generator = train_datagen.flow_from_image_lists(
image_lists=image_lists,
category='training',
image_dir=DATA_DIR,
target_size=(INPUT_SIZE, INPUT_SIZE),
batch_size=BATCH_SIZE,
class_mode='categorical',seed=RANDOM_SEED)
val_generator = test_datagen.flow_from_image_lists(
image_lists=image_lists,
category='validation',
image_dir=DATA_DIR,
target_size=(INPUT_SIZE, INPUT_SIZE),
batch_size=BATCH_SIZE,
class_mode='categorical',seed=RANDOM_SEED)
# +
N_CLASS=val_generator.num_classes
nb_train_samples=train_generator.samples
nb_validation_samples=val_generator.samples
print(N_CLASS,nb_train_samples,nb_validation_samples)
class_to_idx=val_generator.class_indices
idx_to_class={class_to_idx[cls]:cls for cls in class_to_idx}
print(idx_to_class)
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight(
'balanced',
np.unique(train_generator.classes),
train_generator.classes)
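# `compute_class_weight('balanced', ...)` follows the rule n_samples / (n_classes * bincount(y)); a quick NumPy check of that formula on toy labels:

```python
import numpy as np

y_demo = np.array([0, 0, 0, 1, 1, 2])          # imbalanced toy labels
counts = np.bincount(y_demo)                   # [3, 2, 1]
weights_demo = len(y_demo) / (len(counts) * counts)
# rarer classes get proportionally larger weights
```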
# +
start_epoch=0
for l in base_model.layers:
l.trainable=False
model.compile('adam', 'categorical_crossentropy', metrics=['accuracy'])
model.summary()
mc = ModelCheckpoint(net_description+'.h5', monitor='val_acc', verbose=1, save_best_only=True)
es=EarlyStopping(monitor='val_acc',patience=2)
FIRST_EPOCHS=10
#tb,
hist1=model.fit_generator(train_generator, steps_per_epoch=nb_train_samples//BATCH_SIZE, epochs=FIRST_EPOCHS, verbose=1,
initial_epoch=0, callbacks=[mc, es], validation_data=val_generator, validation_steps=nb_validation_samples // BATCH_SIZE,class_weight=class_weights)
# +
#DOES NOT WORK!!!
start_epoch=len(hist1.history['loss'])
#start_epoch=2
model.load_weights(net_description+'.h5')
if True:
for l in base_model.layers:
l.trainable=True
else:
trainable=False
for layer in base_model.layers:
if layer.name=='block7c_expand_conv':
trainable=True
layer.trainable=trainable
model.compile('adam', 'categorical_crossentropy', metrics=['accuracy'])
model.summary()
mc = ModelCheckpoint(net_description+'_ft.h5', monitor='val_acc', verbose=1, save_best_only=True)
es=EarlyStopping(monitor='val_acc',patience=2 )
SECOND_EPOCHS=5+start_epoch
hist2=model.fit_generator(train_generator, steps_per_epoch=nb_train_samples//BATCH_SIZE, epochs=SECOND_EPOCHS, verbose=1,
initial_epoch=start_epoch, callbacks=[mc], validation_data=val_generator, validation_steps=nb_validation_samples // BATCH_SIZE,class_weight=class_weights)
# -
| tf_keras/train_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 ('base')
# language: python
# name: python3
# ---
# First baseline approach, in which we perform multilabel classification at each timestep. It differs from lstm.ipynb and lstm2.ipynb in its approach to training: here I create big batches and do not backpropagate after processing every sequence.
# +
# %load_ext autoreload
# %autoreload 2
import numpy as np
import torch
import torch.nn as nn
import torch.utils.data as data
import os
import numpy as np
from tqdm import tqdm
import pypianoroll
# -
#some constants
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
LEARNING_RATE = 0.001
TRAIN_BATCH_SIZE = 60
VAL_BATCH_SIZE = 30
DATA_PATH = '../data/Nottingham/' # A very small dataset, used here just to check whether this approach works
#without spending hours on training only to find out it was wrong
NUM_EPOCHS = 5
POSITIVE_WEIGHT = 2 # Since it is more likely not to play a note, a small positive-class weight keeps the model
#from converging to predicting all 0's
def path_to_pianoroll(path, resolution = 8):
    #The bigger the resolution, the more detailed the representation, but also the longer the sequences become,
    #which may be hard for the LSTM to handle.
midi_data = pypianoroll.read(path, resolution=resolution)
piano_roll = midi_data.blend()[:, 21:109] #Taking just 88 useful notes. This will have shape
#[length_of_sequence, number_of_notes]
#we want to perform multilabel classification at each step, so we need to binarize the roll
piano_roll[piano_roll > 0] = 1
return piano_roll
def collate(batch):
#Helper function for DataLoader
#Batch is a list of tuples of the form (input, target)
#We do not have to pad everything, thanks to pack_sequence
#Using this function we decide how batches are prepared
data = [item[0] for item in batch]
data = nn.utils.rnn.pack_sequence(data, enforce_sorted=False) # we prepare the batch as a PackedSequence.
#This function is very convenient, as we do not need to pad these sequences
targets = [item[1] for item in batch]
targets = nn.utils.rnn.pack_sequence(targets, enforce_sorted=False)
return [data, targets]
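# To make the packing explicit, here is a pure-Python sketch (no torch needed) of what `pack_sequence` produces: the sequences are ordered by length, the packed `data` interleaves timesteps, and `batch_sizes[t]` counts how many sequences are still active at timestep t.

```python
# Conceptual model of nn.utils.rnn.pack_sequence for plain Python lists.
def pack(sequences):
    seqs = sorted(sequences, key=len, reverse=True)  # enforce_sorted=False sorts for us
    data, batch_sizes = [], []
    for t in range(len(seqs[0])):
        active = [s[t] for s in seqs if len(s) > t]  # sequences still running at step t
        data.extend(active)
        batch_sizes.append(len(active))
    return data, batch_sizes

data, batch_sizes = pack([[1, 2, 3], [4, 5], [6]])
print(data)         # [1, 4, 6, 2, 5, 3]
print(batch_sizes)  # [3, 2, 1]
```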
class NotesGenerationDataset(data.Dataset):
"""I've decided not to work on text and convert it to the piano roll since this only makes more work. We can
work directly on the pianoroll, and if needed convert it to text representation.
"""
def __init__(self, path):
self.path = path
self.full_filenames = []
#Here we assume that all midi files are valid, we do not check anything here.
for root, subdirs, files in os.walk(path):
for f in files:
self.full_filenames.append(os.path.join(root, f))
def __len__(self):
return len(self.full_filenames)
def __getitem__(self, index):
full_filename = self.full_filenames[index]
piano_roll = path_to_pianoroll(full_filename)
#input and gt are shifted by one step w.r.t one another.
input_sequence = piano_roll[:-1, :]
ground_truth_sequence = piano_roll[1:, :]
return torch.tensor(input_sequence, dtype=torch.float32), torch.tensor(ground_truth_sequence, dtype=torch.float32)
# +
trainset = NotesGenerationDataset(os.path.join(DATA_PATH, "train"))
#Of course we want a big batch size; however, one training sample takes quite a lot of memory.
#We will use torch.cuda.amp.autocast() so that we can make bigger batches
trainset_loader = torch.utils.data.DataLoader(trainset, batch_size=TRAIN_BATCH_SIZE,
shuffle=True, drop_last=True, collate_fn=collate)
valset = NotesGenerationDataset(os.path.join(DATA_PATH, "valid"))
valset_loader = torch.utils.data.DataLoader(valset, batch_size=VAL_BATCH_SIZE, shuffle=False, drop_last=False, collate_fn=collate)
# -
#Small sanity check that the train and validation sets do not intersect
train_songs = set(trainset.full_filenames)
for song in valset.full_filenames:
assert not song in train_songs
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_classes, n_layers=2):
super(RNN, self).__init__()
self.input_size = input_size # amount of different notes
self.hidden_size = hidden_size
self.num_classes = num_classes
self.n_layers = n_layers
#First we need a layer that encodes the binary note vector into a better representation
self.notes_encoder = nn.Linear(in_features=input_size, out_features=hidden_size)
self.lstm = nn.LSTM(hidden_size, hidden_size, n_layers)
#At the end we want to get vector with logits of all notes
self.logits_fc = nn.Linear(hidden_size, num_classes)
def forward(self, inp, hidden=None):
#During training the input is a PackedSequence, but during inference it will be just a tensor
if isinstance(inp, nn.utils.rnn.PackedSequence):
#If we have Packed sequence we proceed a little bit differently
batch_sizes = inp.batch_sizes
notes_encoded = self.notes_encoder(inp.data) #PackedSequence.data is a tensor representation of shape [samples, num_of_notes]
rnn_in = nn.utils.rnn.PackedSequence(notes_encoded,batch_sizes) #Constructing a PackedSequence directly is discouraged in the PyTorch docs.
#However, it saves the day here: otherwise we would have to create padded sequences
outputs, hidden = self.lstm(rnn_in, hidden)
logits = self.logits_fc(outputs.data) #Again we go from packedSequence to tensor.
else:
#If we have tensor at the input this is pretty straightforward
notes_encoded = self.notes_encoder(inp)
outputs, hidden = self.lstm(notes_encoded, hidden)
logits = self.logits_fc(outputs)
return logits, hidden
# +
#Sanity check on PackedSequences: verify that unpacking and re-packing a PackedSequence yields exactly the same object.
inp, targets = next(iter(trainset_loader))
batch_sizes = inp.batch_sizes
inp2 = nn.utils.rnn.PackedSequence(inp.data, batch_sizes)
assert torch.all(torch.eq(inp.data, inp2.data))
# +
rnn = RNN(input_size=88, hidden_size=256, num_classes=88)
rnn = rnn.to(DEVICE)
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.full((88,), POSITIVE_WEIGHT, device=DEVICE))
optimizer = torch.optim.Adam(rnn.parameters(), lr=LEARNING_RATE)
scaler = torch.cuda.amp.GradScaler()
# -
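# To see what `pos_weight` changes, here is a hand-rolled sketch of the BCE-with-logits loss (an illustration of the formula only, not torch's implementation): the positive term of the loss is multiplied by the weight, so missed notes are penalised POSITIVE_WEIGHT times harder.

```python
import math

def bce_with_logits(logit, target, pos_weight=1.0):
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    # pos_weight scales only the positive (target == 1) term of the loss
    return -(pos_weight * target * math.log(p) + (1 - target) * math.log(1 - p))

plain = bce_with_logits(0.3, 1.0)
weighted = bce_with_logits(0.3, 1.0, pos_weight=2.0)
print(weighted / plain)  # 2.0: positive examples count double
```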
def validate():
rnn.eval()
loop = tqdm(valset_loader, leave=True)
losses = []
with torch.no_grad():
for idx, (inp, target) in enumerate(loop):
inp, target = inp.to(DEVICE), target.to(DEVICE)
logits, _ = rnn(inp)
loss = criterion(logits, target.data)
losses.append(loss.item())
loop.set_postfix(loss=loss.item())
rnn.train()
return sum(losses) / len(losses)
# +
clip = 1.0 #Batch normalization is tricky to implement with RNNs, so we use gradient clipping instead;
#clipping by norm rescales the gradients, so it acts as a kind of normalization too
best_val_loss = float("inf")
loss_list = []
val_list = []
for epoch_number in range(NUM_EPOCHS):
loop = tqdm(trainset_loader, leave=True)
losses = []
for idx, (inp, target) in enumerate(loop):
inp, target = inp.to(DEVICE), target.to(DEVICE)
optimizer.zero_grad() # remember to do this every iteration so gradients do not accumulate
with torch.cuda.amp.autocast():
logits, _ = rnn(inp)
loss = criterion(logits, target.data)
scaler.scale(loss).backward()
# Unscales the gradients of optimizer's assigned params in-place
scaler.unscale_(optimizer)
# Since the gradients of optimizer's assigned params are unscaled, clips as usual:
torch.nn.utils.clip_grad_norm_(rnn.parameters(), clip)
scaler.step(optimizer)
scaler.update()
losses.append(loss.item())
loop.set_postfix(loss=loss.item())
train_loss = sum(losses)/len(losses)
loss_list.append(train_loss)
current_val_loss = validate()
val_list.append(current_val_loss)
print(f"Epoch {epoch_number}, train_loss: {train_loss}, val_loss: {current_val_loss}")
if current_val_loss < best_val_loss:
torch.save(rnn.state_dict(), 'music_rnn.pth')
best_val_loss = current_val_loss
# -
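# What `clip_grad_norm_` does, sketched for a plain list of gradient components: if the global L2 norm exceeds `clip`, every component is rescaled by `clip / norm`, so the gradient direction is kept while its magnitude is bounded.

```python
import math

def clip_by_norm(grads, clip):
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > clip:
        grads = [g * (clip / norm) for g in grads]  # rescale, keep direction
    return grads

print(clip_by_norm([3.0, 4.0], clip=1.0))  # norm 5 -> [0.6, 0.8] (norm 1)
```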
def sample_from_piano_rnn(sample_length=4, temperature=1, starting_sequence=None):
#Set a default starting sequence if none was given
if starting_sequence is None:
current_sequence_input = torch.zeros(1,1, 88, dtype=torch.float32, device=DEVICE)
current_sequence_input[0, 0, 40] = 1
current_sequence_input[0, 0, 50] = 1
current_sequence_input[0, 0, 56] = 1
final_output_sequence = [current_sequence_input.squeeze(1)]
hidden = None
with torch.no_grad():
for i in range(sample_length):
output, hidden = rnn(current_sequence_input, hidden)
#By dividing the logits by a temperature before passing them to the sigmoid we can make the distribution
#more peaked or more uniform. This works because the sigmoid is nonlinear in its input:
#a change from 0.01 to 0.1 barely moves the output, while a change from 0.1 to 1 makes a much larger difference
probabilities = torch.sigmoid(output.div(temperature))
prob_of_0 = 1 - probabilities
dist = torch.stack((prob_of_0, probabilities), dim=3).squeeze() #Here we will get tensor [num_of_notes, 2]
#from multinomial we have [num_of_notes, 1]. But eventually we want to have [1,1,num_of_notes]
current_sequence_input = torch.multinomial(dist, 1).squeeze().unsqueeze(0).unsqueeze(1).to(torch.float32)
final_output_sequence.append(current_sequence_input.squeeze(1))
sampled_sequence = torch.cat(final_output_sequence, dim=0).cpu().numpy()
return sampled_sequence
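# A quick numeric check of the temperature effect described above: dividing the logit by a small temperature pushes the sigmoid output toward 0 or 1 (more deterministic sampling), while a large temperature flattens it toward 0.5.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

logit = 1.0
for T in (0.05, 0.5, 1.0, 2.0):
    # smaller T -> sharper probability for the same logit
    print(T, round(sigmoid(logit / T), 4))
```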
sample = sample_from_piano_rnn(sample_length=200, temperature=0.05)
np.sum(sample) # Just to check how many notes are played within these 200 timesteps.
roll = np.zeros((201,128))
roll[:, 21:109] = sample
roll[roll == 1] = 100
track = pypianoroll.Multitrack(resolution=3)
track.append(pypianoroll.StandardTrack(pianoroll=roll))
pypianoroll.write("baseline1_song2.mid", track)
| notebooks/baseline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
print('import successful')
torch.cuda.is_available()
torch.cuda.current_device()
torch.device(0)
torch.cuda.get_device_name()
# device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# Hyper-parameters
num_epochs = 10
batch_size = 4
learning_rate = 0.001
# dataset has PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))# Sequence of mean / std for each channel
])
transform
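# A quick arithmetic check of the comment above: Normalize computes (x - mean) / std per channel, so with mean = std = 0.5 the ToTensor range [0, 1] maps linearly onto [-1, 1].

```python
def normalize(x, mean=0.5, std=0.5):
    # same per-channel formula that transforms.Normalize applies
    return (x - mean) / std

print(normalize(0.0), normalize(0.5), normalize(1.0))  # -1.0 0.0 1.0
```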
# CIFAR10: 60000 32x32 color images in 10 classes, with 6000 images per class
train_dataset= torchvision.datasets.CIFAR10(root='./data',
train=True,
download=True,
transform = transform)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
# +
# defining train and test loaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
shuffle=False)
# -
dataset = iter(train_dataset)
next(dataset)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(train_loader)
images, labels = next(dataiter) # .next() was removed from DataLoader iterators; use next()
# show images
imshow(torchvision.utils.make_grid(images))
# +
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # input channels, output channels, kernel size
self.pool = nn.MaxPool2d(2, 2) # kernel size, stride
self.conv2 = nn.Conv2d(6, 16, 5) # input channels, output channels, kernel size
self.fc1 = nn.Linear(16 * 5 * 5, 120) # in features, out features
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# -> n, 3, 32, 32
x = self.pool(F.relu(self.conv1(x))) # -> n, 6, 14, 14
x = self.pool(F.relu(self.conv2(x))) # -> n, 16, 5, 5
x = x.view(-1, 16 * 5 * 5) # -> n, 400
x = F.relu(self.fc1(x)) # -> n, 120
x = F.relu(self.fc2(x)) # -> n, 84
x = self.fc3(x) # -> n, 10
return x
model = ConvNet().to(device)
# -
model
# ### input size for fc1
conv1 = nn.Conv2d(3,6,5)
pool = nn.MaxPool2d(2,2)
conv2= nn.Conv2d(6,16,5)
print(images.shape)# batch size is 4, 3 channel , 32*32 images
x= conv1(images)
print(x.shape) # (W-F+2P)/S+1, W = input width, F = filter size, P = padding, S = stride
x= pool(x)
print(x.shape)
x= conv2(x)
print(x.shape)
x= pool(x)
print(x.shape)
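# The printed shapes follow from the formula in the comment above, out = (W - F + 2P) / S + 1, applied layer by layer (padding P = 0 throughout):

```python
def conv_out(w, f, p=0, s=1):
    # output width of a convolution/pooling layer: (W - F + 2P) / S + 1
    return (w - f + 2 * p) // s + 1

w = conv_out(32, 5)      # conv1 (5x5 kernel) -> 28
w = conv_out(w, 2, s=2)  # 2x2 max pool, stride 2 -> 14
w = conv_out(w, 5)       # conv2 -> 10
w = conv_out(w, 2, s=2)  # pool -> 5
print(w)                 # 5, hence fc1 takes 16 * 5 * 5 = 400 features
```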
## defining loss function and optimiser
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
print(criterion)
print(optimizer)
n_total_steps = len(train_loader)
print(n_total_steps)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# original shape: [4, 3, 32, 32] (batch_size, channels, height, width)
# input_layer: 3 input channels, 6 output channels, 5 kernel size
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 4000 == 0:
print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Training finished')
PATH = './cnn.pth'
torch.save(model.state_dict(), PATH)
with torch.no_grad():
n_correct = 0
n_samples = 0
n_class_correct = [0 for i in range(10)]
n_class_samples = [0 for i in range(10)]
for images, labels in test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
# max returns (value ,index)
_, predicted = torch.max(outputs, 1)
n_samples += labels.size(0)
n_correct += (predicted == labels).sum().item()
acc = 100.0 * n_correct / n_samples
print(f'Accuracy of the network: {acc} %')
| pytorch/py_engg/CIFAR10_cnn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [__UCI Bank Marketing Dataset__](https://archive.ics.uci.edu/ml/datasets/bank+marketing#) - Classification Problem
# ### __Table of Contents__
#
# 1. Explore dataset
# 2. Feature Summary
# 3. Approach
# 4. Exploratory Data Analysis
# 5. Model Building
# 6. Model evaluation
# ### __Aim__
# To analyse the input variables from the data set and build a model to __*classify*__ whether a candidate subscribes for a term deposit or not.
#
# __Dataset__ - _bank-additional-full.csv_
# ### __Feature Summary__
#
# |Variable|Description|Type|Unique values |
# | :- |:-|:-:|:-|
# |age | age in years|numeric||
# | job | type of job |categorical|admin., blue-collar, entrepreneur, housemaid, management, retired, self-employed, services, student, technician, unemployed, unknown|
# | education | education |categorical|basic.4y, basic.6y, basic.9y, high.school, illiterate, professional.course, university.degree, unknown|
# |marital | marital status |categorical|divorced, married, single, unknown|
# | default| has credit in default? |categorical|no, yes, unknown|
# |housing| has housing loan? |categorical|no, yes, unknown|
# |loan| has personal loan? |categorical|no, yes, unknown|
# |contact| contact communication type |categorical|cellular, telephone|
# |month| last contact month of year |categorical|jan, feb, mar, ..., nov, dec|
# |day_of_week| last contact day of the week |categorical|mon, tue, wed, thu, fri|
# |duration| last contact duration, in seconds |numeric||
# |campaign| number of contacts performed during this campaign and for this client |numeric||
# |pdays| number of days that passed by after the client was last contacted from a previous campaign |numeric||
# |previous| number of contacts performed before this campaign and for this client |numeric||
# |poutcome| outcome of the previous marketing campaign |categorical|failure, nonexistent, success|
# |emp.var.rate| employment variation rate |numeric||
# |cons.price.idx| consumer price index |numeric||
# |cons.conf.idx| consumer confidence index |numeric||
# |euribor3m| euribor 3 month rate |numeric||
# |nr.employed| number of employees |numeric||
# |y | has the client subscribed a term deposit? |categorical|yes, no|
# ### __Approach__
#
# 1. Explore the data
# - Read the dataset using pandas library. Use *head(), info()* and *describe()*.
# 2. Clean the data
# - Check for missing/ null/ NaN values in the dataset. Impute values or delete records on a case by case basis.
# - Check for skewness on numerical variables. Impute median values for NaNs, if the data is skewed. Otherwise mean values are sufficient.
# 3. EDA
# - Explore categorical variables using the seaborn library's countplot.
# - Explore each variable on the basis of the subscriber to non-subscriber ratio
# 4. Model building and evaluation
# - Convert categorical values(strings) to numeric
# - Split the dataset into train and test datasets
# - Define the classifiers from *sklearn* library
# - Fit the classifier on the training dataset
# - Predict the fitted model on the test dataset
# - Evaluate the model on the basis of Area under the Curve
# - Summarise the results
#
# ### __Libraries__
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import skew
# %matplotlib inline
# -
# ### __Explore the dataset__
#Read the dataset
Banco = pd.read_csv('bank-additional-full.csv',sep=';')
Banco.head(2)
Banco.info()
Banco.describe()
# ### __Exploratory Data Analysis__
#Check the missing/ NaN values for each column
Banco.isnull().sum()
# > Missing values are reported as _unknown_ in the dataset. Hence, there are no NaNs.
#Check the skewness - Numerical features only - sorted the absolute values
Banco.skew().abs().sort_values(ascending= False)
# > All the economic indicators and *age* are mildly skewed. All the remaining features are highly skewed.
# #### __Custom Functions__
#
# - Define *feature_ratio*, *feature_count* and *feature_count2* functions to extract the ratio of subscribers to non-subscribers for each unique value in a column.
# - Define *replace_unknown* function to impute/ replace unknowns in a column
# - Define *range_count* to get the count of subscribers, non-subscribers and subscription ratio for a user-defined range in numerical features
# - Define *stack_plot* to plot a stacked plot for a column on the target variable
#Feature ratio - Ratio of subscribers to non-subscribers for each unique value in a given column
def feature_ratio(colname):
'''feature_ratio(colname)
Feature ratio - Ratio of subscribers to non-subscribers for each unique value in a given column
'''
col_types = Banco[colname].dropna().unique()
for col in col_types:
a = Banco[(Banco[colname] == col) & (Banco['y'] == 'yes')][colname].count()
b = Banco[(Banco[colname] == col) & (Banco['y'] == 'no')][colname].count()
print(col, " \t%.3f" % (a / float(b)))
#Feature count - Count of subscribers and non-subscribers for each unique value in a given column
def feature_count(colname):
'''feature_count(colname)
Feature count - Count of subscribers and non-subscribers for each unique value in a given column
'''
x = []
a = []
b = []
c = []
col_types = Banco[colname].dropna().unique()
for col in col_types:
x.append(col)
a.append(Banco[(Banco[colname] == col) & (Banco['y'] == 'yes')][colname].count())
b.append(Banco[(Banco[colname] == col) & (Banco['y'] == 'no')][colname].count())
c = [i/float(j) for i,j in zip(a, b)]
print(pd.DataFrame(list(zip(x, c, a, b)), columns=[colname, 'Ratio', 'Yes', 'No']).sort_values(by=['Ratio'], ascending=False))
#Feature count2 - Count of subscribers and non-subscribers for each unique value in two different columns
def feature_count2(colname1, colname2):
'''feature_count(colname1, colname)
Feature count - Count of subscribers and non-subscribers for each unique value in two different columns
'''
col_type1 = Banco[colname1].dropna().unique()
col_type2 = Banco[colname2].dropna().unique()
print(colname1, "\tDay\tYes\tNo\tRatio")
print("--------------------------------------")
for col1 in col_type1:
for col2 in col_type2:
a = Banco[(Banco[colname1] == col1) & (Banco[colname2] == col2) & (Banco['y'] == 'yes')][colname1].count()
b = Banco[(Banco[colname1] == col1) & (Banco[colname2] == col2) & (Banco['y'] == 'no')][colname1].count()
print(col1, "\t", col2, "\t", a, "\t", b, "\t%.3f" % (a / float(b)))
print("--------------------------------------")
#Function to impute unknown with Mode.
def replace_unknown(var, col):
'''replace_unknown(var, col)
Function to impute 'unknown' with Mode.
'''
if var == 'unknown':
#Index is used to pick the string from the series
return Banco[col].mode()[0]
else:
return var
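# Stripped of pandas, the imputation boils down to the following sketch: find the most frequent non-unknown value and substitute it for every 'unknown' (the notebook's version takes the mode via `Banco[col].mode()[0]`).

```python
from collections import Counter

def impute_unknown(values):
    # mode of the known values, then substitute it for every 'unknown'
    mode = Counter(v for v in values if v != 'unknown').most_common(1)[0][0]
    return [mode if v == 'unknown' else v for v in values]

print(impute_unknown(['admin.', 'unknown', 'admin.', 'services']))
# ['admin.', 'admin.', 'admin.', 'services']
```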
#Function to get the count of subscribers, non-subscribers and subscription ratio for a user-defined range in numerical features
def range_count(start, end, split, col_name):
'''range_count(start, end, split, col_name)
Function to get the count of subscribers, non-subscribers and subscription ratio for a user-defined range in numerical features
'''
print "Range\t\tRatio\tYes\tNo"
print "----------------------------------"
num = np.arange(start,end+1,split)
for i in range(len(num)):
if i==0:
pass
else:
a = Banco[(Banco[col_name] > num[i-1]) & (Banco[col_name] < num[i]) & (Banco['y'] == 'yes')][col_name].count()
b = Banco[(Banco[col_name] > num[i-1]) & (Banco[col_name] < num[i]) & (Banco['y'] == 'no')][col_name].count()
if b == 0:
print(num[i-1], "-", num[i], "\t", np.nan, "\t", a, "\t", b)
else:
print(num[i-1], "-", num[i], " \t%.3f" % (a / float(b)), "\t", a, "\t", b)
#Function to plot the stacked plot for a column
def stack_plot(col_name, target, nbins):
'''stack_plot(col_name, target, nbins)
Function to plot the stacked plot for a column
'''
dct = {}
for name in Banco[target].unique():
dct[name] = Banco.groupby(target).get_group(name)[col_name]
pd.DataFrame(dct).plot.hist(stacked=True, bins = nbins)
#Distribution of unique values of Job feature on the target variable
sns.countplot(y = Banco['job'], hue = Banco['y'])
feature_count('job')
# > People with Admin jobs have the highest subscription count (1389 subscriptions). Students and retired people also have a high subscription ratio.
#Impute unknowns with Mode
Banco['job'] = Banco['job'].apply(lambda x: replace_unknown(x,'job'))
#Check if there is any change in ratios after imputing mode
feature_count('job')
# > There is not a lot of change in the yes/no ratio for the Admin jobs after the replacement
Banco['marital'].value_counts()
sns.countplot(x = Banco['marital'], hue = Banco['y'])
feature_count('marital')
# > People who are _Single_ have a better subscription ratio compared to others.
#Impute unknowns with Mode in marital column
Banco['marital'] = Banco['marital'].apply(lambda x: replace_unknown(x,'marital'))
feature_count('marital')
# > Again, no significant change noticed in the ratio after replacement
sns.countplot(y = Banco['education'], hue = Banco['y'])
# > University Degree and High School completed people have high subscriptions
feature_count('education')
# There are ~1700 unknowns for the education column. We will consider _unknown_ as another unique value.
# > In addition to these, Professional course completed students also have good subscription rates. Illiterates have the highest ratio, but the frequency is low.
sns.countplot(x=Banco['default'], hue=Banco['y'])
feature_count('default')
# > A lot of samples are unknown. However, only 3 samples have defaulted.
# Since the *unknown* values number around 8600, imputing the mode for the unknowns in this column could alter the model fit. We can let _unknown_ be another unique value for this column.
Banco['default'].value_counts()
Banco[Banco['default']=='yes']['y']
# > All 3 cases of loan defaulters did not subscribe. Possibly, no savings -> no subscription?
sns.countplot(x = Banco['housing'], hue=Banco['y'])
feature_count('housing')
# Since there are around 1000 records of unknowns, we will not impute the mode.
# > People with a housing loan have a slightly higher subscription rate compared to the ones without.
sns.countplot(x=Banco['loan'], hue=Banco['y'])
feature_count('loan')
# > The subscription ratio is roughly the same for all three values; non-loan takers have a slightly better ratio.
sns.countplot(x=Banco['contact'], hue=Banco['y'])
feature_count('contact')
# > Cellular contact with users resulted in a relatively better subscription ratio
sns.countplot(x=Banco['month'], hue= Banco['y'])
# > _May_ has the highest number of subscriptions and non-takers as well
feature_count('month')
# > Mar, Dec, Sep and Oct have better subscription ratios; however, the volumes are quite low.
sns.countplot(x=Banco['day_of_week'], hue=Banco['y'])
feature_count('day_of_week')
# > Subscription ratios are good mid-week i.e., Tuesday to Thursday
#Compare the subscription ratios for month and Day_of_week columns.
#To see if there is any pattern for weekday in every month
feature_count2('month','day_of_week')
#Plot a stacked plot
stack_plot('campaign','y', 60)
range_count(0,60,2,'campaign')
# > The ratio is higher when the number of contacts is below 5. In fact, subscriptions peak when the client is contacted only once. When the contact count exceeds 20, the subscription is usually not taken (only a single outlier).
stack_plot('duration','y',20)
#Distribution of duration column
range_count(0,5000,250,'duration')
# > The 500-1500 call duration range indicates good conversion, i.e., good subscription rates. Beyond that range, the frequency is too low to have any significance
sns.countplot(x = Banco['poutcome'], hue=Banco['y'])
feature_count('poutcome')
# > For clients new to the campaign the outcome was not very successful; for clients with a previously successful campaign, subscription rates are good
sns.countplot(x=Banco['previous'], hue=Banco['y'])
feature_count('previous')
# > Most subscribers are fresh contacts who have not been contacted previously. People subscribe better when there have been two or more previous contacts, although the frequency for those cases is low.
#Total number of subscribers and non-subscribers for the given dataset
Banco['y'].value_counts()
#Percentage of subscribers in the data set
print "%0.4f" % (4640/float(4640+36548))
# > 11.27% subscribers vs. 88.73% non-subscribers: the dataset is __imbalanced__
# ### __Model Building__
# We will model the training dataset with these classifiers
# - Logistic Regression
# - Random Forest
# - XGBoost
# #### __Convert the categorical values to numeric values__
# _scikit-learn_ models do not work with categorical variables (strings). Hence, we convert them to numeric values.
#Convert categorical values to numeric for each categorical feature
for col in Banco.columns:
if Banco[col].dtype == object:
Banco[col] = Banco[col].astype('category').cat.codes
#Check the dataset to see the changed dataset
Banco.head(5)
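# For reference, `astype('category').cat.codes` assigns each distinct string an integer based on the sorted order of the categories; a rough pure-Python equivalent:

```python
def to_codes(values):
    # unordered pandas categoricals built from strings are sorted lexicographically
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values]

print(to_codes(['yes', 'no', 'unknown', 'no']))  # [2, 0, 1, 0]
```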
# +
#Define function to get all the model metrics
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score, roc_curve, auc
def model_metrics(X_test,y_test,y_model,obj):
conf = confusion_matrix(y_test, y_model)
#sklearn's confusion_matrix: rows are true labels, columns are predictions, labels ordered [0, 1]
tn = conf[0][0]
fn = conf[1][0]
tp = conf[1][1]
fp = conf[0][1]
sens = tp/float(tp+fn)
spec = tn/float(tn+fp)
mcc = (tp*tn - fp*fn)/float((tp+fp)*(tp+fn)*(fp+tn)*(tn+fn))**0.5
y_pred_proba = obj.predict_proba(X_test)[::,1]
fpr, tpr, threshold = roc_curve(y_test, y_pred_proba)
roc_auc = auc(fpr, tpr)
print "Classifier:",obj
print "----------------------------------------------------------------------------"
print "Accuracy\t\t: %0.4f" % accuracy_score(y_test, y_model)
print "Sensitivity\t\t: %0.4f" % sens
print "Specificity\t\t: %0.4f" % spec
print "Matthews Corr. Coeff.\t: %0.4f" % mcc
print "----------------------------------------------------------------------------"
print "Confusion Matrix: \n", conf
print "----------------------------------------------------------------------------"
print "Classification Report: \n",classification_report(y_test, y_model)
print "----------------------------------------------------------------------------"
plt.title('Receiver Operating Characteristic Curve')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.4f' % roc_auc)
plt.legend(loc = 'best')
plt.plot([0, 1], [0, 1],'r--')
#plt.xlim([0, 1])
#plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
# -
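# A small sanity check of the derived metrics on a toy confusion matrix, using sklearn's convention (rows = true class, columns = predicted class, labels ordered [0, 1], so conf[1][1] = TP and conf[0][0] = TN):

```python
import math

conf = [[50, 10],  # TN, FP
        [5, 35]]   # FN, TP
tn, fp = conf[0]
fn, tp = conf[1]
sensitivity = tp / (tp + fn)  # recall of the positive class
specificity = tn / (tn + fp)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(round(sensitivity, 3), round(specificity, 3), round(mcc, 3))  # 0.875 0.833 0.698
```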
# #### __Predictor, Target variables__
#Define the predictors and the target variable. No column is being dropped from the predictors.
X = Banco.drop('y', axis=1)
y = Banco['y']
# #### __Split the data__
#Split the data in 70:30 train-test ratio. We will train the model on X-train, y_train set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 101)
#Check the size of the training data
X_train.shape
# #### __Logistic Regression__
from sklearn.linear_model import LogisticRegression
#Define classifier
lr = LogisticRegression(random_state=101)
#Fit the model on training set
model_lr = lr.fit(X_train, y_train)
#Predict on the test set
pred_lr = model_lr.predict(X_test)
model_metrics(X_test,y_test, pred_lr, model_lr)
#Get the importance of each feature
def feature_imp(obj):
print(pd.DataFrame(obj.feature_importances_, index=Banco.drop('y', axis=1).columns, columns=['imp']).sort_values('imp', ascending=False))
#Note: model_rf is the Random Forest fitted further below; run that cell first (LogisticRegression has no feature_importances_)
feature_imp(model_rf)
# > As mentioned in the dataset summary, __duration__ has the largest importance
# #### __Oversampling with SMOTE__
#
# The target variable is heavily skewed. We will perform SMOTE to oversample the training dataset.
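# The core idea behind SMOTE, sketched in pure Python: a synthetic minority sample is created by interpolating between an existing minority point and one of its minority-class nearest neighbours, x_new = x + u * (neighbour - x) with u drawn uniformly from [0, 1]. This is an illustration of the idea only; the real implementation below comes from `imblearn.over_sampling.SMOTE`.

```python
import random

def smote_like_sample(x, neighbour, u=None):
    # interpolate between a minority point and one of its neighbours
    if u is None:
        u = random.random()
    return [xi + u * (ni - xi) for xi, ni in zip(x, neighbour)]

print(smote_like_sample([1.0, 2.0], [3.0, 4.0], u=0.5))  # [2.0, 3.0]
```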
from imblearn.over_sampling import SMOTE
#define the SMOTE object
sm = SMOTE(random_state=101)
#Fit the sample on the training dataset
X_sm, y_sm = sm.fit_resample(X_train, y_train) # fit_sample was renamed to fit_resample in imblearn
#Check the fitted sample
X_sm, y_sm
#Size of the training set after SMOTE
X_sm.shape, y_sm.shape
#Count of subscribers in Train set after SMOTE
np.count_nonzero(y_sm == 1)
# #### __Logistic Regression with SMOTE on training dataset__
#Define classifier
lr_sm = LogisticRegression()
#Fit the model on SMOTE modified training set
model_lr_sm = lr_sm.fit(X_sm, y_sm)
# ##### __kFold Cross Validation__
# Perform a kFold Cross validation on the model to see if the model is overfitting the data. Applying SMOTE can sometimes overfit the model.
from sklearn.model_selection import cross_val_score
cvs_lr_sm = cross_val_score(model_lr_sm, X_sm, y_sm, cv=5, n_jobs=3).mean()
print "%0.4f" % cvs_lr_sm
# > Validation accuracy is 86.37%
#Prediction on the test set
pred_lr_sm = model_lr_sm.predict(X_test)
#Model Evaluation
model_metrics(X_test,y_test, pred_lr_sm, model_lr_sm)
# #### __Random Forest Classifier__
from sklearn.ensemble import RandomForestClassifier
#Define the classifier - 100 trees
rf = RandomForestClassifier(n_estimators=100, random_state=101)
#Fit the model on training set
model_rf = rf.fit(X_train, y_train)
# Predict the outcome
pred_rf = model_rf.predict(X_test)
#Model Evaluation
model_metrics(X_test,y_test, pred_rf, model_rf)
# #### __XGBoost Classifier__
from xgboost import XGBClassifier
#Define classifier
xgb = XGBClassifier(learning_rate=0.05, colsample_bylevel=1,colsample_bytree=0.8, max_depth=6, max_delta_step=0.9, n_estimators=300, scale_pos_weight=1, reg_lambda=0.1)
#Fit the model on training set
model_xgb = xgb.fit(X_train, y_train)
#Predict the values for the test set
pred_xgb = model_xgb.predict(X_test)
#Model Evaluation
model_metrics(X_test,y_test, pred_xgb, model_xgb)
# ### __Summary__
#
# | Classifier | Accuracy | AUC |
# |------|------|------|
# | Logistic Regression | 0.9091| 0.9250|
# | Logistic Regression + SMOTE | 0.8555| 0.9326|
# | Random Forest | 0.9137| 0.9399|
# | XGBoost | 0.9168| 0.9483|
#
# - Based on the table above we find that both in terms of *accuracy* and *Area Under the Curve (AUC)*, __XGBoost__ model performs well, followed closely by Random Forest.
# - Logistic Regression with SMOTE gives a better AUC; however, it performs worse than plain Logistic Regression in terms of accuracy.
# +
import tensorflow as tf
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split # sklearn.cross_validation was removed; use model_selection
FILE_PATH = 'C:/Users/campus/Downloads/TensorFlow_Tutorials/bank_normalised.csv' # Path to .csv dataset
raw_data = pd.read_csv(FILE_PATH) # Open raw .csv
print("Raw data loaded successfully...\n")
#------------------------------------------------------------------------------
# Variables
Y_LABEL = 'y' # Name of the variable to be predicted
KEYS = [i for i in raw_data.keys().tolist() if i != Y_LABEL]# Name of predictors
N_INSTANCES = raw_data.shape[0] # Number of instances
N_INPUT = raw_data.shape[1] - 1 # Input size
N_CLASSES = raw_data[Y_LABEL].unique().shape[0] # Number of classes (output size)
TEST_SIZE = 0.1 # Test set size (% of dataset)
TRAIN_SIZE = int(N_INSTANCES * (1 - TEST_SIZE)) # Train size
LEARNING_RATE = 0.001 # Learning rate
TRAINING_EPOCHS = 4000 # Number of epochs
BATCH_SIZE = 100 # Batch size
DISPLAY_STEP = 20 # Display progress each x epochs
HIDDEN_SIZE = 200 # Number of hidden neurons 256
ACTIVATION_FUNCTION_OUT = tf.nn.tanh # Last layer act fct
STDDEV = 0.1 # Standard deviation (for weights random init)
RANDOM_STATE = 100
print("Variables loaded successfully...\n")
print("Number of predictors \t%s" %(N_INPUT))
print("Number of classes \t%s" %(N_CLASSES))
print("Number of instances \t%s" %(N_INSTANCES))
print("\n")
print("Metrics displayed:\tPrecision\n")
#------------------------------------------------------------------------------
# Loading data
# Load data
X1 = raw_data[KEYS].values # X data (.get_values() was removed from pandas; use .values)
y = raw_data[Y_LABEL].values # y data
X = raw_data.drop('y',axis=1).values
print(type(X))
print(type(X1))
# One-hot encoding for labels
labels_ = np.zeros((N_INSTANCES, N_CLASSES))
labels_[np.arange(N_INSTANCES), y.astype(int)] = 1 # `y` holds the integer class labels
y = labels_
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size = TEST_SIZE,random_state = 101)
print("Data loaded and split successfully...\n")
#------------------------------------------------------------------------------
# Neural net construction
# Net params
n_input = N_INPUT # input n labels
n_hidden_1 = HIDDEN_SIZE # 1st layer
n_hidden_2 = HIDDEN_SIZE # 2nd layer
n_hidden_3 = HIDDEN_SIZE # 3rd layer
n_hidden_4 = HIDDEN_SIZE # 4th layer
n_classes = N_CLASSES # output m classes
# Tf placeholders
X = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
dropout_keep_prob = tf.placeholder(tf.float32)
def mlp(_X, _weights, _biases, dropout_keep_prob):
layer1 = tf.nn.dropout(tf.nn.tanh(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1'])), dropout_keep_prob)
layer2 = tf.nn.dropout(tf.nn.tanh(tf.add(tf.matmul(layer1, _weights['h2']), _biases['b2'])), dropout_keep_prob)
layer3 = tf.nn.dropout(tf.nn.tanh(tf.add(tf.matmul(layer2, _weights['h3']), _biases['b3'])), dropout_keep_prob)
layer4 = tf.nn.dropout(tf.nn.tanh(tf.add(tf.matmul(layer3, _weights['h4']), _biases['b4'])), dropout_keep_prob)
out = ACTIVATION_FUNCTION_OUT(tf.add(tf.matmul(layer4, _weights['out']), _biases['out']))
return out
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1],stddev=STDDEV)),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2],stddev=STDDEV)),
'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3],stddev=STDDEV)),
'h4': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_4],stddev=STDDEV)),
'out': tf.Variable(tf.random_normal([n_hidden_4, n_classes],stddev=STDDEV)),
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'b3': tf.Variable(tf.random_normal([n_hidden_3])),
'b4': tf.Variable(tf.random_normal([n_hidden_4])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Build model
pred = mlp(X, weights, biases, dropout_keep_prob)
# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred,labels=y)) # softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate = LEARNING_RATE).minimize(cost)
# Accuracy
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Net built successfully...\n")
print("Starting training...\n")
#------------------------------------------------------------------------------
# Training
# Initialize variables
init_all = tf.global_variables_initializer() # initialize_all_variables() is deprecated
# Launch session
sess = tf.Session()
sess.run(init_all)
# Training loop
for epoch in range(TRAINING_EPOCHS):
avg_cost = 0.
    total_batch = int(X_train.shape[0] / BATCH_SIZE)
# Loop over all batches
for i in range(total_batch):
randidx = np.random.randint(int(TRAIN_SIZE), size = BATCH_SIZE)
batch_xs = X_train[randidx, :]
batch_ys = y_train[randidx, :]
#print(batch_xs.shape)
#print(batch_ys.shape)
# Fit using batched data
sess.run(optimizer, feed_dict={X: batch_xs, y: batch_ys, dropout_keep_prob: 0.9})
# Calculate average cost
avg_cost += sess.run(cost, feed_dict={X: batch_xs, y: batch_ys, dropout_keep_prob:1.})/total_batch
# Display progress
if epoch % DISPLAY_STEP == 0:
print ("Epoch: %04d/%04d cost: %.9f" % (epoch, TRAINING_EPOCHS, avg_cost))
train_acc = sess.run(accuracy, feed_dict={X: batch_xs, y: batch_ys, dropout_keep_prob:1.})
print ("Training accuracy: %.3f" % (train_acc))
print ("End of training.\n")
print("Testing...\n")
#------------------------------------------------------------------------------
# Testing
test_acc = sess.run(accuracy, feed_dict={X: X_test, y: y_test, dropout_keep_prob: 1.})
print ("Test accuracy: %.3f" % (test_acc))
sess.close()
print("Session closed!")
# -
| Projects/Misc - Machine Learning/Models/.ipynb_checkpoints/UCI_Bank_Marketing-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ex 1
# ## Numpy warmup exercises
# ### Preamble
import ucamcl
GRADER = ucamcl.autograder('https://markmy.solutions', course='scicomp').subsection('ex1')
# ### Q1
import numpy as np
import math, random
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000000)
y = rng.uniform(-1, 1, 1000000)
d = np.sqrt(x**2 + y**2)
mean = np.mean(d)
sd = np.std(d)
print(mean, sd)
q = GRADER.fetch_question('q1')
print(q)
# Store x and y as two columns in an n*2 matrix
xy = rng.uniform(-1,1,size=(1000000,2))
# axis=1 means "compute the norm of each row"
d = np.linalg.norm(xy, axis=1)
{'mean': np.mean(d), 'sd': np.std(d)}
# ### Q2
# +
x = np.linspace(0.1, 10, 1000)
a = x**2
b = 0.01*(np.exp(x)-1)
print(x[np.argmax(a<b)])
# -
q = GRADER.fetch_question('q2')
print(q)
# ### Q3
names = np.array(['alexis','chloe','guarav','shay','adrian','rebecca'])
i = np.argsort(names)
print(i)
names_rank = np.char.add(names[i], np.arange(1,7).astype(str))
# j = np.array([1, 2, 3, 5, 0, 4])
# names_rank[j]
print(np.argsort(i))
names_rank[np.argsort(i)]
q = GRADER.fetch_question('q3')
print(q)
# ### Q4
# +
# a.shape gives a tuple of dimensions, and the first is len(a)
q = GRADER.fetch_question('q4')
print(q)
# -
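# A quick check of the `shape` note above (illustrative array, not from the grader):

```python
import numpy as np

a = np.zeros((3, 5))
# a.shape is a tuple of dimensions; its first entry equals len(a)
print(a.shape, len(a))
print(a.shape[0] == len(a))
```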
# ### Q5
vec = np.arange(1, 16)
vec = np.reshape(vec, (3, 5))
colsums = np.sum(vec, axis = 0)
rowsums = np.sum(vec, axis = 1)
# can use -1 for one of the reshape dimensions: infers from the size of the array and the other dimensions
q = GRADER.fetch_question('q5')
print(q)
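# Demonstrating the `-1` reshape note above (illustrative):

```python
import numpy as np

vec = np.arange(1, 16)
# -1 asks NumPy to infer this dimension: 15 elements / 5 columns -> 3 rows
m = vec.reshape(-1, 5)
print(m.shape)
```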
# ### Q6
n = 5
np.arange(1, n+1)[:, None]
np.arange(1, n+1).reshape(-1, 1)
np.array([np.arange(1, n+1)]).transpose()
q = GRADER.fetch_question('q6')
print(q)
# ### Q7
n = 5
cols = rng.permutation(n)
rows = rng.permutation(n)
a = np.zeros((n, n), dtype=int)
a[rows, cols] = np.ones(n)
# Can just use 1 instead on RHS
a
q = GRADER.fetch_question('q7')
print(q)
# ### Q8
a = np.array([4, 1, 2, 8, 2, 3, 1])  # arrivals per timestep
C = 3                                # capacity served per timestep
q_0 = 1                              # initial queue length
# Vectorised form of the queue recursion q_t = max(0, q_{t-1} + a_t - C)
x = np.cumsum(a - C)
y = np.minimum.accumulate(q_0 + x)
q = q_0 + x - np.minimum(0, y)
q
q = GRADER.fetch_question('q8')
print(q)
| Work/ex1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pyjanitor-dev
# language: python
# name: pyjanitor-dev
# ---
# + [markdown] pycharm={"metadata": false, "name": "#%% md\n"}
# # Tinytuesday example
#
# Example of using `pyjanitor` based on [tidytuesday-2019-05-07](https://github.com/rfordatascience/tidytuesday/tree/master/data/2019/2019-05-07)
#
# + pycharm={"is_executing": false, "metadata": false, "name": "#%%\n"}
import janitor
import pandas as pd
import pandas_flavor as pf
dirty_csv = "https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-05-07/EDULIT_DS_06052019101747206.csv"
dirty_df = pd.read_csv(dirty_csv)
dirty_df.head()
# -
# We need some custom functions.
# + pycharm={"is_executing": false, "metadata": false, "name": "#%%\n"}
@pf.register_dataframe_method
def str_remove(df, column_name: str, pat: str, *args, **kwargs):
"""Remove a substring, given its pattern from a string value, in a given column"""
df[column_name] = df[column_name].str.replace(pat, '', *args, **kwargs)
return df
@pf.register_dataframe_method
def str_trim(df, column_name: str, *args, **kwargs):
"""Remove trailing and leading characters, in a given column"""
df[column_name] = df[column_name].str.strip(*args, **kwargs)
return df
@pf.register_dataframe_method
def str_title(df, column_name: str, *args, **kwargs):
"""Make the first letter in each word upper case"""
df[column_name] = df[column_name].str.title(*args, **kwargs)
return df
@pf.register_dataframe_method
def drop_duplicated_column(df, column_name: str, column_order: int=0):
"""Remove duplicated columns and retain only a column given its order.
Order 0 is to remove the first column, Order 1 is to remove the second column, and etc"""
cols = list(df.columns)
col_indexes = [col_idx for col_idx, col_name in enumerate(cols) if col_name == column_name]
# given that a column could be duplicated, user could opt based on its order
removed_col_idx = col_indexes[column_order]
# get the column indexes without column that is being removed
filtered_cols = [c_i for c_i, c_v in enumerate(cols) if c_i != removed_col_idx]
return df.iloc[:, filtered_cols]
# +
py_clean_df = (
dirty_df
.clean_names()
# modify string values
.str_remove("indicator", "Pupil-teacher ratio in")
.str_remove("indicator", "(headcount basis)")
.str_remove("indicator", "\\(\\)")
.str_trim("indicator")
.str_trim("country")
.str_title("indicator")
# remove `time` column (which is duplicated). The second `time` is being removed
.drop_duplicated_column("time", 1)
# renaming columns
.rename_column("location", "country_code")
.rename_column("value", "student_ratio")
.rename_column("time", "year")
)
py_clean_df.head()
# + pycharm={"is_executing": false, "metadata": false, "name": "#%%\n"}
# ensure that the output from janitor is similar with the clean r's janitor
r_clean_csv = "https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-05-07/student_teacher_ratio.csv"
r_clean_df = pd.read_csv(r_clean_csv)
pd.testing.assert_frame_equal(r_clean_df, py_clean_df)
| examples/notebooks/tidytuesday_2019-05-07.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1>Quiz 1: Understanding Machine Learning</h1>
#
# Answer the questions below in your own words:
#
# - What is machine learning?
# - What are feature data and target data?
# - What is the difference between supervised learning and unsupervised learning?
# - What kinds of problems fall under supervised learning? Explain the differences!
# - What is the difference between a hyperparameter and a parameter?
# - List the steps for applying any machine-learning algorithm.
# 1. The ability of a machine to learn from data in order to improve its performance.
# 2. Features are the predictor variables used to predict an outcome (the target);
#    the target data is the result being predicted.
# 3. Supervised learning has clearly defined labels and outputs;
#    unsupervised learning does not have clearly defined labels or outputs.
# 4. Classification outputs a class label;
#    regression outputs a continuous quantity.
# 5. A hyperparameter is defined up front and is not produced by the model;
#    a parameter is produced at the end of fitting and belongs to the model.
# 6. Choose the model class to use.
#    Pick the hyperparameters when constructing the model object.
#    Separate the feature data from the target data (scikit-learn only accepts 2-D/matrix input for features).
#    Tell the model to learn from the data with the .fit() method.
#    Apply the model to the test data with predict(), or with transform() for unsupervised
#    learning.
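# The steps in answer 6 can be sketched end-to-end with any scikit-learn estimator; a minimal illustration on noise-free toy data (not the quiz data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# 1-2. choose a model class and set hyperparameters when constructing it
model = LinearRegression(fit_intercept=True)

# 3. features must be a 2-D matrix; the target can stay 1-D
x = np.linspace(0, 10, 50)
X = x[:, np.newaxis]
y = 3 * x + 2

# 4. learn from the data
model.fit(X, y)

# 5. apply the model to new data
x_new = np.array([[20.0]])
print(model.predict(x_new))
```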
# <h1>Quiz 2: Understanding Machine Learning</h1>
#
# Study a machine-learning model/algorithm at a high level, then apply it to build a model for the data (variables x, y) below. Then make predictions on the training data and on new data in the interval 20-30.
#
# - Plot the original data, the predictions on the training data, and the predictions on the new data.
# - Show some of the parameters of the model you have built.
# +
import matplotlib.pyplot as plt
import numpy as np
rng = np.random.RandomState(42)
x = 20 * rng.rand(50)
y = x**2 + 2 * x - 1 + rng.randn(50)
x_matrix = x[:, np.newaxis]
y_matrix = y[:, np.newaxis]
# new data to predict
x_new = np.arange(20, 30, 0.5)
# x_new must be reshaped into a matrix so it can be processed
x_test = x_new[:, np.newaxis]
fig, ax = plt.subplots(figsize=(12, 6))
ax.scatter(x, y)
ax.set_xlabel('X-Value')
ax.set_ylabel('Y-Value')
plt.show()
# +
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor(max_depth=5, max_leaf_nodes=7, min_samples_leaf=5)
# step 4: tell the model to learn from the existing data
model.fit(x_matrix, y)
# step 5: apply the model with the predict() method
# y is the output of a prediction given x as input;
# here y_test is produced from x_test, by the same logic as (y = 2x)
y_test = model.predict(x_test)
# predictions on the data the model was trained on
y_train = model.predict(x_matrix)
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(x, y, c='b', label='original data')
ax.scatter(x, y_train, c='y', label='predictions on training data')
ax.plot(x_test, y_test, c='g', label='predictions on new data')
plt.legend()
plt.show()
# +
# with LinearRegression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# step 4: tell the model to learn from the existing data
model.fit(x_matrix, y)
# step 5: apply the model with the predict() method
y_test = model.predict(x_test)
# predictions on the data the model was trained on
y_train = model.predict(x_matrix)
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(x,y, c='g')
ax.scatter(x,y_train)
ax.plot(x_test,y_test)
plt.show()
# -
# Example answer produced with the DecisionTreeRegressor algorithm (answers do not have to match exactly):
#
# 
# Example parameters (do not have to match):
#
# - Feature Importance : array([1.])
# - n features : 1
#
# Tree graph:
#
# 
# +
# to do: install graphviz (needed to render the tree graph)
from sklearn import tree
import pydotplus
from IPython.display import Image
hasil = tree.DecisionTreeRegressor(criterion='mse',max_leaf_nodes=5)
hasil_train = hasil.fit(x_test,y_test)
dot_data = tree.export_graphviz(hasil_train)
graph = pydotplus.graph_from_dot_data(dot_data)
# +
# DecisionTreeRegressor?
# +
from sklearn import tree
clf = tree.DecisionTreeRegressor(max_depth=5,max_leaf_nodes=7,min_samples_leaf=5)
clf = clf.fit(x_matrix, y_matrix)
tree.plot_tree(clf) # doctest: +SKIP
plt.show()
# -
| JupyterNote Data Science/Tugas Harian 1 Minggu 4-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercises (with solutions)
#
# +
import numpy as np
from astropy import table
from astropy.table import Table
# %matplotlib inline
from matplotlib import pyplot as plt
# -
# ### Read the data
# To start with, read in the two data files representing the master source list and observations source list. The fields for the two tables are respectively documented in:
#
# - [master_sources](http://cxc.harvard.edu/csc/columns/master.html)
# - [obs_sources](http://cxc.harvard.edu/csc/columns/persrc.html)
master_sources = Table.read('cdfs_master_sources.fits')
obs_sources = Table.read('cdfs_obs_sources.fits')
# **`master_sources`**
#
# Each distinct X-ray source identified on the sky is represented in the catalog by a single "master source" entry and one or more "source observation" entries, one for each observation in which the source has been detected. The master source entry records the best estimates of the properties of a source, based on the data extracted from the set of observations in which the source has been detected. The subset of fields in our exercise table file are:
#
# Name | Description
# ------ | ------------
# msid | Master source ID
# name | Source name in the Chandra catalog
# ra | Source RA (deg)
# dec | Source Dec (deg)
#
# **`obs_sources`**
#
# The individual source entries record all of the properties about a detection extracted from a single observation, as well as associated file-based data products, which are observation-specific. The subset of fields in our exercise table file are:
#
# Name | Description
# ------ | ------------
# obsid | Observation ID
# obi | Observation interval
# targname | Target name
# gti_obs | Observation date
# flux_aper_b | Broad band (0.5 - 7 keV) flux (erg/cm2/sec)
# src_cnts_aper_b | Broad band source counts
# ra_b | Source RA (deg)
# dec_b | Source Dec (deg)
# livetime | Observation duration (sec)
# posid | Position ID
# theta | Off-axis angle (arcmin)
# msid | Master source ID
# ### Exploring the data
# Do the following to explore the two tables:
#
# - Display the data for each table in IPython notebook using the normal way of showing the value of a variable.
# - Get a list of the column names for each table. *Hint*: use `<TAB>` completion to conveniently discover all the attributes and methods (e.g., type `master_sources.` and then hit the `<TAB>` key).
# - Find the length of each table.
# - Find the column datatypes for each table.
#
# Normally you display a table in IPython notebook by entering the variable name in a cell and pressing `shift-Enter`. In a terminal session the default method is using something like `print(my_table)`. In both cases the `Table` object prefers to display only a screenful of data to prevent having a zillion lines of output if the table is huge. If you really want to see all the data you can use the [Table.pprint](http://astropy.readthedocs.org/en/stable/api/astropy.table.Table.html#astropy.table.Table.pprint) method. If you are using a Jupyter notebook interface, try the `show_in_notebook()` method.
#
# - Display all the rows of the `master_sources` table using its `pprint()` method.
# - If you are working in a regular terminal window (not IPython notebook), try the `more()` method as well.
master_sources.pprint()
obs_sources.show_in_notebook()
# ### Modifying tables
# For our analysis we don't actually need the `obi` (observation interval) column in the `obs_sources` table.
#
# - Remove the `obi` column from the `obs_sources` table.
#
# The `gti_obs` column name is a bit obscure (GTI is a good time interval, FWIW).
#
# - Rename the `gti_obs` column to `obs_date`.
#
# It would be nice to have a count rate in addition to the source counts.
#
# - Add a new column `src_rate_aper_b` which is the source counts divided by observation duration in sec.
#
# Some of the sources have a negative net flux in the broad band.
obs_sources.remove_column('obi')
obs_sources.rename_column("gti_obs", "obs_date")
obs_sources['src_rate_aper_b'] = obs_sources['src_cnts_aper_b'] / obs_sources['livetime']
# ### Looking at the observation source data
# For each source detected in an individual observation (in the `obs_sources` table), let's look at the source flux values.
#
# - Use the matplotlib [`hist()`]( http://matplotlib.org/api/pyplot_api.html?highlight=pyplot.hist#matplotlib.pyplot.hist) function to make a histogram of the source fluxes. Since the fluxes vary by orders of magnitude,
# use the `numpy.log10` to put the fluxes in log space.
#
# - Also make the same plot but using only sources within 4 arcmin of the center. *HINT*: use a boolean mask to select values of `theta` that are less than 4.0.
plt.figure()
plt.hist(np.log10(obs_sources['flux_aper_b']))
plt.show()
mask = obs_sources['theta'] < 4.0
plt.figure()
plt.hist(np.log10(obs_sources[mask]['flux_aper_b']))
plt.show()
# ### Join the master_sources and obs_sources tables
#
# The `master_sources` and `obs_sources` tables share a common `msid` column. What we now want is to join the master RA and Dec positions and master source names with the individual observations table.
#
# - Use the [table.join()](http://astropy.readthedocs.org/en/stable/table/operations.html#join) function to make a single table called `sources` that has the master RA, Dec, and name included for each observation source.
#
# *HINT*: the defaults for `keys` and `join_type='inner'` are correct in this case, so the simplest possible call to `join()` will work!
#
# - *Intermediate*: Is the length of the new `sources` the same as `obs_sources`? What happened?
#
# - *Advanced*: Make a scatter plot of the RA (x-axis) and Dec (y-axis) difference between the master source position and the observation source position. You'll need to use `coordinates`!
sources = table.join(master_sources, obs_sources, join_type='inner')
len(sources), len(master_sources), len(obs_sources)
sources.colnames
# +
from astropy.coordinates import SkyCoord
import astropy.units as u
# Both the master and observation RA/Dec columns are in degrees (see the field tables above)
src_coord = SkyCoord(ra=sources['ra'], dec=sources['dec'], unit=u.deg)
obs_coord = SkyCoord(ra=sources['ra_b'], dec=sources['dec_b'], unit=u.deg)
d_ra = src_coord.ra - obs_coord.ra
d_dec = src_coord.dec - obs_coord.dec
# plot the positional offsets in arcseconds
plt.scatter(d_ra.arcsec, d_dec.arcsec)
# -
# ### Grouped properties of `sources`
#
# Finally, we can look at the variability properties of sources in the CDFS using the [`group_by()`](http://astropy.readthedocs.org/en/stable/table/operations.html#id2) functionality.
#
# This method makes a new table in which all of the sources with identical master ID are next to each other.
#
# - Make a new table `g_sources` which is the `sources` table grouped by the `msid` key using the `group_by()` method.
#
# The `g_sources` table is just a regular table with all of the `sources` in a particular order. The attribute `g_sources.groups` is an object that provides access to the `msid` subgroups. You can access the $i^{th}$ group with `g_sources.groups[i]`.
#
# In addition, the `g_sources.groups.indices` attribute is an array with the indices of the group boundaries.
#
# - Using `np.diff()` find the number of repeat observations of each master source. *HINT*: use the indices, Luke.
# - Print the 50th group and note which columns are the same for all group members and which are different. Does this make sense? In these few observations how many different target names were provided by observers?
g_sources = sources.group_by('msid')
np.diff(g_sources.groups.indices)
g_sources.groups[50]
# ### Aggregation
#
# The real power of grouping comes in the ability to create aggregate values for each of the groups, for instance the mean flux for each unique source. This is done with the [`aggregate()`](http://astropy.readthedocs.org/en/stable/table/operations.html#aggregation) method, which takes a function reference as its input. This function must take as input an array of values and return a single value.
#
# `aggregate` returns a new table that has a length equal to the number of groups.
#
# - Compute the mean of all columns for each unique source (i.e. each group) using `aggregate` and the `np.mean` function. Call this table `g_sources_mean`.
# - Notice that aggregation cannot form a mean for certain columns and these are dropped from the output. Use the `join()` function to restore the `master_sources` information to `g_sources_mean`.
g_sources_mean = table.join(g_sources.groups.aggregate(np.mean), master_sources, keys=['msid'], join_type='inner')
g_sources_mean
# [Back to top](#Tables-introduction)
| 06-Tables/Tables_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [Table of Contents](./table_of_contents.ipynb)
# + active=""
# \appendix
# -
# # Installation
from __future__ import division, print_function
# This book is written in Jupyter Notebook, a browser-based interactive Python environment that mixes Python, text, and math. I chose it because of the interactive features - I found Kalman filtering nearly impossible to learn until I started working in an interactive environment. It is difficult to form an intuition about many of the parameters until you can change them and immediately see the output. An interactive environment also allows you to play 'what if' scenarios. "What if I set $\mathbf{Q}$ to zero?" It is trivial to find out with Jupyter Notebook.
#
# Another reason I chose it is that most textbooks leave many things opaque. For example, there might be a beautiful plot next to some pseudocode. That plot was produced by software, but software that is not available to the reader. I want everything that went into producing this book to be available to you. How do you plot a covariance ellipse? You won't know if you read most books. With Jupyter Notebook all you have to do is look at the source code.
#
# Even if you choose to read the book online you will want Python and the SciPy stack installed so that you can write your own Kalman filters. There are many different ways to install these libraries, and I cannot cover them all, but I will cover a few typical scenarios.
# ## Installing the SciPy Stack
# This book requires IPython, Jupyter, NumPy, SciPy, SymPy, and Matplotlib. The SciPy stack of NumPy, SciPy, and Matplotlib depends on third party Fortran and C code, and is not trivial to install from source code. The SciPy website strongly urges using a pre-built installation, and I concur with this advice.
#
# Jupyter notebook is the software that allows you to run Python inside of the browser - the book is a collection of Jupyter notebooks. IPython provides the infrastructure for Jupyter and data visualization. NumPy and Scipy are packages which provide the linear algebra implementation that the filters use. Sympy performs symbolic math - I use it to find derivatives of algebraic equations. Finally, Matplotlib provides plotting capability.
#
# I use the Anaconda distribution from Continuum Analytics. This is an excellent distribution that combines all of the packages listed above, plus many others. IPython recommends this package for installing IPython. Installation is very straightforward, and it can be done alongside other Python installations you might already have on your machine. It is free to use. You may download it from here: http://continuum.io/downloads I strongly recommend using the latest Python 3 version that they provide. For now I support Python 2.7, but perhaps not much longer.
#
# There are other choices for installing the SciPy stack. You can find instructions here: http://scipy.org/install.html It can be very cumbersome, and I do not support it or provide any instructions on how to do it.
#
# Many Linux distributions come with these packages pre-installed. However, they are often somewhat dated and they will need to be updated as the book depends on recent versions of all. Updating a specific Linux installation is beyond the scope of this book. An advantage of the Anaconda distribution is that it does not modify your local Python installation, so you can install it and not break your Linux distribution. Some people have been tripped up by this. They install Anaconda, but the installed Python remains the default version and then the book's software doesn't run correctly.
#
# I do not run regression tests on old versions of these libraries. In fact, I know the code will not run on older versions (say, from 2014-2015). I do not want to spend my life doing tech support for a book, thus I put the burden on you to install a recent version of Python and the SciPy stack.
#
# You will need Python 2.7 or later installed. Almost all of my work is done in Python 3.6, but I periodically test on 2.7. I do not promise any specific check-in will work in 2.7, however. I use Python's `from __future__ import ...` statement to help with compatibility. For example, all prints need to use parentheses. If you try to add, say, `print x` into the book your script will fail; you must write `print(x)` as in Python 3.X.
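# For example, the compatibility import used throughout the book makes Python-3-style printing work under 2.7 (a minimal illustration):

```python
from __future__ import print_function

# Under Python 2.7 this import turns `print` into a function,
# so the Python-3 syntax below runs unchanged in both versions.
print("x =", 3)
```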
#
# Please submit a bug report at the book's [github repository](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) if you have installed the latest Anaconda and something does not work - I will continue to ensure the book will run with the latest Anaconda release. I'm rather indifferent if the book will not run on an older installation. I'm sorry, but I just don't have time to provide support for everyone's different setups. Packages like `jupyter notebook` are evolving rapidly, and I cannot keep up with all the changes *and* remain backwards compatible as well.
#
# If you need older versions of the software for other projects, note that Anaconda allows you to install multiple versions side-by-side. Documentation for this is here:
#
# https://conda.io/docs/user-guide/tasks/manage-python.html
#
# ## Installing FilterPy
#
# FilterPy is a Python library that implements all of the filters used in this book, and quite a few others. Installation is easy using `pip`. Issue the following from the command prompt:
#
# pip install filterpy
#
#
# FilterPy is written by me, and the latest development version is always available at https://github.com/rlabbe/filterpy.
# ## Downloading and Running the Book
# The book is stored in a github repository. From the command line type the following:
#
# git clone --depth=1 https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python.git
#
# This will create a directory named Kalman-and-Bayesian-Filters-in-Python. The `depth` parameter just gets you the latest version. Unless you need to see my entire commit history this is a lot faster and saves space.
#
# If you do not have git installed, browse to https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python where you can download the book via your browser.
#
# Now, from the command prompt change to the directory that was just created, and then run Jupyter notebook:
#
# cd Kalman-and-Bayesian-Filters-in-Python
# jupyter notebook
#
# A browser window should launch showing you all of the chapters in the book. Browse to the first chapter by clicking on it, then open the notebook in that subdirectory by clicking on the link.
#
# More information about running the notebook can be found here:
#
# http://jupyter-notebook-beginner-guide.readthedocs.org/en/latest/execute.html
# ## Companion Software
# Code that is specific to the book is stored with the book in the subdirectory *./kf_book*. This code is in a state of flux; I do not wish to document it here yet. I do mention in the book when I use code from this directory, so it should not be a mystery.
#
# In the *kf_book* subdirectory there are Python files with a name like *xxx*_internal.py. I use these to store functions that are useful for a specific chapter. This allows me to hide away Python code that is not particularly interesting to read - I may be generating a plot or chart, and I want you to focus on the contents of the chart, not the mechanics of how I generate that chart with Python. If you are curious as to the mechanics of that, just go and browse the source.
#
# Some chapters introduce functions that are useful for the rest of the book. Those functions are initially defined within the Notebook itself, but the code is also stored in a Python file that is imported if needed in later chapters. I do document when I do this where the function is first defined, but this is still a work in progress. I try to avoid this because then I always face the issue of code in the directory becoming out of sync with the code in the book. However, IPython Notebook does not give us a way to refer to code cells in other notebooks, so this is the only mechanism I know of to share functionality across notebooks.
#
# There is an undocumented directory called **experiments**. This is where I write and test code prior to putting it in the book. There is some interesting stuff in there, and feel free to look at it. As the book evolves I plan to create examples and projects, and a lot of this material will end up there. Small experiments will eventually just be deleted. If you are just interested in reading the book you can safely ignore this directory.
#
# The subdirectory *./kf_book* contains a css file containing the style guide for the book. The default look and feel of IPython Notebook is rather plain. Work is being done on this. I have followed the examples set by books such as [Probabilistic Programming and Bayesian Methods for Hackers](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1.ipynb). I have also been very influenced by Professor <NAME>'s fantastic work, [available here](https://github.com/barbagroup/CFDPython). I owe all of my look and feel to the work of these projects.
#
# ## Using Jupyter Notebook
# A complete tutorial on Jupyter Notebook is beyond the scope of this book. Many are available online. In short, Python code is placed in cells. These are prefaced with text like `In [1]:`, and the code itself is in a boxed area. If you press CTRL-ENTER while focus is inside the box the code will run and the results will be displayed below the box. Like this:
print(3+7.2)
# If you have this open in Jupyter Notebook now, go ahead and modify that code by changing the expression inside the print statement and pressing CTRL+ENTER. The output should be changed to reflect what you typed in the code cell.
# ## SymPy
# SymPy is a Python package for performing symbolic mathematics. The full scope of its abilities is beyond this book, but it can perform algebra, integrate and differentiate equations, find solutions to differential equations, and much more. For example, we use it to compute Jacobians of matrices and expected-value integrals.
#
# First, a simple example. We will import SymPy, initialize its pretty print functionality (which will print equations using LaTeX). We will then declare a symbol for SymPy to use.
# +
import sympy
sympy.init_printing(use_latex='mathjax')
phi, x = sympy.symbols('\phi, x')
phi
# -
# Notice how it prints the symbol `phi` using LaTeX. Now let's do some math. What is the derivative of $\sqrt{\phi}$?
sympy.diff('sqrt(phi)')
# We can factor equations
sympy.factor(phi**3 -phi**2 + phi - 1)
# and we can expand them.
((phi+1)*(phi-4)).expand()
# You can evaluate an equation for specific values of its variables:
w = x**2 - 3*x + 4
print(w.subs(x, 4))
print(w.subs(x, 12))
# You can also use strings for equations that use symbols that you have not defined:
x = sympy.expand('(t+1)*2')
x
# Now let's use SymPy to compute the Jacobian of a matrix. Given the function
#
# $$h=\sqrt{(x^2 + z^2)}$$
#
# find the Jacobian with respect to x, y, and z.
# +
x, y, z = sympy.symbols('x y z')
H = sympy.Matrix([sympy.sqrt(x**2 + z**2)])
state = sympy.Matrix([x, y, z])
H.jacobian(state)
# -
# Now let's compute the discrete process noise matrix $\mathbf Q$ given the continuous process noise matrix
# $$\mathbf Q = \Phi_s \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix}$$
#
# The integral is
#
# $$\mathbf Q = \int_0^{\Delta t} \mathbf F(t)\mathbf Q\mathbf F^T(t)\, dt$$
#
# where
# $$\mathbf F(\Delta t) = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$
# +
dt = sympy.symbols('\Delta{t}')
F_k = sympy.Matrix([[1, dt, dt**2/2],
[0, 1, dt],
[0, 0, 1]])
Q = sympy.Matrix([[0,0,0],
[0,0,0],
[0,0,1]])
sympy.integrate(F_k*Q*F_k.T,(dt, 0, dt))
# -
# ## Various Links
# https://ipython.org/
#
# https://jupyter.org/
#
# https://www.scipy.org/
| Appendix-A-Installation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="tOtQTzjP7v4o"
# ### Source:
# 1) https://github.com/eitanrich/torch-mfa
#
# 2) https://github.com/eitanrich/gans-n-gmms
#
#
# + [markdown] id="wA4dtq2LOy-6"
# ### Getting: CelebA Dataset
# + id="_2KaFVMC5OsQ"
# !wget -q https://raw.githubusercontent.com/sayantanauddy/vae_lightning/main/data.py
# + [markdown] id="vjOJIbIt787p"
# ### Getting helper functions
# + id="JbQAykeR60Jt"
# !wget -q https://raw.githubusercontent.com/probml/pyprobml/master/scripts/mfa_celeba_helpers.py
# + [markdown] id="1dFSnp4890hi"
# ### Get the Kaggle API token and upload it to Colab. Follow the instructions [here](https://github.com/Kaggle/kaggle-api#api-credentials).
#
# + colab={"base_uri": "https://localhost:8080/"} id="vbKcXwCedIN8" outputId="41eae946-da19-44ea-9b10-12819886ab54"
# !pip install kaggle
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 72} id="qncitxB3oyVF" outputId="381edb7e-dfdc-4954-fe22-7d7da87e2b2a"
from google.colab import files
uploaded = files.upload()
# + id="Yjc9kbpaoyki"
# !mkdir /root/.kaggle
# + id="I3hCPht5pjwa"
# !cp kaggle.json /root/.kaggle/kaggle.json
# + id="81GmoSPCpj4T"
# !chmod 600 /root/.kaggle/kaggle.json
# + [markdown] id="DWU2S_TslNHS"
# ### Training and saving the checkpoint
# + colab={"base_uri": "https://localhost:8080/"} id="cQxrKPIJ7d1O" outputId="91d9a158-6798-4e17-e0a8-4b471aeab54a"
# !pip install torchvision
# + id="GvyDg8146HIX" colab={"base_uri": "https://localhost:8080/"} outputId="6413f170-08b3-4718-90f1-381f8568e8ae"
# !pip install pytorch-lightning
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="mOIYhDs5kgYh" outputId="3885483c-b3fd-46e6-bc10-8d39a5f4b1b2"
import sys, os
import torch
from torchvision.datasets import CelebA, MNIST
import torchvision.transforms as transforms
from pytorch_lightning import LightningDataModule, LightningModule, Trainer
from torch.utils.data import DataLoader, random_split
import numpy as np
from matplotlib import pyplot as plt
from imageio import imwrite
from packaging import version
from mfa_celeba_helpers import *
from data import CelebADataset, CelebADataModule
"""
MFA model training (data fitting) example.
Note that actual EM (and SGD) training code are part of the MFA class itself.
"""
def main(argv):
assert version.parse(torch.__version__) >= version.parse('1.2.0')
dataset = argv[1] if len(argv) == 2 else 'celeba'
print('Preparing dataset and parameters for', dataset, '...')
if dataset == 'celeba':
image_shape = [64, 64, 3] # The input image shape
n_components = 300 # Number of components in the mixture model
n_factors = 10 # Number of factors - the latent dimension (same for all components)
batch_size = 1000 # The EM batch size
num_iterations = 30 # Number of EM iterations (=epochs)
feature_sampling = 0.2 # For faster responsibilities calculation, randomly sample the coordinates (or False)
mfa_sgd_epochs = 0 # Perform additional training with diagonal (per-pixel) covariance, using SGD
init_method = 'rnd_samples' # Initialize each component from few random samples using PPCA
trans = transforms.Compose([CropTransform((25, 50, 25+128, 50+128)), transforms.Resize(image_shape[0]), transforms.ToTensor(), ReshapeTransform([-1])])
train_set = CelebADataset(root='./data', split='train', transform=trans, download=True)
test_set = CelebADataset(root='./data', split='test', transform=trans, download=True)
elif dataset == 'mnist':
image_shape = [28, 28] # The input image shape
n_components = 50 # Number of components in the mixture model
n_factors = 6 # Number of factors - the latent dimension (same for all components)
batch_size = 1000 # The EM batch size
num_iterations = 30 # Number of EM iterations (=epochs)
feature_sampling = False # For faster responsibilities calculation, randomly sample the coordinates (or False)
mfa_sgd_epochs = 0 # Perform additional training with diagonal (per-pixel) covariance, using SGD
init_method = 'kmeans' # Initialize by using k-means clustering
trans = transforms.Compose([transforms.ToTensor(), ReshapeTransform([-1])])
train_set = MNIST(root='./data', train=True, transform=trans, download=True)
test_set = MNIST(root='./data', train=False, transform=trans, download=True)
else:
assert False, 'Unknown dataset: ' + dataset
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model_dir = './models/'+dataset
os.makedirs(model_dir, exist_ok=True)
figures_dir = './figures/'+dataset
os.makedirs(figures_dir, exist_ok=True)
model_name = 'c_{}_l_{}_init_{}'.format(n_components, n_factors, init_method)
print('Defining the MFA model...')
model = MFA(n_components=n_components, n_features=np.prod(image_shape), n_factors=n_factors,
init_method=init_method).to(device=device)
print('EM fitting: {} components / {} factors / batch size {} ...'.format(n_components, n_factors, batch_size))
ll_log = model.batch_fit(train_set, test_set, batch_size=batch_size, max_iterations=num_iterations,
feature_sampling=feature_sampling)
if mfa_sgd_epochs > 0:
print('Continuing training using SGD with diagonal (instead of isotropic) noise covariance...')
model.isotropic_noise = False
ll_log_sgd = model.sgd_mfa_train(train_set, test_size=256, max_epochs=mfa_sgd_epochs,
feature_sampling=feature_sampling)
ll_log += ll_log_sgd
print('Saving the model...')
torch.save(model.state_dict(), os.path.join(model_dir, 'model_'+model_name+'.pth'))
print('Visualizing the trained model...')
model_image = visualize_model(model, image_shape=image_shape, end_component=10)
imwrite(os.path.join(figures_dir, 'model_'+model_name+'.jpg'), model_image)
print('Generating random samples...')
rnd_samples, _ = model.sample(100, with_noise=False)
mosaic = samples_to_mosaic(rnd_samples, image_shape=image_shape)
imwrite(os.path.join(figures_dir, 'samples_'+model_name+'.jpg'), mosaic)
print('Plotting test log-likelihood graph...')
plt.plot(ll_log, label='c{}_l{}_b{}'.format(n_components, n_factors, batch_size))
plt.grid(True)
plt.savefig(os.path.join(figures_dir, 'training_graph_'+model_name+'.jpg'))
print('Done')
if __name__ == "__main__":
main(sys.argv)
| notebooks/mix_PPCA_celeba_training.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// # PaddleOCR on DJL
// In this tutorial we use PaddleOCR to download pretrained text-processing models and run optical character recognition (OCR) on a given image. The tutorial has three parts:
//
// - Text detection: find the text blocks in the image
// - Text angle classification: decide whether the text needs to be rotated
// - Text recognition: recognize the text inside each block
//
// ## Import dependencies
// In this example the PaddlePaddle engine that runs the models has no NDArray operations of its own, so we run DJL in hybrid mode and borrow another engine's array operations for pre- and post-processing. Here we import PyTorch to do that work:
// +
// // %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
// %maven ai.djl:api:0.15.0
// %maven ai.djl.paddlepaddle:paddlepaddle-model-zoo:0.15.0
// %maven org.slf4j:slf4j-simple:1.7.32
// second engine to do preprocessing and postprocessing
// %maven ai.djl.pytorch:pytorch-engine:0.15.0
// -
import ai.djl.*;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.modality.cv.output.*;
import ai.djl.modality.cv.util.NDImageUtils;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.repository.zoo.*;
import ai.djl.paddlepaddle.zoo.cv.objectdetection.PpWordDetectionTranslator;
import ai.djl.paddlepaddle.zoo.cv.imageclassification.PpWordRotateTranslator;
import ai.djl.paddlepaddle.zoo.cv.wordrecognition.PpWordRecognitionTranslator;
import ai.djl.translate.*;
import java.util.concurrent.ConcurrentHashMap;
// ## Loading the image
// First, let's load the sample flight-ticket image used in this tutorial:
String url = "https://resources.djl.ai/images/flight_ticket.jpg";
Image img = ImageFactory.getInstance().fromUrl(url);
img.getWrappedImage();
// ## Text detection
// We first load the text-detection model from the [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.0/doc/doc_en/inference_en.md#convert-detection-model-to-inference-model) toolkit, then create a DJL `Predictor` named `detector`.
//
var criteria1 = Criteria.builder()
.optEngine("PaddlePaddle")
.setTypes(Image.class, DetectedObjects.class)
.optModelUrls("https://resources.djl.ai/test-models/paddleOCR/mobile/det_db.zip")
.optTranslator(new PpWordDetectionTranslator(new ConcurrentHashMap<String, String>()))
.build();
var detectionModel = criteria1.loadModel();
var detector = detectionModel.newPredictor();
// Next we detect the text blocks in the image. The raw output of this model is a bitmap marking every text region; the `PpWordDetectionTranslator` converts that bitmap into rectangular boxes that we can use to crop the image.
var detectedObj = detector.predict(img);
Image newImage = img.duplicate();
newImage.drawBoundingBoxes(detectedObj);
newImage.getWrappedImage();
// As shown above, the detected boxes are quite narrow and do not enclose the full text blocks. Let's use an `extendRect` function to grow each box to the needed size, then crop out the text block with `getSubImage`.
// +
Image getSubImage(Image img, BoundingBox box) {
Rectangle rect = box.getBounds();
double[] extended = extendRect(rect.getX(), rect.getY(), rect.getWidth(), rect.getHeight());
int width = img.getWidth();
int height = img.getHeight();
int[] recovered = {
(int) (extended[0] * width),
(int) (extended[1] * height),
(int) (extended[2] * width),
(int) (extended[3] * height)
};
return img.getSubImage(recovered[0], recovered[1], recovered[2], recovered[3]);
}
double[] extendRect(double xmin, double ymin, double width, double height) {
double centerx = xmin + width / 2;
double centery = ymin + height / 2;
if (width > height) {
width += height * 2.0;
height *= 3.0;
} else {
height += width * 2.0;
width *= 3.0;
}
double newX = centerx - width / 2 < 0 ? 0 : centerx - width / 2;
double newY = centery - height / 2 < 0 ? 0 : centery - height / 2;
double newWidth = newX + width > 1 ? 1 - newX : width;
double newHeight = newY + height > 1 ? 1 - newY : height;
return new double[] {newX, newY, newWidth, newHeight};
}
// -
// Let's display one of the text blocks:
List<DetectedObjects.DetectedObject> boxes = detectedObj.items();
var sample = getSubImage(img, boxes.get(5).getBoundingBox());
sample.getWrappedImage();
// ## Text angle classification
// We export this model from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.0/doc/doc_en/inference_en.md#convert-angle-classification-model-to-inference-model) and use it to check whether the image or text needs to be rotated. The code below loads the model and creates a `rotateClassifier` predictor.
var criteria2 = Criteria.builder()
.optEngine("PaddlePaddle")
.setTypes(Image.class, Classifications.class)
.optModelUrls("https://resources.djl.ai/test-models/paddleOCR/mobile/cls.zip")
.optTranslator(new PpWordRotateTranslator())
.build();
var rotateModel = criteria2.loadModel();
var rotateClassifier = rotateModel.newPredictor();
// ## Text recognition
//
// We export this model from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.0/doc/doc_en/inference_en.md#convert-recognition-model-to-inference-model) to recognize the text in the image, loading it in the same way as the steps above.
//
var criteria3 = Criteria.builder()
.optEngine("PaddlePaddle")
.setTypes(Image.class, String.class)
.optModelUrls("https://resources.djl.ai/test-models/paddleOCR/mobile/rec_crnn.zip")
.optTranslator(new PpWordRecognitionTranslator())
.build();
var recognitionModel = criteria3.loadModel();
var recognizer = recognitionModel.newPredictor();
// Now we can try both models on the text block we cropped earlier:
System.out.println(rotateClassifier.predict(sample));
recognizer.predict(sample);
// Finally, let's chain the models together and run them over the whole image to see the result. DJL provides rich imaging utilities that let you extract the text from an image and render it nicely.
// +
Image rotateImg(Image image) {
try (NDManager manager = NDManager.newBaseManager()) {
NDArray rotated = NDImageUtils.rotate90(image.toNDArray(manager), 1);
return ImageFactory.getInstance().fromNDArray(rotated);
}
}
List<String> names = new ArrayList<>();
List<Double> prob = new ArrayList<>();
List<BoundingBox> rect = new ArrayList<>();
for (int i = 0; i < boxes.size(); i++) {
Image subImg = getSubImage(img, boxes.get(i).getBoundingBox());
if (subImg.getHeight() * 1.0 / subImg.getWidth() > 1.5) {
subImg = rotateImg(subImg);
}
Classifications.Classification result = rotateClassifier.predict(subImg).best();
if ("Rotate".equals(result.getClassName()) && result.getProbability() > 0.8) {
subImg = rotateImg(subImg);
}
String name = recognizer.predict(subImg);
names.add(name);
prob.add(-1.0);
rect.add(boxes.get(i).getBoundingBox());
}
newImage.drawBoundingBoxes(new DetectedObjects(names, prob, rect));
newImage.getWrappedImage();
| jupyter/paddlepaddle/paddle_ocr_java_zh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
# List of high schools
high_schools = ["Hernandez High School", "Figueroa High School", "Wilson High School", "Wright High School"]
print(high_schools)
for school in high_schools:
    print(school)
# A dictionary of high schools and the type of school.
high_school_types = [{"High School": "Griffin", "Type":"District"},
{"High School": "Figueroa", "Type": "District"},
{"High School": "Wilson", "Type": "Charter"},
{"High School": "Wright", "Type": "Charter"}]
for school in high_school_types:
print(school)
# +
# 4.3.5 OVERVIEW OF PANDAS LIBRARIES
# -
# List of high schools
high_schools = ["Huang High School", "Figueroa High School", "Shelton High School", "Hernandez High School","Griffin High School","Wilson High School", "Cabrera High School", "Bailey High School", "Holden High School", "Pena High School", "Wright High School","Rodriguez High School", "Johnson High School", "Ford High School", "Thomas High School"]
# Add the Pandas dependency (pd).
import pandas as pd
# Create a Pandas Series from the list.
school_series = pd.Series(high_schools)
school_series
# SKILL DRILL Iterate through the school_series and print out each high school.
for school in school_series:
    print(school)
# SKILL DRILL print using index method
for index in range(0, len(school_series)):
    print(school_series[index])
# +
# PANDAS DATAFRAMES
# CONVERT A LIST OF DICTIONARIES TO DATAFRAME
# -
# CREATING A dictionary of high schools
high_school_dicts = [{"School ID": 0, "school_name": "Huang High School", "type": "District"},
{"School ID": 1, "school_name": "Figueroa High School", "type": "District"},
{"School ID": 2, "school_name":"Shelton High School", "type": "Charter"},
{"School ID": 3, "school_name":"Hernandez High School", "type": "District"},
{"School ID": 4, "school_name":"Griffin High School", "type": "Charter"}]
# convert the array, or list of dictionaries, to a DataFrame using school_df = pd.DataFrame(high_school_dicts)
school_df = pd.DataFrame(high_school_dicts)
school_df
# +
# CONVERT A LIST OR SERIES TO A DATAFRAME
# -
import pandas as pd
# +
# CONVERTING LISTS TO A DATAFRAME
# Three separate lists of information on high schools
school_id = [0, 1, 2, 3, 4]
school_name = ["Huang High School", "Figueroa High School",
"Shelton High School", "Hernandez High School","Griffin High School"]
type_of_school = ["District", "District", "Charter", "District","Charter"]
# -
# Initialize a new DataFrame. Creates an empty DataFrame.
schools_df = pd.DataFrame()
# +
# Add the list to a new DataFrame.
schools_df["School ID"] = school_id
# Print the DataFrame.
schools_df
# +
# SKILL DRILL Add the school_name list as "School Name" and the type_of_school list as "Type"
# to the schools_df DataFrame. Then print out the schools_df DataFrame.
schools_df["school_name"] = school_name
schools_df["type"] = type_of_school
# Print the DataFrame.
schools_df
# -
# Create a dictionary of information on high schools.
high_schools_dict = {"School ID":school_id, "school_name":school_name, "type":type_of_school}
high_schools_dict
# SKILL DRILL - Convert dictionary high_school_dict to a DataFrame
pd.DataFrame(high_schools_dict)
# SKILL DRILL - Convert dictionary high_school_dict to a DataFrame
schoolDict_df = pd.DataFrame(high_schools_dict)
schoolDict_df
# +
# ANATOMY OF A DATAFRAME
# -
school_df.columns
school_df.index
school_df.values
# +
# SKILL DRILL
# -
school_dict = {
"School ID": [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],
"school_name": ["Huang High School", "Figueroa High School", "Shelton High School", "Hernandez High School","Griffin High School","Wilson High School", "Cabrera High School", "Bailey High School", "Holden High School", "Pena High School", "Wright High School","Rodriguez High School", "Johnson High School", "Ford High School", "Thomas High School"],
"type": ["District","District","Charter","District","Charter","Charter","Charter","District","Charter","Charter","Charter","District","District","District","Charter"]
}
# convert to DataFrame
pd.DataFrame(school_dict)
# +
# 4.4.1 IMPORT AND INSPECT CSV FILES
# -
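# The module continues by reading school data from a CSV file. As a minimal sketch of that step (the column layout mirrors the dictionaries above; the CSV is built in memory here so the example is self-contained, whereas the module would call `pd.read_csv` on an actual file path):

```python
import io
import pandas as pd

# In the module this would be pd.read_csv("<path to csv>"); here we read
# from an in-memory buffer so the example runs anywhere.
csv_data = io.StringIO(
    "School ID,school_name,type\n"
    "0,Huang High School,District\n"
    "1,Figueroa High School,District\n"
    "2,Shelton High School,Charter\n"
)
school_data_df = pd.read_csv(csv_data)

# Typical inspection steps: peek at the rows, check the dtypes,
# and count missing values.
print(school_data_df.head())
print(school_data_df.dtypes)
print(school_data_df.isnull().sum())
```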
| pandas_practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py3_sci]
# language: python
# name: conda-env-py3_sci-py
# ---
# # Measuring the potential for internal priming in Nanopore reads
#
# Does nanopore data suffer from internal priming in the same way as Illumina?
# +
import sys
import os
import re
from glob import glob
import random
from collections import defaultdict
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn as sns
import pyBigWig as pybw
import pysam
## Default plotting params
# %matplotlib inline
sns.set(font='Arial')
plt.rcParams['svg.fonttype'] = 'none'
style = sns.axes_style('white')
style.update(sns.axes_style('ticks'))
style['xtick.major.size'] = 2
style['ytick.major.size'] = 2
sns.set(font_scale=2, style=style)
pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7'])
cmap = ListedColormap(pal.as_hex())
sns.set_palette(pal)
sns.palplot(pal)
plt.show()
# +
FASTA = '/cluster/ggs_lab/mtparker/Arabidopsis_annotations/TAIR10/ensembl/release_35/Arabidopsis_thaliana.TAIR10.dna.toplevel.fa'
with pysam.FastaFile(FASTA) as fasta, open('polya_sites.bed', 'w') as polya:
for chrom in fasta.references:
seq = fasta.fetch(chrom)
for polya_site in re.finditer('(A{6})|(T{6})', seq):
strand = '+' if polya_site.group(1) else '-'
start = polya_site.start()
end = polya_site.end()
polya.write(f'{chrom}\t{start}\t{end}\tpolya\t.\t{strand}\n')
# -
# !head polya_sites.bed
# !bedtools getfasta -s -fi {FASTA} -bed polya_sites.bed -fo stdout | head
# + language="bash"
#
# ARAPORT='/cluster/ggs_lab/mtparker/Arabidopsis_annotations/Araport/v11/201606/Araport11_GFF3_genes_transposons.201606.no_chr.gtf'
#
# bedtools intersect -s -f 1 -u \
# -a polya_sites.bed \
# -b <(awk '$3 == "CDS"' $ARAPORT) |
# bedtools intersect -v -s \
# -a stdin \
# -b <(awk '$3 == "3UTR"' $ARAPORT) > polya_cds.bed
# wc -l polya_cds.bed
# +
def parse_exons(record):
start = int(record[1])
end = int(record[2])
exstarts = np.fromstring(record[11], sep=',') + start
exends = exstarts + np.fromstring(record[10], sep=',')
exons = np.dstack([exstarts, exends])[0]
return exons
def get_last_exon(record, flanksize=200):
chrom = record[0].replace('Chr', '')
strand = record[5]
exons = parse_exons(record)
if strand == '+':
last_exon = exons[-1]
else:
last_exon = exons[0]
return chrom, last_exon[0], last_exon[1], strand
# -
last_exons = []
with open('/cluster/ggs_lab/mtparker/Arabidopsis_annotations/Araport/v11/201606/Araport11_protein_coding.201606.bed') as bed:
for record in bed:
last_exons.append(get_last_exon(record.split()))
last_exons = pd.DataFrame(last_exons, columns=['chrom', 'start', 'end', 'strand'])
last_exons.head()
# +
fwd_bws = [
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180201_1617_20180201_FAH45730_WT_Col0_2916_regular_seq/aligned_data/TAIR10/201901_col0_2916_fwd_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180413_1558_20180413_FAH77434_mRNA_WT_Col0_2917/aligned_data/TAIR10/201901_col0_2917_fwd_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180416_1534_20180415_FAH83697_mRNA_WT_Col0_2918/aligned_data/TAIR10/201901_col0_2918_fwd_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180418_1428_20180418_FAH83552_mRNA_WT_Col0_2919/aligned_data/TAIR10/201901_col0_2919_fwd_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180405_FAH59362_WT_Col0_2917/aligned_data/TAIR10/201903_col0_2917_exp2_fwd_three-prime.bigwig'
]
fwd_bws = [pybw.open(fn) for fn in fwd_bws]
rev_bws = [
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180201_1617_20180201_FAH45730_WT_Col0_2916_regular_seq/aligned_data/TAIR10/201901_col0_2916_rev_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180413_1558_20180413_FAH77434_mRNA_WT_Col0_2917/aligned_data/TAIR10/201901_col0_2917_rev_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180416_1534_20180415_FAH83697_mRNA_WT_Col0_2918/aligned_data/TAIR10/201901_col0_2918_rev_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180418_1428_20180418_FAH83552_mRNA_WT_Col0_2919/aligned_data/TAIR10/201901_col0_2919_rev_three-prime.bigwig',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180405_FAH59362_WT_Col0_2917/aligned_data/TAIR10/201903_col0_2917_exp2_rev_three-prime.bigwig'
]
rev_bws = [pybw.open(fn) for fn in rev_bws]
# +
def has_three_prime_termination(chrom, start, end, strand, bw, w=13):
win_start = start - w
win_end = end + w
three_prime_ends = bw.values(chrom, win_start, win_end, numpy=True)
three_prime_ends[np.isnan(three_prime_ends)] = 0
return three_prime_ends.sum()
internal_priming_counts = defaultdict(list)
with open('polya_cds.bed') as bed:
for record in bed:
for fwd_bw, rev_bw in zip(fwd_bws, rev_bws):
chrom, start, end, *_, strand = record.split()
start, end = int(start), int(end)
bw = fwd_bw if strand == '+' else rev_bw
ip = has_three_prime_termination(chrom, start, end, strand, bw)
internal_priming_counts[(chrom, start, end, strand)].append(ip)
internal_priming_counts = pd.DataFrame(internal_priming_counts).T
internal_priming_counts.columns = ['2916', '2917a', '2918', '2919', '2917b']
internal_priming_counts['2917'] = internal_priming_counts.pop('2917a') + internal_priming_counts.pop('2917b')
internal_priming_counts.head()
# -
internal_priming_counts.shape
is_last_exon = []
for chrom, start, end, strand in internal_priming_counts.index.to_frame().itertuples(index=False):
if len(last_exons.query(f'chrom == "{chrom}" & strand == "{strand}" & start <= {start} & end >= {end}')):
is_last_exon.append(True)
else:
is_last_exon.append(False)
internal_priming_counts['last_exon'] = is_last_exon
internal_priming_counts.head()
internal_priming_counts.shape
internal_priming_counts[internal_priming_counts[['2916', '2917', '2918', '2919']].sum(1) > 0].shape
len(internal_priming_counts[internal_priming_counts[['2916', '2917', '2918', '2919']].sum(1) > 0]) / len(internal_priming_counts) * 100
len(internal_priming_counts[internal_priming_counts[['2916', '2917', '2918', '2919']].astype(bool).sum(1) == 1])
len(internal_priming_counts[internal_priming_counts[['2916', '2917', '2918', '2919']].astype(bool).sum(1) == 1]) / len(internal_priming_counts) * 100
internal_priming_counts[(internal_priming_counts[['2916', '2917', '2918', '2919']].sum(1).astype(bool)) & internal_priming_counts.last_exon].shape
internal_priming_counts['supported_in_all'] = internal_priming_counts[['2916', '2917', '2918', '2919']].astype(bool).sum(1) == 4
internal_priming_counts[internal_priming_counts.supported_in_all]
| notebooks/07_internal_priming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Adversarial Attacks Example in PyTorch
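# Before diving into the full pipeline, the core idea behind the simplest gradient-based attack, the fast gradient sign method (FGSM), can be sketched in a few lines. This is a toy NumPy illustration on a fixed quadratic loss, not the notebook's implementation; the weights and epsilon below are made up:

```python
import numpy as np

# Toy setting: a fixed linear model with squared-error loss
# L(x) = (w.x - y)^2.  All numbers here are illustrative.
w = np.array([0.5, -1.0, 2.0])
y = 0.0

def loss(x):
    return (w @ x - y) ** 2

def fgsm_step(x, grad_x, eps):
    # Move every input coordinate by eps in the direction that
    # increases the loss: x_adv = x + eps * sign(dL/dx).
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 2.0, 0.5])
grad = 2.0 * (w @ x - y) * w          # analytic dL/dx for the quadratic loss
x_adv = fgsm_step(x, grad, eps=0.1)

print(loss(x_adv) > loss(x))          # True: the perturbation raises the loss
```

In a real attack the gradient comes from backpropagation through a trained network rather than an analytic formula, but the update rule is the same.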
# + [markdown] colab_type="text" id="17loqDVddeFB"
# ## Import Dependencies
#
# This section imports all necessary libraries, such as PyTorch.
# + colab={} colab_type="code" id="eDXHEl0AdU3q"
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
import math
import torch.backends.cudnn as cudnn
import os
import argparse
# + [markdown] colab_type="text" id="nlTHx8sOdg27"
# ### GPU Check
# + colab={"base_uri": "https://localhost:8080/", "height": 33} colab_type="code" executionInfo={"elapsed": 900, "status": "ok", "timestamp": 1560944452374, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="3w_z5lpRds2T" outputId="c37de9a4-2bc0-488b-8f71-dd75b16d15f5"
device = 'cuda' if torch.cuda.is_available() else 'cpu'
if torch.cuda.is_available():
print("Using GPU.")
else:
print("Using CPU.")
# + [markdown] colab_type="text" id="2fjrmAzkdtqH"
# ## Data Preparation
# + colab={"base_uri": "https://localhost:8080/", "height": 33} colab_type="code" executionInfo={"elapsed": 1347, "status": "ok", "timestamp": 1560944456251, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="2DweIjUMdxFm" outputId="3b1ef0e2-219b-4b80-bd00-95351cb70e60"
# MNIST dataloader declaration
print('==> Preparing data..')
# The standard output of the torchvision MNIST data set is [0,1] range, which
# is what we want for later processing. All we need for a transform, is to
# translate it to tensors.
# We first download the train and test datasets if necessary and then load them into pytorch dataloaders.
mnist_train_dataset = datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
mnist_test_dataset = datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor(), download=True)
mnist_dataset_sizes = {'train': len(mnist_train_dataset), 'test': len(mnist_test_dataset)} # a dictionary holding the sizes of the train and test datasets
mnist_train_loader = torch.utils.data.DataLoader(
dataset=mnist_train_dataset,
batch_size=256,
shuffle=True)
mnist_test_loader = torch.utils.data.DataLoader(
dataset=mnist_test_dataset,
batch_size=1,
shuffle=True)
mnist_dataloaders = {'train' : mnist_train_loader ,'test' : mnist_test_loader} # a dictionary to keep both train and test loaders
# + colab={"base_uri": "https://localhost:8080/", "height": 67} colab_type="code" executionInfo={"elapsed": 2702, "status": "ok", "timestamp": 1560944463208, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="6asXL_9mAnvY" outputId="9494f003-4238-4daa-af0d-6f518e64ca35"
# CIFAR10 dataloader declaration
print('==> Preparing data..')
# The standard output of the torchvision CIFAR data set is [0,1] range, which
# is what we want for later processing. All we need for a transform, is to
# translate it to tensors.
# we first download the train and test datasets if necessary and then load them into pytorch dataloaders
cifar_train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms.ToTensor())
cifar_train_loader = torch.utils.data.DataLoader(cifar_train_dataset, batch_size=128, shuffle=True, num_workers=2)
cifar_test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms.ToTensor())
cifar_test_loader = torch.utils.data.DataLoader(cifar_test_dataset, batch_size=100, shuffle=False, num_workers=2)
# these are the output categories from the CIFAR dataset
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# + [markdown] colab_type="text" id="NZx4ThVgd7Ad"
# ## Model Definition
#
# We use the LeNet model for the MNIST dataset because the dataset is not very complex and LeNet easily reaches a high accuracy that we can then attack. For the CIFAR10 dataset, however, we use the more complex DenseNet model to reach an accuracy of about 90% before attacking.
# -
# ### LeNet
# + colab={} colab_type="code" id="yee8Mby2d6Hl"
# LeNet Model definition
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2)) #first convolutional layer
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) #second convolutional layer with dropout
x = x.view(-1, 320) #making the data flat
x = F.relu(self.fc1(x)) #fully connected layer
x = F.dropout(x, training=self.training) #final dropout
x = self.fc2(x) # last fully connected layer
return F.log_softmax(x, dim=1) #output layer
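# The flattened size of 320 in `x.view(-1, 320)` follows from the conv/pool arithmetic on a 28x28 MNIST input. A quick sanity check of that number (plain Python, using the standard output-size formula):

```python
def conv_out(size, kernel, stride=1, padding=0):
    # standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

size = 28                         # MNIST images are 28x28
size = conv_out(size, 5) // 2     # conv1 (5x5) -> 24, then 2x2 max-pool -> 12
size = conv_out(size, 5) // 2     # conv2 (5x5) -> 8, then 2x2 max-pool -> 4
flat_features = 20 * size * size  # conv2 produces 20 channels
print(flat_features)              # 320, matching nn.Linear(320, 50)
```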
# + [markdown] colab_type="text" id="G9MAzraE7jgn"
# This is the standard implementation of the DenseNet proposed in the following paper.
# [DenseNet paper](https://arxiv.org/abs/1608.06993)
#
# The idea of Densely Connected Networks is that every layer is connected to all its previous layers and its succeeding ones, thus forming a Dense Block.
#
# 
#
# The implementation is broken into smaller units called Dense Blocks. Each convolution of the previous layer's output is followed by concatenation of the tensors. This is possible because the height and width of the input stay the same after a convolution with kernel size 3×3 and padding 1.
# In this way the feature maps produced are more diversified and tend to contain richer patterns. Another advantage is better information flow during training.
# -
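# The concatenation in a dense block means each layer sees the channels of all preceding layers: with growth rate k, layer i receives in_planes + i*k input channels and appends k more. A small sketch of that channel bookkeeping (plain Python, mirroring `_make_dense_layers` in the code below):

```python
def dense_block_channels(in_planes, growth_rate, n_layers):
    """Input channel count seen by each layer of a dense block."""
    seen = []
    for _ in range(n_layers):
        seen.append(in_planes)
        in_planes += growth_rate  # torch.cat appends growth_rate channels
    return seen, in_planes

# first dense block of the CIFAR model: 24 input planes, growth rate 12, 6 layers
per_layer, out_channels = dense_block_channels(24, 12, 6)
print(per_layer)     # [24, 36, 48, 60, 72, 84]
print(out_channels)  # 96 channels leave the block
```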
# ### DenseNet
# + colab={} colab_type="code" id="6-OGOBcYEJXn"
# This is a basic densenet model definition.
class Bottleneck(nn.Module):
def __init__(self, in_planes, growth_rate):
super(Bottleneck, self).__init__()
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv1 = nn.Conv2d(in_planes, 4*growth_rate, kernel_size=1, bias=False)
self.bn2 = nn.BatchNorm2d(4*growth_rate)
self.conv2 = nn.Conv2d(4*growth_rate, growth_rate, kernel_size=3, padding=1, bias=False)
def forward(self, x):
out = self.conv1(F.relu(self.bn1(x)))
out = self.conv2(F.relu(self.bn2(out)))
out = torch.cat([out,x], 1)
return out
class Transition(nn.Module):
def __init__(self, in_planes, out_planes):
super(Transition, self).__init__()
self.bn = nn.BatchNorm2d(in_planes)
self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=1, bias=False)
def forward(self, x):
out = self.conv(F.relu(self.bn(x)))
out = F.avg_pool2d(out, 2)
return out
class DenseNet(nn.Module):
def __init__(self, block, nblocks, growth_rate=12, reduction=0.5, num_classes=10):
super(DenseNet, self).__init__()
self.growth_rate = growth_rate
num_planes = 2*growth_rate
self.conv1 = nn.Conv2d(3, num_planes, kernel_size=3, padding=1, bias=False)
self.dense1 = self._make_dense_layers(block, num_planes, nblocks[0])
num_planes += nblocks[0]*growth_rate
out_planes = int(math.floor(num_planes*reduction))
self.trans1 = Transition(num_planes, out_planes)
num_planes = out_planes
self.dense2 = self._make_dense_layers(block, num_planes, nblocks[1])
num_planes += nblocks[1]*growth_rate
out_planes = int(math.floor(num_planes*reduction))
self.trans2 = Transition(num_planes, out_planes)
num_planes = out_planes
self.dense3 = self._make_dense_layers(block, num_planes, nblocks[2])
num_planes += nblocks[2]*growth_rate
out_planes = int(math.floor(num_planes*reduction))
self.trans3 = Transition(num_planes, out_planes)
num_planes = out_planes
self.dense4 = self._make_dense_layers(block, num_planes, nblocks[3])
num_planes += nblocks[3]*growth_rate
self.bn = nn.BatchNorm2d(num_planes)
self.linear = nn.Linear(num_planes, num_classes)
def _make_dense_layers(self, block, in_planes, nblock):
layers = []
for i in range(nblock):
layers.append(block(in_planes, self.growth_rate))
in_planes += self.growth_rate
return nn.Sequential(*layers)
def forward(self, x):
out = self.conv1(x)
out = self.trans1(self.dense1(out))
out = self.trans2(self.dense2(out))
out = self.trans3(self.dense3(out))
out = self.dense4(out)
out = F.avg_pool2d(F.relu(self.bn(out)), 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
# This creates a densenet model with basic settings for cifar.
def densenet_cifar():
return DenseNet(Bottleneck, [6,12,24,16], growth_rate=12)
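# It is worth checking where the input size of the final `nn.Linear` comes from for `densenet_cifar()`. Tracing the constructor's arithmetic (growth rate 12, blocks [6, 12, 24, 16], reduction 0.5):

```python
import math

growth_rate, reduction = 12, 0.5
num_planes = 2 * growth_rate            # 24 planes after the stem convolution
for i, nblock in enumerate([6, 12, 24, 16]):
    num_planes += nblock * growth_rate  # channels added by the dense block
    if i < 3:                           # a transition follows the first three blocks
        num_planes = int(math.floor(num_planes * reduction))
print(num_planes)  # 384: the input size of the final linear layer
```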
# + colab={"base_uri": "https://localhost:8080/", "height": 33} colab_type="code" executionInfo={"elapsed": 4759, "status": "ok", "timestamp": 1560944481762, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="NFABsDtufRik" outputId="281f0f13-b738-45fb-9dcf-f2032ade665a"
#building model for MNIST data
print('==> Building the model for MNIST dataset..')
mnist_model = LeNet().to(device)
mnist_criterion = nn.CrossEntropyLoss()
mnist_optimizer = optim.Adam(mnist_model.parameters(), lr=0.001)
mnist_num_epochs= 20
# + colab={"base_uri": "https://localhost:8080/", "height": 33} colab_type="code" executionInfo={"elapsed": 951, "status": "ok", "timestamp": 1560944484643, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="kq_fHH4nFkd8" outputId="6c4bc51f-b516-4c36-99d3-c4051f4b1fb0"
#building model for CIFAR10
# Model
print('==> Building the model for CIFAR10 dataset..')
# initialize our model
cifar_model = densenet_cifar()
cifar_model = cifar_model.to(device)
# use cross entropy as our objective function, since we are building a classifier
cifar_criterion = nn.CrossEntropyLoss()
# use adam as an optimizer, because it is a popular default nowadays
# (following the crowd, I know)
cifar_optimizer = optim.Adam(cifar_model.parameters(), lr=0.1)
best_acc = 0 # save the best test accuracy
start_epoch = 0 # start from epoch 0
cifar_num_epochs =20
# + [markdown] colab_type="text" id="oBGCIXIxhfuz"
# ## Model Training
# + colab={} colab_type="code" id="_ciQnoAjhkxf"
#Training for MNIST dataset
def train_mnist_model(model, data_loaders, dataset_sizes, criterion, optimizer, num_epochs, device):
model = model.to(device)
model.train() # set train mode
# for each epoch
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch+1, num_epochs))
running_loss, running_corrects = 0.0, 0
# for each batch
for inputs, labels in data_loaders['train']:
inputs = inputs.to(device)
labels =labels.to(device)
# making sure all the gradients of parameter tensors are zero
optimizer.zero_grad() # set gradient as 0
# get the model output
outputs = model(inputs)
# get the prediction of model
_, preds = torch.max(outputs, 1)
# calculate loss of the output
loss = criterion(outputs, labels)
# backpropagation
loss.backward()
# update model parameters using the optimizer
optimizer.step()
batch_loss_total = loss.item() * inputs.size(0) # total loss of the batch
running_loss += batch_loss_total # cumulative sum of loss
running_corrects += torch.sum(preds == labels.data) # cumulative sum of correct count
#calculating the loss and accuracy for the epoch
epoch_loss = running_loss / dataset_sizes['train']
epoch_acc = running_corrects.double() / dataset_sizes['train']
print('Train Loss: {:.4f} Acc: {:.4f}'.format(epoch_loss, epoch_acc))
print('-' * 10)
# for-else: after all training epochs complete, a single test pass runs
else:
model.eval() # set eval mode
running_loss, running_corrects = 0.0, 0
# for each batch
for inputs, labels in data_loaders['test']:
inputs = inputs.to(device)
labels =labels.to(device)
# same with the training part.
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
running_loss += loss.item() * inputs.size(0) # cumulative sum of loss
running_corrects += torch.sum(preds == labels.data) # cumulative sum of correct count
#calculating the loss and accuracy
test_loss = running_loss / dataset_sizes['test']
test_acc = (running_corrects.double() / dataset_sizes['test']).item()
print('<Test Loss: {:.4f} Acc: {:.4f}>'.format(test_loss, test_acc))
# + colab={"base_uri": "https://localhost:8080/", "height": 1037} colab_type="code" executionInfo={"elapsed": 110371, "status": "ok", "timestamp": 1560944808379, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="l36skEQEiCGu" outputId="f7f0432f-1afb-4164-ef7e-0a4ac5165e4d"
train_mnist_model(mnist_model, mnist_dataloaders, mnist_dataset_sizes, mnist_criterion, mnist_optimizer, mnist_num_epochs, device)
# + colab={} colab_type="code" id="AxvOthaoHLZD"
# Training for CIFAR10 dataset
def train_cifar_model(model, train_loader, criterion, optimizer, epoch, device):
print('\nEpoch: %d' % epoch)
model.train() #set the mode to train
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(train_loader):
inputs, targets = inputs.to(device), targets.to(device)
optimizer.zero_grad() # making sure all the gradients of parameter tensors are zero
outputs = model(inputs) #forward pass the model againt the input
loss = criterion(outputs, targets) #calculate the loss
loss.backward() #back propagation
optimizer.step() #update model parameters using the optimiser
train_loss += loss.item() #cumulative sum of loss
_, predicted = outputs.max(1) #the model prediction
total += targets.size(0)
correct += predicted.eq(targets).sum().item() #cumulative sum of corrects count
if batch_idx % 100 == 0:
#calculating and printing the loss and accuracy
print('Loss: %.3f | Acc: %.3f%% (%d/%d)' % (train_loss/(batch_idx+1), 100.*correct/total, correct, total))
#testing for CIFAR10 dataset
def test_cifar_model(model, test_loader, criterion, device, save=True):
"""Tests the model.
Taks the epoch number as a parameter.
"""
global best_acc
model.eval() # set the mode to test
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
#similar to the train part
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = model(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
if batch_idx % 100 == 0:
print('Loss: %.3f | Acc: %.3f%% (%d/%d) TEST' % (test_loss/(batch_idx+1), 100.*correct/total, correct, total))
#calculating the accuracy
acc = 100.*correct/total
if acc > best_acc and save:
best_acc = acc
# + colab={"base_uri": "https://localhost:8080/", "height": 2358} colab_type="code" executionInfo={"elapsed": 1516986, "status": "ok", "timestamp": 1560950744837, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="sz4F61bZJ4kn" outputId="bc004b02-fbb8-4d72-e8f1-37de0b8ec5af"
for epoch in range(start_epoch, start_epoch+cifar_num_epochs):
train_cifar_model(cifar_model, cifar_train_loader, cifar_criterion, cifar_optimizer, epoch, device)
test_cifar_model(cifar_model, cifar_test_loader, cifar_criterion, device)
# + [markdown] colab_type="text" id="y2CdxjRKinGh"
# ## Save and Reload the Model
# + colab={} colab_type="code" id="GulJu4twmRsz"
# Mounting Google Drive
from google.colab import auth
auth.authenticate_user()
from google.colab import drive
drive.mount('/content/gdrive')
gdrive_dir = 'gdrive/My Drive/ml/' # update with your own path
# + colab={} colab_type="code" id="sXTIE7tMisI-"
# Save and reload the mnist_model
print('==> Saving model for MNIST..')
torch.save(mnist_model.state_dict(), gdrive_dir+'lenet_mnist_model.pth')
#change the directory to load your own pretrained model
print('==> Loading saved model for MNIST..')
mnist_model = LeNet().to(device)
mnist_model.load_state_dict(torch.load(gdrive_dir+'lenet_mnist_model.pth'))
mnist_model.eval()
# + colab={} colab_type="code" id="fgKeqp2ALzyL"
# Save and reload the cifar_model
print('==> Saving model for CIFAR..')
torch.save(cifar_model.state_dict(), gdrive_dir+'densenet_cifar_model.pth')
#change the directory to load your own pretrained model
print('==> Loading saved model for CIFAR..')
cifar_model = densenet_cifar().to(device)
cifar_model.load_state_dict(torch.load(gdrive_dir+'densenet_cifar_model.pth'))
cifar_model.eval()
# + [markdown] colab_type="text" id="udoRJbOPi3KW"
# ## Attack Definition
#
# We used these two attack methods:
#
# * Fast Gradient Sign Method (FGSM)
# * Iterative Least Likely method (Iter.L.L.)
# + colab={} colab_type="code" id="mouGBnEti2tx"
# Fast Gradient Sign Method attack (FGSM)
# model is the trained model for the target dataset
# target is the ground-truth label of the image
# epsilon is the hyperparameter that controls the magnitude of the perturbation
def fgsm_attack(model, image, target, epsilon):
# Set requires_grad attribute of tensor. Important for Attack
image.requires_grad = True
# Forward pass the data through the model
output = model(image)
init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability(the prediction of the model)
# If the initial prediction is already wrong, don't bother attacking
if init_pred[0].item() != target[0].item():
#if init_pred.item() != target.item():
return image
# Calculate the loss
loss = F.nll_loss(output, target)
# Zero all existing gradients
model.zero_grad()
# Calculate gradients of model in backward pass
loss.backward()
# Collect datagrad
data_grad = image.grad.data
# Collect the element-wise sign of the data gradient
sign_data_grad = data_grad.sign()
# Create the perturbed image by adjusting each pixel of the input image
perturbed_image = image + epsilon*sign_data_grad
# Adding clipping to maintain [0,1] range
perturbed_image = torch.clamp(perturbed_image, 0, 1)
# Return the perturbed image
return perturbed_image
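# At its core the FGSM update is just x + epsilon * sign(dLoss/dx), clipped back to the valid pixel range. A scalar sketch of that single step in plain Python (no PyTorch; the gradient values here are hypothetical):

```python
def fgsm_step(pixels, grads, epsilon):
    """Move each pixel by epsilon in the direction of its loss gradient,
    then clip the result back into the valid [0, 1] range."""
    sign = lambda g: (g > 0) - (g < 0)  # element-wise sign
    return [min(1.0, max(0.0, p + epsilon * sign(g)))
            for p, g in zip(pixels, grads)]

pixels = [0.25, 0.5, 0.95]
grads = [0.3, -1.2, 0.7]               # hypothetical dLoss/dPixel values
print(fgsm_step(pixels, grads, 0.25))  # [0.5, 0.25, 1.0]
```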
# + colab={} colab_type="code" id="TEdiQORQjK-d"
# Iterative least likely method
# model is the trained model for the target dataset
# target is the ground-truth label of the image
# alpha is the hyperparameter that controls the perturbation added in each iteration; the value is borrowed from the referenced paper [4] in the report
# iters is the number of iterations; it can be set manually, otherwise (if iters == 0) this function computes it from epsilon
def ill_attack(model, image, target, epsilon, alpha, iters):
# Forward passing the image through model one time to get the least likely labels
output = model(image)
ll_label = torch.min(output, 1)[1] # get the index of the min log-probability
if iters == 0 :
# In paper [4], min(epsilon + 4, 1.25*epsilon) is used as number of iterations
iters = int(min(epsilon + 4, 1.25*epsilon))
# In the original paper the images were in [0,255] range but here our data is in [0,1].
# So we need to scale the epsilon value in a way that suits our data, which is dividing by 255.
epsilon = epsilon/255
for i in range(iters) :
# Set requires_grad attribute of tensor. Important for Attack
image.requires_grad = True
# Forward pass the data through the model
output = model(image)
init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability(the model's prediction)
# If the current prediction is already wrong, don't bother to continue
if init_pred.item() != target.item():
return image
# Calculate the loss
loss = F.nll_loss(output, ll_label)
# Zero all existing gradients
model.zero_grad()
# Calculate gradients of model in backward pass
loss.backward()
# Collect datagrad
data_grad = image.grad.data
# Collect the element-wise sign of the data gradient
sign_data_grad = data_grad.sign()
# Create the perturbed image by adjusting each pixel of the input image
perturbed_image = image - alpha*sign_data_grad
# Updating the image for next iteration
#
# We want to keep the perturbed image in range [image-epsilon, image+epsilon]
# based on the definition of the attack. However the value of image-epsilon
# itself must not fall behind 0, as the data range is [0,1].
# And the value of image+epsilon also must not exceed 1, for the same reason.
# So we clip the perturbed image between the (image-epsilon) clipped to 0 and
# (image+epsilon) clipped to 1.
a = torch.clamp(image - epsilon, min=0)
b = (perturbed_image>=a).float()*perturbed_image + (a>perturbed_image).float()*a
c = (b > image+epsilon).float()*(image+epsilon) + (image+epsilon >= b).float()*b
image = torch.clamp(c, max=1).detach_()
return image
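# The three-step clipping above projects each pixel into the intersection of [image - epsilon, image + epsilon] and [0, 1]. For a single pixel that reduces to plain min/max arithmetic, as in this small sketch:

```python
def clip_iter(perturbed, original, epsilon):
    """Project a perturbed pixel back into
    [max(0, original - epsilon), min(1, original + epsilon)]."""
    lo = max(0.0, original - epsilon)
    hi = min(1.0, original + epsilon)
    return min(hi, max(lo, perturbed))

print(clip_iter(0.9, 0.5, 0.25))   # 0.75: pulled back toward the original
print(clip_iter(-0.1, 0.1, 0.25))  # 0.0: the [0, 1] range dominates here
```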
# + [markdown] colab_type="text" id="4eKemT1Hjkjp"
# ## Model Attack Design
# + colab={} colab_type="code" id="hWyQUqMGmq11"
# We used the same values as described in the reference paper [4] in the report.
fgsm_epsilons = [0, .05, .1, .15, .2, .25, .3] # values for epsilon hyper-parameter for FGSM attack
ill_epsilons = [0, 2, 4, 8, 16] # values for epsilon hyper-parameter for Iter.L.L attack
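# Note the two lists live on different scales: the FGSM epsilons already assume [0, 1] pixels, while the Iter.L.L. epsilons follow the paper's [0, 255] convention and are divided by 255 inside `ill_attack`, which also derives the iteration count from the unscaled epsilon:

```python
ill_epsilons = [0, 2, 4, 8, 16]
# effective perturbation budgets on the [0, 1] pixel scale used here
scaled_epsilons = [round(e / 255, 4) for e in ill_epsilons]
# iteration counts chosen by ill_attack when iters == 0
iter_counts = [int(min(e + 4, 1.25 * e)) for e in ill_epsilons]
print(scaled_epsilons)  # [0.0, 0.0078, 0.0157, 0.0314, 0.0627]
print(iter_counts)      # [0, 2, 5, 10, 20]
```

# Epsilon 0 yields zero iterations, i.e. no perturbation at all, which matches its use as the unattacked baseline.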
# + colab={} colab_type="code" id="_45KGZafjn5M"
#This is where we test the effect of the attack on the trained model
#model is the pretrained model for your dataset
#test_loader contains the test dataset
#the other parameters are set based on the type of the attack
def attack_test(model, device, test_loader, epsilon, iters, attack='fgsm', alpha=1):
# Accuracy counter: accumulates the number of correctly predicted examples
correct = 0
adv_examples = [] # a list to save some of the successful adversarial examples for visualization
orig_examples = [] # keeps the original images corresponding to those in adv_examples, for comparison
# Loop over all examples in test set
for data, target in test_loader:
# Send the data and label to the device
data, target = data.to(device), target.to(device)
# Forward pass the data through the model
output = model(data)
init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability (model prediction of the image)
# Call the Attack
if attack == 'fgsm':
perturbed_data = fgsm_attack(model, data, target, epsilon=epsilon )
else:
perturbed_data = ill_attack(model, data, target, epsilon, alpha, iters)
# Re-classify the perturbed image
output = model(perturbed_data)
# Check for success
#target refers to the ground truth label
#init_pred refers to the model prediction of the original image
#final_pred refers to the model prediction of the manipulated image
final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability (model prediction of the perturbed image)
if final_pred[0].item() == target[0].item(): #perturbation hasn't affected the classification
correct += 1
# Special case for saving 0 epsilon examples which is equivalent to no adversarial attack
if (epsilon == 0) and (len(adv_examples) < 5):
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
orig_ex = data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred[0].item(), final_pred[0].item(), adv_ex) )
orig_examples.append( (target[0].item(), init_pred[0].item(), orig_ex) )
else:
# Save some adv examples for visualization later
if len(adv_examples) < 5:
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
orig_ex = data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred[0].item(), final_pred[0].item(), adv_ex) )
orig_examples.append( (target[0].item(), init_pred[0].item(), orig_ex) )
# Calculate final accuracy for this epsilon.
# Only the first example of each batch is checked above, so we divide by the number of batches.
final_acc = correct/float(len(test_loader))
print("Epsilon: {}\tTest Accuracy = {} / {} = {}".format(epsilon, correct, len(test_loader), final_acc))
# Return the accuracy, the adversarial examples, and their corresponding original images
return final_acc, adv_examples, orig_examples
# + [markdown] colab_type="text" id="5EYbvwnEj19f"
# ## Running the Attack for MNIST dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 134} colab_type="code" executionInfo={"elapsed": 184827, "status": "ok", "timestamp": 1560947028305, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="m72TMBNyj-Nt" outputId="dde3b262-6f91-4fb1-a740-b12b1eb6314b"
#FGSM attack
mnist_fgsm_accuracies = [] # list to keep the model accuracy after the attack for each epsilon value
mnist_fgsm_examples = [] # list to collect adversarial examples returned by attack_test for each epsilon value
mnist_fgsm_orig_examples = [] # list to collect the original images corresponding to the collected adversarial examples
# Run test for each epsilon
for eps in fgsm_epsilons:
acc, ex, orig = attack_test(mnist_model, device, mnist_test_loader, eps, attack='fgsm', alpha=1, iters=0)
mnist_fgsm_accuracies.append(acc)
mnist_fgsm_examples.append(ex)
mnist_fgsm_orig_examples.append(orig)
# + colab={"base_uri": "https://localhost:8080/", "height": 100} colab_type="code" executionInfo={"elapsed": 713606, "status": "ok", "timestamp": 1560948082239, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="TkED4XKTkRiJ" outputId="1415b027-4357-43c2-9779-201fa7c0af1c"
#Iterative_LL attack
mnist_ill_accuracies = [] # list to keep the model accuracy after the attack for each epsilon value
mnist_ill_examples = [] # list to collect adversarial examples returned by attack_test for each epsilon value
mnist_ill_orig_examples = [] # list to collect the original images corresponding to the collected adversarial examples
# Run test for each epsilon
for eps in ill_epsilons:
acc, ex, orig = attack_test(mnist_model, device, mnist_test_loader, eps, attack='ill', alpha=1, iters=0)
mnist_ill_accuracies.append(acc)
mnist_ill_examples.append(ex)
mnist_ill_orig_examples.append(orig)
# + [markdown] colab_type="text" id="_C0flgS6kYpq"
# ## Visualizing the results for MNIST dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" executionInfo={"elapsed": 877, "status": "ok", "timestamp": 1560948241301, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="Rf_6SxdXkYIL" outputId="991d586c-ccc8-4c15-8ce4-ac75eaedcc82"
#Accuracy after attack vs epsilon
plt.figure(figsize=(5,5))
plt.plot(fgsm_epsilons, mnist_fgsm_accuracies, "*-")
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, .35, step=0.05))
plt.title("FSGM Attack vs MNIST Model Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1449} colab_type="code" executionInfo={"elapsed": 2223, "status": "ok", "timestamp": 1560948247268, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="0OlzZ88mVYkw" outputId="5db53e58-020b-4dcb-8d8e-b71c607dc697"
# Plot several examples vs their adversarial samples at each epsilon for the fgsm attack
cnt = 0
plt.figure(figsize=(8,20))
for i in range(len(fgsm_epsilons)):
for j in range(2):
cnt += 1
plt.subplot(len(fgsm_epsilons),2,cnt)
plt.xticks([], [])
plt.yticks([], [])
if j==0:
plt.ylabel("Eps: {}".format(fgsm_epsilons[i]), fontsize=14)
orig,adv,ex = mnist_fgsm_orig_examples[i][0]
plt.title("target "+"{} -> {}".format(orig, adv)+ " predicted")
plt.imshow(ex, cmap="gray")
else:
orig,adv,ex = mnist_fgsm_examples[i][0]
plt.title("predicted "+"{} -> {}".format(orig, adv)+ " attacked")
plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" executionInfo={"elapsed": 1120, "status": "ok", "timestamp": 1560948269496, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="Xa6onA4PoY_l" outputId="6a918b24-2e62-4d29-b122-33c3c4ff20f9"
#Accuracy after attack vs epsilon
plt.figure(figsize=(5,5))
plt.plot(ill_epsilons, mnist_ill_accuracies, "*-", color='r')
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, 17, step=2))
plt.title("Iterative Least Likely vs MNIST Model / Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1449} colab_type="code" executionInfo={"elapsed": 1634, "status": "ok", "timestamp": 1560948275914, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="q5zzLrnbStw6" outputId="5db52527-0482-4779-b709-63b91ee5fffc"
# Plot several examples vs their adversarial samples at each epsilon for ill attack
cnt = 0
plt.figure(figsize=(8,20))
for i in range(len(ill_epsilons)):
for j in range(2):
cnt += 1
plt.subplot(len(ill_epsilons),2,cnt)
plt.xticks([], [])
plt.yticks([], [])
if j==0:
plt.ylabel("Eps: {}".format(ill_epsilons[i]), fontsize=14)
orig,adv,ex = mnist_ill_orig_examples[i][0]
plt.title("target "+"{} -> {}".format(orig, adv)+ " predicted")
plt.imshow(ex, cmap="gray")
else:
orig,adv,ex = mnist_ill_examples[i][0]
plt.title("predicted "+"{} -> {}".format(orig, adv)+ " attacked")
plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
# + [markdown] colab_type="text" id="Y-7Ri7J8TmAQ"
# ## Running the Attack for CIFAR10 dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 134} colab_type="code" executionInfo={"elapsed": 143630, "status": "ok", "timestamp": 1560948445596, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="bbp2MFXMTyQm" outputId="5f13e5b6-e837-4e38-8373-2e1d3296b006"
#FGSM attack
cifar_fgsm_accuracies = [] # list to keep the model accuracy after the attack for each epsilon value
cifar_fgsm_examples = [] # list to collect adversarial examples returned by attack_test for each epsilon value
cifar_fgsm_orig_examples = [] # list to collect the original images corresponding to the collected adversarial examples
# Run test for each epsilon
for eps in fgsm_epsilons:
acc, ex, orig = attack_test(cifar_model, device, cifar_test_loader, eps, attack='fgsm', alpha=1, iters=0)
cifar_fgsm_accuracies.append(acc)
cifar_fgsm_examples.append(ex)
cifar_fgsm_orig_examples.append(orig)
# + colab={"base_uri": "https://localhost:8080/", "height": 100} colab_type="code" executionInfo={"elapsed": 530559, "status": "ok", "timestamp": 1560949081435, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="UxF_zJARUY9O" outputId="584a1745-c4e7-452b-f08f-a98b8c015833"
#Iterative_LL attack
cifar_ill_accuracies = [] # list to keep the model accuracy after the attack for each epsilon value
cifar_ill_examples = [] # list to collect adversarial examples returned by attack_test for each epsilon value
cifar_ill_orig_examples = [] # list to collect the original images corresponding to the collected adversarial examples
# Run test for each epsilon
for eps in ill_epsilons:
acc, ex, orig = attack_test(cifar_model, device, cifar_test_loader, eps, attack='ill', alpha=1, iters=0)
cifar_ill_accuracies.append(acc)
cifar_ill_examples.append(ex)
cifar_ill_orig_examples.append(orig)
# + [markdown] colab_type="text" id="5jB__I66Utah"
# ## Visualizing the results for CIFAR10 dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" executionInfo={"elapsed": 1063, "status": "ok", "timestamp": 1560949144362, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="h4iR2GFFU4nk" outputId="515ce883-6b75-40fe-8628-91dfcb0e2620"
#Accuracy after attack vs epsilon
plt.figure(figsize=(5,5))
plt.plot(fgsm_epsilons, cifar_fgsm_accuracies, "*-")
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, .35, step=0.05))
plt.title("FSGM Attack vs CIFAR Model Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1449} colab_type="code" executionInfo={"elapsed": 2555, "status": "ok", "timestamp": 1560949158162, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="Jtc7cF5aU9Al" outputId="d0b69ee2-86a6-41fe-b150-5971df81edf5"
# Plot several examples vs their adversarial samples at each epsilon for the fgsm attack
cnt = 0
# figsize=(8, 20): the figure is 8 inches wide and 20 inches tall
plt.figure(figsize=(8,20))
for i in range(len(fgsm_epsilons)):
for j in range(2):
cnt += 1
plt.subplot(len(fgsm_epsilons),2,cnt)
plt.xticks([], [])
plt.yticks([], [])
if j==0:
plt.ylabel("Eps: {}".format(fgsm_epsilons[i]), fontsize=14)
orig,adv,ex = cifar_fgsm_orig_examples[i][0]
plt.title("target "+"{} -> {}".format(classes[orig], classes[adv])+ " predicted")
plt.imshow(ex[0].transpose(1,2,0), cmap="gray")
else:
orig,adv,ex = cifar_fgsm_examples[i][0]
plt.title("predicted "+"{} -> {}".format(classes[orig], classes[adv])+ " attacked")
plt.imshow(ex[0].transpose(1,2,0), cmap="gray")
plt.tight_layout()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" executionInfo={"elapsed": 1098, "status": "ok", "timestamp": 1560949186862, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="fsfjkR4VVKlk" outputId="a15c5194-a360-4f43-86ea-1d6ff6cab0d3"
#Accuracy after attack vs epsilon
plt.figure(figsize=(5,5))
plt.plot(ill_epsilons, cifar_ill_accuracies, "*-", color='r')
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, 17, step=2))
plt.title("Iterative Least Likely vs CIFAR Model / Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 1449} colab_type="code" executionInfo={"elapsed": 1508, "status": "ok", "timestamp": 1560949194269, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-SPUHrtHWJKw/AAAAAAAAAAI/AAAAAAAAHS0/SE_z5oPt9c8/s64/photo.jpg", "userId": "13748027272382448636"}, "user_tz": -540} id="PAByNYNVVM3E" outputId="7cb6a073-1a30-438a-e837-b79a2f933479"
# Plot several examples vs their adversarial samples at each epsilon for the iterative
# least likely attack.
cnt = 0
# figsize=(8, 20): the figure is 8 inches wide and 20 inches tall
plt.figure(figsize=(8,20))
for i in range(len(ill_epsilons)):
for j in range(2):
cnt += 1
plt.subplot(len(ill_epsilons),2,cnt)
plt.xticks([], [])
plt.yticks([], [])
if j==0:
plt.ylabel("Eps: {}".format(ill_epsilons[i]), fontsize=14)
orig,adv,ex = cifar_ill_orig_examples[i][0]
plt.title("target "+"{} -> {}".format(classes[orig], classes[adv])+ " predicted")
plt.imshow(ex[0].transpose(1,2,0), cmap="gray")
else:
orig,adv,ex = cifar_ill_examples[i][0]
plt.title("predicted "+"{} -> {}".format(classes[orig], classes[adv])+ " attacked")
plt.imshow(ex[0].transpose(1,2,0), cmap="gray")
plt.tight_layout()
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
class DecisionNode:
def __init__(self, col=-1, value=None, results=None, tb=None, fb=None):
self.col = col # attribute on which to split
self.value = value # value on which to split
self.results = results #If the node has no children - we store here class labels with their counts
self.tb = tb # True branch
self.fb = fb # False branch
def split(rows, column, value):
# define split function according to the value type
split_function = None
if isinstance(value, int) or isinstance(value, float):
split_function = lambda row: row[column] >= value
else:
split_function = lambda row: row[column] == value
# Divide the rows into two sets and return them
set1 = [row for row in rows if split_function(row)]
set2 = [row for row in rows if not split_function(row)]
return (set1, set2)
def count_labels(rows):
label_count = {}
for row in rows:
# The class label is in the last column
        label = row[-1]
if label not in label_count:
label_count[label] = 0
label_count[label] += 1
return label_count
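# The helper above is equivalent to running `collections.Counter` over the last column. A quick sanity check on toy rows (hypothetical data, not the lab's CSV):

```python
from collections import Counter

# Toy rows: the class label is in the last column, as in count_labels
rows = [["sunny", "yes"], ["rainy", "no"], ["sunny", "yes"]]
label_count = Counter(row[-1] for row in rows)
print(dict(label_count))  # → {'yes': 2, 'no': 1}
```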
data_file = "C:/Users/cuiji/Desktop/ML2020LAB/ml_covid_rules_lab-master/covid_categorical_good.csv"
import pandas as pd
data = pd.read_csv(data_file)
data = data.dropna(how="any")
data.columns
data_rows = data.to_numpy().tolist()
len(data_rows)
columns_list = data.columns.to_numpy().tolist()
print(columns_list)
outcomes = []
for r in data_rows:
if r[-1] not in outcomes:
outcomes.append(r[-1])
print(outcomes)
# +
R = []
a = 0
b = 0
gsize = len(data_rows)
def accuracy(s, col, value, c):
total = len(s)
count = 0
true = split(s, col, value)[0]
for r in true:
if r[-1] == c:
count += 1
return count/total
def learnOneRule(E, c):
global a
global b
global R
global gsize
M = None
column_count= len(E[0]) - 1
best_accuracy = 0
best_coverage = 0
best_rule = None
r_cover = None
for col in range(0, column_count):
column_values = set()
for row in E:
column_values.add(row[col])
for value in column_values:
(set1, set2) = split(E, col, value)
acc = accuracy(set1, col, value, c)
if type(value) is int or type(value) is float:
acc = max(acc, 1-acc)
p = float(len(set1)) / len(E)
if acc > best_accuracy or (acc == best_accuracy and p > best_coverage):
best_accuracy = acc
best_coverage = p
best_rule = (col, value)
r_cover = set1
M = set2
print(best_rule)
print(best_accuracy)
print(best_coverage)
R.append(best_rule)
    a += best_accuracy*len(r_cover)/gsize  # weight by the rows covered by the best rule, not the last split tried
b += best_coverage
return M
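# The accuracy and coverage bookkeeping above can be illustrated on a toy dataset (hypothetical rows): a rule's accuracy is the fraction of the rows it covers that carry the target class, and its coverage is the fraction of all rows it covers.

```python
# Toy rows: [feature, class label]; candidate rule: feature == "high"
rows = [["high", "alive"], ["high", "dead"], ["low", "alive"], ["high", "alive"]]
covered = [r for r in rows if r[0] == "high"]
rule_accuracy = sum(r[-1] == "alive" for r in covered) / len(covered)
rule_coverage = len(covered) / len(rows)
print(rule_accuracy, rule_coverage)  # → 0.6666666666666666 0.75
```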
def PRISM(rows, c, accuracy_threshold=1, coverage_threshold=0):
    # learnOneRule updates the module-level accumulators, so they must be
    # declared global here; otherwise the loop condition below reads local
    # copies that never change and the stopping criteria are never met.
    global R, a, b, gsize
    R = []
    a = 0
    b = 0
    gsize = len(rows)
while len(rows) != 0 and (b == 0 or a < accuracy_threshold) and b < coverage_threshold:
rows = learnOneRule(rows, c)
PRISM(data_rows, 'alive', 0.99, 0.95)
# +
def prediction(leaf_labels):
total = 0
result = {}
for label, count in leaf_labels.items():
total += count
result[label] = count
for label, val in result.items():
result[label] = str(int(result[label]/total * 100))+"%"
return result
def print_tree(tree, current_branch, attributes=None, indent='', leaf_funct=prediction):
# Is this a leaf node?
if tree.results != None:
print(indent + current_branch + str(leaf_funct(tree.results)))
else:
# Print the split question
split_col = str(tree.col)
if attributes is not None:
split_col = attributes[tree.col]
split_val = str(tree.value)
if type(tree.value) == int or type(tree.value) == float:
split_val = ">=" + str(tree.value)
print(indent + current_branch + split_col + ': ' + split_val + '? ')
# Print the branches
indent = indent + ' '
print_tree(tree.tb, 'T->', attributes, indent)
print_tree(tree.fb, 'F->', attributes, indent)
print_tree(tree, " ", columns_list)
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn import preprocessing
from sklearn import metrics
from sklearn import neighbors
from sklearn import naive_bayes
from sklearn import discriminant_analysis
from sklearn import linear_model
from sklearn import tree
from sklearn import ensemble
from sklearn import svm
# -
# # Part 1: Visualizing the Data
# For categorical features like "SMOKING," a "0" indicates that the person does ***not*** belong to this class, while a "1" indicates that a person does belong to this class.
# +
data_file = "../data/survey-lung-cancer.csv"
GENDER = "GENDER"
AGE = "AGE"
SMOKING = "SMOKING"
LUNG_CANCER = "LUNG_CANCER"
CHRONIC_DISEASE = "CHRONIC DISEASE"
data = pd.read_csv(data_file)
# Convert the "1/2" categorical values to "0/1", and the "No/Yes" values in the lung cancer column to "0/1"
for col in data.columns:
if col != AGE:
data[col] = data[col].astype('category').cat.codes
data
# -
# Separate the data into people without lung cancer and those with it
no_data = data[data[LUNG_CANCER] == 0]
yes_data = data[data[LUNG_CANCER] == 1]
# ## Feature Investigation
# Let's look at a few features and how they correlate with lung cancer:
# ## Age
# Most of the survey participants were > 40 years of age.
yes_data[AGE].plot(title="Age vs. lung cancer", kind="hist")
no_data[AGE].plot(kind="hist")
# ## Smoking
yes_data[SMOKING].value_counts().plot(title="Smoking vs. lung cancer", kind="bar")
no_data[SMOKING].value_counts().plot(title="Non-smoking vs. lung cancer", kind="bar")
# ## Chronic Disease
yes_data[CHRONIC_DISEASE].value_counts().plot(title="Chronic disease vs. lung cancer", kind="bar")
no_data[CHRONIC_DISEASE].value_counts().plot(title="Chronic disease vs. lung cancer", kind="bar")
# # Part 2: Making Predictions
# Separate the data into training and validation sets
data_X = data.iloc[:, 0:15]
data_X = preprocessing.scale(data_X) # Scaling helps LinearSVC converge
data_y = data.iloc[:, 15]
train_X, test_X, train_y, test_y = model_selection.train_test_split(data_X, data_y, test_size=0.5, random_state=0)
def run_classifiers(classifiers, train_X, train_y, test_X, test_y):
"""
Fits each classifier to the training data and runs it on the test data.
Prints out the training and test accuracies.
"""
results = [] # list of 3-tuples: (classifier name, train accuracy, test accuracy)
# Baseline: a random predictor that guesses "YES" for each data point
rand_train_pred = np.full(train_y.size, 1)
rand_test_pred = np.full(test_y.size, 1)
rand_train_results = (rand_train_pred == train_y)
rand_train_acc = np.count_nonzero(rand_train_results) / train_y.size
rand_test_results = (rand_test_pred == test_y)
rand_test_acc = np.count_nonzero(rand_test_results) / test_y.size
# Print out precision/recall details for test class
conf_mat = metrics.confusion_matrix(test_y, rand_test_pred)
print(f"Random classifier")
# Precision rate = # true positives / (# predicted positives)
# Recall rate = # true positives / # positive data points
precision = conf_mat[1][1] / (conf_mat[1][1] + conf_mat[0][1])
recall = conf_mat[1][1] / (conf_mat[1][1] + conf_mat[1][0])
f1_score = 2 * ((precision * recall) / (precision + recall))
print(f"Precision rate = {precision}")
print(f"Recall rate = {recall}")
print(f"F1 score = {f1_score}\n")
# Add random results to list
results.append( ("Random Classifier", rand_train_acc, rand_test_acc))
for clf in classifiers:
# Run classifier on train and test data
clf.fit(train_X, train_y)
train_pred = clf.predict(train_X)
train_acc = metrics.accuracy_score(train_y, train_pred)
test_pred = clf.predict(test_X)
test_acc = metrics.accuracy_score(test_y, test_pred)
# Print out misclassification metrics
conf_mat = metrics.confusion_matrix(test_y, test_pred)
precision = conf_mat[1][1] / (conf_mat[1][1] + conf_mat[0][1])
recall = conf_mat[1][1] / (conf_mat[1][1] + conf_mat[1][0])
f1_score = 2 * ((precision * recall) / (precision + recall))
print(f"{type(clf).__name__}")
print(f"Precision rate = {precision}")
print(f"Recall rate = {recall}")
print(f"F1 score = {f1_score}\n")
# Store results
results.append( ((type(clf).__name__), train_acc, test_acc) )
return results
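# The precision/recall arithmetic used above can be checked on a toy confusion matrix (hypothetical counts; rows are actual classes, columns are predictions):

```python
conf_mat = [[50, 10],  # actual negative: 50 TN, 10 FP
            [5, 35]]   # actual positive: 5 FN, 35 TP

precision = conf_mat[1][1] / (conf_mat[1][1] + conf_mat[0][1])  # TP / (TP + FP)
recall = conf_mat[1][1] / (conf_mat[1][1] + conf_mat[1][0])     # TP / (TP + FN)
f1_score = 2 * (precision * recall) / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1_score, 3))  # → 0.778 0.875 0.824
```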
classifiers = [
neighbors.KNeighborsClassifier(),
naive_bayes.GaussianNB(),
discriminant_analysis.LinearDiscriminantAnalysis(),
linear_model.LogisticRegression(solver="lbfgs", max_iter=200),
tree.DecisionTreeClassifier(),
ensemble.AdaBoostClassifier(),
ensemble.RandomForestClassifier(n_estimators=100),
svm.LinearSVC(C=0.01, max_iter=100)
]
results = run_classifiers(classifiers, train_X, train_y, test_X, test_y)
results
# ## Training Results
# The precision rates are all lower than the recall rates. This indicates that these models are more likely to predict positive, which is a consequence of having data skewed toward the positive class.
#
# The Decision Tree and Random Forest Classifiers were able to classify the training data with 100% accuracy, but likely overfit the data.
# +
# Graph the results
classifier_names = [clf[0] for clf in results]
train_acc = [clf[1] for clf in results]
test_acc = [clf[2] for clf in results]
fig, ax = plt.subplots(1, len(classifier_names), figsize=(18, 3), sharey=True)
for i in range(len(classifier_names)):
ax[i].scatter(classifier_names[i], train_acc[i])
# -
# ## Test Results
# Although the Decision Tree and Random Forest Classifiers had the highest training accuracies, the Linear Discriminant Analysis (LDA) classifier performed the best on the test data, with an accuracy of about 92%. It is likely that the Decision Tree and Random Forest classifiers overfit the training data.
fig, ax = plt.subplots(1, len(classifier_names), figsize=(18, 3), sharey=True)
for i in range(len(classifier_names)):
ax[i].scatter(classifier_names[i], test_acc[i])
# # Conclusion
# There are significantly more data points belonging to class lung cancer than data points not in the class. **As a result, it seems that the predictive models were biased toward characteristics of data points in the lung cancer class.** This might explain the larger number of false positives than false negatives.
#
# Furthermore, it seems that the models do not generalize well to the negative class (i.e. when a person does not have lung cancer, the models tend to predict that they do — a false positive). This could also be attributed to the disparity between the number of data points in the lung cancer class and those not in the class.
#
# **The random classifier is able to obtain an 85% accuracy just by predicting "Yes" on each data point, whereas the best predictive models are only able to obtain a 91% accuracy.**
print(f"Number of data points belonging to class lung cancer = {yes_data.shape[0]}")
print(f"Number of data points *not* belonging to class lung cancer = {no_data.shape[0]}")
print(f"Percentage of data points belonging to class lung cancer = {yes_data.shape[0] / (data.shape[0])}")
print(f"Percentage of data points *not* belonging to class lung cancer = {no_data.shape[0] / (data.shape[0])}")
| code/lung_cancer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import mglearn
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# %matplotlib inline
# -
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target,
stratify=cancer.target, random_state=42)
# # Default value of C=1
logreg = LogisticRegression().fit(X_train, y_train)
print('Training set score: {:.3f}'.format(logreg.score(X_train, y_train)))
print('Test set score: {:.3f}'.format(logreg.score(X_test, y_test)))
# # Example with C=100
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print('Training set score: {:.3f}'.format(logreg100.score(X_train, y_train)))
print('Test set score: {:.3f}'.format(logreg100.score(X_test, y_test)))
# # Example with C=0.01
logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
print('Training set score: {:.3f}'.format(logreg001.score(X_train, y_train)))
print('Test set score: {:.3f}'.format(logreg001.score(X_test, y_test)))
plt.plot(logreg.coef_.T, 'o', label='C=1')
plt.plot(logreg100.coef_.T, '^', label='C=100')
plt.plot(logreg001.coef_.T, 'v', label='C=0.01')
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
plt.hlines(0, 0, cancer.data.shape[1])
plt.ylim(-5, 5)
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.legend()
# # Model using L1 regularization
# +
for C, marker in zip([0.001, 1, 100], ['o', '^', 'v']):
    lr_l1 = LogisticRegression(C=C, penalty="l1", solver="liblinear").fit(X_train, y_train)  # the default lbfgs solver does not support the l1 penalty
print('Training accuracy of l1 logreg with C={:.3f}: {:.2f}'.format(C, lr_l1.score(X_train, y_train)))
print('Test accuracy of l1 logreg with C={:.3f}: {:.2f}'.format(C, lr_l1.score(X_test, y_test)))
plt.plot(lr_l1.coef_.T, marker, label="C={:.3f}".format(C))
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
plt.hlines(0, 0, cancer.data.shape[1])
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.ylim(-5, 5)
plt.legend(loc=3)
# -
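# One thing worth checking with the L1 models is how many coefficients are exactly zero, i.e. how many features the model actually uses. A sketch with a stand-in coefficient array (substitute `lr_l1.coef_` from a fitted model):

```python
import numpy as np

coef = np.array([[0.0, 1.2, 0.0, -0.4, 0.0]])  # stand-in for lr_l1.coef_
n_used = int(np.count_nonzero(coef))
print(f"features used: {n_used} of {coef.size}")  # → features used: 2 of 5
```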
| Coding_exercices/Linear Logistic Regression Breast Cancer Dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Screencast Code
#
# The following code is the same as used in the "Text Processing" screencast. Run each code cell to see how each step transforms the data.
# +
from pyspark.sql import SparkSession
from pyspark.ml.feature import RegexTokenizer, CountVectorizer, \
IDF, StringIndexer
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
import re
# -
# create a SparkSession: note this step was left out of the screencast
spark = SparkSession.builder \
.master("local") \
.appName("Word Count") \
.getOrCreate()
# # Read in the Data Set
stack_overflow_data = 'Train_onetag_small.json'
df = spark.read.json(stack_overflow_data)
df.head()
# # Tokenization
#
# Tokenization splits strings into separate words. Spark has a [Tokenizer](https://spark.apache.org/docs/latest/ml-features.html#tokenizer) class as well as RegexTokenizer, which allows for more control over the tokenization process.
# split the body text into separate words
regexTokenizer = RegexTokenizer(inputCol="Body", outputCol="words", pattern="\\W")
df = regexTokenizer.transform(df)
df.head()
# # CountVectorizer
# find the term frequencies of the words
cv = CountVectorizer(inputCol="words", outputCol="TF", vocabSize=1000)
cvmodel = cv.fit(df)
df = cvmodel.transform(df)
df.take(1)
# show the vocabulary, ordered by descending term frequency
cvmodel.vocabulary
# show the last 10 terms in the vocabulary
cvmodel.vocabulary[-10:]
# # Inverse Document Frequency
idf = IDF(inputCol="TF", outputCol="TFIDF")
idfModel = idf.fit(df)
df = idfModel.transform(df)
df.head()
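# Conceptually, IDF downweights terms that occur in many documents. A minimal pure-Python sketch of the idea on toy documents (Spark's IDF uses the smoothed formula log((N + 1) / (df + 1)), where N is the number of documents and df the document frequency):

```python
import math

docs = [["spark", "text"], ["spark", "count"], ["spark", "idf", "text"]]
N = len(docs)

def idf(term):
    df = sum(term in doc for doc in docs)  # document frequency
    return math.log((N + 1) / (df + 1))   # smoothed IDF, as in Spark ML

print(round(idf("spark"), 3))  # appears in every doc → 0.0
print(round(idf("idf"), 3))    # rare term → higher weight (≈ 0.693)
```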
# # StringIndexer
indexer = StringIndexer(inputCol="oneTag", outputCol="label")
df = indexer.fit(df).transform(df)
df.head()
| ml/2_text_processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sets
# > Sets have unique elements
#
# > Set is mutable
#
# > Set has no index and the order of the elements is not defined
my_set = set()
my_set = {1, 3, 5}
print(my_set)
my_set.add(6)
print(my_set)
my_set.update({2, 4}, {6, 8})
print(my_set)
my_set.remove(1)
print(my_set)
# my_set.remove(1)  # would raise KeyError: 1 is no longer in the set
my_set.discard(1)  # discard does not raise when the element is absent
print(my_set)
print(my_set.pop())
print({1, 2} | {3, 4})
print({1, 2} & {3, 4})
print({1, 2} ^ {2, 3})
print({1, 2} - {2, 3})
print({1, 2}.issuperset({2}))
print({1, 2}.isdisjoint({1, 3}))
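# A common practical use of sets is removing duplicates from a list. Note that the set version loses the original order; `dict.fromkeys` keeps it:

```python
items = [3, 1, 3, 2, 1]
print(list(set(items)))            # duplicates removed, order arbitrary
print(list(dict.fromkeys(items)))  # duplicates removed, original order kept: [3, 1, 2]
```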
| python_collections/Sets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# export
from nbdev.imports import *
# +
# default_exp merge
# -
# # Fix merge conflicts
#
# > Fix merge conflicts in jupyter notebooks
# When working with jupyter notebooks (which are json files behind the scenes) and GitHub, it is very common that a merge conflict (which adds new lines to the notebook source file) will break some notebooks you are working on. This module defines the function `fix_conflicts` to fix those notebooks for you, and attempts to automatically merge standard conflicts. The remaining ones will be delimited by markdown cells like this:
# <img alt="Fixed notebook" width="700" caption="A notebook fixed after a merge conflict. The file couldn't be opened before the command was run, but afterwards each remaining conflict is highlighted by markdown cells." src="images/merge.PNG" />
# ## Walk cells
#hide
tst_nb="""{
"cells": [
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"3"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"z=3\n",
"z"
]
},
{
"cell_type": "code",
"execution_count": 7,
=======
"execution_count": 5,
>>>>>>> a7ec1b0bfb8e23b05fd0a2e6cafcb41cd0fb1c35
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"6"
]
},
<<<<<<< HEAD
"execution_count": 7,
=======
"execution_count": 5,
>>>>>>> a7ec1b0bfb8e23b05fd0a2e6cafcb41cd0fb1c35
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x=3\n",
"y=3\n",
"x+y"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}"""
# This is an example of the broken notebook we defined in `tst_nb`. The json format is broken by the lines automatically added by git. Such a file can't be opened again in jupyter notebook, leaving the user with no other choice than to fix the text file manually.
print(tst_nb)
# Note that in this example, the second conflict is easily solved: it just concerns the execution count of the second cell and can be solved by choosing either option without really impacting your notebook. This is the kind of conflict `fix_conflicts` will (by default) fix automatically. The first conflict is more complicated as it spans across two cells and there is a cell present in one version, not the other. Such a conflict (and generally the ones where the inputs of the cells change from one version to the other) aren't automatically fixed, but `fix_conflicts` will return a proper json file where the annotations introduced by git will be placed in markdown cells.
#
# The first step to do this is to walk the raw text file to extract the cells. We can't read it as a JSON since it's broken, so we have to parse the text.
#export
def extract_cells(raw_txt):
"Manually extract cells in potential broken json `raw_txt`"
lines = raw_txt.split('\n')
cells = []
i = 0
while not lines[i].startswith(' "cells"'): i+=1
i += 1
start = '\n'.join(lines[:i])
while lines[i] != ' ],':
while lines[i] != ' {': i+=1
j = i
while not lines[j].startswith(' }'): j+=1
c = '\n'.join(lines[i:j+1])
if not c.endswith(','): c = c + ','
cells.append(c)
i = j+1
end = '\n'.join(lines[i:])
return start,cells,end
# This function returns the beginning of the text (before the cells are defined), the list of cells and the end of the text (after the cells are defined).
start,cells,end = extract_cells(tst_nb)
test_eq(len(cells), 3)
test_eq(cells[0], """ {
"cell_type": "code",
<<<<<<< HEAD
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"3"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"z=3\n",
"z"
]
},""")
#hide
#Test the whole text is there
#We add a , to the last cell (because we might add some after for merge conflicts at the end, so we need to remove it)
test_eq(tst_nb, '\n'.join([start] + cells[:-1] + [cells[-1][:-1]] + [end]))
# When walking the broken cells, we will add conflict markers before and after the cells with conflicts, as markdown cells. To do that we use this function.
#export
def get_md_cell(txt):
"A markdown cell with `txt`"
return ''' {
"cell_type": "markdown",
"metadata": {},
"source": [
"''' + txt + '''"
]
},'''
tst = ''' {
"cell_type": "markdown",
"metadata": {},
"source": [
"A bit of markdown"
]
},'''
assert get_md_cell("A bit of markdown") == tst
#export
conflicts = '<<<<<<< ======= >>>>>>>'.split()
#export
def _split_cell(cell, cf, names):
    "Split `cell` between `conflicts` given state in `cf`, save `names` of branches if seen"
res1,res2 = [],[]
for line in cell.split('\n'):
if line.startswith(conflicts[cf]):
if names[cf//2] is None: names[cf//2] = line[8:]
cf = (cf+1)%3
continue
if cf<2: res1.append(line)
if cf%2==0: res2.append(line)
return '\n'.join(res1),'\n'.join(res2),cf,names
#hide
tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd'])
v1,v2,cf,names = _split_cell(tst, 0, [None,None])
assert v1 == 'a\nb\nd'
assert v2 == 'a\nc\nd'
assert cf == 0
assert names == ['HEAD', 'lala']
#hide
tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd', f'{conflicts[0]} HEAD', 'e'])
v1,v2,cf,names = _split_cell(tst, 0, [None,None])
assert v1 == 'a\nb\nd\ne'
assert v2 == 'a\nc\nd'
assert cf == 1
assert names == ['HEAD', 'lala']
#hide
tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd', f'{conflicts[0]} HEAD', 'e', conflicts[1]])
v1,v2,cf,names = _split_cell(tst, 0, [None,None])
assert v1 == 'a\nb\nd\ne'
assert v2 == 'a\nc\nd'
assert cf == 2
assert names == ['HEAD', 'lala']
#hide
tst = '\n'.join(['b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd'])
v1,v2,cf,names = _split_cell(tst, 1, ['HEAD',None])
assert v1 == 'b\nd'
assert v2 == 'c\nd'
assert cf == 0
assert names == ['HEAD', 'lala']
#hide
tst = '\n'.join(['c', f'{conflicts[2]} lala', 'd'])
v1,v2,cf,names = _split_cell(tst, 2, ['HEAD',None])
assert v1 == 'd'
assert v2 == 'c\nd'
assert cf == 0
assert names == ['HEAD', 'lala']
#export
_re_conflict = re.compile(r'^<<<<<<<', re.MULTILINE)
#hide
assert _re_conflict.search('a\nb\nc') is None
assert _re_conflict.search('a\n<<<<<<<\nc') is not None
#export
def same_inputs(t1, t2):
"Test if the cells described in `t1` and `t2` have the same inputs"
if len(t1)==0 or len(t2)==0: return False
try:
c1,c2 = json.loads(t1[:-1]),json.loads(t2[:-1])
return c1['source']==c2['source']
except Exception as e: return False
ts = [''' {
"cell_type": "code",
"source": [
"'''+code+'''"
]
},''' for code in ["a=1", "b=1", "a=1"]]
assert same_inputs(ts[0],ts[2])
assert not same_inputs(ts[0], ts[1])
#export
def analyze_cell(cell, cf, names, prev=None, added=False, fast=True, trust_us=True):
"Analyze and solve conflicts in `cell`"
if cf==0 and _re_conflict.search(cell) is None: return cell,cf,names,prev,added
old_cf = cf
v1,v2,cf,names = _split_cell(cell, cf, names)
if fast and same_inputs(v1,v2):
if old_cf==0 and cf==0: return (v2 if trust_us else v1),cf,names,prev,added
v1,v2 = (v2,v2) if trust_us else (v1,v1)
res = []
if old_cf == 0:
added=True
res.append(get_md_cell(f'`{conflicts[0]} {names[0]}`'))
res.append(v1)
if cf ==0:
res.append(get_md_cell(f'`{conflicts[1]}`'))
if prev is not None: res += prev
res.append(v2)
res.append(get_md_cell(f'`{conflicts[2]} {names[1]}`'))
prev = None
else: prev = [v2] if prev is None else prev + [v2]
return '\n'.join([r for r in res if len(r) > 0]),cf,names,prev,added
# This is the main function used in the walk through the cells of a notebook. `cell` is the cell we're at, `cf` the conflict state: `0` if we're not in any conflict, `1` if we are inside the first part of a conflict (between `<<<<<<<` and `=======`) and `2` for the second part of a conflict. `names` contains the names of the branches (they start at `[None,None]` and get updated as we pass along conflicts). `prev` contains a copy of what should be included at the start of the second version (if `cf=1` or `cf=2`). `added` starts at `False` and keeps track of whether we added any markdown cells (this flag allows us to know if a fast merge didn't leave any conflicts at the end). `fast` and `trust_us` are passed along by `fix_conflicts`: if `fast` is `True`, we don't point out conflicts between cells if the inputs in the two versions are the same. Instead we merge using the local or remote branch, depending on `trust_us`.
#
# The function then returns the updated text (with one or several cells, depending on the conflicts to solve), the updated `cf`, `names`, `prev` and `added`.
tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c'])
c,cf,names,prev,added = analyze_cell(tst, 0, [None,None], None, False,fast=False)
test_eq(c, get_md_cell('`<<<<<<< HEAD`')+'\na\nb')
test_eq(cf, 2)
test_eq(names, ['HEAD', None])
test_eq(prev, ['a\nc'])
test_eq(added, True)
# Here in this example, we were entering cell `tst` with no conflict state. At the end of the cells, we are still in the second part of the conflict, hence `cf=2`. The result returns a marker for the branch head, then the whole cell in version 1 (a + b). We save a (prior to the conflict hence common to the two versions) and c (only in version 2) for the next cell in `prev` (that should contain the resolution of this conflict).
# ## Main function
#export
def fix_conflicts(fname, fast=True, trust_us=True):
"Fix broken notebook in `fname`"
fname=Path(fname)
shutil.copy(fname, fname.with_suffix('.ipynb.bak'))
with open(fname, 'r') as f: raw_text = f.read()
start,cells,end = extract_cells(raw_text)
res = [start]
cf,names,prev,added = 0,[None,None],None,False
for cell in cells:
c,cf,names,prev,added = analyze_cell(cell, cf, names, prev, added, fast=fast, trust_us=trust_us)
res.append(c)
if res[-1].endswith(','): res[-1] = res[-1][:-1]
with open(f'{fname}', 'w') as f: f.write('\n'.join([r for r in res+[end] if len(r) > 0]))
    if fast and not added: print("Successfully merged conflicts!")
else: print("One or more conflict remains in the notebook, please inspect manually.")
# The function will begin by backing up the notebook `fname` to `fname.ipynb.bak` in case something goes wrong. Then it parses the broken json, solving conflicts in cells. If `fast=True`, every conflict that only involves metadata or outputs of cells will be solved automatically by using the local (`trust_us=True`) or the remote (`trust_us=False`) branch. Otherwise, or for conflicts involving the inputs of cells, the json will be repaired by including the two versions of the conflicted cell(s) with markdown cells indicating the conflicts. You will be able to open the notebook again and search for the conflicts (look for `<<<<<<<`) then fix them as you wish.
#
# If `fast=True`, the function will print a message indicating whether the notebook was fully merged or if conflicts remain.
# ## Export
#hide
from nbdev.export import notebook2script
notebook2script()
| nbs/05_merge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# -*- coding: utf-8 -*-
"""
Created on Thu Feb 25 11:04:46 2021
@author: vargh
"""
import cv2
from skimage.filters import threshold_otsu
from pandas import DataFrame, Series
import numpy as np
from timeit import default_timer as timer
from skimage.registration import phase_cross_correlation
import turtle
import matplotlib.pyplot as plt
import time as t
file = 'IMG_2509.MOV'
folder = './videos/'
fileid = folder+file
# Takes two frames, calculates the phase cross correlation between them and outputs displacement
def calcdisp(imshape, frame1, frame2):
tmp = np.array([0,0])
f1 = np.fft.fft2(frame1) # fast fourier transforms of previous frame
f2 = np.fft.fft2(frame2) # fast fourier transforms of current frame
cross_power_spect = np.multiply(f1 , np.conjugate(f2))/abs(np.multiply(f1, np.conjugate(f2))) # "cross power spectrum", which is multiplying the FFTs element-wise and normalizing
peakgraph = np.fft.ifft2(cross_power_spect) # inverse FFT
detected_shift = np.where(peakgraph == np.amax(peakgraph)) # Find peaks in inverse FFT
    # Due to the output structure of the FFT, negative translations land in the latter half of the output matrix.
    # These if statements find the direction of the translation and store it with the correct magnitude.
if detected_shift[0][0] > imshape[0]//2:
tmp[1] = detected_shift[0][0] - imshape[0]
else:
tmp[1] = detected_shift[0][0]
if detected_shift[1][0] > imshape[1]//2:
tmp[0] = detected_shift[1][0] - imshape[1]
else:
tmp[0] = detected_shift[1][0]
# very basic low pass filter
if abs(tmp[0]) > 20:
tmp[0] = 0
    if abs(tmp[1]) > 20:
        tmp[1] = 0  # clamp to 0, as for tmp[0] above (the literal 1 here appears to be a typo)
return tmp
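# The FFT-based shift detection in calcdisp can be sanity-checked on a synthetic pair of frames (NumPy only; note the recovered shift is the negative of the roll applied to produce the second frame):

```python
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))  # shift down 3, right 5

f1, f2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
cross = f1 * np.conj(f2) / np.abs(f1 * np.conj(f2))  # normalized cross power spectrum
peak = np.fft.ifft2(cross)
dy, dx = np.unravel_index(np.argmax(np.abs(peak)), peak.shape)
# indices in the second half of the spectrum correspond to negative shifts
if dy > 32: dy -= 64
if dx > 32: dx -= 64
print(dy, dx)  # → -3 -5
```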
vid = cv2.VideoCapture(0) # Starts video capture object
# Initializations
iteration = 0
prevFrame = 0
dstep = np.empty([1,2])
totD= np.array([[0,0]])
time = [0]
start = timer()
#gpuframe1 = cv2.cuda_GpuMat() (Could not get gpu acceleration to work)
#gpuframe2 = cv2.cuda_GpuMat()
currentLoc = turtle.Turtle() # Initializes turtle for visualization
turtle.setup(width=300, height=300, startx=0, starty=0)
cap = cv2.VideoCapture(fileid)
t.sleep(2)
while(True):
# Capture the video frame by frame
ret, frame = cap.read() # get frame
#frameShape = frame.shape #?
im = cv2.resize(frame, None, fx=.25, fy=.25) # decimate quality of image by resizing
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
lower_blue = np.array([36,140,140])
upper_blue = np.array([86,255,255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
res = cv2.bitwise_and(im,im, mask= mask)
proc_im = cv2.cvtColor(res, cv2.COLOR_HSV2RGB)
bw_img = cv2.cvtColor(np.float32(proc_im), cv2.COLOR_RGB2GRAY) # convert to grayscale
imshape = np.array(bw_img.shape)
# gpuframe1.upload(bw_img) (gpu acceleration, doesn't work)
if iteration > 3 and iteration <= 15: # waits till there are sufficient frames to calculate
calcdisp(imshape, bw_img, prevFrame)
if iteration > 15: # allows program to 'warm up'. In initial tests, initial measurements were not accurate
tmp = calcdisp(imshape, bw_img, prevFrame) # raw displacement data
        dstep = np.vstack((dstep, tmp)) # stacks the displacement step data just received
time = np.vstack((time, timer()-start)) # stacks the time data
totD = np.vstack((totD, np.sum(dstep, axis=0))) # sums displacement steps to calculate total displacement
# updates turtle
currentLoc.sety(totD[iteration-15, 1]*.35)
currentLoc.setx(totD[iteration-15, 0]*.35)
prevFrame = bw_img # sets current frame as previous frame
# gpuframe2.upload(prevFrame)
iteration = iteration + 1 #increases iteration
print(iteration,timer()-start) # prints time (for debugging/optimization purposes)
cv2.imshow('frame2',res)
cv2.imshow('frame3', im)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# After the loop release the capture objects
vid.release()
cap.release()
# Destroy all the windows
cv2.destroyAllWindows()
| pcc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deployment of New Model
# import all the libraries
import time
import numpy as np
import cv2
from util import *
import os,sys,re
import os.path as osp
from DNModel import net
from img_process import preprocess_img, inp_to_image
import pandas as pd
import random
import pickle as pkl
import fcn
import torch
import torch.nn as nn
from torch.autograd import Variable
import torchvision
from torchvision import transforms as transforms
# %matplotlib inline
import json
from pycocotools.coco import COCO
import skimage.io as io
import matplotlib.pyplot as plt
import pylab
pylab.rcParams['figure.figsize'] = (8.0, 10.0)
from new_model import *
from matplotlib.patches import Rectangle
#Define the coco dataset
dataDir='.'
dataType='train2017'
annFile='annotation_little_v5.json'
coco=COCO(annFile)
#number of images in the dataset
catIds = coco.getCatIds();
imgIds = coco.getImgIds()
print('Number of images:', len(imgIds))
# function that runs the new model on an image chosen by its index in the coco dataset
def plot_new_model(img_number):
img = coco.loadImgs(imgIds[img_number])[0]
img = "images/" + img['file_name']
scores,bboxes,classes,mask = new_model(img)
plt.figure(figsize = (15,8))
if len(mask)>0:
color_array = np.zeros([mask[0].shape[0], mask[0].shape[1],3], dtype=np.uint8)
bgr_img = cv2.imread(img)
ax = plt.gca()
for id, b in enumerate(mask):
if classes[id] == 0:
#person mask
color_array[mask[id]>0] = [255,0,0]
elif classes[id] ==56:
#chair mask
color_array[mask[id]>0] = [0,255,0]
elif classes[id] ==57:
#couch mask
color_array[mask[id]>0] = [0,0,255]
elif classes[id] ==60:
#dining table mask
color_array[mask[id]>0] = [120,120,0]
elif classes[id] ==61:
#toilet
color_array[mask[id]>0] = [0,120,120]
rect = Rectangle((bboxes[id][0],
bboxes[id][1]),
bboxes[id][2]-bboxes[id][0],
bboxes[id][3]-bboxes[id][1],
linewidth=2,
edgecolor='r',
facecolor='none')
ax.add_patch(rect)
added_image = cv2.addWeighted(bgr_img, 1, color_array, 0.5, 0)
plt.axis('off')
plt.imshow(added_image, interpolation = 'none')
plt.show()
# ## Examples
# There are 5830 photos in the coco dataset; pass an index between 0 and 5829 to run the model on that photo
plot_new_model(21)
plot_new_model(20)
# ## Deploy your own photo
# function that runs the new model on your own photo file
def deploy_own_photo(img_name):
scores,bboxes,classes,mask = new_model(img_name)
plt.figure(figsize = (15,8))
if len(mask)>0:
color_array = np.zeros([mask[0].shape[0], mask[0].shape[1],3], dtype=np.uint8)
bgr_img = cv2.imread(img_name)
ax = plt.gca()
for id, b in enumerate(mask):
if classes[id] == 0:
#person mask
color_array[mask[id]>0] = [255,0,0]
elif classes[id] ==56:
#chair mask
color_array[mask[id]>0] = [0,255,0]
elif classes[id] ==57:
#couch mask
color_array[mask[id]>0] = [0,0,255]
elif classes[id] ==60:
#dining table mask
color_array[mask[id]>0] = [120,120,0]
elif classes[id] ==61:
#toilet
color_array[mask[id]>0] = [0,120,120]
rect = Rectangle((bboxes[id][0],
bboxes[id][1]),
bboxes[id][2]-bboxes[id][0],
bboxes[id][3]-bboxes[id][1],
linewidth=2,
edgecolor='r',
facecolor='none')
ax.add_patch(rect)
added_image = cv2.addWeighted(bgr_img, 1, color_array, 0.5, 0)
plt.axis('off')
plt.imshow(added_image, interpolation = 'none')
plt.show()
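The per-class `if/elif` chains above repeat the same class-id-to-colour mapping in both functions. As a sketch (using the same ids and colours as the code above; the helper name is ours, not part of the original), the mapping can live in one dictionary:

```python
import numpy as np

# Class-id -> overlay colour, matching the if/elif branches above.
CLASS_COLORS = {
    0: [255, 0, 0],     # person
    56: [0, 255, 0],    # chair
    57: [0, 0, 255],    # couch
    60: [120, 120, 0],  # dining table
    61: [0, 120, 120],  # toilet
}

def colorize_masks(masks, classes):
    """Build one RGB overlay array from per-instance binary masks."""
    color_array = np.zeros([masks[0].shape[0], masks[0].shape[1], 3], dtype=np.uint8)
    for idx, m in enumerate(masks):
        color = CLASS_COLORS.get(classes[idx])
        if color is not None:
            color_array[m > 0] = color
    return color_array
```

Either plotting function could then build its overlay with `color_array = colorize_masks(mask, classes)` instead of the branch chain.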
#input the file name of your own photo
deploy_own_photo('abc.jpg')
| Deploy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="rT2jhaFmeXIu"
# # Pareto-Efficient algorithm for MOO
# + colab={"base_uri": "https://localhost:8080/"} id="9ifbtdd2ifja" executionInfo={"status": "ok", "timestamp": 1634546106518, "user_tz": -330, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="69ad66eb-789e-436f-ef59-3782ee050beb"
# %tensorflow_version 1.x
# + id="qpBmKd7Oig3M"
import tensorflow as tf
import numpy as np
from scipy.optimize import minimize
from scipy.optimize import nnls
# + colab={"base_uri": "https://localhost:8080/"} id="xSiAHrdnin8S" executionInfo={"status": "ok", "timestamp": 1634546145379, "user_tz": -330, "elapsed": 2250, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="25b043ce-d25a-42f2-8948-9e334529d07e"
seed = 3456
tf.set_random_seed(seed)
np.random.seed(seed)
x_data = np.float32(np.random.rand(100, 4))
y_data = np.dot(x_data, [[0.100], [0.200], [0.3], [0.4]]) + 0.300
weight_a = tf.placeholder(tf.float32)
weight_b = tf.placeholder(tf.float32)
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([4, 1], -1.0, 1.0))
y = tf.matmul(x_data, W) + b
loss_a = tf.reduce_mean(tf.square(y - y_data))
loss_b = tf.reduce_mean(tf.square(W) + tf.square(b))
loss = weight_a * loss_a + weight_b * loss_b
optimizer = tf.train.GradientDescentOptimizer(0.1)
a_gradients = tf.gradients(loss_a, W)
b_gradients = tf.gradients(loss_b, W)
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
# + id="_Hx_xqbGiryg" executionInfo={"status": "ok", "timestamp": 1634546151622, "user_tz": -330, "elapsed": 506, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="14757873-c810-4322-e39e-b3a9b534d987" colab={"base_uri": "https://localhost:8080/"}
def pareto_step(w, c, G):
"""
ref:http://ofey.me/papers/Pareto.pdf
K : the number of tasks
M : the dim of the NN's params
:param w: # (K,1)
:param c: # (K,1)
:param G: # (K,M)
:return:
"""
GGT = np.matmul(G, np.transpose(G)) # (K, K)
e = np.mat(np.ones(np.shape(w))) # (K, 1)
m_up = np.hstack((GGT, e)) # (K, K+1)
m_down = np.hstack((np.transpose(e), np.mat(np.zeros((1, 1))))) # (1, K+1)
M = np.vstack((m_up, m_down)) # (K+1, K+1)
z = np.vstack((-np.matmul(GGT, c), 1 - np.sum(c))) # (K+1, 1)
hat_w = np.matmul(np.matmul(np.linalg.inv(np.matmul(np.transpose(M), M)), np.transpose(M)), z)  # least-squares solve of M w = z, (K+1, 1)
hat_w = hat_w[:-1] # (K, 1)
hat_w = np.reshape(np.array(hat_w), (hat_w.shape[0],)) # (K,)
c = np.reshape(np.array(c), (c.shape[0],)) # (K,)
new_w = ASM(hat_w, c)
return new_w
def ASM(hat_w, c):
"""
ref:
http://ofey.me/papers/Pareto.pdf,
https://stackoverflow.com/questions/33385898/how-to-include-constraint-to-scipy-nnls-function-solution-so-that-it-sums-to-1
:param hat_w: # (K,)
:param c: # (K,)
:return:
"""
A = np.array([[0 if i != j else 1 for i in range(len(c))] for j in range(len(c))])
b = hat_w
x0, _ = nnls(A, b)
def _fn(x, A, b):
return np.linalg.norm(A.dot(x) - b)
cons = {'type': 'eq', 'fun': lambda x: np.sum(x) + np.sum(c) - 1}
bounds = [[0., None] for _ in range(len(hat_w))]
min_out = minimize(_fn, x0, args=(A, b), method='SLSQP', bounds=bounds, constraints=cons)
new_w = min_out.x + c
return new_w
w_a, w_b = 0.5, 0.5
c_a, c_b = 0.2, 0.2
for step in range(0, 10):
res = sess.run([a_gradients, b_gradients, train], feed_dict={weight_a: w_a, weight_b: w_b})
weights = np.mat([[w_a], [w_b]])
paras = np.hstack((res[0][0], res[1][0]))
paras = np.transpose(paras)
w_a, w_b = pareto_step(weights, np.mat([[c_a], [c_b]]), paras)
la = sess.run(loss_a)
lb = sess.run(loss_b)
print("{:0>2d} {:4f} {:4f} {:4f} {:4f} {:4f}".format(step, w_a, w_b, la, lb, la / lb))
# print(np.reshape(sess.run(W), (4,)), sess.run(b))
| _notebooks/2022-01-25-pareto-moo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# These are the libraries needed for the code.
import math
import matplotlib
from matplotlib import pyplot as plt
import pandas as pd
# Here I will create the list with all the elements of the periodic table and use the
#list, zip commands to create the dictionary holding the required information.
Elementos=["Hidrogênio","Hélio","Lítio","Berílio","Boro","Carbono","Nitrogênio",
"Oxigênio","Flúor","Neônio","Sódio","Magnésio","Alumínio","Silício","Fósforo",
"Enxofre","Cloro","Argônio","Potássio","Cálcio","Sc","Ti","V","Cromo"
,"Manganês","Ferro","Cobalto","Níquel","Cobre","Zinco","Gálio","Germânio",
"Arsênio","Selênio","Bromo","Kriptônio","Rubídio","Sr","Y","Zr","Nb","Mo",
"Tc","Ru","Rh","Pd","Ag","Cd","In","Sn","Sb","Te","I","Xe","Cs","Ba","La",
"Ce","Pr","Nd","Pm","Sm","Eu","Gd","Tb","Dy","Ho","Er","Tm","Yb","Lu","Hf","Ta",
"W","Re","Os","Ir","Pt","Au","Mercúrio","TI","Chumbo","Bi","Po","At","Rn","Fr","Ra","Ac","Th"
,"Pa","Urânio","Np","Pu","Am","Cm","Bk","Cf","Es","Fm","Md","No","Lr","Rf","Db","Sg","Bh","Hs"
,"Mt","Ds","Rg","Cn","Nh","Fl","Mc","Lv","Ts","Og"]
#These are the atomic masses of the elements of the periodic table
Massasatomicas=[1.0,4.0,6.9,9.0,10.8,12.0,14.0,16.0,
19.0,20.1,
23.0,24.3,27.0,28,30.9,32.0,35.4,39.9,39.0,40.0,44.9,45.8,50.9,51.9,
54.9,55.8,58.9,58.69,63.54,65.3,69.72,72.6,74.9,78.9,79.9,83.7,85.4,87.6,
88.9,91.2,92.9,95.9,98,101.0,102.9,106.4,107.8,112.4,114.8,118.7,
121.7,127.6,126.9,131.2,132.9,137.3,138.9,140.1,140.9,144.4,145,150.3,
151.9,151.9,157.25,158.9,162.5,164.9,167.2,168.9,173.0,174.9,178.4,180.9,
183.8,186.2,190.2,182.2,195.0,196.9,200.5,204.3,207.2,208.9,209,210,222,223,
226,227,232.0,231.0,238.0,237,244,243,247,247,251,252,257,258,259,262,
267,268,269,270,269,278,281,281,285,286,289,288,293,294] # // mass // A = Z + N
#This is an empty list that will store the number of protons
Numerodeprotons=[] #Number of protons in an element, Z
for i in range(1,119):
Numerodeprotons.append(i)
df=pd.DataFrame(Elementos,Numerodeprotons)
veldaluzs=[]
i=0
while not (i==118):
veldaluzs.append(2.997925*(10**8))
i=i+1
# +
#Now we need to fill a table that gives the number of neutrons of each element
Numerodeneutrons=[]
#the number of neutrons does not correspond to the number of protons in a 1:1 ratio, or in any
#reasonably intuitive proportion, so I will fill its list separately at another point
#Counter
i=0
for i in range(0,len(Numerodeprotons)):
Numerodeneutrons.append(Massasatomicas[i]-Numerodeprotons[i])
Amostra=[]
i=0
while not (i==118):
Amostra.append(1)
i=i+1
i=0
Numerodegramasde1mol=[]
i=0
while not(i==118):
Numerodegramasde1mol.append(Massasatomicas[i]*Amostra[i])
i=i+1
Constantesdeavogrado=[]
i=0
while not(i==118):
Constantesdeavogrado.append(6.02*(10**23))
i=i+1
Numerodemolsde1grama=[]
i=0
while not(i==118):
Numerodemolsde1grama.append(Amostra[i]/Massasatomicas[i])
i=i+1
Massasunificadas=[]
i=0
while not(i==118):
Massasunificadas.append(Massasatomicas[i]/(Constantesdeavogrado[i]*1000))
i=i+1
# -
u=1.6605*(10**-27) #kg
us=[]
i=0
while not(i==118):
us.append(u)
i=i+1
energiasderepouso=[]
i=0
while not(i==118):
energiasderepouso.append(Massasunificadas[i]*(veldaluzs[i]**2))
i=i+1
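As a quick sanity check on the rest energies computed above, the rest energy of one unified atomic mass unit should come out near the well-known 931.5 MeV:

```python
u = 1.6605*(10**-27)                  # kg, unified atomic mass unit (same value as above)
c = 2.997925*(10**8)                  # m/s, speed of light (same value as above)
E_joules = u * c**2                   # rest energy of 1 u, E = m*c^2
E_MeV = E_joules / (1.602*(10**-13))  # 1 MeV = 1.602e-13 J
print(E_MeV)                          # ~931.5 MeV
```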
df={"Elementos":Elementos,"Numerodeprotons":Numerodeprotons,
"Numero de neutrons":Numerodeneutrons,
"Peso atômico":Massasatomicas,"Numero de gramas em 1 mol":Numerodegramasde1mol,
"Constantesdeavogrado":Constantesdeavogrado,
"Massasunificadas":Massasunificadas,"Energias de repouso":energiasderepouso}
df = pd.DataFrame(df)
df
# +
df.head(12)
# -
# +
print(energiasderepouso)
# -
df5=pd.DataFrame(list(zip(Elementos,Numerodeprotons,Numerodeneutrons,Massasatomicas)))
df5.head(50)
list(zip(Elementos,Numerodeprotons,Massasatomicas,Massasunificadas))
list(zip(Elementos,Numerodeprotons,Massasatomicas,Numerodeneutrons,Numerodegramasde1mol,Numerodemolsde1grama))
plt.plot(Numerodeprotons,Numerodeneutrons)
plt.title("Z x N")
plt.xlabel("Número de protons")
plt.ylabel("Número de neutrons")
plt.plot(Numerodegramasde1mol,Numerodemolsde1grama)
plt.plot(Numerodemolsde1grama,Numerodegramasde1mol)
plt.plot(Numerodeprotons,energiasderepouso)
# +
Ns=[] #My Ns is each value of N = A - Z
#And with the command below we can build a Z x N relation by filling another auxiliary list.
for i in range(0,len(Numerodeprotons)):
Ns.append(Massasatomicas[i]-Numerodeprotons[i])
print("Here is Ns", Ns)
plt.plot(Numerodeprotons,Ns)
# -
# +
Massadoproton=[]
i=0
while not (i==118):
Massadoproton.append(1.6726*(10**-27))
i=i+1
print(Massadoproton)
Massadoneutron=[]
i=0
while not (i==118):
Massadoneutron.append(1.6750*(10**-27))
i=i+1
print(Massadoneutron)
Massadoeletron=[]
i=0
while not (i==118):
Massadoeletron.append(9.109*(10**-31))
i=i+1
# -
# +
#this one is for a random periodic function
Nx=[]
for i in range(0,len(Numerodeprotons)):
Nx.append(Massasatomicas[i]*math.cos(i*23)-Numerodeprotons[i]*math.sin(i*21))
print("Here is Nx", Nx)
plt.plot(Numerodeprotons,Nx)
# -
#Test for other operations
TesteNZ=[]
TesteNn=[]
TesteNm=[]
Nomedoselementos=["Hidrogênio","Hélio","Lítio"]
for j in range (1,4):
TesteNZ.append(j)
TesteNn.append(j)
for j in range (0,len(TesteNZ)):
TesteNm.append(TesteNZ[j]+TesteNn[j])
print(TesteNm)
| Códigos fichados e comentados/Física/NEPEIV/Radiação001.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.0-rc3
# language: julia
# name: julia-1.6
# ---
using HDF5
using FastGaussQuadrature
using LaTeXStrings
using LinearAlgebra
using Particles
using Random
using ReducedBasisMethods
using SparseArrays
using Statistics
# +
fpath = "../runs/BoT_Np5e4_k_010_050_np_10_T25_projections.h5"
params = read_sampling_parameters(fpath)
μₜᵣₐᵢₙ = h5read(fpath, "parameters/mu_train")
IP = IntegratorParameters(fpath)
poisson = PoissonSolverPBSplines(fpath)
Ψ = h5read(fpath, "projections/Psi");
Ψₑ = h5read(fpath, "projections/Psi_e");
Πₑ = sparse(h5read(fpath, "projections/Pi_e"));
# -
# Reference draw
P₀ = ParticleList(h5read(fpath, "initial_condition/x_0"),
h5read(fpath, "initial_condition/v_0"),
h5read(fpath, "initial_condition/w") );
params
# # Test
μₜᵣₐᵢₙ
# +
nₜₑₛₜ = 10
κₜₑₛₜ_ₘᵢₙ = 0.1; κₜₑₛₜ_ₘₐₓ = 0.5
μₜₑₛₜ = zeros(nₜₑₛₜ, 5)
for i in 1:nₜₑₛₜ
μₜₑₛₜ[i,:] = [κₜₑₛₜ_ₘᵢₙ, params.ε, params.a, params.v₀, params.σ]
end
λ = 0
for i in 1:nₜₑₛₜ
if nₜₑₛₜ > 1
μₜₑₛₜ[i,1] = rand(1)[1]*(κₜₑₛₜ_ₘₐₓ - κₜₑₛₜ_ₘᵢₙ) + κₜₑₛₜ_ₘᵢₙ
# μₜₑₛₜ[i,1] = (1-λ)*κₜₑₛₜ_ₘᵢₙ + λ*κₜₑₛₜ_ₘₐₓ
# λ += 1/(nₜₑₛₜ-1)
end
end
μₜₑₛₜ = μₜₑₛₜ[sortperm(μₜₑₛₜ[:, 1]), :]
# -
GC.gc()
IPₜₑₛₜ = IntegratorParameters(IP.dt, IP.nₜ, IP.nₜ+1, IP.nₕ, IP.nₚ, nₜₑₛₜ)
ICₜₑₛₜ = IntegratorCache(IPₜₑₛₜ);
@time Rₜₑₛₜ = ReducedBasisMethods.integrate_vp(P₀, μₜₑₛₜ, params, poisson, IPₜₑₛₜ, ICₜₑₛₜ;
given_phi = false, save = true);
# Xₜₑₛₜ = Rₜₑₛₜ.X
# Vₜₑₛₜ = Rₜₑₛₜ.V
# Φₜₑₛₜ = Rₜₑₛₜ.Φ;
Φₜₑₛₜ = copy(Rₜₑₛₜ.Φ)
size(Φₜₑₛₜ)
# no saving
@time ReducedBasisMethods.integrate_vp(P₀, μₜₑₛₜ, params, poisson, IPₜₑₛₜ, ICₜₑₛₜ;
given_phi = false, save = false);
# # Reduced Model
k = size(Ψ)[2]
kₑ = size(Ψₑ)[2]
k, kₑ
RIC = ReducedIntegratorCache(IPₜₑₛₜ, k, kₑ);
ΨᵀPₑ = Ψ' * Ψₑ * inv(Πₑ' * Ψₑ)
ΠₑᵀΨ = Πₑ' * Ψ;
@time Rᵣₘ = reduced_integrate_vp(P₀, Ψ, ΨᵀPₑ, ΠₑᵀΨ, μₜₑₛₜ, params, poisson, IPₜₑₛₜ, RIC;
DEIM=true, given_phi = false, save = true);
# Xᵣₘ = Ψ * Rᵣₘ.Zₓ
# Vᵣₘ = Ψ * Rᵣₘ.Zᵥ
# Φᵣₘ = Rᵣₘ.Φ;
# no saving
@time reduced_integrate_vp(P₀, Ψ, ΨᵀPₑ, ΠₑᵀΨ, μₜₑₛₜ, params, poisson, IPₜₑₛₜ, RIC;
DEIM=true, given_phi = false, save=false);
# Saving
h5save("../runs/BoT_Np5e4_k_010_050_np_10_T25_DEIM.h5", IPₜₑₛₜ, poisson, params, μₜᵣₐᵢₙ, μₜₑₛₜ, Rₜₑₛₜ, Rᵣₘ, Ψ);
norm(Rᵣₘ.Φ - Φₜₑₛₜ)
norm(Rₜₑₛₜ.Φ - Φₜₑₛₜ)
| notebooks/VP-ROM-Testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python (py3p6)
# language: python
# name: py3p6
# ---
# %matplotlib inline
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.append('/home/mehdi/github/LSSutils')
from lssutils.dataviz import setup_color
from lssutils.utils import make_hp
from lssutils.stats.cl import AnaFast
import fitsio as ft
import healpy as hp
setup_color()
nran = np.load('/home/mehdi/data/tanveer/dr8/elg_ran1024.npy')
windows = glob('/home/mehdi/data/tanveer/dr8/elg_mse_snapshots/windows/*')
# +
wnn = np.zeros(12*1024**2)
for wind in windows:
w_ = ft.read(wind)
w_m = make_hp(1024, w_['hpix'], w_['weight'])
wnn += w_m
print('.', end='')
wnn /= len(windows)
# +
weight = np.ones_like(nran)
mask = nran > 0
nrans = nran / nran[mask].mean()
wnns = wnn / wnn[mask].mean()
hpix = np.argwhere(mask).flatten()
wnn_sh = make_hp(1024, hpix, np.random.permutation(wnns[hpix]))
# -
hp.mollview(wnns, title='Npred')
hp.mollview(wnn_sh, title='Npred[shuffled]')
hp.mollview(nrans, title='Nran')
hp.mollview(mask, title='Mask')
af = AnaFast()
cl_mask = af(mask*1.0, weight, mask)
cl_wnn = af(wnns, weight, mask)
cl_nran = af(nrans, weight, mask)
cl_wnn_sh = af(wnn_sh, weight, mask)
plt.figure(figsize=(8, 6))
plt.plot(cl_wnn_sh['cl']/cl_mask['cl'], label='Npred[shuffled]/Mask')
plt.plot(cl_wnn['cl']/cl_mask['cl'], label='Npred/Mask')
plt.plot(cl_nran['cl']/cl_mask['cl'], label='Nran/Mask')
plt.plot(cl_nran['cl']/cl_wnn['cl'], label='Nran/Npred')
# plt.xlim(xmin=1)
plt.yscale('log')
plt.legend()
# plt.grid(which='both')
plt.xlabel(r'$\ell$')
# +
plt.plot(cl_wnn_sh['cl'], label='Npred[shuffled]')
plt.plot(cl_wnn['cl'], label='Npred')
plt.plot(cl_nran['cl'], label='Nran')
plt.plot(cl_mask['cl'], label='Mask')
plt.yscale('log')
plt.legend()
plt.xlabel(r'$\ell$')
plt.ylabel(r'C$_{\ell}$')
# +
plt.plot(cl_wnn_sh['cl'], label='Npred[shuffled]')
plt.plot(cl_wnn['cl'], label='Npred')
plt.plot(cl_nran['cl'], label='Nran')
plt.plot(cl_mask['cl'], label='Mask')
plt.yscale('log')
plt.legend()
plt.xscale('log')
plt.xlabel(r'$\ell$')
plt.ylabel(r'C$_{\ell}$')
plt.axis([100, 4000, 1.0e-9, 1.0e-5])
# -
wnns_smooth = hp.smoothing(wnns, fwhm=0.003)
hp.mollview(wnns_smooth)
wnns_smooth /= wnns_smooth[mask].mean()
cl_wnnsmooth = af(wnns_smooth, weight, mask & (wnns_smooth > 0.0), )
# +
plt.plot(cl_wnn_sh['cl'], label='Npred[shuffled]')
plt.plot(cl_wnn['cl'], label='Npred')
plt.plot(cl_nran['cl'], label='Nran')
plt.plot(cl_mask['cl'], label='Mask')
plt.plot(cl_wnnsmooth['cl'], label='Npred[smoothed]')
plt.yscale('log')
plt.legend()
plt.xscale('log')
plt.xlabel(r'$\ell$')
plt.ylabel(r'C$_{\ell}$')
plt.axis([100, 4000, 1.0e-10, 1.0e-5])
# -
plt.plot(cl_wnnsmooth['cl']/cl_mask['cl'])
plt.ylabel('Smoothed Window / Mask')
plt.xlabel(r'$\ell$')
hp.nside2resol(1024, arcmin=True)
| notebooks/lssxcmb/Window_Vs_Mask.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.0 64-bit (''tf1_13'': conda)'
# language: python
# name: python37064bittf113conda3a6e91fe0a544785b67b740c9ef5643c
# ---
# +
from user_ops import ft_inverse
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
import math
import cv2
import matplotlib.pyplot as plt
# %matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..', 'keras_frac'))
from fractional_maxpooling import FractionalPooling2D
# %matplotlib inline
os.environ['TF_CPP_MIN_VLOG_LEVEL'] = '5'
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '5'
# -
# EPOCHS = 1
# BATCH_SIZE = 1
# in_shape = [1,5,5,1]
# ft = [3,3]
# (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
# x_train = x_train.astype(np.float32) / 255.0
# x_test = x_test.astype(np.float32) / 255.0
# y_train = keras.utils.to_categorical(y_train)
# y_test = keras.utils.to_categorical(y_test)
# model = keras.models.Sequential()
# model.add(keras.layers.InputLayer(batch_input_shape=in_shape))
# model.add(keras.layers.Conv2D(1, 3, activation='relu', padding='valid'))
# model.add(keras.layers.Lambda(lambda x: ft_pool(x, ft, ft)))
# model.add(keras.layers.Conv2D(1, 3, activation='relu', padding='same'))
# model.add(keras.layers.Flatten())
# model.add(keras.layers.Dense(10, activation='softmax'))
# model.compile('adam', loss='categorical_crossentropy', metrics=['accuracy'])
# print(model.summary())
# data = np.ones(in_shape, dtype=np.float32)
# #data[0,0,0,0] = 2.0
# model.train_on_batch(data, y_train[0:1,:])
img = cv2.imread('/home/ravers/Downloads/lotr.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = img[400:450, 600:650]
'''
img = np.zeros([20,20])
for y in range(img.shape[0]):
for x in range(img.shape[1]):
if x//6%2==0 and y//6%2==0:
img[y,x] = 255
'''
plt.figure(figsize=(15,15))
plt.imshow(img, cmap='gray')
plt.show()
img = np.expand_dims(img, 0)
img = np.expand_dims(img, 3)
print(img.shape)
EPOCHS = 1
BATCH_SIZE = 1
in_shape = img.shape
stride = [2.0, 2.0]
pooling_window = [4.0, 4.0]
model = keras.models.Sequential()
model.add(keras.layers.InputLayer(batch_input_shape=in_shape))
model.add(keras.layers.Lambda(lambda x: ft_inverse(x, stride, pooling_window)))
model.compile('adam', loss='categorical_crossentropy', metrics=['accuracy'])
out = model.predict(img).astype(np.int32)
print(model.summary())
print(img.shape, out.shape)
print(img.min(), img.mean(), img.max())
print(out.min(), out.mean(), out.max())
plt.figure(figsize=(15,15))
plt.imshow(out[0,...,0], cmap='gray')
plt.show()
| ft_inverse/Testing_ft_output.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# download training data from https://www.kaggle.com/c/avazu-ctr-prediction/data
# for this notebook we will be working on a sample
# # cat train | head -10000 > train_sample
# +
from os.path import expanduser
SRC_PATH = expanduser("~") + '/SageMaker/mastering-ml-on-aws/chapter4/'
# +
from pyspark.context import SparkContext
sc = SparkContext('local', 'test')
# +
from pyspark.sql import SQLContext
sql = SQLContext(sc)
# -
ctr_df = sql.read.csv(SRC_PATH + 'train_sample', header=True, inferSchema=True)
ctr_df.describe().toPandas()
ctr_df.toPandas().nunique()
ctr_df.describe('C1').show()
train_df, test_df = ctr_df.randomSplit([0.8, 0.2], seed=17)
# +
from pyspark.ml.feature import StringIndexer
string_indexer = StringIndexer(inputCol="C1", outputCol="C1_index")
string_indexer_model = string_indexer.fit(ctr_df)
ctr_df_indexed = string_indexer_model.transform(ctr_df).select('C1', 'C1_index')
ctr_df_indexed.distinct().toPandas()
# +
from pyspark.ml.feature import OneHotEncoder
encoder = OneHotEncoder(inputCol="C1_index", outputCol="C1_encoded")
encoder.transform(ctr_df_indexed).distinct().toPandas()
# +
def categorical_one_hot_encoding_stages(columns):
indexers = [StringIndexer(inputCol=column, outputCol=column + "_index", handleInvalid='keep') for column in columns]
encoders = [OneHotEncoder(inputCol=column + "_index", outputCol=column + "_encoded") for column in columns]
return indexers + encoders
def categorical_encoding_stages(columns):
return [StringIndexer(inputCol=column, outputCol=column + "_encoded") for column in columns]
# +
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import ChiSqSelector
categorical_columns = ['C1', 'C14', 'C15', 'C16', 'C17', 'C18', 'C19', 'C20', 'C21', 'site_id', 'site_domain',
'site_category', 'app_id', 'app_domain', 'app_category']
numerical_columns = ['banner_pos', 'device_type', 'device_conn_type']
encoded_columns = [column + '_encoded' for column in categorical_columns] + numerical_columns
categorical_stages = categorical_one_hot_encoding_stages(categorical_columns)
vector_assembler = VectorAssembler(inputCols=encoded_columns, outputCol="features")
selector = ChiSqSelector(numTopFeatures=100, featuresCol="features",
outputCol="selected_features", labelCol="click")
decision_tree = DecisionTreeClassifier(labelCol="click", featuresCol="selected_features")
encoding_pipeline = Pipeline(stages=categorical_stages + [vector_assembler, selector, decision_tree])
# -
pipeline_model = encoding_pipeline.fit(train_df)
pipeline_model.transform(train_df).limit(5).toPandas()
print(pipeline_model.stages[-1].toDebugString)
# +
categorical_stages_indexed = categorical_encoding_stages(categorical_columns)
decision_tree_cat = DecisionTreeClassifier(labelCol="click", featuresCol="features", maxBins=10000)
encoding_pipeline_cat = Pipeline(stages=categorical_stages_indexed + [vector_assembler, decision_tree_cat])
# -
pipeline_model_cat = encoding_pipeline_cat.fit(ctr_df)
print(pipeline_model_cat.stages[-1].toDebugString)
# +
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction", labelCol="click")
# -
evaluator.evaluate(pipeline_model.transform(test_df), {evaluator.metricName: "areaUnderPR"})
evaluator.evaluate(pipeline_model.transform(test_df), {evaluator.metricName: "areaUnderROC"})
# +
from pyspark.ml.classification import RandomForestClassifier
random_forest = RandomForestClassifier(labelCol="click", featuresCol="features")
pipeline_rf = Pipeline(stages=categorical_stages + [vector_assembler, random_forest])
# +
rf_pipeline_model = pipeline_rf.fit(train_df)
evaluator.evaluate(rf_pipeline_model.transform(test_df), {evaluator.metricName: "areaUnderROC"})
# -
pdf = Pipeline(stages=categorical_stages + [vector_assembler]).fit(train_df).transform(train_df).toPandas()
ctr_df[categorical_columns].toPandas().nunique()
ctr_df[categorical_columns].toPandas().describe()
| Chapter04/train_spark_sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from collections import namedtuple
import numpy as np
import pandas as pd
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split
import tensorflow as tf
# -
kmnist_dir = './kaggle-mnist/'
kmnist = pd.read_csv(kmnist_dir + 'train.csv', sep=',')
print (type(kmnist))
kmnist.head()
# Removing the label column from the dataframe.
labels = kmnist.pop('label')
# Converting the label column form integer to one-hot vectors.
y = np.eye(10)[labels]
y
X = normalize(kmnist.as_matrix(), axis=1, norm='l2')
X[0]
X_train, X_dev, y_train, y_dev = train_test_split(X, y, test_size=0.2, random_state=42)
len(X_train), len(X_dev)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
# +
# next_batch() code adapted from
# https://github.com/tensorflow/tensorflow/blob/master/
# tensorflow/contrib/learn/python/learn/datasets/mnist.py#L138
_X, _y = X_train, y_train
num_examples = len(_X)
index_in_epoch = 0
epochs_completed = 0
def next_batch(batch_size):
global _X, _y
global index_in_epoch
global epochs_completed
start = index_in_epoch
index_in_epoch += batch_size
if index_in_epoch > num_examples:
epochs_completed += 1
# Shuffle data.
perm = np.arange(num_examples)
np.random.shuffle(perm)
_X, _y = _X[perm], _y[perm]
# Start next epoch.
start = 0
index_in_epoch = batch_size
assert batch_size <= num_examples
end = index_in_epoch
## print (start, end)
return _X[start:end], _y[start:end]
# -
with tf.Session() as sess:
#tf.initialize_all_variables().run()
sess.run(init)
for i in range(1000):
batch_xs, batch_ys = next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: X_dev, y_: y_dev}))
| Samsung DW80F800.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.4 ('base')
# language: python
# name: python3
# ---
# selection sort
# the idea is similar to bubble sort; the difference is that instead of swapping our way toward the smallest value, we first find the index of the smallest remaining value and then do a single swap per pass
l = [2,4,1,5,7,0,8]
def selection_sort(l):
for i in range(len(l) - 1):
# find the index of the smallest at position i
min_index = i
for j in range(i+1, len(l)):
if l[j] < l[min_index]:
min_index = j
# now swap the item at i with the item at min index
if i != min_index:
l[i],l[min_index] = l[min_index],l[i]
return l
selection_sort(l)
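A quick self-contained check (the function is restated here, comparing against the current minimum, so the cell runs on its own) verifies the result against Python's built-in `sorted` on random inputs:

```python
import random

def selection_sort(l):
    for i in range(len(l) - 1):
        # find the index of the smallest remaining value
        min_index = i
        for j in range(i + 1, len(l)):
            if l[j] < l[min_index]:
                min_index = j
        # swap the item at i with the item at min_index
        if i != min_index:
            l[i], l[min_index] = l[min_index], l[i]
    return l

for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert selection_sort(list(data)) == sorted(data)
print("all checks passed")
```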
| sort/selection_sort.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bert2
# language: python
# name: bert2
# ---
# # Rule-based Relation Extraction
# Here is a rule-based method that performs a heuristic search for subject-attribute entity pairs in the NER result.
import pandas as pd
import ast
from nltk import pos_tag
from nltk.chunk import conlltags2tree
from nltk.tree import Tree
# ## Utility functions
# BIO tags
def tag(ner_result):
tags = []
for word in ner_result:
# print('word, ', word)
if 'SEP' not in word['tag'] and 'CLS' not in word['tag'] : #bert large may return 'CLS' tag
tags.append((word['word'], word['tag']))
return tags
# Create tree
def stanford_tree(bio_tagged):
if len(bio_tagged) != 0:
tokens, ne_tags = zip(*bio_tagged)
pos_tags = [pos for token, pos in pos_tag(tokens)]
conlltags = [(token, pos, ne) for token, pos, ne in zip(tokens, pos_tags, ne_tags)]
ne_tree = conlltags2tree(conlltags)
return ne_tree
else:
return None
# Parse named entities from tree
def structure_ne(ne_tree):
if ne_tree is not None:
ne = []
for subtree in ne_tree:
if type(subtree) == Tree:
ne_label = subtree.label()
ne_string = " ".join([token for token, pos in subtree.leaves()])
ne.append((ne_string, ne_label))
return ne
else:
return None
def find_index(mentions, s):
index=0
indexes=[]
for mention in mentions:
if mention in s:
c = mention[0]
# for ch in s:
# Iterate over index
for i in range(index, len(s)):
if s[i]==c:
if s[i:i+len(mention)] == mention:
indexes.append((mention,i,i+len(mention)))
index = i+len(mention)
break
return indexes
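As a standalone illustration (the helper is restated here so the cell runs on its own), `find_index` returns each mention together with its character span in the text:

```python
def find_index(mentions, s):
    # Restated from above: locate each mention's (start, end) span in s,
    # scanning left to right so the cursor advances past matched mentions.
    index = 0
    indexes = []
    for mention in mentions:
        if mention in s:
            c = mention[0]
            for i in range(index, len(s)):
                if s[i] == c:
                    if s[i:i + len(mention)] == mention:
                        indexes.append((mention, i, i + len(mention)))
                        index = i + len(mention)
                        break
    return indexes

spans = find_index(["age", "18"], "age > 18 years ")
print(spans)  # [('age', 0, 3), ('18', 6, 8)]
```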
# Apply relation extraction on the NER result
def call_RE_neighborpairs(df_ner):
df_ner = df_ner[['#nct_id','eligibility_type','criterion','NER']]
df_ner = df_ner.drop_duplicates()
result_ner = df_ner['NER']
doc_index = df_ner['#nct_id']
result_re = []
for ner in result_ner:
if isinstance(ner, str):
ner = ast.literal_eval(ner)
words = tag(ner)
text=""
for word in words:
token = word[0]
text+=token+" "
tags_formatted = structure_ne(stanford_tree(words))
mentions=[]
tags=[]
for one in tags_formatted:
mentions.append(one[0])
tags.append(one[1])
indexes = find_index(mentions,text)
# print(text)
# print(indexes)
entitylist=[]
# 3:6:age,7:14:upper_bound < age @NUMBER
string=""
for i in range(0,len(indexes)):
mention=indexes[i][0]
start=indexes[i][1]
end=indexes[i][2]
label=tags[i]
string+= str(start)+":"+str(end)+":"+label +","
#entitylist.append((mention,label,start,end))
entitylist.append((mention,label))
string+="\t"+text
# print(string)
# generate relation statements
relations=[]
for i in range(len(entitylist)):
if 'bound' in entitylist[i][1]:
# print("index: ", i)
left=entitylist[:i]
# Find the neareast entity type on the left (of _bound)
# find first element in a list with condition
a = next((x for x in reversed(left) if x[1] in ["clinical_variable",'bmi','age']), None)
if a != None:
# print("nearby left: has value ", a, entitylist[i])
relations.append(('hasValue',a,entitylist[i]))
b = next((x for x in reversed(left) if x[1] in ["allergy_name","cancer","chronic_disease","pregnancy",'treatment']), None)
if a==None and b!=None:
# print("nearby left: has temp ", b, entitylist[i])
relations.append(('hasTemp',b,entitylist[i]))
# Find the neareast entity type on the right (of _bound); if nothing on the left
if(a==None and b==None):
right=entitylist[i:]
c = next((x for x in right if x[1] in ["clinical_variable",'bmi','age']), None)
if c != None:
# print("nearby right: has value ", entitylist[i], c)
relations.append(('hasValue',entitylist[i],c))
d = next((x for x in right if x[1] in ["allergy_name","cancer","chronic_disease","pregnancy",'treatment']), None)
if c==None and d!=None:
# print("nearby right: has temp ", entitylist[i], d)
relations.append(('hasTemp',entitylist[i],d))
result_re.append(relations)
print(len(df_ner))
print(len(result_re))
df_ner['Relation'] = result_re
#reformat the relation statements
df_re = df_ner[['#nct_id','eligibility_type','criterion','Relation']]
df_re = df_re.explode('Relation')
df_re[['RelationType', 'Entity1', 'Entity2']] = pd.DataFrame(df_re['Relation'].tolist(), index=df_re.index)
df_re = df_re[['#nct_id','eligibility_type','criterion','RelationType','Entity1','Entity2']]
return df_re
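# The left/right lookups above lean on `next` over a generator built from a
# `reversed` slice. A minimal standalone sketch of the same nearest-match rule,
# using a hypothetical (mention, label) list in place of the real NER output:

```python
# Hypothetical entity list mimicking the (mention, label) tuples built above.
entities = [("age", "age"), ("history of", "chronic_disease"), ("65", "upper_bound")]
i = 2  # position of the '_bound' entity
left = entities[:i]
# Scan leftward from the bound and take the first entity with a value-type label.
nearest = next((x for x in reversed(left) if x[1] in ["clinical_variable", "bmi", "age"]), None)
print(nearest)  # ('age', 'age')
```

# Because `reversed` walks the slice right-to-left, the first match is the entity
# closest to the bound, and the `None` default covers the no-match case.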
# ## Do relation extraction using sample data
# Load sample NER data
df_ner = pd.read_excel('data_ner/sample_trial_NER.xlsx')
df_ner
# Input is the NER result
df_relations = call_RE_neighborpairs(df_ner)
df_relations
df_relations.to_excel("data_re/sample_trial_relations_Rule-prediction.xlsx", index=False)
| relation_extraction_Rule-based.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
df=pd.read_csv("iris.csv")
df.head()
df['species'].unique()
df.replace('setosa',1,inplace=True)
df.replace('versicolor',2,inplace=True)
df.replace('virginica',3,inplace=True)
df['species'].unique()
X=df.drop('species',axis=1)
y=df['species']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=424)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix,accuracy_score
from sklearn.model_selection import cross_val_score
clf=LogisticRegression()
clf.fit(X_train, y_train)
predicted= clf.predict(X_test)
print(predicted)
accuracy_score(y_test, predicted)*100
| Logistic Regression with IRIS dataset EDA performed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import pandas as pd
import xml.etree.ElementTree as ET
# Obtain data about [US 116th congress members](http://clerk.house.gov/xml/lists/MemberData.xml), what is the percentage of R, D and independents in each state?
response = requests.get("http://clerk.house.gov/xml/lists/MemberData.xml")
if response:
    data = response.text
xml = ET.fromstring(data)
res = []
for member in xml.findall("./members/member/member-info"):
    party = member.find("party")
    state = member.find("state/state-fullname")
    res.append({"state":state.text, "party":party.text})
# or using a list comprehension
res = [{"party":member.find("party").text, "state":member.find("state/state-fullname").text}
       for member in xml.findall("./members/member/member-info")]
df = pd.DataFrame(res)
df = df.fillna("I") #replace None for independent
df['count'] = 1
df.groupby(["state", "party"]).count().unstack("party").fillna(0)
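# The warm-up question asks for the *percentage* of R, D, and I members per
# state, while the final cell reports raw counts. One way to normalize each
# state's row to percentages, sketched here on a small hand-made sample in
# place of the live XML response:

```python
import pandas as pd

# Tiny stand-in for the `res` list built from the member XML above.
res = [{"state": "Ohio", "party": "R"}, {"state": "Ohio", "party": "R"},
       {"state": "Ohio", "party": "D"}, {"state": "Vermont", "party": "I"}]
df = pd.DataFrame(res)
df["count"] = 1
counts = df.groupby(["state", "party"]).count().unstack("party").fillna(0)
pct = counts.div(counts.sum(axis=1), axis=0) * 100  # each state row sums to 100
print(pct)
```

# `div(..., axis=0)` divides every party column by the state's total, so Ohio
# comes out roughly 66.7% R / 33.3% D and Vermont 100% I in this sample.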
| 27-warmup-solution_apis_xml_xpath_xquery.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="v0oL16xi-Kr3"
# # Mount Drive
# + id="SjdXQ3cu99Z4"
#Allows dataset from drive to be utilized
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
# + id="JlPQScgc6j5y"
# Imports
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_predict, GridSearchCV, cross_val_score, train_test_split, cross_validate, StratifiedKFold
from sklearn.metrics import confusion_matrix, make_scorer, recall_score, precision_score, f1_score, accuracy_score, roc_auc_score, auc, plot_roc_curve
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier
from imblearn.metrics import geometric_mean_score
from imblearn.pipeline import Pipeline
from scipy.stats import mode
from sklearn.dummy import DummyClassifier
import statistics
from imblearn.under_sampling import ClusterCentroids
from imblearn.over_sampling import RandomOverSampler,SMOTE
import matplotlib.pyplot as plt
from imblearn.pipeline import Pipeline
from sklearn.model_selection import permutation_test_score
# + id="R6qI5W4DiB-E"
import imblearn
print('imblearn: {}'.format(imblearn.__version__))
# + [markdown] id="uorMBQ0D-Gd0"
# # Import Dataset
# + id="60UsMwbn3whq"
#Import DataFrame from .csv file
df = pd.read_csv(DATASET_LOCATION)
#Creating labels
x = df.drop("mucinous", axis=1)  #Entire dataset
Y = df["mucinous"].copy()
feature_cols = x.columns
#Scale values from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(x)
print(X.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=12,test_size=.2,shuffle=True,stratify=Y)
# + id="8JpGkij7ikYv"
import sys
print(sys.version)
import sklearn
print('The scikit-learn version is {}.'.format(sklearn.__version__))
# + [markdown] id="eNhKrcWN5hAQ"
# ## Import Strictly Texture Feature Dataset
# + id="Owk828Gj5gUg"
#Import DataFrame from .csv file
df = pd.read_csv(DATASET_LOCATION)
#Creating labels
x = df.drop("mucinous", axis=1)  #Entire dataset
Y = df["mucinous"].copy()
feature_cols = x.columns
#Scale values from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(x)
print(X.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=12,test_size=.2,shuffle=True,stratify=Y)
# + [markdown] id="kcFue018CGpC"
# ## Import Non - Texture Feature Dataset
# + id="iuf6KYcfCGYi"
#Import DataFrame from .csv file
df = pd.read_csv(DATASET_LOCATION)
#Creating labels
x = df.drop("mucinous", axis=1)  #Entire dataset
Y = df["mucinous"].copy()
feature_cols = x.columns
#Scale values from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(x)
print(X.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=12,test_size=.2,shuffle=True,stratify=Y)
# + [markdown] id="F0u1tHsPrkPy"
# # Hyperparameter Optimization
# + [markdown] id="Fp_hxdkCWU60"
# Full Feature Set Hyperparameters (5 Stratified CV): depth = 3, estimators = 11, weight scale =
# + id="yqwdWbXp-ZNz"
# estimate scale_pos_weight value
estimate = 1/1  # (minority class)/(majority class), a starting point for
                # exploring different scale_pos_weight values
print('Estimate: %.3f' % estimate)
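# Rather than hard-coding 1/1, the same starting estimate can be derived from
# the labels themselves, following the (minority class)/(majority class) ratio
# described in the comment above. A sketch, where `y` is a hypothetical 0/1
# label array standing in for `Y`:

```python
import numpy as np

# Hypothetical labels: three majority-class (0) samples, one minority-class (1).
y = np.array([0, 0, 0, 1])
counts = np.bincount(y)                  # per-class sample counts
estimate = counts.min() / counts.max()   # minority / majority ratio
print('Estimate: %.3f' % estimate)       # Estimate: 0.333
```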
# + id="i4vJPwZbrjl2"
# Hyper-parameter Optimization
## Using hyper-parameter optimization, we found the best hyperparameters for
## our various models.
## The specific hyperparameter values seen throughout the notebook may not
## necessarily be representative of exact hyperparameters used to achieve values
## in manuscript
metric=make_scorer(roc_auc_score)
weightlist= np.arange(.1, .4, 0.05).tolist()
weightlist.append(estimate)
cv = StratifiedKFold(n_splits=5, shuffle=True)
model = XGBClassifier()
# Based on available compute time, set values for each hyperparameter in larger
# increments and becoming more granular on subsequent runs as we narrow down
# optimal parameter values
param_grid = [{'n_estimators': list(range(5, 31)),
               'max_depth': [3,4,5,6],
               'scale_pos_weight': weightlist,
               }]
grid_search = GridSearchCV(model, param_grid, cv=cv, scoring=metric)
grid_search.fit(X, Y)
best_model = grid_search.best_estimator_
print(grid_search.best_params_)
# + [markdown] id="NK2sCkuv1tPd"
# # Baseline Metrics from Various Models
# + [markdown] id="n1G4UA9j1wxi"
# ## Random Forest
# + id="lbxwPxKD2Int"
## Metrics
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import RandomForestClassifier
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold_resample, y_train_fold_resample = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = RandomForestClassifier(n_estimators=8,max_depth=9)
        model.fit(X_train_fold_resample,y_train_fold_resample)
        pt = model.predict(X_val_fold)
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="HAeJT7ZJ2YWe"
## P - value
model = RandomForestClassifier(n_estimators=8,max_depth=9)
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
_, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000)
_, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=1000)
print(pvalue)
print(pvalue2)
# + [markdown] id="MIFVaE1d1w84"
# ## Logistic Regression
# + id="IcRpoykhIjd9"
from sklearn.model_selection import cross_val_score,LeaveOneOut
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
import numpy as np
from sklearn.linear_model import LogisticRegression
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(200):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold,y_train_fold = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = LogisticRegression(class_weight='balanced')
        model.fit(X_train_fold,y_train_fold)
        pt = model.predict(X_val_fold)
        # for i in range(len(pt)):
        #     if pt[i] > float(1/2):
        #         pt[i] = 1
        #     else:
        #         pt[i] = 0
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="W8KvkGxaIjn4"
## p value
model = LogisticRegression(class_weight='balanced')
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
_, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000)
_, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=1000)
print(pvalue)
print(pvalue2)
# + [markdown] id="WlKvv6QZ1xFx"
# ## SVM
# + id="iy_vN47_2JuW"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
from sklearn import svm
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold,y_train_fold = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = svm.SVC()
        model.fit(X_train_fold,y_train_fold)
        pt = model.predict(X_val_fold)
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="mFClykrZJsX1"
## p value
from sklearn import svm
model = svm.SVC()
cv = StratifiedKFold(n_splits=5, shuffle=True)
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
# _, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000)
# print(pvalue)
_, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=100)
print(pvalue2)
# + [markdown] id="f0WItGWf1xO1"
# ## MLP
# + [markdown] id="7labjgcb2Kps"
# ### "Wide"
# + id="eNbIyp7-2TMf"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold_resample,y_train_fold_resample = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = MLPClassifier(hidden_layer_sizes=(512, 512, 512), random_state=1)
        model.fit(X_train_fold_resample,y_train_fold_resample)
        pt = model.predict(X_val_fold)
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="0rK9ak0VN3xL"
## p value
from sklearn.neural_network import MLPClassifier
cv = StratifiedKFold(n_splits=5)
model = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(512, 512, 512), random_state=1)
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
# _, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000, n_jobs=-1)
# print(pvalue)
_, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=1000, n_jobs=-1)
print(pvalue2)
# + [markdown] id="8zpEHpZr2OWL"
# ### "Deep"
# + id="O7D8ll2g2UNG"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold_resample,y_train_fold_resample = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = MLPClassifier(hidden_layer_sizes=(100,100,100,100,100,100,100,100,100,100), random_state=1)
        model.fit(X_train_fold_resample,y_train_fold_resample)
        pt = model.predict(X_val_fold)
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="qdv8GVuYOrPf"
## p value
from sklearn.neural_network import MLPClassifier
cv = StratifiedKFold(n_splits=5)
model = MLPClassifier(hidden_layer_sizes=(100,100,100,100,100,100,100,100,100,100))
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
_, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000)
print(pvalue)
_, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=1000)
print(pvalue2)
# + [markdown] id="Gvpilg3r2Ovk"
# ### "Middle"
# + id="vpxypZMU2UrN"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold_resample,y_train_fold_resample = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = MLPClassifier(hidden_layer_sizes=(512, 256, 128, 64, 64), random_state=1, max_iter=400)
        model.fit(X_train_fold_resample,y_train_fold_resample)
        pt = model.predict(X_val_fold)
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="YRX6biH-OsK7"
## p value
from sklearn.neural_network import MLPClassifier
cv = StratifiedKFold(n_splits=5)
model = MLPClassifier(hidden_layer_sizes=(512, 256, 128, 64, 64), random_state=1, max_iter=400)
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
_, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000, n_jobs=-1)
print(pvalue)
_, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=1000, n_jobs=-1)
print(pvalue2)
# + [markdown] id="PqYUIa0j1xXI"
# ## kNN
# + id="h09YXd7Z1sGm"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
for j in [3,5,7,9,11]:
    Precisions = []
    Recalls = []
    F1s = []
    G_means = []
    accuracy = []
    AUC = []
    Specificities = []
    for i in range(500):
        cv = StratifiedKFold(n_splits=5, shuffle=True)
        for train_fold_index, val_fold_index in cv.split(X,Y):
            X_train_fold_resample,y_train_fold_resample = X[train_fold_index], Y[train_fold_index]
            X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
            model = KNeighborsClassifier(n_neighbors=j)
            model.fit(X_train_fold_resample,y_train_fold_resample)
            pt = model.predict(X_val_fold)
            tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
            specificity = tn / (tn+fp)
            Specificities.append(specificity)
            Precisions.append(precision_score(y_val_fold,pt))
            Recalls.append(recall_score(y_val_fold,pt))
            F1s.append(f1_score(y_val_fold,pt))
            G_means.append(geometric_mean_score(y_val_fold,pt))
            accuracy.append(accuracy_score(y_val_fold,pt))
            AUC.append(roc_auc_score(y_val_fold,pt))
    print("k = "+ str(j))
    print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
    print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
    print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
    print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
    print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
    print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
    print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + id="qVCDiqGDPzMY"
## p value
for j in [3,5,7,9,11]:
    model = KNeighborsClassifier(n_neighbors=j)
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    AUC_metric = make_scorer(roc_auc_score)
    g_mean_metric = make_scorer(geometric_mean_score)
    _, _, pvalue = permutation_test_score(model, X, Y, scoring=AUC_metric, cv=cv, n_permutations=1000)
    _, _, pvalue2 = permutation_test_score(model, X, Y, scoring=g_mean_metric, cv=cv, n_permutations=1000)
    print("k = %i, p-value (AUC): %f, p-value (gmean) %f"% (j, pvalue, pvalue2))
# + [markdown] id="_wM_93EZ_gzG"
# ## XGBoost
# + id="V1_cjqjJOPgM"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
Precisions = []
Recalls = []
Specificities = []
F1s = []
G_means = []
accuracy = []
AUC = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold,y_train_fold = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        model = XGBClassifier(max_depth=3, n_estimators=25, scale_pos_weight=.2)
        model.fit(X_train_fold,y_train_fold)
        pt = model.predict(X_val_fold)
        #print("confusion_matrix:")
        #print(confusion_matrix(y_val_fold,pt))
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        Specificities.append(tn / (tn+fp))
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision - Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall - Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + [markdown] id="uiOLhjfc_YDL"
# ## XGBoost with Undersampling
# + id="j_-dgmOyPI7U"
# K-fold
cv1 = StratifiedKFold(n_splits=3, random_state=12, shuffle=True)
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
Specificities = []
cc2 = ClusterCentroids(random_state=12)
print(X.shape,Y.shape)
X_under, Y_under = cc2.fit_resample(X,Y)
print(X_under.shape,Y_under.shape)
for train_fold_index, val_fold_index in cv1.split(X_under,Y_under):
    X_train_fold,y_train_fold = X_under[train_fold_index], Y_under[train_fold_index]
    X_val_fold, y_val_fold = X_under[val_fold_index], Y_under[val_fold_index]
    model = XGBClassifier(n_estimators=32, max_depth=3, scale_pos_weight=.2875)
    model.fit(X_train_fold,y_train_fold)
    pt = model.predict(X_val_fold)
    #print("confusion_matrix:")
    #print(confusion_matrix(y_val_fold,pt))
    tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
    specificity = tn / (tn+fp)
    Specificities.append(specificity)
    Precisions.append(precision_score(y_val_fold,pt))
    Recalls.append(recall_score(y_val_fold,pt))
    F1s.append(f1_score(y_val_fold,pt))
    G_means.append(geometric_mean_score(y_val_fold,pt))
    accuracy.append(accuracy_score(y_val_fold,pt))
print('Precision: ',mean(Precisions))
print('Recall: ',mean(Recalls))
print('Specificity: ',mean(Specificities))
print('F1: ',mean(F1s))
print('G_mean: ',mean(G_means))
print('accuracy: ', mean(accuracy))
# + [markdown] id="1572MnVE_WWk"
# ## Oversampling for XGBoost
# + [markdown] id="eSf2DvzPY6WR"
# ### SMOTE
# + id="8CHpgVin33Kg"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
Specificities = []
for i in range(500):
    cv = StratifiedKFold(n_splits=5, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold,y_train_fold = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        smoter = SMOTE()
        X_train_fold_resample, y_train_fold_resample = smoter.fit_resample(X_train_fold,y_train_fold)
        model = XGBClassifier(max_depth=3, n_estimators=11, scale_pos_weight=.25)
        model.fit(X_train_fold_resample,y_train_fold_resample)
        pt = model.predict(X_val_fold)
        #print("confusion_matrix:")
        #print(confusion_matrix(y_val_fold,pt))
        tn, fp, fn, tp = confusion_matrix(y_val_fold, pt).ravel()
        specificity = tn / (tn+fp)
        Specificities.append(specificity)
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Sensitivity/Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(Specificities), statistics.pstdev(Specificities)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + [markdown] id="1jxxkgaz6WZ3"
# ### Random Oversampling
# + id="kQHrsMKnXtwJ"
# K-fold
from statistics import mean as mean
from sklearn.metrics import roc_auc_score
Precisions = []
Recalls = []
F1s = []
G_means = []
accuracy = []
AUC = []
for i in range(500):
    cv = StratifiedKFold(n_splits=3, shuffle=True)
    for train_fold_index, val_fold_index in cv.split(X,Y):
        X_train_fold,y_train_fold = X[train_fold_index], Y[train_fold_index]
        X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
        ros = RandomOverSampler()
        X_train_fold_resample, y_train_fold_resample = ros.fit_resample(X_train_fold,y_train_fold)
        model = XGBClassifier(n_estimators=11, max_depth=3, scale_pos_weight=.25)
        model.fit(X_train_fold_resample,y_train_fold_resample)
        pt = model.predict(X_val_fold)
        #print("confusion_matrix:")
        #print(confusion_matrix(y_val_fold,pt))
        Precisions.append(precision_score(y_val_fold,pt))
        Recalls.append(recall_score(y_val_fold,pt))
        F1s.append(f1_score(y_val_fold,pt))
        G_means.append(geometric_mean_score(y_val_fold,pt))
        accuracy.append(accuracy_score(y_val_fold,pt))
        AUC.append(roc_auc_score(y_val_fold,pt))
print('Precision- Mean: %.3f Standard Deviation: %.3f' % (mean(Precisions), statistics.pstdev(Precisions)))
print('Recall- Mean: %.3f Standard Deviation: %.3f' % (mean(Recalls), statistics.pstdev(Recalls)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(F1s), statistics.pstdev(F1s)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(G_means), statistics.pstdev(G_means)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(accuracy), statistics.pstdev(accuracy)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(AUC), statistics.pstdev(AUC)))
# + [markdown] id="9SFjCCpobmmo"
# # Metrics for Naive Classifiers
# + [markdown] id="fC5SaeR5MlzQ"
# ## Majority Classifier
#
# + id="F1m6soupM54z"
# Naive Classifier
## Predicts the Majority (Mucinous) Class
## Source: https://machinelearningmastery.com/how-to-develop-and-evaluate-naive-classifier-strategies-using-probability/
## https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
# predict the majority class
def majority_class(y):
    return mode(y)[0]
# make predictions (hardcoded: the majority class here is 1, mucinous)
yhat = [1 for _ in range(len(Y))]
print(yhat)
tn, fp, fn, tp = confusion_matrix(Y, yhat).ravel()
# calculate Metrics
print('F1 : %.3f' % f1_score(Y, yhat))
print('Recall : %.3f' % recall_score(Y,yhat))
print('Precision : %.3f' % precision_score(Y,yhat))
print('Specificity : %.3f' % (tn/(tn+fp)))
print('ROC: %.3f' % roc_auc_score(Y, yhat))
print('G-Mean : %.3f' % geometric_mean_score(Y,yhat))
print('accuracy : %.3f' % accuracy_score(Y,yhat))
# + [markdown] id="Z1FNR9Gpb3CL"
# ## Minority Classifier
# + id="matJ-7UccBkp"
# predict the minority class
yhat = [0 for _ in range(len(Y))] #Hardcoded for our dataset's distribution
print(yhat)
tn, fp, fn, tp = confusion_matrix(Y, yhat).ravel()
# calculate Metrics
print('F1 : %.3f' % f1_score(Y, yhat))
print('Recall : %.3f' % recall_score(Y,yhat))
print('Precision : %.3f' % precision_score(Y,yhat))
print('Specificity : %.3f' % (tn/(tn+fp)))
print('ROC: %.3f' % roc_auc_score(Y, yhat))
print('G-Mean : %.3f' % geometric_mean_score(Y,yhat))
print('accuracy : %.3f' % accuracy_score(Y,yhat))
# + [markdown] id="Gq08zkQcb3Ni"
# ## Random Guesser
# + id="4LFUVOvkcBGO"
from statistics import mean as mean
dummy_clf = DummyClassifier(strategy="uniform")
dummy_clf.fit(X, Y)
y_predicted = dummy_clf.predict(X)
f1= []
rcll = []
prc = []
gmean = []
acc = []
spec = []
roc = []
for i in range(1000):
y_predicted = dummy_clf.predict(X)
f1.append(f1_score(Y, y_predicted))
rcll.append(recall_score(Y,y_predicted))
prc.append(precision_score(Y,y_predicted))
gmean.append(geometric_mean_score(Y,y_predicted))
acc.append(accuracy_score(Y,y_predicted))
tn, fp, fn, tp = confusion_matrix(Y, y_predicted).ravel()
spec.append(tn/(tn+fp))
roc.append(roc_auc_score(Y,y_predicted))
print('Precision - Mean: %.3f Standard Deviation: %.3f' % (mean(prc), statistics.pstdev(prc)))
print('Sensitivity/Recall - Mean: %.3f Standard Deviation: %.3f' % (mean(rcll), statistics.pstdev(rcll)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(spec), statistics.pstdev(spec)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(f1), statistics.pstdev(f1)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(gmean), statistics.pstdev(gmean)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(acc), statistics.pstdev(acc)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(roc), statistics.pstdev(roc)))
# + [markdown] id="eQEY83jib3Yh"
# ## Stratified Guesser
# + id="ztF5AgvDcAmf"
from statistics import mean as mean
dummy_clf = DummyClassifier(strategy="stratified")
dummy_clf.fit(X, Y)
f1= []
rcll = []
prc = []
gmean = []
acc = []
spec = []
roc = []
for i in range(1000):
y_predicted = dummy_clf.predict(X)
f1.append(f1_score(Y, y_predicted))
rcll.append(recall_score(Y,y_predicted))
prc.append(precision_score(Y,y_predicted))
gmean.append(geometric_mean_score(Y,y_predicted))
acc.append(accuracy_score(Y,y_predicted))
tn, fp, fn, tp = confusion_matrix(Y, y_predicted).ravel()
spec.append(tn/(tn+fp))
roc.append(roc_auc_score(Y,y_predicted))
print('Precision - Mean: %.3f Standard Deviation: %.3f' % (mean(prc), statistics.pstdev(prc)))
print('Sensitivity/Recall - Mean: %.3f Standard Deviation: %.3f' % (mean(rcll), statistics.pstdev(rcll)))
print('Specificity - Mean: %.3f Standard Deviation: %.3f' % (mean(spec), statistics.pstdev(spec)))
print('F1- Mean: %.3f Standard Deviation: %.3f' % (mean(f1), statistics.pstdev(f1)))
print('G_mean- Mean: %.3f Standard Deviation: %.3f' % (mean(gmean), statistics.pstdev(gmean)))
print('Accuracy- Mean: %.3f Standard Deviation: %.3f' % (mean(acc), statistics.pstdev(acc)))
print('AUC Score- Mean: %.3f Standard Deviation: %.3f' % (mean(roc), statistics.pstdev(roc)))
# + [markdown] id="47MNyx1Iq42W"
# # P - Values for Models
# + colab={"background_save": true} id="oTt3Wy9Aq4Tx"
# Datasets
## Full Feature Set
df = pd.read_csv(DATASET_LOCATION)
x_full = df.drop("mucinous", axis=1); #Entire dataset
Y_full = df["mucinous"].copy()
scaler = MinMaxScaler(feature_range=(0, 1))
X_full = scaler.fit_transform(x_full)
#Import Texture-Only Feature Set
df = pd.read_csv(DATASET_LOCATION)  # note: same path as the full set above; point this at the texture-only CSV
x_texture = df.drop("mucinous", axis=1); #Entire dataset
Y_texture = df["mucinous"].copy();
scaler = MinMaxScaler(feature_range=(0, 1))
X_texture = scaler.fit_transform(x_texture)
# Models
## Naive
### Majority, Minority, random, stratified
majority = DummyClassifier(strategy='constant', constant=1) #strategy='most_frequent'
minority = DummyClassifier(strategy='constant', constant=0)
random = DummyClassifier(strategy='uniform')
stratified = DummyClassifier(strategy='stratified')
random.fit(X_full, Y_full)
stratified.fit(X_full, Y_full)
## ML
### SMOTE Full Feature, SMOTE Texture-Only, XGBoost Full, XGBoost Texture-only
XGBoost = XGBClassifier(n_estimators=11, max_depth=3, scale_pos_weight=.25)
SMOTE_XGBoost = Pipeline([
('sampling', SMOTE()),
('classification', XGBoost)
])
# Scoring
## Setup
cv = StratifiedKFold(n_splits=5, random_state=12, shuffle=True)
AUC_metric = make_scorer(roc_auc_score)
g_mean_metric = make_scorer(geometric_mean_score)
p_values_AUC = {}
p_values_g_mean = {}
titles = ["AUC p-value", "G-Mean p-value"]
## AUC
_, _, pvalue = permutation_test_score(majority, X_full, Y_full, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["majority"] = pvalue
_, _, pvalue = permutation_test_score(minority, X_full, Y_full, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["minority"] = pvalue
_, _, pvalue = permutation_test_score(random, X_full, Y_full, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["random"] = pvalue
_, _, pvalue = permutation_test_score(stratified, X_full, Y_full, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["stratified"] = pvalue
_, _, pvalue = permutation_test_score(XGBoost, X_full, Y_full, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["XGBoost_Full"] = pvalue
_, _, pvalue = permutation_test_score(SMOTE_XGBoost, X_full, Y_full, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["SMOTE_Full"] = pvalue
_, _, pvalue = permutation_test_score(XGBoost, X_texture, Y_texture, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["XGBoost_Texture"] = pvalue
_, _, pvalue = permutation_test_score(SMOTE_XGBoost, X_texture, Y_texture, scoring=AUC_metric, cv=cv, n_permutations=1000)
p_values_AUC["SMOTE_Texture"] = pvalue
## G - Mean
_, _, pvalue = permutation_test_score(majority, X_full, Y_full, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["majority"] = pvalue
_, _, pvalue = permutation_test_score(minority, X_full, Y_full, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["minority"] = pvalue
_, _, pvalue = permutation_test_score(random, X_full, Y_full, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["random"] = pvalue
_, _, pvalue = permutation_test_score(stratified, X_full, Y_full, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["stratified"] = pvalue
score, _, pvalue = permutation_test_score(XGBoost, X_full, Y_full, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["XGBoost_Full"] = pvalue
_, _, pvalue = permutation_test_score(SMOTE_XGBoost, X_full, Y_full, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["SMOTE_Full"] = pvalue
_, _, pvalue = permutation_test_score(XGBoost, X_texture, Y_texture, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["XGBoost_Texture"] = pvalue
_, _, pvalue = permutation_test_score(SMOTE_XGBoost, X_texture, Y_texture, scoring=g_mean_metric, cv=cv, n_permutations=1000)
p_values_g_mean["SMOTE_Texture"] = pvalue
# Output Table
print("AUC")
print(p_values_AUC)
print("G-Mean")
print(p_values_g_mean)
# + [markdown] id="mkQr6Ld6nJ5Z"
# # Plots
# + [markdown] id="ASJWSwCZq4pe"
# ## Plot Decision Bounds
# + id="Kuxons1wmF9B"
#https://pierpaolo28.github.io/Projects/project6.html
from sklearn.decomposition import PCA
from itertools import product
pca = PCA(n_components=2,svd_solver='full')
X_pca = pca.fit_transform(X)
X_reduced, X_test_reduced, Y_Train, Y_Test = train_test_split(X_pca, Y, test_size=.2,shuffle=True,stratify=Y)
reduced_data = X_reduced
trainedmodel = XGBClassifier(n_estimators=7, max_depth=3, scale_pos_weight=.25).fit(reduced_data,Y_Train)
x_min, x_max = reduced_data[:, 0].min() - .5, reduced_data[:, 0].max() + .5
y_min, y_max = reduced_data[:, 1].min() - .5, reduced_data[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
Z = trainedmodel.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
arg_0 = np.where(Y_Train == 0)
arg_1 = np.where(Y_Train == 1)
plt.figure(figsize=(7.5,5))
plt.contourf(xx, yy, Z,cmap=plt.cm.coolwarm, alpha=0.4)
plt.scatter(reduced_data[arg_1, 0], reduced_data[arg_1, 1],
s=20, edgecolor='k', marker='^', label='Mucinous', c='purple')
plt.scatter(reduced_data[arg_0, 0], reduced_data[arg_0, 1],
s=20, edgecolor='k', c='yellow', label='Non-mucinous')
plt.title('XGBoost - Mucinous Classifier')
plt.legend(loc='upper right')
plt.show()
# + [markdown] id="DUvop-yJoPBF"
# ## Shap Model Visualization
# + id="wl4EDYDNoRYD"
#Import DataFrame from .csv file
df = pd.read_csv(DATASET_LOCATION)
#Creating labels
x = df.drop("mucinous", axis=1); #Entire dataset
Y = df["mucinous"].copy();
feature_cols = x.columns
#Scale values from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(x)
model = XGBClassifier(max_depth=3, n_estimators=11, scale_pos_weight=.25)
model.fit(X,Y)
# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn and spark models)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.summary_plot(shap_values, x)
# + id="R2Ro9B23wu3g"
#Import DataFrame from .csv file
df = pd.read_csv(DATASET_LOCATION)
#Creating labels
x = df.drop("mucinous", axis=1); #Entire dataset
Y = df["mucinous"].copy();
feature_cols = x.columns
#Scale values from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(x)
print(X.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=12,test_size=.2,shuffle=True,stratify=Y)
#print(X_train.shape, X_test.shape)
model = XGBClassifier(max_depth=3, n_estimators=8, scale_pos_weight=.25)
model.fit(X,Y)
# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn and spark models)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.summary_plot(shap_values, x)
# + id="_-a4DZrHo46j"
# load JS visualization code to notebook
shap.initjs()
model = XGBClassifier(max_depth=3, n_estimators=11, scale_pos_weight=.25)
model.fit(X,Y)
# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn and spark models)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.force_plot(explainer.expected_value, shap_values[0,:], x.iloc[0,:], matplotlib=False)
# + [markdown] id="50SDuqQx0Tel"
# ## Curves
# + [markdown] id="Wg-vk0JJ0RT2"
# ### PR Curves
# + id="76fcAAT6wCy4"
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import matplotlib.pyplot as plt
import numpy
from sklearn.datasets import make_blobs
from sklearn.metrics import precision_recall_curve, auc
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from numpy import interp
from xgboost import XGBClassifier
FOLDS = 10
f, axes = plt.subplots(figsize=(10,10))
k_fold = StratifiedKFold(n_splits=FOLDS, random_state=12, shuffle=True)
results = pd.DataFrame(columns=['training_score', 'test_score'])
y_realtot = []
y_probatot = []
precision_arraytot = []
threshold_arraytot=[]
recall_arraytot = np.linspace(0, 1, 100)
for j in range(10):
y_real = []
y_proba = []
precision_array = []
threshold_array=[]
recall_array = np.linspace(0, 1, 100)
for i, (train_index, test_index) in enumerate(k_fold.split(X,Y)):
predictor = XGBClassifier(n_estimators=32, max_depth=3, scale_pos_weight=.2875)
X_train_fold,y_train_fold = X[train_index], Y[train_index]
X_val_fold, y_val_fold = X[test_index], Y[test_index]
smoter = SMOTE(random_state=12)
X_train_fold_resample, y_train_fold_resample = smoter.fit_resample(X_train_fold,y_train_fold)
predictor.fit(X_train_fold_resample, y_train_fold_resample)
pred_proba = predictor.predict_proba(X_val_fold)
precision_fold, recall_fold, thresh = precision_recall_curve(y_val_fold, pred_proba[:,1])
precision_fold, recall_fold, thresh = precision_fold[::-1], recall_fold[::-1], thresh[::-1] # reverse order of results
thresh = np.insert(thresh, 0, 1.0)
precision_array = interp(recall_array, recall_fold, precision_fold)
threshold_array = interp(recall_array, recall_fold, thresh)
pr_auc = auc(recall_array, precision_array)
lab_fold = 'Fold %d AUC=%.4f' % (i+1, pr_auc)
#plt.plot(recall_fold, precision_fold, alpha=0.3, label=lab_fold)
y_real.append(y_val_fold)
y_proba.append(pred_proba[:,1])
y_real = numpy.concatenate(y_real)
y_proba = numpy.concatenate(y_proba)
precision, recall, _ = precision_recall_curve(y_real, y_proba)
lab_foldtot = 'PR %d AUC=%.4f' % (j+1, auc(recall, precision))
plt.plot(recall, precision, marker='.' ,alpha=0.3, label=lab_foldtot)
y_realtot.append(y_real)
y_probatot.append(y_proba)
precision_arraytot = interp(recall_array, recall, precision)
threshold_arraytot = interp(recall_array, recall, precision)
#plt.plot(recall_fold, precision_fold, alpha=0.3, label=lab_fold)
#finsih 10 iterations.
y_realtot = numpy.concatenate(y_realtot)
y_probatot= numpy.concatenate(y_probatot)
precision, recall, _ = precision_recall_curve(y_realtot, y_probatot)
lab = 'Overall AUC=%.4f' % (auc(recall, precision))
plt.plot(recall, precision, marker='.', lw=2,color='red', label=lab)
plt.legend(loc='lower left', fontsize=18)
lab = 'Overall AUC=%.4f' % (auc(recall, precision))
mean_precision = np.mean(precision)
mean_recall = np.mean(recall)
std_precision = np.std(precision)
print ("mean of precision: " )
print (mean_precision )
print ("Std Dev of precision: ")
print ( std_precision )
# print ("mean of recall: " )
# print (mean_precision )
axes.set_title('10 Independent PR Curves of XGBoost Over 10-Fold Cross-Validation', fontsize=18)
plt.fill_between(recall, precision + std_precision, precision - std_precision, alpha=0.3, linewidth=0, color='grey')
plt.xlabel("Recall", fontsize=18)
plt.ylabel("Precision", fontsize=18)
plt.ylim((0,1))
plt.xlim((0,1))
plt.show()
f.savefig('result.png')
print (precision)
print (recall)
print (_)
# + [markdown] id="-rhGEp140nYK"
# ### ROC
# + id="l-RpAgWplpXf"
## ROC Curve for 5-Fold Cross Validation with SMOTE oversampling
# Source: https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/auto_examples/plot_roc_crossval.html
# #############################################################################
# Run classifier with cross-validation and plot ROC curves
from sklearn import metrics
df = pd.read_csv('/content/drive/My Drive/CT Analysis/Data Sets/mucinous_processed.csv')
#Creating labels
full_x = df.drop("mucinous", axis=1); #Entire dataset
full_Y = df["mucinous"].copy();
scaler = MinMaxScaler(feature_range=(0, 1))
full_x = scaler.fit_transform(full_x)
df = pd.read_csv('/content/drive/My Drive/CT Analysis/Data Sets/texture_feature_set_mucinous_processed.csv')
#Creating labels
texture_x = df.drop("mucinous", axis=1); #Entire dataset
texture_Y = df["mucinous"].copy();
scaler = MinMaxScaler(feature_range=(0, 1))
texture_x = scaler.fit_transform(texture_x)
cv = StratifiedKFold(n_splits=5, shuffle=True)
#classifier = RandomForestClassifier(n_estimators=25,max_depth=20, class_weight='balanced')
plt.rcParams["figure.figsize"] = [14,10]
tprs_full = []
aucs_full = []
mean_fpr_full = np.linspace(0, 1, 100)
tprs_text = []
aucs_text = []
mean_fpr_text = np.linspace(0, 1, 100)
fig, full = plt.subplots()
fig, text = plt.subplots()
fig, both = plt.subplots()
for j in range(500):
for i, (train_fold_index, val_fold_index) in enumerate(cv.split(full_x, full_Y)):
X_train_full,y_train_full = full_x[train_fold_index], full_Y[train_fold_index]
X_val_full, y_val_full = full_x[val_fold_index], full_Y[val_fold_index]
X_train_text,y_train_text = texture_x[train_fold_index], texture_Y[train_fold_index]
X_val_text, y_val_text = texture_x[val_fold_index], texture_Y[val_fold_index]
classifier_full = XGBClassifier(n_estimators=11, max_depth=3, scale_pos_weight=.25)
classifier_full.fit(X_train_full,y_train_full)
classifier_text = XGBClassifier(n_estimators=8, max_depth=3, scale_pos_weight=.25)
classifier_text.fit(X_train_text,y_train_text)
y_scores_full = classifier_full.predict_proba(X_val_full)[:, 1]
fpr_full, tpr_full, thresholds_full = metrics.roc_curve(y_val_full, classifier_full.predict_proba(X_val_full)[:, 1])
y_scores_text = classifier_text.predict_proba(X_val_text)[:, 1]
fpr_text, tpr_text, thresholds_text = metrics.roc_curve(y_val_text, classifier_text.predict_proba(X_val_text)[:, 1])
interp_tpr_full = np.interp(mean_fpr_full, fpr_full, tpr_full)
interp_tpr_full[0] = 0.0
tprs_full.append(interp_tpr_full)
aucs_full.append(metrics.auc(fpr_full, tpr_full))
interp_tpr_text = np.interp(mean_fpr_text, fpr_text, tpr_text)
interp_tpr_text[0] = 0.0
tprs_text.append(interp_tpr_text)
aucs_text.append(metrics.auc(fpr_text, tpr_text))
### Full Feature Plot
full.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr_full = np.mean(tprs_full, axis=0)
mean_tpr_full[-1] = 1.0
mean_auc_full = auc(mean_fpr_full, mean_tpr_full)
std_auc_full = np.std(aucs_full)
full.plot(mean_fpr_full, mean_tpr_full, color='b',
label=r'Mean ROC of Full Feature Set(AUC = %0.2f $\pm$ %0.2f)' % (mean_auc_full, std_auc_full),
lw=2, alpha=.8)
std_tpr_full = np.std(tprs_full, axis=0)
tprs_upper_full = np.minimum(mean_tpr_full + std_tpr_full, 1)
tprs_lower_full = np.maximum(mean_tpr_full - std_tpr_full, 0)
full.fill_between(mean_fpr_full, tprs_lower_full, tprs_upper_full, color='blue', alpha=.1,
label=r'$\pm$ 1 std. dev.')
full.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic")
full.legend(loc="lower right")
### Texture Only Plot
text.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr_text = np.mean(tprs_text, axis=0)
mean_tpr_text[-1] = 1.0
mean_auc_text = auc(mean_fpr_text, mean_tpr_text)
std_auc_text = np.std(aucs_text)
text.plot(mean_fpr_text, mean_tpr_text, color='g',
label=r'Mean ROC of Texture Feature Set(AUC = %0.2f $\pm$ %0.2f)' % (mean_auc_text, std_auc_text),
lw=2, alpha=.8)
std_tpr_text = np.std(tprs_text, axis=0)
tprs_upper_text = np.minimum(mean_tpr_text + std_tpr_text, 1)
tprs_lower_text = np.maximum(mean_tpr_text - std_tpr_text, 0)
text.fill_between(mean_fpr_text, tprs_lower_text, tprs_upper_text, color='green', alpha=.1,
label=r'$\pm$ 1 std. dev.')
text.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic")
text.legend(loc="lower right")
### Combined Plot
## Full Features
both.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr_full = np.mean(tprs_full, axis=0)
mean_tpr_full[-1] = 1.0
mean_auc_full = auc(mean_fpr_full, mean_tpr_full)
std_auc_full = np.std(aucs_full)
both.plot(mean_fpr_full, mean_tpr_full, color='b',
label=r'Mean ROC of Full Feature Set(AUC = %0.2f $\pm$ %0.2f)' % (mean_auc_full, std_auc_full),
lw=2, alpha=.8)
std_tpr_full = np.std(tprs_full, axis=0)
tprs_upper_full = np.minimum(mean_tpr_full + std_tpr_full, 1)
tprs_lower_full = np.maximum(mean_tpr_full - std_tpr_full, 0)
both.fill_between(mean_fpr_full, tprs_lower_full, tprs_upper_full, color='blue', alpha=.1,
label=r'$\pm$ 1 std. dev.')
both.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic")
both.legend(loc="lower right")
## Texture Features
mean_tpr_text = np.mean(tprs_text, axis=0)
mean_tpr_text[-1] = 1.0
mean_auc_text = auc(mean_fpr_text, mean_tpr_text)
std_auc_text = np.std(aucs_text)
both.plot(mean_fpr_text, mean_tpr_text, color='g',
label=r'Mean ROC of Texture Feature Set(AUC = %0.2f $\pm$ %0.2f)' % (mean_auc_text, std_auc_text),
lw=2, alpha=.8)
std_tpr_text = np.std(tprs_text, axis=0)
tprs_upper_text = np.minimum(mean_tpr_text + std_tpr_text, 1)
tprs_lower_text = np.maximum(mean_tpr_text - std_tpr_text, 0)
both.fill_between(mean_fpr_text, tprs_lower_text, tprs_upper_text, color='green', alpha=.2,
label=r'$\pm$ 1 std. dev.')
both.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic")
both.legend(loc="lower right")
plt.show()
# + [markdown] id="wLkq9_6UaV2v"
# ## Permutation Testing
# + id="90irGYfqfuVm"
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import permutation_test_score
from imblearn.metrics import geometric_mean_score
from xgboost import XGBClassifier
from sklearn.metrics import make_scorer
#Uses test 1 described here:
# http://www.jmlr.org/papers/volume11/ojala10a/ojala10a.pdf
# #############################################################################
n_classes = np.unique(Y).size
cv = StratifiedKFold(n_splits=5, random_state=12, shuffle=True)
xgb = XGBClassifier(n_estimators=32, max_depth=3, scale_pos_weight=.2875)
metric=make_scorer(geometric_mean_score)
score, permutation_scores, pvalue = permutation_test_score(
xgb,X, Y, scoring=metric, cv=cv, n_permutations=1000)
print("Classification score %s (pvalue : %s)" % (score, pvalue))
# #############################################################################
# View histogram of permutation scores
plt.figure(figsize=(12,6))
plt.hist(permutation_scores, 20, label='Permutation scores',
edgecolor='black')
ylim = plt.ylim()
plt.plot(2 * [score], ylim, '--g', linewidth=3,
label='Classification Score'
' (pvalue %s)' % pvalue)
plt.plot(2 * [1. /n_classes], ylim, '--k', linewidth=3, label='Luck')
#plt.plot(2 * [luck_new], ylim, '--k', linewidth=3, label='Luck')
plt.ylim(ylim)
plt.legend()
plt.xlabel('Score')
plt.show()
# + [markdown] id="OwyOGmkpc9TW"
# # Feature Selection
# + id="JpUcD1IfgzJr"
#Creating labels
x1 = df2
Y = df["mucinous"].copy();
feature_cols = x1.columns
#Scale values from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
X1 = scaler.fit_transform(x1)
print(X1.shape)
# + id="_vTyPWligzAS"
# most important feature function
def Important_fetures(mymodel,featuredict):
import numpy as np
import sklearn as sk
import sklearn.datasets as skd
import matplotlib.pyplot as plt
# %matplotlib inline
importances = mymodel.feature_importances_
indice = np.argsort(importances)[::-1]
indices = indice [:30]
# Print the feature ranking
# print("Feature ranking:")
num=0
with open(OUTPUT_LOCATION_OF_FEATURE_FILE, "w") as txt_file:
for f in indices:
indexname = f;
num+=1;
#print("%d. feature: %s (%f)" % (num, feature_cols[indexname], importances[indexname]))
if feature_cols[indexname] in featuredict:
featuredict[feature_cols[indexname]][0] += 1
featuredict[feature_cols[indexname]][1] += importances[indexname]
else:
featuredict[feature_cols[indexname]] = [1,importances[indexname]]
# + id="ylZcoJlLgzDE"
# K-fold
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score,LeaveOneOut
from imblearn.over_sampling import RandomOverSampler,SMOTE
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from imblearn.metrics import geometric_mean_score
from statistics import mean
from xgboost import XGBClassifier
featuredict = {}
for x in range(1000):
cv = StratifiedKFold(n_splits=5, shuffle=True)
Precisons = []
Recalls = []
F1s = []
G_means = []
for train_fold_index, val_fold_index in cv.split(X,Y):
X_train_fold,y_train_fold = X[train_fold_index], Y[train_fold_index]
X_val_fold, y_val_fold = X[val_fold_index], Y[val_fold_index]
#smoter = SMOTE()
#X_train_fold_resample, y_train_fold_resample = smoter.fit_resample(X_train_fold,y_train_fold)
model = XGBClassifier(n_estimators=8, max_depth=3, scale_pos_weight=.25)
#model.fit(X_train_fold_resample,y_train_fold_resample)
model.fit(X_train_fold,y_train_fold)
pt = model.predict(X_val_fold)
Important_fetures(model,featuredict)
# print("confusion_matrix:")
# print(confusion_matrix(y_val_fold,pt))
Precisons.append(precision_score(y_val_fold,pt))
Recalls.append(recall_score(y_val_fold,pt))
F1s.append(f1_score(y_val_fold,pt))
G_means.append(geometric_mean_score(y_val_fold,pt))
# + id="6vd8WAEWgzFp"
#List ranked by average
import operator
import collections
Avg = {}
Ocurr = {}
Tavg ={}
for key in featuredict:
Avg[key] = [featuredict[key][1]/featuredict[key][0],featuredict[key][0]]
Ocurr[key] = featuredict[key][0]
Tavg[key] = featuredict[key][1]/5000  # total over 1000 runs x 5 folds
AvgRank = sorted(Avg.items(),key=lambda kv: kv[1][0],reverse=True)
OcurrRank = sorted(Ocurr.items(),key=lambda x: x[1],reverse=True)
TavRank = sorted(Tavg.items(),key=lambda kv: kv[1],reverse=True)
sortedAvg = {}
for i in AvgRank:
sortedAvg[i[0]] = [i[1][0],i[1][1]]
Occurdf = pd.DataFrame.from_dict(OcurrRank)
Avgdf = pd.DataFrame.from_dict(sortedAvg, orient='index', columns=['Avg. Value', 'Occurrence'])
Tavdf = pd.DataFrame.from_dict(TavRank)
Occurdf.columns = ['Feature', 'Occurrence']
Avgdf.to_csv('Average feature Importance.CSV')
Occurdf.to_csv('Occurrence.CSV')
Tavdf.to_csv('TSC.CSV')
# + id="jE5aWmcQgzHz"
a = 0
df2 = df
for (columnName, columnData) in df.items():
if columnName not in Avg:
a +=1
df2 = df2.drop(columnName, axis=1)
# + id="9k8FIL6agzMk"
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score,LeaveOneOut
from imblearn.over_sampling import RandomOverSampler,SMOTE
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from imblearn.metrics import geometric_mean_score
from statistics import mean
from xgboost import XGBClassifier
cv = StratifiedKFold(n_splits=5, random_state=12, shuffle=True)
Precisons = []
Recalls = []
F1s = []
G_means = []
accuracy = []
for train_fold_index, val_fold_index in cv.split(X1,Y):
X_train_fold,y_train_fold = X1[train_fold_index], Y[train_fold_index]
X_val_fold, y_val_fold = X1[val_fold_index], Y[val_fold_index]
smoter = SMOTE(random_state=12)
X_train_fold_resample, y_train_fold_resample = smoter.fit_resample(X_train_fold,y_train_fold)
model = XGBClassifier(n_estimators=32, max_depth=3, scale_pos_weight=.2875)
model.fit(X_train_fold_resample,y_train_fold_resample)
pt = model.predict(X_val_fold)
print("confusion_matrix:")
print(confusion_matrix(y_val_fold,pt))
Precisons.append(precision_score(y_val_fold,pt))
Recalls.append(recall_score(y_val_fold,pt))
F1s.append(f1_score(y_val_fold,pt))
G_means.append(geometric_mean_score(y_val_fold,pt))
accuracy.append(accuracy_score(y_val_fold,pt))
print('Precision: ',mean(Precisons))
print('Recall: ',mean(Recalls))
print('F1: ',mean(F1s))
print('G_mean: ',mean(G_means))
print('Accuracy: ',mean(accuracy))
print(AvgRank)
# Source notebook: Mucinous/Mucinous.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn import metrics
import os, sys
from phm08ds.models import experiment
# -
# ## Load Dataset
folderpath = '../../data/interim/'
data_completed = pd.read_csv(folderpath + 'data_preprocessed.csv')
data_completed.head()
# ## Data preprocessing
# Use the pipeline and mlp
# +
from phm08ds.data.preprocessing import OperationalCondition
data_unlabel = data_completed.drop(labels=['Health_state', 'Operational_condition'], axis=1)
tf_op_cond = OperationalCondition()
op_cond = tf_op_cond.fit_transform(data_unlabel.loc[0])
from phm08ds.features.feature_selection import RemoveSensor
tf_select_sensor = RemoveSensor(sensors=[1,2,3,6,8,10,11,12,13,14,19,20])
data_important_sensors = tf_select_sensor.fit_transform(data_unlabel).iloc[:,5:]
from sklearn.preprocessing import StandardScaler
tf_std = StandardScaler()
data_elman = tf_std.fit_transform(data_important_sensors)
# -
data_elman
labels = np.array(data_completed['Health_state'])
# +
from sklearn.preprocessing import LabelEncoder
tf_label_encoder = LabelEncoder()
# -
labels = tf_label_encoder.fit_transform(labels) + 1
labels = labels.reshape(labels.shape[0],1)
labels
# # Classification steps
# ## How to use Elman network of neurolab
# Following the example at https://pythonhosted.org/neurolab/ex_newelm.html
import neurolab as nl
# +
min_list = []
max_list = []
for feature in range(0,data_elman.shape[1]):
min_list.append(data_elman[:,feature].min())
max_list.append(data_elman[:,feature].max())
min_max_list = list(map(list, list(zip(min_list, max_list))))
min_max_list
# -
# from sklearn.preprocessing import LabelBinarizer
#
# target_tf = LabelBinarizer()
# labels_encoded = target_tf.fit_transform(labels_op_1)
elman_clf = nl.net.newelm(min_max_list, [50,1], [nl.trans.TanSig(), nl.trans.PureLin()])
# Set initialized functions and init
elman_clf.layers[0].initf = nl.init.InitRand([-0.1, 0.1], 'wb')
elman_clf.layers[1].initf= nl.init.InitRand([-0.1, 0.1], 'wb')
elman_clf.init()
# Train network
error = elman_clf.train(data_elman, labels, epochs=1, goal=0.1, adapt=True, show=1)
# Simulate network
output = elman_clf.sim(data_elman)
plt.plot(error)
# ### Test the network
real_targets = labels.reshape(-1)
real_targets
# +
new_output = []
for k in range(0,len(output)):
new_output.append(int(output[k].round()))
# -
predicted_targets = np.array(new_output)
# +
until_to = 1100
plt.figure()
plt.plot(real_targets[:until_to])
plt.plot(output[:until_to])
# -
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
accuracy_score(real_targets, predicted_targets)
confusion_matrix(real_targets, predicted_targets)
# Source notebook: notebooks/E10_PHM08-train_Elman/0.3-warm_up-navarmn.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Heat (diffusion) equation
#
# ```{index} Heat (diffusion) equation
# ```
#
# The heat equation is the second of the three important PDEs we consider.
#
# $$ u_t = k^2 \nabla^2 u $$
#
# where $u( \mathbf{x}, t)$ is the temperature at a point $\mathbf{x}$ and time $t$ and $k^2$ is a constant with dimensions length$^2 \ \times$ time$^{-1}$. It is a parabolic PDE.
#
# The heat equation includes the $\nabla^2 u$ term which, if you recall from the previous notebook, is related to the mean of an infinitesimal circle or sphere centered at a point $p$:
#
# $$ \nabla^2 u \ \sim \ \overline{u} - u(p) $$
#
# That means that the rate $\partial u \ / \ \partial t$ at a point $p$ will be proportional to how much hotter or colder the surrounding material is. This agrees with our everyday intuition about diffusion and heat flow.
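# A tiny numerical illustration of this (a supplementary sketch, not part of the
# original text): on a 1-D grid the discrete Laplacian u[i-1] - 2*u[i] + u[i+1]
# is twice the difference between the neighbour mean and u[i], so an explicit
# time step of the heat equation warms a cold spot toward its surroundings.

```python
import numpy as np

# One explicit Euler step of u_t = u_xx on a 1-D grid with a cold spot.
u = np.array([1.0, 1.0, 0.0, 1.0, 1.0])  # cold spot at the centre
dx, dt = 1.0, 0.1                        # dt < dx**2 / 2 for stability
lap = u[:-2] - 2 * u[1:-1] + u[2:]       # interior second difference
u_new = u.copy()
u_new[1:-1] = u[1:-1] + dt * lap / dx**2
print(u_new)  # the centre value rises toward the neighbourhood mean
```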
# (heat_separation_of_variables)=
# ## Separation of variables
#
# ```{index} Separation of variables
# ```
#
# The reader may have seen in Mathematics for Scientists and Engineers how the separation of variables method can be used to solve the heat equation on a bounded domain. However, this method requires a pair of homogeneous boundary conditions, which is quite a strict requirement!
#
# (inhomog_bcs_2_homog)=
# ### Transforming inhomogeneous BCs to homogeneous
#
# Consider a general 1 + 1 dimensional heat equation with inhomogeneous boundary conditions:
#
# $$ \begin{aligned}
# \text{PDE} \qquad & u_t = k^2 u_{xx}, \qquad 0<x<L, \quad 0<t< \infty \\ \\
# \text{BCs} \qquad & \begin{cases}
# a_1 u(t, 0) + b_1 u_x(t, 0) = g_1(t) \\
# a_2 u(t, L) + b_2 u_x(t, L) = g_2(t)
# \end{cases} \quad 0<t< \infty \\ \\
# \text{IC} \qquad & u(x, 0) = \phi(x), \quad 0 \leq x \leq L
# \end{aligned} $$
#
# For separation of variables to be successful, we need to transform the boundary conditions to homogeneous ones. We do that by seeking the solution of the form \\( u(x,t) = U(x,t) + S(x,t) \\) where $S$ is of the form
#
# \\[ S(t,x) = A(t) \left( 1 - \frac{x}{L} \right) + B(t) \frac{x}{L} \\]
#
# and \\( A(t), B(t) \\) are unknown functions chosen such that \\( S(t,x) \\) satisfies the original boundary conditions
#
# \\[ a_1 S(t, 0) + b_1 S_x(t, 0) = g_1(t) \\
# a_2 S(t,L) + b_2 S_x(t,L) = g_2(t) \\]
#
# or after substituting in $S$:
#
# \\[ a_1 A(t) + \frac{b_1}{L} \big( {-A(t)} + B(t) \big) = g_1(t) \\
# a_2 B(t) + \frac{b_2}{L} \big( {-A(t)} + B(t) \big) = g_2(t) \\]
#
# This is a simple system of two linear equations for $A$ and $B$ which we can solve using Cramer's rule. Substituting \\(u = U + S\\) in the original PDE and noting that \\( S_{xx} = 0 \\) (since $S$ is linear in $x$), we get, in general, an inhomogeneous PDE
#
# \\[ U_t - k^2 U_{xx} = -S_t \\]
#
# but the boundary conditions are now homogeneous
#
# \\[ a_1 U(t,0) + b_1 U_x(t,0) = 0 \\
# a_2 U(t,L) + b_2 U_x(t,L) = 0 \\]
#
# and the initial condition becomes
#
# \\[ U(0, x) = \phi(x) - S(0,x) \\]
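# The algebra above is easy to check symbolically. The following sketch (using `sympy`; an assumption, not part of the original notebook) solves the $2 \times 2$ system for $A$ and $B$, treating $A, B, g_1, g_2$ as the values of the time-dependent functions at a fixed $t$, and then substitutes the coefficients of the example in the next section, recovering $A = B = 1$.

```python
import sympy as sp

a1, b1, a2, b2, L = sp.symbols('a1 b1 a2 b2 L', positive=True)
A, B, g1, g2 = sp.symbols('A B g1 g2')

# boundary conditions applied to S(t, x) = A(t)(1 - x/L) + B(t) x/L
eq1 = sp.Eq(a1*A + (b1/L)*(-A + B), g1)
eq2 = sp.Eq(a2*B + (b2/L)*(-A + B), g2)
sol = sp.solve([eq1, eq2], [A, B])

# coefficients of the example below: a1 = 1, b1 = 0, a2 = b2 = 1, L = 1, g1 = g2 = 1
vals = {a1: 1, b1: 0, a2: 1, b2: 1, L: 1, g1: 1, g2: 1}
print(sp.simplify(sol[A].subs(vals)), sp.simplify(sol[B].subs(vals)))  # 1 1
```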
# ### Example
#
# Let us solve the initial-boundary-value problem (Farlow 1993, p.47 problem 1):
#
# $$ \begin{aligned}
# \text{PDE} \qquad & u_t = k^2 u_{xx}, \qquad 0<x<1, \quad 0<t< \infty \\ \\
# \text{BCs} \qquad & \begin{cases}
# u(t, 0) = 1 \\
# u(t, 1) + u_x(t, 1) = 1
# \end{cases} \quad 0<t< \infty \\ \\
# \text{IC} \qquad & u(0, x) = \sin (\pi x) + 1, \quad 0 \leq x \leq 1
# \end{aligned} $$
#
# where \\(u \equiv u(t, x)\\) is temperature in the domain. For simplicity let us choose \\(k^2 = 1\\). Note that mathematically it doesn't really matter what \\( k^2 \\) is, since we can always rescale \\( t \\) such that \\(k^2\\) becomes unity (in engineering it, of course, matters).
#
# Let us draw a simple diagram to visualise our domain and auxiliary conditions.
# + tags=["hide-input"]
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5, 7))
ax = fig.add_subplot(111)
ax.plot([0, 0, 1, 1], [1.5, 0, 0, 1.5], 'C0', linewidth=3)
ax.plot([1, 0], [1.5, 1.5], '--', c='C0')
ax.text(0.2, 0.05, r'$u(0,x) = \sin (\pi x) + 1$', fontsize=14)
ax.text(0.05, 0.6, r'$u(t, 0) = 1$', rotation=90, fontsize=14)
ax.text(1.05, 0.5, r'$u(t, 1) + u_x(t,1)= 1$', rotation=90, fontsize=14)
ax.text(0.35, 0.7, r'$u_t = u_{xx}$', fontsize=16)
ax.set_xlim(-0.1, 1.1)
ax.set_ylim(-0.1, 1.6)
ax.set_aspect('equal')
ax.set_xlabel('x')
ax.set_ylabel('t')
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xticks([1])
ax.set_yticks([])
plt.show()
# -
# First we have to transform the BCs to homogeneous ones. Looking at the BCs, we could guess that we only need to translate $u$ by $+1$, i.e. with \\( u = U + 1 \\) both BCs become homogeneous for $U$. Indeed, following the above procedure we get \\( A = B = 1 \\), so \\( S = 1 - x/L + x/L = 1 \\) and our transformation is, as we expected,
#
# \\[ u = U + S = U + 1 \\]
#
# Substituting this in the original problem we get the transformed problem
#
# \\[ \begin{aligned}
# \text{PDE} \qquad & U_t = U_{xx} \\ \\
# \text{BCs} \qquad & \begin{cases}
# U(t, 0) = 0 \\
# U(t, 1) + U_x(t, 1) = 0
# \end{cases} \\ \\
# \text{IC} \qquad & U(0, x) = \sin (\pi x)
# \end{aligned} \\]
#
# which we can now solve using separation of variables. We seek solutions of the form \\( U = X(x)T(t) \\) and substitute it into our PDE and divide both sides by \\( XT \\) to get **separated variables**:
#
# \\[ \frac{T'}{T} = \frac{X''}{X} \\]
#
# where the LHS depends only on \\( t \\) and the RHS only on \\( x \\). Since \\( x, t \\) are independent of each other, each side must equal a constant, say \\( \alpha \\). We get two ODEs:
#
# \\[ T' - \alpha T = 0 \\ X'' - \alpha X = 0 \\]
#
# Now notice that \\( \alpha \\) must be negative, i.e. \\( \alpha = - \lambda^2 \\) since then \\( T' = -\lambda^2 T \\) has the solution \\( T(t) = C \exp (-\lambda^2 t) \\) which decays with time, as it should (instead of growing to \\( \infty \\)). Then \\( X(x) = D \sin (\lambda x) + E \cos (\lambda x) \\) and multiplying them together:
#
# \\[ U(x, t) = e^{-\lambda^2 t} \left[ D \sin (\lambda x) + E \cos (\lambda x) \right] \\]
#
# where \\( D, E \\) are arbitrary constants (multiplying them by \\( C \\), another constant, doesn't matter). We now have infinitely many *simple* solutions to \\( u_t = u_{xx}\\). The solutions are simple because the temperature profile keeps the same spatial "shape" for every value of \\( t \\); it simply decays exponentially in time.
# + tags=["hide-input"]
import numpy as np
xx = np.linspace(0, 1, 51)
tt = [0, 0.015, 0.05, 1]
fig = plt.figure(figsize=(8, 3))
ax = fig.add_subplot(111)
for t in tt:
U = np.exp(-25*t) * (np.sin(5*xx) + np.cos(5*xx))
ax.plot(xx, U, label=f'U(x, {t})')
ax.legend(loc='best')
plt.show()
# -
# However, as we can see from the figure above, not all of these solutions satisfy the auxiliary conditions; we are only interested in the ones which do. So we substitute \\( U \\) in the BCs:
#
# \\[ U(t, 0) = E \ e^{- \lambda^2 t} = 0 \quad \Rightarrow \quad E = 0 \\
# U(t, 1) + U_x(t, 1) = D \ e^{- \lambda^2 t} ( \sin \lambda + \lambda \cos \lambda) = 0 \\]
#
# where we choose \\( D \neq 0 \\) as we are interested in non-trivial solutions, so we have \\( \sin \lambda + \lambda \cos \lambda = 0 \\).
# + tags=["hide-input"]
x = np.linspace(-4, 15, 501)
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111)
ax.plot(x, np.sin(x) + x*np.cos(x), label=r'$y = \sin \lambda + \lambda \cos \lambda$')
ax.legend(loc='best')
ax.set_xlim(-3, 12)
ax.set_ylim(-12, 10)
ax.set_xlabel(r'$\lambda$')
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_yticks([])
plt.show()
# -
# We have to find the roots numerically, which we do below.
# +
# find roots of f(x) = sinx + x cosx = 0
from scipy.optimize import fsolve
def f(x):
return np.sin(x) + x*np.cos(x)
# the list contains initial guesses, which we read off the graph above
lambdas = fsolve(f, [2, 5, 8, 11, 14])
for i in range(len(lambdas)):
print(f'lambda {i+1} = {lambdas[i]}')
# -
# Also note that we may only consider positive roots \\( \lambda \\) because \\( \sin (- \lambda) + (-\lambda) \cos (-\lambda) = -\big( \sin \lambda + \lambda \cos \lambda \big) \\), so the roots come in \\( \pm \\) pairs and a negative root gives the same solution up to the sign of \\( D \\).
#
# There are infinitely many roots \\( \lambda_i \\), which we call the **eigenvalues**, so we have infinitely many solutions
#
# \\[ X_n (x) = D_n \sin ( \lambda_n x) \\
# T_n (t) = e^{- \lambda_n^2 t} \\]
#
# and multiplying them together we get the **eigenfunctions**:
#
# \\[ U_n (x, t) = D_n e^{- \lambda_n^2 t} \sin ( \lambda_n x) \\]
#
# Each one of these eigenfunctions satisfies the PDE and the BCs.
# +
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
ax = ax.flatten(order='F')
for i, lam in enumerate(lambdas[:-1]):
U = np.exp(-lam**2) * np.sin(lam*xx)
ax[i].plot(xx, U, c=f'C{i}', label=fr'$\lambda_{i+1} = {lam:.3f}$')
ax[i].legend(loc='best')
ax[i].set_xlabel('x')
ax[i].set_ylabel('U')
ax[i].spines['left'].set_position('zero')
ax[i].spines['bottom'].set_position('zero')
ax[i].spines['right'].set_visible(False)
ax[i].spines['top'].set_visible(False)
ax[i].set_xticks([0, 1])
ax[i].set_yticks([])
fig.suptitle(r'$U_n(1, x) = e^{-\lambda_n^2} \sin (\lambda_n x)$', fontsize=14)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.show()
# -
# But we still need to satisfy the initial condition. Let us assume that we could do that by summing up all functions \\( U_n \\) (which we are allowed to do because of linearity). That is, consider a Fourier series of the eigenfunctions:
#
# \\[ U(t, x) = \sum_{n=1}^\infty U_n = \sum_{n=1}^\infty D_n e^{- \lambda_n^2 t} \sin ( \lambda_n x) \\]
#
# where we want to choose coefficients \\( D_n \\) such that the initial condition \\( U(0, x) = \sin(\pi x) \\) is satisfied:
#
# \\[ U(0, x) = \sum_{n=1}^\infty D_n \sin ( \lambda_n x) = \sin (\pi x) \\]
#
# To find the coefficients \\( D_n \\) we multiply both sides by \\( \sin (\lambda_n x) \\), integrate from \\( 0 \\) to \\( 1 \\) and use the orthogonality of the eigenfunctions (the cross terms integrate to zero):
#
# \\[ D_n \int_0^1 \sin^2 ( \lambda_n x) \ dx = \int_0^1 \sin (\pi x) \sin (\lambda_n x) \ dx \\]
#
# and solve for \\( D_n \\):
#
# \\[ D_n = \frac{\int_0^1 \sin (\pi x) \sin (\lambda_n x) \ dx}{\int_0^1 \sin^2 ( \lambda_n x) \ dx} \\]
# + tags=["hide-input"]
x = np.linspace(0, 1, 201)
x0 = [2 + k*np.pi for k in np.arange(100)] # initial guesses
lambdas = fsolve(f, x0)
U = 0
for i, lam in enumerate(lambdas, 1):
q1 = np.pi*np.sin(lam) / (np.pi**2 - lam**2) # numerator
q2 = 0.5 - np.sin(2*lam) / (4*lam) # denominator
D = q1/q2
if i < 7:
print(f'D_{i} = {D}')
U += D*np.sin(lam*x)
plt.plot(x, U, label='U(0, x)')
plt.plot(x, np.sin(np.pi*x), '--', label=r'$\sin (\pi x)$')
plt.legend(loc='best')
plt.title('n = 100')
plt.show()
# -
# And now \\( U \\) satisfies the PDE, BCs and IC. So our solution \\(u = U + S \\) is
#
# \\[ u(x, t) = 1 + \sum_{n=1}^\infty D_n e^{- \lambda_n^2 t} \sin ( \lambda_n x) \\]
#
# where \\( \lambda_n \\) and \\( D_n \\) are given above.
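# As a closing sanity check, we can sum a truncated series numerically and confirm that it reproduces the initial condition at $t = 0$ and relaxes to the steady state $u = 1$ for large $t$. This is only a sketch with 50 terms, reusing the eigenvalue equation and the $D_n$ derived above.

```python
import numpy as np
from scipy.optimize import fsolve

# positive roots of sin(lam) + lam*cos(lam) = 0 (the eigenvalues)
f = lambda lam: np.sin(lam) + lam*np.cos(lam)
lambdas = fsolve(f, [2 + k*np.pi for k in range(50)])

def u(t, x):
    """Truncated series solution u = 1 + sum_n D_n exp(-lam_n^2 t) sin(lam_n x)."""
    total = np.ones_like(x)
    for lam in lambdas:
        D = (np.pi*np.sin(lam)/(np.pi**2 - lam**2)) / (0.5 - np.sin(2*lam)/(4*lam))
        total = total + D*np.exp(-lam**2*t)*np.sin(lam*x)
    return total

x = np.linspace(0, 1, 101)
print(np.max(np.abs(u(0, x) - (np.sin(np.pi*x) + 1))))  # truncation error in the IC
print(np.max(np.abs(u(5, x) - 1)))                      # essentially the steady state
```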
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 - python
# language: python
# name: ipython_python
# ---
# # Linear regression with Professor Mittens, a.k.a. recipe for linear regression.
#
# ## Overview
#
# In this notebook we will learn how to use regression to study the factors that affect the number of pats cats will receive. This will start with a visual inspection of the data, followed by the development of a linear model to explain the data. Along the way we will answer a few questions, such as: does coat colour influence the number of pats? Is a long coat better than a short coat? And how important is the volume of a meow?
#
# ## Specifying regression models
#
# A very popular way to describe regression models is with "formulae" as popularised by R. The [R documentation on formulae](https://cran.r-project.org/doc/manuals/R-intro.html#Formulae-for-statistical-models) is a good place to learn how to use these properly. For example, here is the syntax we will use today,
#
# - `y ~ x1 + x2` will make a linear model with the predictors $x_1$ and $x_2$.
# - `y ~ x1 * x2` includes the terms $x_1 + x_2 + x_1x_2$
# - `y ~ x1 : x2` includes *just* the interaction term $x_1x_2$
# - `y ~ C(x)` specifies that $x$ is a categorical variable. **NOTE** this is not necessary in R.
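# To make the syntax concrete, here is a minimal `statsmodels` fit on a tiny made-up data frame (the column names and values are invented for illustration). Since `y = 2*x1 + 3*x2` exactly, OLS recovers the coefficients:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    'x1': [1, 2, 3, 4, 5, 6],
    'x2': [0, 1, 0, 1, 0, 1],   # a two-level categorical coded as 0/1
})
df['y'] = 2*df['x1'] + 3*df['x2']

fit = smf.ols('y ~ x1 + C(x2)', data=df).fit()
print(fit.params)  # Intercept ~ 0, x1 ~ 2, C(x2)[T.1] ~ 3
```

# Note how the categorical term appears in the output as `C(x2)[T.1]`, the effect of level 1 relative to the baseline level 0.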
# %matplotlib inline
import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
import altair as alt
from functools import reduce
# ## Helping cats get more pats
#
# Professor Mittens is interested in helping cats optimise the number of pats they can get. To learn more about this, he has interviewed 1000 cats and taken measurements of their behaviour and appearance. The data in `cat-pats.csv` contains measurements of the following:
#
# - `time_outdoors` is the number of hours that the cat is out of their primary dwelling,
# - `coat_colour` is either tortoiseshell, white, or "other" encoded as integers 1, 2, and 3 respectively,
# - `weight` is the weight of the cat in kilograms,
# - `height` is their height in centimeters,
# - `loudness` is a measure of how loud their meow is, the units are not known,
# - `whisker_length` is the length of their whiskers in centimeters,
# - `is_longhaired` is a Boolean variable equal to 1 if the cat is of a longhaired breed and 0 if it is of a shorthaired breed,
# - `coat_length` is the length of their fur in centimeters,
# - and `num_pats` is the number of pats they received on the day they were interviewed.
#
# The variable we are interested in explaining is `num_pats`. Although this is a discrete variable, we will ignore this aspect of the data and treat it as continuous. This is a useful simplifying assumption; as you learn more about regression, in particular generalized linear models, you will see better ways to handle count data. For this example, you can consider it a continuous variable.
#
# The types of questions that Professor Mittens is interested in answering are as follows:
#
# 1. Do any of the variables correlate with the number of pats that the cats receive?
# 2. Under a naive model, how much of the variability in pats can they explain? Do all the variables need to be included?
# 3. Does the coat colour matter?
# 4. Among short-haired breeds they say longer hair is better, among long-haired breeds they say short hair is better, who is correct?
# 5. **If a cat can choose to spend more time outdoors, or practise meowing louder, which will get them more pats?**
# ### Read in the data and generate some scatter plots to see if there are any good predictors of the number of pats
#
# The data is in the file `cat-pats.csv` so read this into a data frame using `pd.read_csv` and go from there. I have used altair to generate my scatter plots based on [this example](https://altair-viz.github.io/gallery/scatter_matrix.html) but you can use whatever you feel most comfortable with. It might be useful to use colour to see if `coat_colour` and `is_longhaired` are important.
#
# ### Question
#
# Based on these figures, what variables appear to relate to the number of pats? What do you notice about the categorical variables `coat_colour` and `is_longhaired`?
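# As an illustration of what to look for, here is a scatter matrix on a small synthetic stand-in for the data (the frame below is invented, not read from `cat-pats.csv`); a good predictor shows up as a tilted cloud against `num_pats`:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fake = pd.DataFrame({
    'time_outdoors': rng.uniform(0, 12, 100),
    'loudness': rng.uniform(0, 10, 100),
})
fake['num_pats'] = 3*fake['time_outdoors'] + rng.normal(size=100)

axes = pd.plotting.scatter_matrix(fake, figsize=(6, 6), diagonal='kde')
plt.show()
```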
# ### Compute the correlation between each variable and the number of pats, what looks important
#
# ### Question
#
# Does the correlation matrix raise any further questions? Does it handle the categorical variables correctly?
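# A sketch of the computation on an invented frame. Note how `corr` happily treats an integer-coded categorical column as if it were numeric, which is exactly why the question above matters:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
fake = pd.DataFrame({'time_outdoors': rng.uniform(0, 12, 200)})
fake['num_pats'] = 3*fake['time_outdoors'] + rng.normal(size=200)
fake['coat_colour'] = rng.integers(1, 4, 200)  # codes 1, 2, 3 — not really numeric

print(fake.corr()['num_pats'].sort_values(ascending=False))
```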
# ### What is $R^2$?
#
# Sometimes called the *coefficient of determination*, this statistic measures the proportion of the variance in the response variable that is explained by the regression model. In the case of simple linear regression it is just the correlation squared; more generally it can be calculated as the ratio of the regression sum of squares to the total sum of squares.
#
# $$
# R^2 = \frac{\text{RegSS}}{\text{TSS}}
# $$
#
# ### What is an *adjusted* $R^2$?
#
# For a fixed number of observations, as the number of covariates increases you can explain as much of the variability as you want! The adjusted $R^2$ is a way to penalise using too many covariates. The adjusted $R^2$ for a model with $n$ observations and $p$ coefficients is given by the following:
#
# $$
# \tilde{R}^2 = 1 - \frac{n - 1}{n - p}\left(1 - R^2\right)
# $$
#
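# The formula above is a one-liner (a sketch; `n` observations, `p` fitted coefficients):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2: penalises adding covariates that explain little."""
    return 1 - (n - 1) / (n - p) * (1 - r2)

# with many observations the penalty is mild, with few it bites
print(adjusted_r2(0.90, n=1000, p=10))
print(adjusted_r2(0.90, n=20, p=10))
```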
# ### Under a naive model, how much of the variability in pats can they explain?
#
# Run an ordinary linear regression with all of the variables and see what percentage of the variability in the number of pats is explained. Make sure that you have used the categorical variables correctly. Can we be confident in rejecting the null hypothesis that none of these variables is associated with the number of pats received?
# ### Question: Is collinearity an issue in this model? Do all of the variables need to be included?
#
# Compute the VIF to see if there is a concerning amount of collinearity between any of the covariates.
# ### Does coat colour matter?
#
# 1. Make a box plot of the number of pats by coat colour to see this pattern.
# 2. Fit an additional linear model without the coat colour as a covariate to see how much of the explained variability comes from the inclusion of coat colour in the model.
# ### Among short-haired breeds they say longer hair is better, among long-haired breeds they say short hair is better, who is correct?
#
# Since in the figures above we saw that the longhaired/shorthaired breed appears to separate the data, it may be useful to consider different models on each subset. Fit a linear model to each subset of the data and see what the effect of the coat length is in each case.
# ### Fit a model with an interaction term between the coat length and the long/shorthaired breed
#
# What does this tell us about the age-old debate about cat hair length?
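# Here is a hedged sketch of such a fit on simulated cats (the slopes are made up: $+2$ pats/cm for shorthaired, $-2$ for longhaired). The interaction coefficient then picks up the difference between the two slopes:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
is_long = rng.integers(0, 2, size=n)
coat = rng.uniform(1, 5, size=n)
# opposite slopes in each group, as in the debate described above
pats = np.where(is_long == 0, 10 + 2*coat, 30 - 2*coat) + rng.normal(scale=0.5, size=n)
cats = pd.DataFrame({'num_pats': pats, 'coat_length': coat, 'is_longhaired': is_long})

fit = smf.ols('num_pats ~ coat_length * C(is_longhaired)', data=cats).fit()
print(fit.params)  # coat_length ~ +2, interaction ~ -4
```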
# ### How else could we handle coat length?
#
# We could instead have included quadratic terms for coat length to see if this was a better way to explain the non-linear effect.
# ### Shouldn't we check for influential points?
#
# We can generate a plot of the studentized residuals and the leverage to check if there are any influential points.
#
# If there is a potential outlier, does removing it change anything?
# ### Should a cat practise meowing or just spend more time outdoors to get more pats?
#
# We can just look at the coefficients to see that a much more efficient way to get pats is to be outside; the relationship between loudness and the number of pats is not supported by this data set.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from fastai.vision.all import *
from faststyle import *
source = Path('/notebooks/storage/data/coco_sample')
source_train = untar_data('http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_LR_bicubic_X4.zip',
dest='/storage/data/DIV2K')
source_val = untar_data('http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X4.zip',
dest='/storage/data/DIV2K')
source_DIV2K = source_train.parent
style_fn = 'styles/village.jpg'
dblock = DataBlock(style_blocks, get_items=get_image_files, splitter=RandomSplitter(.1),
item_tfms=[Resize(192)],
batch_tfms=[*aug_transforms(2, size=128),
NormalizeX.from_stats(*coco_stats)])
dls = dblock.dataloaders(source, bs=64)
dls.show_batch(max_n=3, unique=True)
arch = resnet18
body = create_body(arch, pretrained=False)
model_splitter = model_meta[arch]['split']
m = DynamicUnet(body, 3, (128,128), blur=True, self_attention=True,
y_range=(0,1), norm_type=NormType.Instance)
layer_feats = LayerFeats.from_feat_m(FeatModels.vgg19)
loss_func = FastStyleLoss(stl_w=1e5, tv_w=300)
learn = style_learner(dls, m, layer_feats, style_fn, splitter=model_splitter, loss_func=loss_func)
learn.load('unet-stage2')
learn.lr_find()
learn.fit_one_cycle(1, 1e-1, pct_start=.9)
learn.fit_one_cycle(10, 1e-3, pct_start=.2)
learn.save('unet-raw')
learn.unfreeze()
learn.fit_one_cycle(5, slice(1e-5, 1e-3), pct_start=.22)
learn.recorder.plot_loss()
learn.save('unet-raw2')
learn.show_results()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="c0VB-FkbvkWt" executionInfo={"status": "ok", "timestamp": 1608444157793, "user_tz": -180, "elapsed": 1381, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
pd.options.display.max_columns = None
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# + id="TQyAflGfvkW7" executionInfo={"status": "ok", "timestamp": 1608444159059, "user_tz": -180, "elapsed": 2639, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
train_df=pd.read_csv("Train.csv")
test_df=pd.read_csv("Test.csv")
sub_df=pd.read_csv("SampleSubmission.csv")
descp=pd.read_csv("VariableDefinitions.csv")
# +
descp
# +
train_df.head(3)
# +
sub_df.head(3)
# +
print("Size of train",train_df.shape)
print("Size of test",test_df.shape)
# +
train_df.head()
# +
fig, ax = plt.subplots()
fig.set_size_inches(20, 5)
sns.countplot(x = 'travel_with', data = train_df)
ax.set_xlabel('travel_with', fontsize=15)
ax.set_ylabel('Count', fontsize=15)
ax.set_title('travel_with Count Distribution', fontsize=15)
ax.tick_params(labelsize=15)
sns.despine()
# +
fig, ax = plt.subplots()
fig.set_size_inches(20, 5)
sns.countplot(x = 'purpose', data = train_df)
ax.set_xlabel('purpose', fontsize=15)
ax.set_ylabel('Count', fontsize=15)
ax.set_title('purpose Count Distribution', fontsize=15)
ax.tick_params(labelsize=15)
sns.despine()
# +
#relationship between two categorical variables using a Two-way table
pd.crosstab(train_df['purpose'], train_df['travel_with'], margins=True)
# +
#relationship between two categorical variables using a Two-way table
pd.crosstab(train_df['main_activity'], train_df['country'], margins=True)
# +
data=train_df.copy()
data.columns.tolist()
# +
data.isnull().sum()
# +
# pd.get_dummies returns one column per category, so assigning its output to a
# single column is not what we want here; keep one numeric code per cat instead,
# leaving missing values as NaN so they can be filled below
data['travel_with'] = data['travel_with'].astype('category').cat.codes.replace(-1, np.nan)
data['travel_with'].head(3)
# +
# same single-column encoding for most_impressing, with NaN preserved
data['most_impressing'] = data['most_impressing'].astype('category').cat.codes.replace(-1, np.nan)
data['most_impressing'].head(3)
# + id="gIDMGKXovkXB" executionInfo={"status": "ok", "timestamp": 1608444159544, "user_tz": -180, "elapsed": 1811, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
data.travel_with.fillna(data.travel_with.mean(),inplace = True)
data.most_impressing.fillna(data.most_impressing.mean(),inplace = True)
data.total_female.fillna(data.total_female.mean(),inplace = True)
data.total_male.fillna(data.total_male.mean(),inplace = True)
data.total_cost.fillna(data.total_cost.mean(), inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} id="nLZXWaCTvkXB" executionInfo={"status": "ok", "timestamp": 1608444161703, "user_tz": -180, "elapsed": 1255, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}} outputId="1085d36c-082a-4a56-819e-7e20e7dda1a7"
data.isnull().sum()
# +
test_df.isna().sum()
# + id="iWknc7lVxuvG" executionInfo={"status": "ok", "timestamp": 1608444162110, "user_tz": -180, "elapsed": 1026, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
test_df.travel_with.fillna(data.travel_with.mean(),inplace = True)
test_df.most_impressing.fillna(data.most_impressing.mean(),inplace = True)
test_df.total_female.fillna(data.total_female.mean(),inplace = True)
test_df.total_male.fillna(data.total_male.mean(),inplace = True)
# +
test_df.isna().sum()
# + id="3NwIc_sqvkXC" executionInfo={"status": "ok", "timestamp": 1608444164116, "user_tz": -180, "elapsed": 724, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
data['age_group'] = le.fit_transform(data['age_group'])
data['package_transport_int'] = le.fit_transform(data['package_transport_int'])
data['package_accomodation'] = le.fit_transform(data['package_accomodation'])
data['package_food'] = le.fit_transform(data['package_food'])
data['package_transport_tz'] = le.fit_transform(data['package_transport_tz'])
data['package_sightseeing'] = le.fit_transform(data['package_sightseeing'])
data['package_guided_tour'] = le.fit_transform(data['package_guided_tour'])
data['package_insurance'] = le.fit_transform(data['package_insurance'])
data['first_trip_tz'] = le.fit_transform(data['first_trip_tz'])
data['country'] = le.fit_transform(data['country'])
# + id="1n8DkL9ZvkXC" executionInfo={"status": "ok", "timestamp": 1608444164682, "user_tz": -180, "elapsed": 772, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
columns_to_transform = ['tour_arrangement','purpose','main_activity','info_source','payment_mode']
data = pd.get_dummies( data,columns = columns_to_transform,drop_first=True)
# +
data.head(2)
# + id="0Lit8FpDyB0S" executionInfo={"status": "ok", "timestamp": 1608444165678, "user_tz": -180, "elapsed": 713, "user": {"displayName": "paul mwaura", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
# note: refitting a fresh LabelEncoder on the test set can give codes that are
# inconsistent with the training data; reusing one encoder per column, fitted
# on the training set, keeps the mappings aligned
le = LabelEncoder()
test_df['age_group'] = le.fit_transform(test_df['age_group'])
test_df['package_transport_int'] = le.fit_transform(test_df['package_transport_int'])
test_df['package_accomodation'] = le.fit_transform(test_df['package_accomodation'])
test_df['package_food'] = le.fit_transform(test_df['package_food'])
test_df['package_transport_tz'] = le.fit_transform(test_df['package_transport_tz'])
test_df['package_sightseeing'] = le.fit_transform(test_df['package_sightseeing'])
test_df['package_guided_tour'] = le.fit_transform(test_df['package_guided_tour'])
test_df['package_insurance'] = le.fit_transform(test_df['package_insurance'])
test_df['first_trip_tz'] = le.fit_transform(test_df['first_trip_tz'])
test_df['country'] = le.fit_transform(test_df['country'])
# + id="VJamXBo7u6-X" executionInfo={"status": "ok", "timestamp": 1608444166191, "user_tz": -180, "elapsed": 821, "user": {"displayName": "paul mwaura", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
columns_to_transform = ['tour_arrangement','purpose','main_activity','info_source','payment_mode']
test_df = pd.get_dummies( test_df,columns = columns_to_transform,drop_first=True)
# + id="o0iTtqu-vkXD" executionInfo={"status": "ok", "timestamp": 1608444166726, "user_tz": -180, "elapsed": 791, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
## convert float dtypes to int
data["total_female"] = data['total_female'].astype('int')
data["total_male"] = data['total_male'].astype('int')
data["night_mainland"] = data['night_mainland'].astype('int')
data["night_zanzibar"] = data['night_zanzibar'].astype('int')
# + id="DtX6-jslvkXD" executionInfo={"status": "ok", "timestamp": 1608444167686, "user_tz": -180, "elapsed": 1018, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj342nb5c2ggVzwL5k9eVu-XG5BEHV3dtliRTTbyA=s64", "userId": "05571276976991411894"}}
#feature engineering
data["total_persons"] = data["total_female"] + data["total_male"]
data["total_nights_spent"] = data["night_mainland"] + data["night_zanzibar"]
# + id="z3xr4y8XvEMC"
## convert float dtypes to int
test_df["total_female"] = test_df['total_female'].astype('int')
test_df["total_male"] = test_df['total_male'].astype('int')
test_df["night_mainland"] = test_df['night_mainland'].astype('int')
test_df["night_zanzibar"] = test_df['night_zanzibar'].astype('int')
# + id="rWrMT1zLvQSQ"
#feature engineering
test_df["total_persons"] = test_df["total_female"] + test_df["total_male"]
test_df["total_nights_spent"] = test_df["night_mainland"] + test_df["night_zanzibar"]
# + id="15cw0hTdwYVk"
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(data, test_size = 0.25, random_state=0)
# + id="y4VQOtiivkXE"
'''## separate data into train and test
train_df=data[data.total_cost.notnull()].reset_index(drop=True)
test_df=data[data.total_cost.isna()].reset_index(drop=True)'''
# + id="U_42araPw8nH"
train_df = train_df.reset_index(drop=True)
test_df = test_df.reset_index(drop=True)
# + id="tcOFtHgQvkXE"
print(train_df.shape)
print(test_df.shape)
# + id="myGRKR0avkXE"
# Modelling
feat_cols = train_df.drop(["ID", "total_cost"], axis=1)
cols = feat_cols.columns
target = train_df["total_cost"]
# + id="gdVJceA4vkXF"
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold, cross_val_score
# + id="n02cuj3OvkXF"
# create training and testing vars
X_train, X_test, y_train, y_test = train_test_split(train_df[cols],target, test_size=0.15, random_state = 42)
print (X_train.shape, y_train.shape)
print (X_test.shape, y_test.shape)
# + id="ca3IEJ2YvkXF"
from xgboost import XGBRegressor
xgb = XGBRegressor(n_estimators=110, learning_rate=0.01, max_depth=5)
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
# + id="uz3LTYNlvkXF"
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_test, y_pred)
print('MAE: {}'.format(mae))
# + id="qylKk0-VvkXG"
#predict and prepare submission file
sub = test_df[cols]
predictions_xgb = xgb.predict(sub)
submission_df = pd.DataFrame({'ID': test_df.ID, 'total_cost': predictions_xgb})
submission_df.to_csv('submit.csv',index=False)
# + id="0o-QhlNdvkXG"
| starter-notebook-tanzania-tourism-hack.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (reco_base)
# language: python
# name: reco_base
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.<br>
# Licensed under the MIT License.</i>
# <br>
# # Hyperparameter Tuning for Matrix Factorization Using the Neural Network Intelligence Toolkit
# This notebook shows how to use the **[Neural Network Intelligence](https://nni.readthedocs.io/en/latest/) toolkit (NNI)** for tuning hyperparameters of a matrix factorization model. In particular, we optimize the hyperparameters of [Surprise SVD](https://surprise.readthedocs.io/en/stable/matrix_factorization.html).
#
# NNI is a toolkit to help users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or complex system’s parameters, in an efficient and automatic way. NNI has several appealing properties: ease of use, scalability, flexibility and efficiency. NNI comes with [several tuning algorithms](https://nni.readthedocs.io/en/latest/Builtin_Tuner.html) built in. It also allows users to [define their own general purpose tuners](https://nni.readthedocs.io/en/latest/Customize_Tuner.html). NNI can be executed in a distributed way on a local machine, a remote server, or a large scale training platform such as OpenPAI or Kubernetes.
#
# In this notebook we execute several NNI _experiments_ on the same data sets obtained from Movielens with a training-validation-test split. Each experiment corresponds to one of the built-in tuning algorithms. It consists of many parallel _trials_, each of which corresponds to a choice of hyperparameters sampled by the tuning algorithm. All the experiments require a call to the same [python script](../../reco_utils/nni/svd_training.py) for training the SVD model and evaluating rating and ranking metrics on the test data. This script has been adapted from the [Surprise SVD notebook](../02_model/surprise_svd_deep_dive.ipynb) with only a few changes. In all experiments, we maximize precision@10.
#
# For this notebook we use a _local machine_ as the training platform (this can be any machine running the `reco_base` conda environment). In this case, NNI uses the available processors of the machine to parallelize the trials, subject to the value of `trialConcurrency` we specify in the configuration. Our runs and the results we report were obtained on a [Standard_D16_v3 virtual machine](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general#dv3-series-1) with 16 vcpus and 64 GB memory.
# ### 1. Global Settings
# +
import sys
sys.path.append("../../")
import json
import os
import surprise
import papermill as pm
import pandas as pd
import shutil
import subprocess
import yaml
import pkg_resources
from tempfile import TemporaryDirectory
import reco_utils
from reco_utils.common.timer import Timer
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import rmse, precision_at_k, ndcg_at_k
from reco_utils.tuning.nni.nni_utils import (check_experiment_status, check_stopped, check_metrics_written, get_trials,
stop_nni, start_nni)
from reco_utils.recommender.surprise.surprise_utils import predict, compute_ranking_predictions
print("System version: {}".format(sys.version))
print("Surprise version: {}".format(surprise.__version__))
print("NNI version: {}".format(pkg_resources.get_distribution("nni").version))
# %load_ext autoreload
# %autoreload 2
# -
# ### 2. Prepare Dataset
# 1. Download data and split into training, validation and test sets
# 2. Store the data sets to a local directory.
# + tags=["parameters"]
# Parameters used by papermill
# Select Movielens data size: 100k, 1m
MOVIELENS_DATA_SIZE = '100k'
SURPRISE_READER = 'ml-100k'
tmp_dir = TemporaryDirectory()
TMP_DIR = tmp_dir.name
NUM_EPOCHS = 30
MAX_TRIAL_NUM = 10
# time (in seconds) to wait for each tuning experiment to complete
WAITING_TIME = 20
MAX_RETRIES = 40 # it is recommended to have MAX_RETRIES>=4*MAX_TRIAL_NUM
# +
data = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=["userID", "itemID", "rating"]
)
data.head()
# -
train, validation, test = python_random_split(data, [0.7, 0.15, 0.15])
# +
LOG_DIR = os.path.join(TMP_DIR, "experiments")
os.makedirs(LOG_DIR, exist_ok=True)
DATA_DIR = os.path.join(TMP_DIR, "data")
os.makedirs(DATA_DIR, exist_ok=True)
TRAIN_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_train.pkl"
train.to_pickle(os.path.join(DATA_DIR, TRAIN_FILE_NAME))
VAL_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_val.pkl"
validation.to_pickle(os.path.join(DATA_DIR, VAL_FILE_NAME))
TEST_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_test.pkl"
test.to_pickle(os.path.join(DATA_DIR, TEST_FILE_NAME))
# -
# ### 3. Prepare Hyperparameter Tuning
# We now prepare a training script [svd_training_nni.py](../../reco_utils/nni/svd_training.py) for the hyperparameter tuning, which will log our target metrics such as precision, NDCG, RMSE.
# We define the arguments of the script and the search space for the hyperparameters. All the parameter values will be passed to our training script.<br>
# Note that we specify _precision@10_ as the primary metric. We will also instruct NNI (in the configuration file) to _maximize_ the primary metric. This is passed as an argument in the training script and the evaluated metric is returned through the NNI python library. In addition, we also evaluate RMSE and NDCG@10.
# The `script_params` below are the parameters of the training script that are fixed (unlike `hyper_params` which are tuned). In particular, `VERBOSE, BIASED, RANDOM_STATE, NUM_EPOCHS` are parameters used in the [SVD method](../02_model/surprise_svd_deep_dive.ipynb) and `REMOVE_SEEN` removes the training data from the recommended items.
# +
EXP_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_svd_model"
PRIMARY_METRIC = "precision_at_k"
RATING_METRICS = ["rmse"]
RANKING_METRICS = ["precision_at_k", "ndcg_at_k"]
USERCOL = "userID"
ITEMCOL = "itemID"
REMOVE_SEEN = True
K = 10
RANDOM_STATE = 42
VERBOSE = True
BIASED = True
script_params = " ".join([
"--datastore", DATA_DIR,
"--train-datapath", TRAIN_FILE_NAME,
"--validation-datapath", VAL_FILE_NAME,
"--surprise-reader", SURPRISE_READER,
"--rating-metrics", " ".join(RATING_METRICS),
"--ranking-metrics", " ".join(RANKING_METRICS),
"--usercol", USERCOL,
"--itemcol", ITEMCOL,
"--k", str(K),
"--random-state", str(RANDOM_STATE),
"--epochs", str(NUM_EPOCHS),
"--primary-metric", PRIMARY_METRIC
])
if BIASED:
script_params += " --biased"
if VERBOSE:
script_params += " --verbose"
if REMOVE_SEEN:
script_params += " --remove-seen"
# +
# hyperparameters search space
# We do not set 'lr_all' and 'reg_all' because they would be overridden by the individual lr_ and reg_ parameters below
hyper_params = {
'n_factors': {"_type": "choice", "_value": [10, 50, 100, 150, 200]},
'init_mean': {"_type": "uniform", "_value": [-0.5, 0.5]},
'init_std_dev': {"_type": "uniform", "_value": [0.01, 0.2]},
'lr_bu': {"_type": "uniform", "_value": [1e-6, 0.1]},
'lr_bi': {"_type": "uniform", "_value": [1e-6, 0.1]},
'lr_pu': {"_type": "uniform", "_value": [1e-6, 0.1]},
'lr_qi': {"_type": "uniform", "_value": [1e-6, 0.1]},
'reg_bu': {"_type": "uniform", "_value": [1e-6, 1]},
'reg_bi': {"_type": "uniform", "_value": [1e-6, 1]},
'reg_pu': {"_type": "uniform", "_value": [1e-6, 1]},
'reg_qi': {"_type": "uniform", "_value": [1e-6, 1]}
}
# -
with open(os.path.join(TMP_DIR, 'search_space_svd.json'), 'w') as fp:
json.dump(hyper_params, fp)
# We also create a yaml file for the configuration of the trials and the tuning algorithm to be used (in this experiment we use the [TPE tuner](https://nni.readthedocs.io/en/latest/hyperoptTuner.html)).
# +
config = {
"authorName": "default",
"experimentName": "surprise_svd",
"trialConcurrency": 8,
"maxExecDuration": "1h",
"maxTrialNum": MAX_TRIAL_NUM,
"trainingServicePlatform": "local",
# The path to Search Space
"searchSpacePath": "search_space_svd.json",
"useAnnotation": False,
"logDir": LOG_DIR,
"tuner": {
"builtinTunerName": "TPE",
"classArgs": {
#choice: maximize, minimize
"optimize_mode": "maximize"
}
},
# The path and the running command of trial
"trial": {
"command": sys.prefix + "/bin/python svd_training.py" + " " + script_params,
"codeDir": os.path.join(os.path.split(os.path.abspath(reco_utils.__file__))[0], "tuning", "nni"),
"gpuNum": 0
}
}
with open(os.path.join(TMP_DIR, "config_svd.yml"), "w") as fp:
fp.write(yaml.dump(config, default_flow_style=False))
# -
# ### 4. Execute NNI Trials
#
# The conda environment comes with NNI installed, which includes the command line tool `nnictl` for controlling and getting information about NNI experiments. <br>
# To start the NNI tuning trials from the command line, execute the following command: <br>
# `nnictl create --config <path of config_svd.yml>` <br>
# In the cell below, we call this command programmatically. <br>
# You can see the progress of the experiment by using the URL links output by the above command.
#
# 
#
# 
#
# 
# Make sure that there is no experiment running
stop_nni()
config_path = os.path.join(TMP_DIR, 'config_svd.yml')
nni_env = os.environ.copy()
nni_env['PATH'] = sys.prefix + '/bin:' + nni_env['PATH']
proc = subprocess.run([sys.prefix + '/bin/nnictl', 'create', '--config', config_path], env=nni_env)
if proc.returncode != 0:
raise RuntimeError("'nnictl create' failed with code %d" % proc.returncode)
with Timer() as time_tpe:
check_experiment_status(wait=WAITING_TIME, max_retries=MAX_RETRIES)
# ### 5. Show Results
#
# The trial with the best metric and the corresponding metrics and hyperparameters can also be read from the Web UI
#
# 
#
# or from the JSON file created by the training script. Below, we do this programmatically using [nni_utils.py](../../reco_utils/nni/nni_utils.py)
trials, best_metrics, best_params, best_trial_path = get_trials('maximize')
best_metrics
best_params
best_trial_path
# This directory path is where info about the trial can be found, including logs, parameters and the model that was learned. To evaluate the metrics on the test data, we get the SVD model that was saved as `model.dump` in the training script.
svd = surprise.dump.load(os.path.join(best_trial_path, "model.dump"))[1]
# The following function computes all the metrics given an SVD model.
def compute_test_results(svd):
test_results = {}
predictions = predict(svd, test, usercol="userID", itemcol="itemID")
for metric in RATING_METRICS:
test_results[metric] = eval(metric)(test, predictions)
all_predictions = compute_ranking_predictions(svd, train, usercol="userID", itemcol="itemID", remove_seen=REMOVE_SEEN)
for metric in RANKING_METRICS:
test_results[metric] = eval(metric)(test, all_predictions, col_prediction='prediction', k=K)
return test_results
test_results_tpe = compute_test_results(svd)
print(test_results_tpe)
# ### 6. More Tuning Algorithms
# We now apply other tuning algorithms supported by NNI to the same problem. For details about these tuners, see the [NNI docs.](https://nni.readthedocs.io/en/latest/tuners.html#)
# The only change needed is in the relevant entry in the configuration file.
# In summary, the tuners used in this notebook are the following:
# - Tree-structured Parzen Estimator (TPE), within the Sequential Model-Based Optimization (SMBO) framework,
# - SMAC, another instance of SMBO,
# - Hyperband,
# - Metis, an implementation of Bayesian optimization with Gaussian Processes,
# - a naive evolutionary algorithm,
# - an annealing method for sampling, and
# - plain random search as a baseline.
#
# For more details and references to the relevant literature, see the [NNI github](https://github.com/Microsoft/nni/blob/master/docs/en_US/Builtin_Tuner.md).
# +
# Random search
config['tuner']['builtinTunerName'] = 'Random'
if 'classArgs' in config['tuner']:
config['tuner'].pop('classArgs')
with open(config_path, 'w') as fp:
fp.write(yaml.dump(config, default_flow_style=False))
# -
stop_nni()
with Timer() as time_random:
start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_random = compute_test_results(svd)
# +
# Annealing
config['tuner']['builtinTunerName'] = 'Anneal'
if 'classArgs' not in config['tuner']:
config['tuner']['classArgs'] = {'optimize_mode': 'maximize'}
else:
config['tuner']['classArgs']['optimize_mode'] = 'maximize'
with open(config_path, 'w') as fp:
fp.write(yaml.dump(config, default_flow_style=False))
# -
stop_nni()
with Timer() as time_anneal:
start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_anneal = compute_test_results(svd)
# Naive evolutionary search
config['tuner']['builtinTunerName'] = 'Evolution'
with open(config_path, 'w') as fp:
fp.write(yaml.dump(config, default_flow_style=False))
stop_nni()
with Timer() as time_evolution:
start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_evolution = compute_test_results(svd)
# The SMAC tuner must first be installed with the following command: <br>
# `nnictl package install --name=SMAC`
# SMAC
config['tuner']['builtinTunerName'] = 'SMAC'
with open(config_path, 'w') as fp:
fp.write(yaml.dump(config, default_flow_style=False))
# Check if installed
proc = subprocess.run([sys.prefix + '/bin/nnictl', 'package', 'show'], stdout=subprocess.PIPE)
if proc.returncode != 0:
raise RuntimeError("'nnictl package show' failed with code %d" % proc.returncode)
if 'SMAC' not in proc.stdout.decode().strip().split():
proc = subprocess.run([sys.prefix + '/bin/nnictl', 'package', 'install', '--name=SMAC'])
if proc.returncode != 0:
raise RuntimeError("'nnictl package install' failed with code %d" % proc.returncode)
# Skipping SMAC optimization for now
# stop_nni()
with Timer() as time_smac:
# start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
pass
# +
#check_metrics_written()
#svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
#test_results_smac = compute_test_results(svd)
# -
# Metis
config['tuner']['builtinTunerName'] = 'MetisTuner'
with open(config_path, 'w') as fp:
fp.write(yaml.dump(config, default_flow_style=False))
stop_nni()
with Timer() as time_metis:
start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
check_metrics_written()
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_metis = compute_test_results(svd)
# Hyperband follows a different style of configuration from other tuners. See [the NNI documentation](https://nni.readthedocs.io/en/latest/hyperbandAdvisor.html). Note that the [training script](../../reco_utils/nni/svd_training.py) needs to be adjusted as well, since each Hyperband trial receives an additional parameter `STEPS`, which corresponds to the resource allocation _r<sub>i</sub>_ in the [Hyperband algorithm](https://arxiv.org/pdf/1603.06560.pdf). In this example, we used `STEPS` in combination with `R` to determine the number of epochs that SVD will run for in every trial.
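# As a rough sketch of how `STEPS` and `R` could translate into an epoch count inside the training script (an illustration under the assumption that epochs scale linearly with the allocated budget — not the actual logic of `svd_training.py`):

```python
def epochs_from_budget(steps, r_max, max_epochs):
    """Map Hyperband's per-trial budget STEPS (out of a maximum budget R)
    to a number of SVD training epochs, with a floor of one epoch."""
    return max(1, round(max_epochs * steps / r_max))

# With this notebook's settings (R = NUM_EPOCHS = 30), the budget is
# effectively the epoch count itself:
print(epochs_from_budget(steps=10, r_max=30, max_epochs=30))  # -> 10
```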
# Hyperband
config['advisor'] = {
'builtinAdvisorName': 'Hyperband',
'classArgs': {
'R': NUM_EPOCHS,
'eta': 3,
'optimize_mode': 'maximize'
}
}
config.pop('tuner')
with open(config_path, 'w') as fp:
fp.write(yaml.dump(config, default_flow_style=False))
stop_nni()
with Timer() as time_hyperband:
start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
check_metrics_written()
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_hyperband = compute_test_results(svd)
test_results_tpe.update({'time': time_tpe.interval})
test_results_random.update({'time': time_random.interval})
test_results_anneal.update({'time': time_anneal.interval})
test_results_evolution.update({'time': time_evolution.interval})
#test_results_smac.update({'time': time_smac.interval})
test_results_metis.update({'time': time_metis.interval})
test_results_hyperband.update({'time': time_hyperband.interval})
algos = ["TPE",
"Random Search",
"Annealing",
"Evolution",
#"SMAC",
"Metis",
"Hyperband"]
res_df = pd.DataFrame(index=algos,
data=[res for res in [test_results_tpe,
test_results_random,
test_results_anneal,
test_results_evolution,
#test_results_smac,
test_results_metis,
test_results_hyperband]]
)
res_df.sort_values(by="precision_at_k", ascending=False).round(3)
# As the table above shows, _TPE_ performs best with respect to the primary metric (precision@10) that all the tuners optimized. The best NDCG@10 is also obtained with TPE and correlates well with precision@10. RMSE, on the other hand, does not correlate well and is not optimized by TPE, since finding the top-k recommendations in the right order is a different task from predicting ratings (high and low) accurately.
# We have also observed that the above ranking of the tuners is not consistent and may change when trying these experiments multiple times. Since some of these tuners rely heavily on randomized sampling, a larger number of trials is required to get more consistent metrics.
# In addition, some of the tuning algorithms themselves come with parameters, which can affect their performance.
# Stop the NNI experiment
stop_nni()
tmp_dir.cleanup()
# ### 7. Concluding Remarks
#
# We showed how to tune **all** the hyperparameters accepted by Surprise SVD simultaneously, by utilizing the NNI toolkit.
# Training and evaluation of a single SVD model takes about 50 seconds on the 100k MovieLens data on a Standard D2_V2 VM. Searching through 100 different combinations of hyperparameters sequentially would therefore take about 80 minutes, whereas each of the above experiments took about 10 minutes by exploiting parallelization on a single D16_v3 VM. With NNI, one can take advantage of concurrency and multiple processors on a virtual machine and use a variety of tuning methods to navigate efficiently through a large space of hyperparameters.<br>
# For examples of scaling larger tuning workloads on clusters of machines, see [the notebooks](./README.md) that employ the [Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters).
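# The back-of-the-envelope arithmetic above can be made explicit (the 50-second figure comes from the text; perfect parallel scaling across trials is a simplifying assumption):

```python
import math

# Rough wall-clock estimate for a hyperparameter sweep.
trial_seconds = 50    # ~time to train and evaluate one SVD model on ml-100k
num_trials = 100      # hypothetical number of hyperparameter combinations
concurrency = 8       # trialConcurrency used in this notebook's config

sequential_minutes = trial_seconds * num_trials / 60
parallel_minutes = trial_seconds * math.ceil(num_trials / concurrency) / 60

print(f"sequential: ~{sequential_minutes:.0f} min")  # ~83 min
print(f"parallel:   ~{parallel_minutes:.1f} min")    # ~10.8 min
```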
# ### References
#
# * [Matrix factorization algorithms in Surprise](https://surprise.readthedocs.io/en/stable/matrix_factorization.html)
# * [Surprise SVD deep-dive notebook](../02_model/surprise_svd_deep_dive.ipynb)
# * [Neural Network Intelligence toolkit](https://github.com/Microsoft/nni)
| examples/04_model_select_and_optimize/nni_surprise_svd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# **This notebook shows how to run the inference in the training-time two-view settings on the validation or training set of MegaDepth to visualize the training metrics and losses.**
# +
# %load_ext autoreload
# %autoreload 2
import torch
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from omegaconf import OmegaConf
from pixloc import run_Aachen
from pixloc.pixlib.datasets.megadepth import MegaDepth
from pixloc.pixlib.utils.tensor import batch_to_device, map_tensor
from pixloc.pixlib.utils.tools import set_seed
from pixloc.pixlib.utils.experiments import load_experiment
from pixloc.visualization.viz_2d import (
plot_images, plot_keypoints, plot_matches, cm_RdGn,
features_to_RGB, add_text)
torch.set_grad_enabled(False);
mpl.rcParams['image.interpolation'] = 'bilinear'
# -
# # Create a validation or training dataloader
conf = {
'min_overlap': 0.4,
'max_overlap': 1.0,
'max_num_points3D': 512,
'force_num_points3D': True,
'resize': 512,
'resize_by': 'min',
'crop': 512,
'optimal_crop': True,
'init_pose': [0.75, 1.],
# 'init_pose': 'max_error',
# 'init_pose_max_error': 4,
# 'init_pose_num_samples': 50,
'batch_size': 1,
'seed': 1,
'num_workers': 0,
}
loader = MegaDepth(conf).get_data_loader('val', shuffle=True)
orig_items = loader.dataset.items
# # Load the training experiment
# Name of the example experiment. Replace with your own training experiment.
exp = run_Aachen.experiment
device = 'cuda'
conf = {
'optimizer': {'num_iters': 20,},
}
refiner = load_experiment(exp, conf).to(device)
print(OmegaConf.to_yaml(refiner.conf))
# # Run on a few examples
# - Reference image: red/green = reprojections of 3D points not/visible in the query at the ground truth pose
# - Query image: red/blue/green = reprojections of 3D points at the initial/final/GT poses
# - ΔP/ΔR/Δt are final errors in terms of 2D reprojections, rotation, and translation
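# As a hedged aside, the rotation error ΔR reported below can be computed with the standard geodesic-angle formula (a generic sketch, not pixloc's own implementation):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    # Geodesic angle between two rotation matrices: the angle of R_est @ R_gt^T,
    # recovered from its trace and clipped for numerical safety.
    cos_angle = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example: a 10-degree rotation about the z-axis vs. the identity.
a = np.radians(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
print(rotation_error_deg(Rz, np.eye(3)))  # ~10.0
```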
set_seed(7)
for _, data in zip(range(5), loader):
data_ = batch_to_device(data, device)
pred_ = refiner(data_)
pred = map_tensor(pred_, lambda x: x[0].cpu())
data = map_tensor(data, lambda x: x[0].cpu())
cam_q = data['query']['camera']
p3D_r = data['ref']['points3D']
p2D_r, valid_r = data['ref']['camera'].world2image(p3D_r)
p2D_q_gt, valid_q = cam_q.world2image(data['T_r2q_gt'] * p3D_r)
p2D_q_init, _ = cam_q.world2image(data['T_r2q_init'] * p3D_r)
p2D_q_opt, _ = cam_q.world2image(pred['T_r2q_opt'][-1] * p3D_r)
valid = valid_q & valid_r
losses = refiner.loss(pred_, data_)
mets = refiner.metrics(pred_, data_)
errP = f"ΔP {losses['reprojection_error/init'].item():.2f} -> {losses['reprojection_error'].item():.3f} px; "
errR = f"ΔR {mets['R_error/init'].item():.2f} -> {mets['R_error'].item():.3f} deg; "
errt = f"Δt {mets['t_error/init'].item():.2f} -> {mets['t_error'].item():.3f} %m"
print(errP, errR, errt)
imr, imq = data['ref']['image'].permute(1, 2, 0), data['query']['image'].permute(1, 2, 0)
plot_images([imr, imq],titles=[(data['scene'][0], valid_r.sum().item(), valid_q.sum().item()), errP+'; '+errR])
plot_keypoints([p2D_r[valid_r], p2D_q_gt[valid]], colors=[cm_RdGn(valid[valid_r]), 'lime'])
plot_keypoints([np.empty((0, 2)), p2D_q_init[valid]], colors='red')
plot_keypoints([np.empty((0, 2)), p2D_q_opt[valid]], colors='blue')
add_text(0, 'reference')
add_text(1, 'query')
continue
for i, (F0, F1) in enumerate(zip(pred['ref']['feature_maps'], pred['query']['feature_maps'])):
C_r, C_q = pred['ref']['confidences'][i][0], pred['query']['confidences'][i][0]
plot_images([C_r, C_q], cmaps=mpl.cm.turbo)
add_text(0, f'Level {i}')
axes = plt.gcf().axes
axes[0].imshow(imr, alpha=0.2, extent=axes[0].images[0]._extent)
axes[1].imshow(imq, alpha=0.2, extent=axes[1].images[0]._extent)
plot_images(features_to_RGB(F0.numpy(), F1.numpy(), skip=1))
| notebooks/training_MegaDepth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import tifffile
import numpy as np
import phathom.phenotype.mesh as mesh
import phathom.phenotype.niche as niche
import matplotlib.pyplot as plt
working_dir = '/media/jswaney/SSD EVO 860/organoid_phenotyping/20181210_eF9_A34_2'
# # Load centers and cell-type labels
# +
centers_path = 'centers.npy'
sox2_labels_path = 'sox2_labels.npy'
tbr1_labels_path = 'tbr1_labels.npy'
centers = np.load(os.path.join(working_dir, centers_path))
sox2_labels = np.load(os.path.join(working_dir, sox2_labels_path))
tbr1_labels = np.load(os.path.join(working_dir, tbr1_labels_path))
centers.shape
# +
voxel_size = (2.052, 1.082, 1.082)
centers_um = mesh.voxels_to_micron(centers, voxel_size)
# -
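# If `mesh.voxels_to_micron` is unavailable, the conversion is presumably just a per-axis scaling of each (z, y, x) voxel coordinate by the physical voxel size — this equivalence is an assumption about the library, not verified here:

```python
import numpy as np

def voxels_to_micron(centers, voxel_size):
    # Scale each (z, y, x) voxel coordinate by the voxel size in microns.
    return np.asarray(centers) * np.asarray(voxel_size)

# Toy example with this notebook's anisotropic voxel size:
pts = np.array([[10, 20, 30]])
print(voxels_to_micron(pts, (2.052, 1.082, 1.082)))  # roughly [[20.52 21.64 32.46]]
```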
# # Query neighbors within a fixed radius
nbrs = niche.fit_neighbors(centers_um)
nbrs
# +
nb_cells = 5000
(centers_um_sample,) = mesh.randomly_sample(nb_cells, centers_um)
# +
radius = 50
distances, indices = niche.query_radius(nbrs, centers_um, radius)
distances[0]
# -
np.save(os.path.join(working_dir, 'distances.npy'), distances)
np.save(os.path.join(working_dir, 'indices.npy'), indices)
# +
radius = 50
distances = np.load(os.path.join(working_dir, 'distances.npy'))
indices = np.load(os.path.join(working_dir, 'indices.npy'))
# -
total = 0
for idx in indices:
total += len(idx)
total / len(indices)
# +
sox2_counts = niche.neighborhood_counts(indices, sox2_labels)
tbr1_counts = niche.neighborhood_counts(indices, tbr1_labels)
dn_counts = niche.neighborhood_counts(indices, ~np.logical_or(sox2_labels, tbr1_labels))
sox2_counts.max(), tbr1_counts.max(), dn_counts.max()
# -
np.save(os.path.join(working_dir, 'sox2_counts.npy'), sox2_counts)
np.save(os.path.join(working_dir, 'tbr1_counts.npy'), tbr1_counts)
np.save(os.path.join(working_dir, 'dn_counts.npy'), dn_counts)
sox2_counts = np.load(os.path.join(working_dir, 'sox2_counts.npy'))
tbr1_counts = np.load(os.path.join(working_dir, 'tbr1_counts.npy'))
dn_counts = np.load(os.path.join(working_dir, 'dn_counts.npy'))
# +
sox2_directions = niche.neighborhood_directionality(centers_um, indices, sox2_labels)
tbr1_directions = niche.neighborhood_directionality(centers_um, indices, tbr1_labels)
dn_directions = niche.neighborhood_directionality(centers_um, indices, ~np.logical_or(sox2_labels, tbr1_labels))
sox2_directions.max(axis=0), sox2_directions.min(axis=0)
# -
tbr1_directions.max(axis=0), tbr1_directions.min(axis=0)
dn_directions.max(axis=0), dn_directions.min(axis=0)
np.save(os.path.join(working_dir, 'sox2_directions.npy'), sox2_directions)
np.save(os.path.join(working_dir, 'tbr1_directions.npy'), tbr1_directions)
np.save(os.path.join(working_dir, 'dn_directions.npy'), dn_directions)
sox2_directions = np.load(os.path.join(working_dir, 'sox2_directions.npy'))
tbr1_directions = np.load(os.path.join(working_dir, 'tbr1_directions.npy'))
dn_directions = np.load(os.path.join(working_dir, 'dn_directions.npy'))
# +
projections = niche.directionality_projection(sox2_directions, tbr1_directions, dn_directions)
projections
# -
projections.mean(axis=0)
plt.hist(projections[:, 1], bins=128)
plt.show()
# +
bins = 5
sox2_profiles = niche.radial_profile(centers_um, distances, indices, radius, bins, sox2_labels)
# -
tbr1_profiles = niche.radial_profile(centers_um, distances, indices, radius, bins, tbr1_labels)
dn_labels = ~np.logical_or(sox2_labels, tbr1_labels)  # double-negative cells
dn_profiles = niche.radial_profile(centers_um, distances, indices, radius, bins, dn_labels)
plt.plot(dn_profiles[:100].T)
plt.show()
sox2_labels
sox2_profiles[:100] == dn_profiles[:100]
# features = np.hstack([sox2_counts[:, np.newaxis],
# tbr1_counts[:, np.newaxis],
# dn_counts[:, np.newaxis],
# np.linalg.norm(sox2_directions, axis=-1)[:, np.newaxis],
# np.linalg.norm(tbr1_directions, axis=-1)[:, np.newaxis],
# np.linalg.norm(dn_directions, axis=-1)[:, np.newaxis],
# projections])
# features = np.hstack([sox2_counts[:, np.newaxis],
# tbr1_counts[:, np.newaxis],
# dn_counts[:, np.newaxis],
# sox2_directions,
# tbr1_directions,
# dn_directions,
# projections])
# features = features[:, 6:]
features = np.hstack([sox2_profiles, tbr1_profiles, dn_profiles])
# features = dn_profiles
features.shape
from sklearn.preprocessing import scale
features_scaled = scale(features)
import seaborn as sns
dn_labels = ~np.logical_or(sox2_labels, tbr1_labels)
dn_labels.shape
np.random.seed(987)
(feat, dn_sample), sample_idx = mesh.randomly_sample(20000, features_scaled, dn_labels, return_idx=True)
(dn_sample == 1).sum()
# +
linkage = 'centroid'
g = sns.clustermap(feat, col_cluster=False, method=linkage)
plt.show()
# -
# try Gaussian Mixture (Dirichlet prior for unknown cluster number)
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture
# +
dpgmm = BayesianGaussianMixture(n_components=10,
covariance_type='full',
weight_concentration_prior=1e-3).fit(feat)
plt.plot(dpgmm.weights_)
plt.show()
labels = dpgmm.predict(feat)
n_clusters = len(np.unique(labels))
n_clusters
# -
np.random.seed(456)
gmm = GaussianMixture(n_components=5).fit(feat)
gmm.weights_
labels = gmm.predict(feat)
n_clusters = len(np.unique(labels))
# Try hierarchical clustering (works fine with centroid method)
from scipy.cluster.hierarchy import centroid, cut_tree, fclusterdata
# Hierarchical clustering
n_clusters = 10
labels = fclusterdata(feat, n_clusters, criterion='maxclust', method='centroid')
labels -= 1
# Try DBSCAN (didn't work)
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps=0.6, min_samples=2).fit(feat)
labels = dbscan.labels_ + 1
np.unique(labels)
n_clusters = len(np.unique(labels))
# Plot TSNE colored by cluster label
colors = mesh.colormap_to_colors(n_clusters)
# %matplotlib inline
np.random.seed(456)
plt.figure(figsize=(12, 12))
mesh.plot_tsne(feat, labels, colors)
profiles = features[sample_idx]
profiles.shape
labels.shape
for c in range(n_clusters):
idx = np.where(labels == c)[0]
cluster_profiles = profiles[idx]
sox2_ave = cluster_profiles[:, :5].mean()
tbr1_ave = cluster_profiles[:, 5:10].mean()
dn_ave = cluster_profiles[:, 10:].mean()
print(f'Count {len(idx)}, SOX2 ave {sox2_ave}, TBR1 ave {tbr1_ave}, DN ave {dn_ave}')
plt.hist(profiles[:, :5].ravel(), bins=32)
plt.show()
plt.hist(profiles[:, 5:10].ravel(), bins=32)
plt.ylim([0, 10000])
plt.show()
profiles[:10]
import pandas as pd
dn_profiles.shape, dn_labels.shape
# +
sox2_mean = dn_profiles[:, :5].mean(axis=-1)
tbr1_mean = dn_profiles[:, 5:].mean(axis=-1)
mean_counts = np.concatenate([sox2_mean, tbr1_mean])
expression = len(sox2_mean)*['SOX2'] + len(tbr1_mean)*['TBR1']
cluster = np.concatenate([dn_labels, dn_labels])
# +
df = pd.DataFrame({'counts': mean_counts,
'cluster': cluster,
'expression': expression})
ax = sns.violinplot(x="cluster", y='counts', hue='expression', data=df, palette="muted")
# -
for c in np.unique(dn_labels):
cluster_idx = np.where(dn_labels == c)[0]
sox2_cluster = sox2_mean[cluster_idx].mean()
tbr1_cluster = tbr1_mean[cluster_idx].mean()
plt.figure()
plt.pie([sox2_cluster, tbr1_cluster])
plt.show()
| phathom/phenotype/notebbooks/.ipynb_checkpoints/7 - Single cell niche embedding-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import tensorflow as tf
W = tf.Variable([3.0], dtype=tf.float32)
b = tf.Variable([-2.0], dtype=tf.float32)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
linear_model = W * X + b
squared_error = tf.square(linear_model - Y)
loss = tf.reduce_sum(squared_error)
optimizer = tf.train.GradientDescentOptimizer(0.03)
train_step = optimizer.minimize(loss)
X_train = [1, 2, 3, 4, 5, 6]
Y_train = [3, 4, 5, 6, 7, 8]
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(50):
sess.run(train_step, {X: X_train, Y: Y_train})
# curr_W, curr_b, curr_loss, _ = sess.run([W, b, loss, train_step], {X: X_train, Y: Y_train})
# print ("W: %s b: %s loss:%s " %(curr_W, curr_b, curr_loss))
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {X:X_train, Y:Y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
| simple_linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('base')
# language: python
# name: python3
# ---
num = 100
if num == 100:
print('One hundred')
else: print('This number is not 100')
ls = [1, 2, 3, 4, 10]
if 5 in ls:
print(5)
else:
print('not in list')
6 in ls
9 not in ls
sum(ls)
if sum(ls) == 0:
print('the sum of elements is 0')
elif sum(ls) < 10:
print('sum is less than 10')
else:
print('the sum is not within 0-10')
# +
# declare a list with any type of elements and use if-else and if-elif-else to get the result of a condition
# exercise
# -
ls1 = ['malina', 'roza', 8, 7, 1, True]
if True in ls1:
print('there is a boolean in ls1')
else:
print('there is no boolean in ls1')
if type(ls1[1]) == str:
    print('there is a string in ls1')
elif type(ls1[4]) == int:
    print('there is an integer in ls1')
else:
    print('no match found')
ls.extend([10, 11, 20, 18])
# ls
for el in ls:
print(el)
for i in range(2, 22):
print(i)
ls2 = [i for i in range(22)]
ls2
for i in range(8):
i += 10 #i=i+10
print(i)
num1 = 8
while num1 < 15:
print(num1)
num1 += 1
# +
# exercise 1
# -
for i in range(21, 29):
i *= 5
print(i)
# +
# exercise 2
# -
ls5 = ['ice-cream', 12, 'pion', 'kakach', 4, 9, 'karmir', 'arev', 'matani', 'lipstick']
len(ls5)
while len(ls5) > 0:
ls5.pop()
print(ls5)
| lesson3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/intro/unsuper.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rnmk5B1V4Wdw" colab_type="text"
# # Unsupervised learning <a class="anchor" id="unsuper"></a>
#
# Unsupervised learning is less well-defined than supervised learning, since there is no specific input-output mapping whose accuracy we can use to measure performance. Instead we often focus on maximizing the likelihood of the data, and hope that the model learns "something interesting". We give some simple examples of this below.
# + id="SSmXaJG75K0A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 291} outputId="d5d3297d-6509-43c8-8213-b253d7d60482"
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
import seaborn as sns;
sns.set(style="ticks", color_codes=True)
import pandas as pd
pd.set_option('precision', 2) # 2 decimal places
pd.set_option('display.max_rows', 20)
pd.set_option('display.max_columns', 30)
pd.set_option('display.width', 100) # wide windows
# Check we can plot stuff
plt.figure()
plt.plot(range(10))
# + [markdown] id="vO95k4h84-iR" colab_type="text"
# ## Clustering iris data using a Gaussian mixture model (GMM)
#
# In this section, we show how to find clusters in an unlabeled 2d version of the Iris dataset by fitting a GMM using sklearn.
#
#
#
# + id="LiLzt46c45vL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 565} outputId="23b92889-bbc1-4599-e1d3-26314f60e137"
import seaborn as sns
from sklearn.datasets import load_iris
#from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
iris = load_iris()
X = iris.data
fig, ax = plt.subplots()
idx1 = 2; idx2 = 3;
ax.scatter(X[:, idx1], X[:, idx2], c="k", marker=".")
ax.set(xlabel = iris.feature_names[idx1])
ax.set(ylabel = iris.feature_names[idx2])
#save_fig("iris-2d-unlabeled")
plt.show()
K = 3
y_pred = GaussianMixture(n_components=K, random_state=42).fit(X).predict(X)
mapping = np.array([2, 0, 1])
y_pred = np.array([mapping[cluster_id] for cluster_id in y_pred])
colors = sns.color_palette()[0:K]
markers = ('s', 'x', 'o', '^', 'v')
fig, ax = plt.subplots()
for k in range(0, K):
ax.plot(X[y_pred==k, idx1], X[y_pred==k, idx2], color=colors[k], \
marker=markers[k], linestyle = 'None', label="Cluster {}".format(k))
ax.set(xlabel = iris.feature_names[idx1])
ax.set(ylabel = iris.feature_names[idx2])
plt.legend(loc="upper left", fontsize=12)
#save_fig("iris-2d-gmm")
plt.show()
# + [markdown] id="lb1yP0QR5bSi" colab_type="text"
# ## Dimensionality reduction of iris data using PCA <a class="anchor" id="PCA-iris"></a>
#
# In this section, we show how to find low dimensional structure
# in an unlabeled version of the Iris dataset by fitting a PCA model.
# We will use sklearn.
# + id="Bp1Q5QjO5XhB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="4544830a-539a-4dc5-b3a7-a354cf47e4ec"
# Visualize raw 3d data
from sklearn.datasets import load_iris
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
#https://jakevdp.github.io/PythonDataScienceHandbook/04.12-three-dimensional-plotting.html
iris = load_iris()
X = iris.data
y = iris.target
fig = plt.figure().gca(projection='3d')
colors = ['g', 'b', 'orange']
for c in range(3):
x0 = X[y==c,0]
x1 = X[y==c,1]
x2 = X[y==c,2]
    fig.scatter(x0, x1, x2, c=colors[c], edgecolors='k', s=50, alpha=0.9, \
                marker='o', label=iris.target_names[c])
fig.set_xlabel('sepal length')
fig.set_ylabel('sepal width')
fig.set_zlabel('petal length')
#plt.legend()
#save_fig("iris-3dscatterplot")
plt.show()
# + id="svUstZ6a5g8X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 274} outputId="3811b14b-dea5-41c0-c2ef-b5ca0a595b80"
# 2d projection of points
from sklearn.decomposition import PCA
X = iris.data[:,0:3]
pca_xy = PCA(n_components=2).fit_transform(X)
fig, ax = plt.subplots()
ax.scatter(pca_xy[:,0], pca_xy[:,1], c=y)
#save_fig("iris-pca")
plt.show()
# + id="YXZ4NhzJ5jkr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="423a163b-ebbe-4dc0-b265-568ed5ca2050"
# plot latent 2d projection of points in ambient 3d feature space
pca = PCA(n_components=2)
mu = np.mean(X, axis=0)
Xc = X - mu # center the data
pca.fit(Xc)
W = pca.components_.T # D*K
Z = np.dot(Xc, W) # N * K latent scores
Xrecon = np.dot(Z, W.T) + mu # N*D
# span the latent space in area covered by data
a = np.min(Z[:,0])
b = np.max(Z[:,0])
c = np.min(Z[:,1])
d = np.max(Z[:,1])
z0 = np.linspace(a, b, 10)
z1 = np.linspace(c, d, 10)
ZZ0, ZZ1 = np.meshgrid(z0, z1)
Zgrid = np.c_[ZZ0.ravel(), ZZ1.ravel()] # 100x2
plane = np.dot(Zgrid, W.T) + mu # N*D
latent_corners = np.array([ [a,c], [a,d], [b,c], [b,d] ]) # 4x2
recon_corners = np.dot(latent_corners, W.T) + mu # 4x3
fig = plt.figure().gca(projection='3d')
scatterplot = fig.scatter(X[:,0], X[:,1], X[:,2], color="red")
#recon = fig.scatter(Xrecon[:,0], Xrecon[:,1], Xrecon[:,2], marker='*', color='green')
lineplot = fig.scatter(plane[:,0], plane[:,1], plane[:,2], color="black", alpha=0.5)
fig.set_xlabel('sepal length')
fig.set_ylabel('sepal width')
fig.set_zlabel('petal length')
#save_fig("iris-pca-3d")
plt.show()
# + id="cb7CY7C85m8N" colab_type="code" colab={}
| notebooks/intro/unsuper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling and Optimizing Linear Programming Problems with PuLP
# ## Setup
# Before using PuLP, open a Terminal in JupyterLab (Anaconda) and run the
# ```bash
# % pip install pulp
# ```
# command to install PuLP.
#
# ## A Simple Example
# Consider the following optimization problem:
# \begin{align*}
# \text{Minimize }
# & -4x + y\\
# \text{subject to }
# & x + y \le 2,\\
# & 0 \le x \le 3, y \ge 0.
# \end{align*}
# With PuLP, you can solve this linear program by entering these formulas almost exactly as written.
#
# ### Step 1. Import the module and declare the variables
# We will model the optimization problem using the two variables $x$ and $y$,
# so let's declare them first.
from pulp import *
x = LpVariable('x', 0, 3)
y = LpVariable('y', 0)
# In this problem the variable $x$ may range over $[0, 3]$ and $y$ over $[0, \infty)$, so we specify these bounds when declaring the variables.
# As seen with $y$, the third argument can be omitted for variables with no upper bound.
#
# ### Step 2. Build the optimization problem
# Next, let's construct the optimization problem in PuLP.
problem = LpProblem('Simple_Example', LpMinimize)
problem += -4 * x + y, 'objective'
problem += x + y <= 2, 'constraint_1'
# Since this is a minimization problem, we pass `LpMinimize` as the second argument of `LpProblem`; for a maximization problem, use `LpMaximize` instead.
# The objective function and constraints are added to the problem object `problem` with the `+=` operator.
# An added expression is automatically treated as a constraint if it is a comparison, and as the objective function if it is a linear expression.
#
# ### Step 3. Optimize
# Let's actually solve the optimization problem we built with PuLP.
# All it takes is calling the `solve` method of the constructed problem object `problem`.
LpStatus[problem.solve()]
# After solving, the status reads `Optimal` if an optimal solution was found and `Not Solved` if it was not.
# In particular, `Infeasible` means no point satisfies all constraints, and `Unbounded` means the objective can be decreased (or increased) without limit.
# The optimal value of each variable is available through its `varValue` property.
x.varValue
# You can also print every variable used in the optimization together with its optimal value.
print('objective value: %.8f' % value(problem.objective))
for var in problem.variables():
print('%s=%.8f' % (var.name, var.varValue))
# ## Other Examples
# Many sample programs that solve optimization problems with PuLP are published in the [GitHub repository](https://github.com/coin-or/pulp/tree/master/examples).
# They are a good reference for more advanced usage and real-world applications of PuLP.
#
# # References
# * <NAME>, <NAME> [PuLP: A python Linear Programming API](https://github.com/coin-or/pulp).
# * <NAME>, <NAME>, <NAME>, <NAME>, <NAME>: [Optimization with PuLP](https://pythonhosted.org/PuLP/).
| pulp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sympy import symbols, solve, log, diff
from scipy.optimize import minimize_scalar, newton, minimize
from scipy.integrate import quad
from scipy.stats import norm
import numpy as np
import pandas as pd
from numpy.linalg import inv
import matplotlib.pyplot as plt
from numpy.random import dirichlet
# %matplotlib inline
plt.style.use('fivethirtyeight')
np.random.seed(42)
# ## The optimal size of a bet
share, odds, probability = symbols('share odds probability')
Value = probability * log(1 + odds * share) + (1 - probability) * log(1 - share)
solve(diff(Value, share), share)
f, p = symbols('f p')
y = p * log(1 + f) + (1 - p) * log(1 - f)
solve(diff(y, f), f)
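# The symbolic solution above reduces to the familiar closed form $f^* = \frac{p\,(\text{odds} + 1) - 1}{\text{odds}}$. As a quick numeric sanity check (the probability and odds below are made-up illustrative values):

```python
def kelly_fraction(p, odds):
    """Optimal fraction of capital to wager on a bet paying `odds`-to-1
    with win probability `p` (the closed-form Kelly solution)."""
    return (p * (odds + 1) - 1) / odds

# A 60% chance of winning an even-money (1:1) bet -> wager 20% of capital
print(kelly_fraction(0.6, 1))  # 0.2
# A fair coin flip at even money has no edge -> wager nothing
print(kelly_fraction(0.5, 1))  # 0.0
```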
# ## Get S&P 500 Data
with pd.HDFStore('../../data/assets.h5') as store:
sp500 = store['sp500/prices'].close
# ### Compute Returns & Standard Deviation
annual_returns = sp500.resample('A').last().pct_change().to_frame('sp500')
return_params = annual_returns.sp500.rolling(25).agg(['mean', 'std']).dropna()
return_ci = (return_params[['mean']]
.assign(lower=return_params['mean'].sub(return_params['std'].mul(2)))
.assign(upper=return_params['mean'].add(return_params['std'].mul(2))))
return_ci.plot(lw=2, figsize=(14, 8));
# ### Kelly Rule for Index Returns
def norm_integral(f, mean, std):
val, er = quad(lambda s: np.log(1 + f * s) * norm.pdf(s, mean, std),
mean - 3 * std,
mean + 3 * std)
return -val
def norm_dev_integral(f, mean, std):
    val, er = quad(lambda s: (s / (1 + f * s)) * norm.pdf(s, mean, std),
                   mean - 3 * std,
                   mean + 3 * std)
    return val
def get_kelly_share(data):
solution = minimize_scalar(norm_integral,
args=(data['mean'], data['std']),
bounds=[0, 2],
method='bounded')
return solution.x
annual_returns['f'] = return_params.apply(get_kelly_share, axis=1)
return_params.plot(subplots=True, lw=2, figsize=(14, 8));
annual_returns.tail()
# ### Performance Evaluation
(annual_returns[['sp500']]
.assign(kelly=annual_returns.sp500.mul(annual_returns.f.shift()))
.dropna()
.loc['1900':]
.add(1)
.cumprod()
.sub(1)
.plot(lw=2));
annual_returns.f.describe()
return_ci.head()
# ### Compute Kelly Fraction
m = .058
s = .216
# Option 1: minimize the expectation integral
sol = minimize_scalar(norm_integral, args=(m, s), bounds=[0., 2.], method='bounded')
print('Optimal Kelly fraction: {:.4f}'.format(sol.x))
# Option 2: take the derivative of the expectation and make it null
x0 = newton(norm_dev_integral, .1, args=(m, s))
print('Optimal Kelly fraction: {:.4f}'.format(x0))
# ## Kelly Rule for Multiple Assets
with pd.HDFStore('../../data/assets.h5') as store:
sp500_stocks = store['sp500/stocks'].index
prices = store['quandl/wiki/prices'].adj_close.unstack('ticker').filter(sp500_stocks)
prices.info()
monthly_returns = prices.loc['1988':'2017'].resample('M').last().pct_change().dropna(how='all').dropna(axis=1)
stocks = monthly_returns.columns
monthly_returns.info()
cov = monthly_returns.cov()
inv_cov = pd.DataFrame(inv(cov), index=stocks, columns=stocks)
kelly_allocation = monthly_returns.mean().dot(inv_cov)
kelly_allocation.describe()
kelly_allocation.sum()
TRADING_DAYS = 12
# +
def pf_vol(weights, cov):
return np.sqrt(weights.T @ cov @ weights)
def pf_ret(weights, mean_ret):
return weights @ mean_ret.values
def pf_performance(weights, mean_ret, cov):
r = pf_ret(weights, mean_ret)
sd = pf_vol(weights, cov)
return r, sd
# +
n_assets = len(stocks) # number of assets to allocate
x0 = np.full(n_assets, 1 / n_assets)
mean_asset_ret = monthly_returns.mean()
asset_cov = monthly_returns.cov()
# -
def simulate_pf(mean_ret, cov):
perf, weights = [], []
for i in range(N_PORTFOLIOS):
if i % 50000 == 0:
print(i)
weights = dirichlet([.08] * n_assets)
weights /= np.sum(weights)
r, sd = pf_performance(weights, mean_ret, cov)
perf.append([r, sd, (r - RF_RATE) / sd])
perf_df = pd.DataFrame(perf, columns=['ret', 'vol', 'sharpe'])
return perf_df, weights
# +
RF_RATE = 0
N_PORTFOLIOS = 100000  # number of random portfolios to simulate (value chosen here)
def neg_sharpe_ratio(weights, mean_ret, cov):
r, sd = pf_performance(weights, mean_ret, cov)
return -(r / sd)
def max_sharpe_ratio(mean_ret, cov):
args = (mean_ret, cov)
constraints = {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
bounds = ((-1, 1),) * n_assets
return minimize(fun=neg_sharpe_ratio,
x0=x0,
args=args,
method='SLSQP',
bounds=bounds,
constraints=constraints)
# -
res = max_sharpe_ratio(mean_asset_ret, asset_cov)
(res.x / kelly_allocation).sort_values().plot.bar(figsize=(15, 5));
# +
def pf_volatility(w, r, c):
return pf_performance(w, r, c)[1]
def efficient_return(mean_ret, cov, target):
args = (mean_ret, cov)
def ret_(weights):
return pf_ret(weights, mean_ret)
constraints = [{'type': 'eq', 'fun': lambda x: ret_(x) - target},
{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}]
bounds = ((0.0, 1.0),) * n_assets
# noinspection PyTypeChecker
return minimize(pf_volatility,
x0=x0,
args=args, method='SLSQP',
bounds=bounds,
constraints=constraints)
def efficient_frontier(mean_ret, cov, ret_range):
efficient_pf = []
for ret in ret_range:
efficient_pf.append(efficient_return(mean_ret, cov, ret))
return efficient_pf
# -
def calculate_efficient_frontier(mean_ret, cov):
perf, wt = simulate_pf(mean_ret, cov)
max_sharpe = max_sharpe_ratio(mean_ret, cov)
max_sharpe_perf = pf_performance(max_sharpe.x, mean_ret, cov)
wmax = max_sharpe.x
print(np.sum(wmax))
min_vol = min_variance(mean_ret, cov)
min_vol_perf = pf_performance(min_vol['x'], mean_ret, cov)
pf = ['Max Sharpe', 'Min Vol']
    alloc = pd.DataFrame(dict(zip(pf, [max_sharpe.x, min_vol.x])), index=stocks)
selected_pf = pd.DataFrame(dict(zip(pf, [max_sharpe_perf, min_vol_perf])),
index=['ret', 'vol'])
print(selected_pf)
print(perf.describe())
perf.plot.scatter(x='vol', y='ret', c='sharpe',
cmap='YlGnBu', marker='o', s=10,
alpha=0.3, figsize=(10, 7), colorbar=True,
title='PF Simulation')
r, sd = selected_pf['Max Sharpe'].values
plt.scatter(sd, r, marker='*', color='r', s=500, label='Max Sharpe Ratio')
r, sd = selected_pf['Min Vol'].values
plt.scatter(sd, r, marker='*', color='g', s=500, label='Min volatility')
plt.xlabel('Annualised Volatility')
plt.ylabel('Annualised Returns')
plt.legend(labelspacing=0.8)
rmin = selected_pf.loc['ret', 'Min Vol']
    rmax = monthly_returns.add(1).prod().pow(1 / len(monthly_returns)).pow(TRADING_DAYS).sub(1).max()
ret_range = np.linspace(rmin, rmax, 50)
# ret_range = np.linspace(rmin, .22, 50)
efficient_portfolios = efficient_frontier(mean_asset_ret, cov, ret_range)
plt.plot([p['fun'] for p in efficient_portfolios], ret_range, linestyle='-.', color='black',
label='efficient frontier')
plt.title('Calculated Portfolio Optimization based on Efficient Frontier')
plt.xlabel('annualised volatility')
plt.ylabel('annualised returns')
plt.legend(labelspacing=0.8)
plt.tight_layout()
plt.savefig('Calculated EF.png')
def min_variance(mean_ret, cov):
args = (mean_ret, cov)
constraints = {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
bounds = ((0, 1),) * n_assets
return minimize(fun=pf_volatility,
x0=x0,
args=args,
method='SLSQP',
bounds=bounds,
constraints=constraints)
res = min_variance(mean_asset_ret, asset_cov)
| Chapter05/05_kelly/kelly_rule.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Modeling 02: Prophet, and Capturing Seasonality
#
# So far, we've worked with very classic, at least in the terms of competitive data science, models. In the first modeling notebook, we made several baseline models as well as a tuned XGBoost model. Although both the linear models and XGBoost were able to capture seasonality, they were not able to catch upward and irregular trends.
#
# In this notebook, I will try and capture more seasonality and trend by using Facebook Prophet, a fourier based model that accounts for seasonality at various time lags and levels.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import sklearn as sk
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor
from fbprophet import Prophet
from fbprophet.diagnostics import cross_validation, performance_metrics
import tensorflow as tf
# +
df = pd.read_csv('../data/clean/full/dengue_features_train.csv')
df_labels = pd.read_csv('../data/clean/full/dengue_labels_train.csv')
df_test = pd.read_csv('../data/raw/dengue_features_test.csv')
sj_features = pd.read_csv('../data/clean/sj/sj_train_features.csv')
sj_labels = pd.read_csv('../data/clean/sj/sj_train_labels.csv')
iq_features = pd.read_csv('../data/clean/iq/iq_train_features.csv')
iq_labels = pd.read_csv('../data/clean/iq/iq_train_labels.csv')
sj_test = pd.read_csv('../data/clean/sj/sj_test_features.csv')
iq_test = pd.read_csv('../data/clean/iq/iq_test_features.csv')
# -
# Let's explore the dataset with Facebook prophet. This is a model commonly fit to seemingly seasonal or cyclical trends. More information can be found [here](https://facebook.github.io/prophet/).
#
# It requires a special dataset, that is, one where the two columns are `[time, y]`
# +
def gen_prophet_df(features: pd.DataFrame, labels: pd.DataFrame) -> pd.DataFrame:
df = pd.DataFrame()
df['ds'] = pd.to_datetime(features['week_start_date'])
df['y'] = labels['total_cases']
return df
df_sj_prophet = gen_prophet_df(sj_features, sj_labels)
df_iq_prophet = gen_prophet_df(iq_features, iq_labels)
df_sj_prophet_test = pd.to_datetime(sj_test['week_start_date']).to_frame().rename(columns={'week_start_date':'ds'})
df_iq_prophet_test = pd.to_datetime(iq_test['week_start_date']).to_frame().rename(columns={'week_start_date':'ds'})
# -
# Now, we will build separate models for each city, as our data exploration implied there isn't much correlation between their case counts. Additionally, all of our models have performed better on the leaderboard when trained on the cities separately, implying there isn't much to learn from possible correlations between them.
#
# First, start with SJ
sj_m = Prophet(
growth = 'linear',
yearly_seasonality = 10,
weekly_seasonality = False,
daily_seasonality = False,
seasonality_mode = 'multiplicative'
)
# Now, let's train each of them and visualize their results and CV scores before making a submission
sj_m.fit(df_sj_prophet)
forecast_sj = sj_m.predict(df_sj_prophet)
sj_m.plot(forecast_sj)
cv_sj = cross_validation(sj_m, horizon = '730 days')
mean_absolute_error(cv_sj['yhat'], cv_sj['y'])
# iq_m is defined analogously to sj_m (same settings as a starting point)
iq_m = Prophet(
    growth = 'linear',
    yearly_seasonality = 10,
    weekly_seasonality = False,
    daily_seasonality = False,
    seasonality_mode = 'multiplicative'
)
iq_m.fit(df_iq_prophet)
forecast_iq = iq_m.predict(df_iq_prophet)
iq_m.plot(forecast_iq)
cv_iq = cross_validation(iq_m, horizon = '730 days')
mean_absolute_error(cv_iq['yhat'], cv_iq['y'])
# ## Test data
# Now, let's predict the test data and make a submission to see where we stand.
# +
sj_p = sj_m.predict(df_sj_prophet_test)['yhat'].values
iq_p = iq_m.predict(df_iq_prophet_test)['yhat'].values
preds = np.rint(np.concatenate((sj_p, iq_p)))
# -
subm = pd.read_csv('../data/raw/submission_format.csv')
subm['total_cases'] = preds.astype(int)
subm.to_csv('../subm5.csv', index=False)
# This gets our leaderboard MAE down to **25.4591**, placing us in the top **11.8%** of contestants.
# ## Random hyperparameter search
#
# Prophet has a lot of parameters, so an exhaustive grid search is not realistic if we want to try out many models -- especially with the high training time. So instead, let's define a list of parameters to try, and then CV models trained on a random subset of them.
# +
changepoint_prior_scale = [0.001, 0.003, 0.05]
yearly_seasonality = [10, 15, 20]
seasonality_mode = ['multiplicative', 'additive']
def prophet_random_search(n: int, df: pd.DataFrame):
"""A method for random hyperparameter tuning with Prophet, where n is the number of random
models to try, and df is the training dataframe"""
    for i in range(n):
m = Prophet(
# By inspection, we know the seasonality trends are not weekly or daily
weekly_seasonality = False,
daily_seasonality = False,
growth = 'linear',
# random params
changepoint_prior_scale = np.random.choice(changepoint_prior_scale),
seasonality_mode= np.random.choice(seasonality_mode),
yearly_seasonality = np.random.choice(yearly_seasonality)
)
m.fit(df)
cv = cross_validation(m, horizon = '730 days')
print(mean_absolute_error(cv['yhat'], cv['y']))
print(m.changepoint_prior_scale, m.yearly_seasonality, m.seasonality_mode)
# Looking for CV score < 26 for any significant improvement
prophet_random_search(15, df_sj_prophet)
# -
# Looking for CV score < 9.75 for any significant improvement
prophet_random_search(15, df_iq_prophet)
# Although we didn't find any improvement for SJ, our CV score for IQ got as low as 7.806. Let's try two separate models, with new params for IQ, and see if it helps our leaderboard score at all.
# +
iq_m = Prophet(
growth = 'linear',
weekly_seasonality = False,
daily_seasonality = False,
yearly_seasonality = 20,
changepoint_prior_scale = 0.003,
seasonality_mode = 'additive',
)
sj_m.fit(df_sj_prophet)
iq_m.fit(df_iq_prophet)
cv_iq = cross_validation(iq_m, horizon = '730 days')
mean_absolute_error(cv_iq['yhat'], cv_iq['y'])
# -
# After making a submission, there was no significant improvement; in fact, MAE increased to 26. In the next notebook, we'll explore using LSTMs and deep learning models.
| notebooks/data_modeling_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''intro_nn'': conda)'
# language: python
# name: python37664bitintronncondabe27099557db4cd1a4d5547b016f4f28
# ---
# + [markdown] colab_type="text" id="Ldr0HZ193GKb"
# Lambda School Data Science
#
# *Unit 4, Sprint 3, Module 1*
#
# ---
#
# -
# # Recurrent Neural Networks (RNNs) and Long Short Term Memory (LSTM) (Prepare)
#
# <img src="https://media.giphy.com/media/l2JJu8U8SoHhQEnoQ/giphy.gif" width=480 height=356>
# <br></br>
# <br></br>
# ## Learning Objectives
# - <a href="#p1">Part 1: </a>Describe Neural Networks used for modeling sequences
# - <a href="#p2">Part 2: </a>Apply a LSTM to a text generation problem using Keras
# + [markdown] colab_type="text" id="_IizNKWLomoA"
# ## Overview
#
# > "Yesterday's just a memory - tomorrow is never what it's supposed to be." -- <NAME>
#
# Wish you could save [Time In A Bottle](https://www.youtube.com/watch?v=AnWWj6xOleY)? With statistics you can do the next best thing - understand how data varies over time (or any sequential order), and use the order/time dimension predictively.
#
# A sequence is just any enumerated collection - order counts, and repetition is allowed. Python lists are a good elemental example - `[1, 2, 2, -1]` is a valid list, and is different from `[1, 2, -1, 2]`. The data structures we tend to use (e.g. NumPy arrays) are often built on this fundamental structure.
#
# A time series is data where you have not just the order but some actual continuous marker for where they lie "in time" - this could be a date, a timestamp, [Unix time](https://en.wikipedia.org/wiki/Unix_time), or something else. All time series are also sequences, and for some techniques you may just consider their order and not "how far apart" the entries are (if you have particularly consistent data collected at regular intervals it may not matter).
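# The distinction can be made concrete with plain Python (the values below are made up for illustration):

```python
# A sequence: order counts and repetition is allowed
seq_a = [1, 2, 2, -1]
seq_b = [1, 2, -1, 2]
print(seq_a == seq_b)  # False - same elements, different order

# A time series additionally attaches a continuous time marker to each entry,
# e.g. (unix_time, value) pairs; sorting by time recovers the sequence
ts = [(1546387200, 2.6), (1546300800, 2.4), (1546473600, 2.5)]
values_in_order = [v for _, v in sorted(ts)]
print(values_in_order)  # [2.4, 2.6, 2.5]
```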
# + [markdown] colab_type="text" id="44QZgrPUe3-Y"
# # Neural Networks for Sequences (Learn)
# + [markdown] colab_type="text" id="44QZgrPUe3-Y"
# ## Overview
#
# There's plenty more to "traditional" time series, but the latest and greatest technique for sequence data is recurrent neural networks. A recurrence relation in math is an equation that uses recursion to define a sequence - a famous example is the Fibonacci numbers:
#
# $F_n = F_{n-1} + F_{n-2}$
#
# For formal math you also need a base case $F_0=1, F_1=1$, and then the rest builds from there. But for neural networks what we're really talking about are loops:
#
# 
#
# The hidden layers have edges (output) going back to their own input - this loop means that for any time `t` the training is at least partly based on the output from time `t-1`. The entire network is being represented on the left, and you can unfold the network explicitly to see how it behaves at any given `t`.
#
# Different units can have this "loop", but a particularly successful one is the long short-term memory unit (LSTM):
#
# 
#
# There's a lot going on here - in a nutshell, the calculus still works out and backpropagation can still be implemented. The advantage (and namesake) of LSTM is that it can generally put more weight on recent (short-term) events while not completely losing older (long-term) information.
#
# After enough iterations, a typical neural network will start calculating prior gradients that are so small they effectively become zero - this is the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem), and is what RNN with LSTM addresses. Pay special attention to the $c_t$ parameters and how they pass through the unit to get an intuition for how this problem is solved.
#
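# A rough numeric illustration of why gradients vanish (not specific to any particular network): backpropagating through $T$ time steps multiplies roughly $T$ Jacobian-like factors together, and if each factor has magnitude below 1 the product shrinks geometrically:

```python
# If each backward step scales the gradient by about 0.9, the contribution
# of early time steps quickly becomes negligible.
factor = 0.9
for T in (10, 50, 100):
    print(T, factor ** T)  # shrinks from roughly 0.35 toward roughly 3e-5
```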
# So why are these cool? One particularly compelling application is actually not time series but language modeling - language is inherently ordered data (letters/words go one after another, and the order *matters*). [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) is a famous and worth reading blog post on this topic.
#
# For our purposes, let's use TensorFlow and Keras to train RNNs with natural language. Resources:
#
# - https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py
# - https://keras.io/layers/recurrent/#lstm
# - http://adventuresinmachinelearning.com/keras-lstm-tutorial/
#
# Note that `tensorflow.contrib` [also has an implementation of RNN/LSTM](https://www.tensorflow.org/tutorials/sequences/recurrent).
# + [markdown] colab_type="text" id="eWrQllf8WEd-"
# ## Follow Along
#
# Sequences come in many shapes and forms from stock prices to text. We'll focus on text, because modeling text as a sequence is a strength of Neural Networks. Let's start with a simple classification task using a TensorFlow tutorial.
# + [markdown] colab_type="text" id="eWrQllf8WEd-"
# ### RNN/LSTM Sentiment Classification with Keras
# + colab={"base_uri": "https://localhost:8080/", "height": 975} colab_type="code" id="Ti23G0gRe3kr" outputId="bba9ae40-a286-49ed-d87b-b2946fb60ddf"
'''
Trains an LSTM model on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
**Notes**
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
from __future__ import print_function
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import LSTM
from tensorflow.keras.datasets import imdb
# keep only the 20000 most common words when loading the data
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
# -
x_train[0];
print('Pad Sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape: ', x_train.shape)
print('x_test shape: ', x_test.shape)
x_train[0]
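# `pad_sequences` with its default settings left-pads short reviews with zeros and keeps only the last `maxlen` tokens of long ones. A pure-Python sketch of that convention (the helper name is made up):

```python
def pad_left(seq, maxlen, value=0):
    """Keep the last maxlen items; left-pad shorter sequences with value."""
    seq = list(seq)[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

print(pad_left([1, 2, 3], 5))           # [0, 0, 1, 2, 3]
print(pad_left([1, 2, 3, 4, 5, 6], 5))  # [2, 3, 4, 5, 6]
```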
# +
model = Sequential()
# maps each of the 20000 vocabulary indices to a dense 128-dimensional vector, learned via backprop
# 128 is an arbitrary power of 2; 256 is also common in the field
model.add(Embedding(max_features, 128))
# 128 LSTM units (the hidden state size); matching the embedding dimension
# is convenient, but the number of LSTM units does not have to match the
# embedding layer
# dropout and recurrent_dropout regularize the LSTM units
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
# alternative configuration eligible for cuDNN GPU acceleration (requires recurrent_dropout=0):
#model.add(LSTM(128, dropout=0.2, activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0, use_bias=True, unroll=False))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
# -
# train the sentiment-analysis model
unicorns = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=5,
validation_data=(x_test,y_test))
model.save("unicorns.h5")
unicorns.params
# +
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(unicorns.history['loss'])
plt.plot(unicorns.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show();
# +
import matplotlib.pyplot as plt
# Plot training & validation loss values
plt.plot(unicorns.history['accuracy'])
plt.plot(unicorns.history['val_accuracy'])
plt.title('Model acc')
plt.ylabel('accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show();
# +
from tensorflow.keras.optimizers import Adam
opt = Adam(learning_rate=0.0005)
model = Sequential()
# maps each of the 20000 vocabulary indices to a dense 128-dimensional vector, learned via backprop
# 128 is an arbitrary power of 2; 256 is also common in the field
model.add(Embedding(max_features, 128))
# 128 LSTM units (the hidden state size); matching the embedding dimension
# is convenient, but the number of LSTM units does not have to match the
# embedding layer
#model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
# recurrent_dropout=0 makes this layer eligible for cuDNN GPU acceleration
model.add(LSTM(128, activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0, dropout=0.2, use_bias=True, unroll=False))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.summary()
# -
# train the sentiment-analysis model
unicorns2 = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=5,
validation_data=(x_test,y_test))
def plot_history(hist):
    # Plot training & validation loss values
    plt.plot(hist.history['loss'])
    plt.plot(hist.history['val_loss'])
    plt.title('Model loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['Train', 'Test'], loc='upper left')
    plt.show()
    # Plot training & validation accuracy values
    plt.plot(hist.history['accuracy'])
    plt.plot(hist.history['val_accuracy'])
    plt.title('Model accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['Train', 'Test'], loc='upper left')
    plt.show()
plot_history(unicorns2)
# ## Challenge
#
# You will be expected to use a Keras LSTM for a classification task on the *Sprint Challenge*.
# + [markdown] colab_type="text" id="7pETWPIe362y"
# # LSTM Text generation with Keras (Learn)
# + [markdown] colab_type="text" id="7pETWPIe362y"
# ## Overview
#
# What else can we do with LSTMs? Since we're analyzing the *sequence*, we can do more than classify - we can *generate* text. I've pulled some news stories using [newspaper](https://github.com/codelucas/newspaper/).
#
# This example is drawn from the Keras [documentation](https://keras.io/examples/lstm_text_generation/).
# +
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import RMSprop
import numpy as np
import random
import sys
import os
# -
data_files = os.listdir('./articles')
# +
# Read in Data
data = []
for file in data_files:
    if file[-3:] == 'txt':
        with open(f'./articles/{file}', 'r', encoding='utf-8') as f:
            data.append(f.read())
# -
len(data)
data[-1]
# +
# Encode Data as Chars
# Gather all text
# Why? 1. See all possible characters 2. For training / splitting later
text = " ".join(data)
# Unique Characters
chars = list(set(text))
# Lookup Tables
char_int = {c:i for i, c in enumerate(chars)}
int_char = {i:c for i, c in enumerate(chars)}
# -
len(chars)
# +
# Create the sequence data
maxlen = 40
step = 5
encoded = [char_int[c] for c in text]
sequences = [] # Each element is 40 chars long
next_char = [] # One element for each sequence
for i in range(0, len(encoded) - maxlen, step):
    sequences.append(encoded[i : i + maxlen])
    next_char.append(encoded[i + maxlen])
print('sequences: ', len(sequences))
# -
sequences[0]
# +
# Create x & y
x = np.zeros((len(sequences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sequences), len(chars)), dtype=bool)

for i, sequence in enumerate(sequences):
    for t, char in enumerate(sequence):
        x[i, t, char] = 1
    y[i, next_char[i]] = 1
# -
x.shape
y.shape
# +
# build the model: a single LSTM
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
# -
def sample(preds):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / 1  # dividing by a temperature of 1 leaves the distribution unchanged
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
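# The division by 1 in `sample` above is really a temperature of 1.0. A sketch (the function name is made up) of how a temperature parameter reshapes the distribution before sampling:

```python
import numpy as np

def sample_with_temperature(preds, temperature=1.0, rng=None):
    """Reweight a probability vector by temperature and sample an index."""
    rng = rng if rng is not None else np.random.default_rng()
    preds = np.asarray(preds).astype('float64')
    preds = np.exp(np.log(preds) / temperature)
    preds /= preds.sum()
    return rng.choice(len(preds), p=preds)

probs = np.array([0.7, 0.2, 0.1])
# Low temperature sharpens toward the argmax; high temperature flattens
# toward uniform, producing more surprising generated characters.
for t in (0.2, 1.0, 2.0):
    p = np.exp(np.log(probs) / t)
    p /= p.sum()
    print(t, np.round(p, 3))
```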
# +
def on_epoch_end(epoch, _):
    # Function invoked at end of each epoch. Prints generated text.
    print()
    print('----- Generating text after Epoch: %d' % epoch)
    start_index = random.randint(0, len(text) - maxlen - 1)
    generated = ''
    sentence = text[start_index: start_index + maxlen]
    generated += sentence
    print('----- Generating with seed: "' + sentence + '"')
    sys.stdout.write(generated)
    for i in range(400):
        x_pred = np.zeros((1, maxlen, len(chars)))
        for t, char in enumerate(sentence):
            x_pred[0, t, char_int[char]] = 1
        preds = model.predict(x_pred, verbose=0)[0]
        next_index = sample(preds)
        next_char = int_char[next_index]
        sentence = sentence[1:] + next_char
        sys.stdout.write(next_char)
        sys.stdout.flush()
    print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
# +
# fit the model
model.fit(x, y,
batch_size=32,
epochs=10,
callbacks=[print_callback])
# -
model.save("text_gen.h5")
# ## Challenge
#
# You will be expected to use a Keras LSTM to generate text on today's assignment.
# # Review
#
# - <a href="#p1">Part 1: </a>Describe Neural Networks used for modeling sequences
# * Sequence Problems:
# - Time Series (like Stock Prices, Weather, etc.)
# - Text Classification
# - Text Generation
# - And many more! :D
# * LSTMs are generally preferred over RNNs for most problems
# * LSTM models typically use a single hidden LSTM layer, although other architectures are possible.
# * Keras has LSTMs/RNN layer types implemented nicely
# - <a href="#p2">Part 2: </a>Apply a LSTM to a text generation problem using Keras
# * Shape of input data is very important
# * Can take a while to train
# * You can use it to write movie scripts. :P
| module1-rnn-and-lstm/LS_DS_431_RNN_and_LSTM_Lecture.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 + Jaspy
# language: python
# name: jaspy
# ---
try:
    import netCDF4 as nc4
    from netCDF4 import Dataset
    import numpy as np
    import os
    import matplotlib.pyplot as plt
except ModuleNotFoundError:
    print('Module import error')
else:
    print('Libraries properly loaded. Ready to start')
# +
# Sentinel 5P file I want to process
#s5p_file = '/neodc/sentinel5p/data/L2_NO2/v1.3/2020/04/26/S5P_NRTI_L2__NO2____20200426T123448_20200426T123948_13139_01_010302_20200426T131747.nc'
s5p_file = '/neodc/sentinel5p/data/L2_NO2/v1.3/2020/01/01/S5P_OFFL_L2__NO2____20200101T110146_20200101T124316_11493_01_010302_20200103T041218.nc'
file = Dataset(s5p_file, mode='r')
# -
print(file)
# +
lons = file.groups['PRODUCT'].variables['longitude'][:][0,:,:]
lats = file.groups['PRODUCT'].variables['latitude'][:][0,:,:]
# note: this reads the precision (uncertainty) field; drop the
# '_precision' suffix to read the NO2 column value itself
no2 = file.groups['PRODUCT'].variables['nitrogendioxide_tropospheric_column_precision'][0,:,:]
print(lons.shape)
print(lats.shape)
print(no2.shape)
no2_units = file.groups['PRODUCT'].variables['nitrogendioxide_tropospheric_column_precision'].units
# +
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from mpl_toolkits.basemap import Basemap
plt.figure(figsize=(48,24))
lon_0 = lons.mean()
lat_0 = lats.mean()
#m = Basemap(width=5000000,height=3500000,
# resolution='l',projection='stere',\
# lat_ts=40,lat_0=lat_0,lon_0=lon_0)
m = Basemap(resolution='l',projection='mill',lon_0=0)
xi, yi = m(lons, lats)
# Plot Data
cs = m.pcolor(xi,yi,np.squeeze(no2),norm=LogNorm(), cmap='jet')
# Add Grid Lines
m.drawparallels(np.arange(-80., 81., 10.), labels=[1,0,0,0], fontsize=10)
m.drawmeridians(np.arange(-180., 181., 10.), labels=[0,0,0,1], fontsize=10)
# Add Coastlines, States, and Country Boundaries
m.drawcoastlines()
m.drawstates()
m.drawcountries()
# Add Colorbar
cbar = m.colorbar(cs, location='bottom', pad="10%")
cbar.set_label(no2_units)
# Add Title
plt.title('NO2 in atmosphere')
plt.show()
# -
| playbooks/sentinel/S5P_NO2_NRTI_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MCMC Sampling
#
# The `CmdStanModel` class method `sample` invokes Stan's adaptive HMC-NUTS
# sampler which uses the Hamiltonian Monte Carlo (HMC) algorithm
# and its adaptive variant the no-U-turn sampler (NUTS) to produce a set of
# draws from the posterior distribution of the model parameters conditioned on the data.
# It returns a `CmdStanMCMC` object
# which provides properties to retrieve information about the sample, as well as methods
# to run CmdStan's summary and diagnostics tools.
#
# In order to evaluate the fit of the model to the data, it is necessary to run
# several Monte Carlo chains and compare the set of draws returned by each.
# By default, the `sample` command runs 4 sampler chains, i.e.,
# CmdStanPy invokes CmdStan 4 times.
# CmdStanPy uses Python's `subprocess` and `multiprocessing` libraries
# to run these chains in separate processes.
# This processing can be done in parallel, up to the number of
# processor cores available.
# ## Prerequisites
#
#
# CmdStanPy displays progress bars during sampling via use of package [tqdm](https://github.com/tqdm/tqdm).
# In order for these to display properly, you must have the
# [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/index.html) package installed,
# and depending on your version of Jupyter or JupyterLab, you must enable it via command:
# !jupyter nbextension enable --py widgetsnbextension
# For more information, see the
# [installation instructions](https://ipywidgets.readthedocs.io/en/latest/user_install.html#),
# also [this tqdm GitHub issue](https://github.com/tqdm/tqdm/issues/394#issuecomment-384743637).
#
#
#
# ## Fitting a model to data
#
# In this example we use the CmdStan example model
# [bernoulli.stan](https://github.com/stan-dev/cmdstanpy/blob/master/test/data/bernoulli.stan)
# and data file
# [bernoulli.data.json](https://github.com/stan-dev/cmdstanpy/blob/master/test/data/bernoulli.data.json).
#
# We instantiate a `CmdStanModel` from the Stan program file:
# +
import os
from cmdstanpy import CmdStanModel, cmdstan_path
bernoulli_dir = os.path.join(cmdstan_path(), 'examples', 'bernoulli')
stan_file = os.path.join(bernoulli_dir, 'bernoulli.stan')
data_file = os.path.join(bernoulli_dir, 'bernoulli.data.json')
# instantiate, compile bernoulli model
model = CmdStanModel(stan_file=stan_file)
# -
# By default, the model is compiled during instantiation. The compiled executable is created in the same directory as the program file. If the directory already contains an executable file with a newer timestamp, the model is not recompiled.
#
# We run the sampler on the data using all default settings: 4 chains, each of which runs 1000 warmup and sampling iterations.
# run CmdStan's sample method, returns object `CmdStanMCMC`
fit = model.sample(data=data_file)
# The `sample` method returns a `CmdStanMCMC` object, which contains:
# - metadata
# - draws
# - HMC tuning parameters `metric`, `step_size`
print('sampler diagnostic variables:\n{}'.format(fit.metadata.method_vars_cols.keys()))
print('stan model variables:\n{}'.format(fit.metadata.stan_vars_cols.keys()))
fit.summary()
# The sampling data from the fit can be accessed either as a `numpy` array or a pandas `DataFrame`:
print(fit.draws().shape)
fit.draws_pd().head()
# Additionally, if `xarray` is installed, this data can be accessed another way:
fit.draws_xr()
# The ``fit`` object records the command, the return code,
# and the paths to the sampler output csv and console files.
# The string representation of this object displays the CmdStan commands and
# the location of the output files.
#
# Output filenames are composed of the model name, a timestamp
# in the form YYYYMMDDhhmm and the chain id, plus the corresponding
# filetype suffix, either '.csv' for the CmdStan output or '.txt' for
# the console messages, e.g. `bernoulli-201912081451-1.csv`. Output files
# written to the temporary directory contain an additional 8-character
# random string, e.g. `bernoulli-201912081451-1-5nm6as7u.csv`.
fit
# The sampler output files are written to a temporary directory which
# is deleted upon session exit unless the ``output_dir`` argument is specified.
# The ``save_csvfiles`` function moves the CmdStan CSV output files
# to a specified directory without having to re-run the sampler.
# The console output files are not saved. These files are treated as ephemeral; if the sample is valid, all relevant information is recorded in the CSV files.
# ### Sampler Progress
#
# Your model may take a long time to fit. The `sample` method provides two arguments:
#
# - visual progress bar: `show_progress=True`
# - stream CmdStan output to the console: `show_console=True`
#
# To illustrate how progress bars work, we will run the bernoulli model. Since the progress bars are only visible while the sampler is running and the bernoulli model takes no time at all to fit, we run this model for 200K iterations, in order to see the progress bars in action.
fit = model.sample(data=data_file, iter_warmup=100000, iter_sampling=100000, show_progress=True)
# The Stan language `print` statement can be used to monitor the Stan program state.
# In order to see this information as the sampler is running, use the `show_console=True` argument.
# This will stream all CmdStan messages to the terminal while the sampler is running.
#
# +
fit = model.sample(data=data_file, chains=2, parallel_chains=1, show_console=True)
# -
# ## Running a data-generating model
#
# In this example we use the CmdStan example model
# [bernoulli_datagen.stan](https://github.com/stan-dev/cmdstanpy/blob/master/docs/notebooks/bernoulli_datagen.stan)
# to generate a simulated dataset given fixed data values.
model_datagen = CmdStanModel(stan_file='bernoulli_datagen.stan')
datagen_data = {'N':300, 'theta':0.3}
fit_sim = model_datagen.sample(data=datagen_data, fixed_param=True)
fit_sim.summary()
# Compute, plot histogram of total successes for `N` Bernoulli trials with chance of success `theta`:
# +
drawset_pd = fit_sim.draws_pd()
drawset_pd.columns
# restrict to columns over new outcomes of N Bernoulli trials
y_sims = drawset_pd.drop(columns=['lp__', 'accept_stat__'])
# plot total number of successes per draw
y_sums = y_sims.sum(axis=1)
y_sums.astype('int32').plot.hist(bins=range(0, datagen_data['N'] + 1))
| docs/examples/MCMC Sampling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Solubility calculation assignment, PharmSci 175/275
#
# Solubility estimation/prediction is a huge problem in drug discovery. Here, we will attempt to build a simple empirical model for solubility prediction as in a recent literature challenge. We will take a set of ~100 solubility values, and develop a simple model which reproduces those values reasonably well, then test this model on a new set of compounds (a test set). To put it another way, we have a test set and a training set, and want to use the known solubilities from the training set to predict solubilities for the test set.
#
# This builds on the solubility challenge of [Llinàs et al.](https://dx.doi.org/10.1021/ci800058v) and the conclusions/subsequent work of [Hopfinger et al.](https://dx.doi.org/10.1021/ci800436c).
#
#
# ## Overview
#
# Solubility calculation is an important problem for drug discovery, partly because it is so important that drugs be soluble. Solubility is an important factor in the design of orally bioavailable drugs, as we have discussed in class. However, no good physical models are available for work in this area yet, so most of the models for solubility estimation are empirical, based on measuring a set of simple molecular properties for molecules and combining these to estimate a solubility in some way, based on calibration to experimental data.
#
# Recently, Llinàs et al., [(J. Chem. Inf. Model 48:1289 (2008))](https://dx.doi.org/10.1021/ci800058v) posed a challenge: Can you predict a set of 32 solubilities on a test set, using a database (training set) of 100 reliable solubility measurements? Follow up work [(Hopfinger et al., J. Chem. Inf. Model 49:1 (2009))](https://dx.doi.org/10.1021/ci800436c) provided the solubility measurements of the test set and assessed performance of a wide variety of solubility estimation techniques in this challenge.
#
# Here, your job is to construct several simple linear models to predict solubilities using the training set of roughly 100 compounds, and then test their performance on the test set, comparing them with one another, with a null model, and with the performance of research groups which participated in the challenge. You should also implement and test a simple variant of the LINGO-based approach of Vidal et al. (J. Chem. Inf. Model 45(2):386-393 (2005)).
#
# A good deal of the technology you will need to use here is provided for you, including example models. Your job in this assignment is simply going to be to adjust the Python code I have provided to build several (five or more) new models for predicting solubilities, plus one based on the approach of Vidal, and compare their performance to select your favorite.
#
# ## Some setup notes
#
# In this directory, you should also find a module you can import which will help with some statistics -- `tools.py`. You will also find two directories containing structures of molecules in the different sets -- `llinas_predict`, containing molecules whose solubilities we want to predict, and `llinas_set`, containing molecules in the training set. Additionally, in the `scripts` directory there is `solubilities.pickle` which contains solubility data (not human readable).
#
# I also provide some fairly extensive example code below which you can use as the basis for your assignment. To briefly summarize the provided code (you can see more detail by reading the comments and code below): it loads the structures of the molecules and their names, computes a reasonably extensive set of descriptors or properties of the different molecules, and loads in the actual solubility data. It then proceeds to build two extremely simple models for predicting solubilities based on a simple linear combination/fit of physical properties. You will be able to use this part of the program as a template for building your own solubility models.
#
# ## For solubility prediction, we'll use a series of *descriptors*
#
# Descriptors are properties of our molecule which might (or might not) be related to the solubility. For example, we might think that solubility will in general tend to go down as molecular weight goes up, and go up as polarity increases (or go down as polarity decreases) and so on.
#
# Here, let's take a sample molecule and calculate a series of descriptors which we might want to use in constructing a simple solubility model.
# +
from openeye.oechem import *
from openeye.oemolprop import *
from openeye.oeiupac import *
from openeye.oezap import *
from openeye.oeomega import *
import numpy as np
import scipy.stats
#Initialize an OpenEye molecule
mol = OEMol()
#let's look at naphthalene
OEParseIUPACName( mol, 'naphthalene' )
#Generate conformation
omega = OEOmega()
omega(mol)
#Here one of the descriptors we'll use is the calculated solvation free energy, from OpenEye's ZAP electrostatics solver
#Get zap ready for electrostatics calculations
zap = OEZap()
zap.SetInnerDielectric( 1.0 )
zap.SetGridSpacing(0.5)
area = OEArea()
#Reduce verbosity
OEThrow.SetLevel(OEErrorLevel_Warning)
#Let's print a bunch of properties
#Molecular weight
print( "Molecular weight: %.2f" % OECalculateMolecularWeight(mol) )
#Number of atoms
print( "Number of atoms: %s" % mol.NumAtoms() )
#Number of heavy atoms
print( "Number of heavy atoms: %s" % OECount(mol, OEIsHeavy() ) )
#Number of ring atoms
print( "Number of ring atoms: %s" % OECount(mol, OEAtomIsInRing() ) )
#Number of halogens
print( "Number of halogens: %s" % OECount( mol, OEIsHalogen() ))
print ("Number of nitrogens: %s" % OECount( mol, OEIsNitrogen() ) )
print( "Number of oxygens: %s" % OECount( mol, OEIsOxygen() ) )
print( "Number of rotatable bonds: %s" % OECount( mol, OEIsRotor() ) )
#Calculated logP - water to octanol partitioning coefficient (which is often something which may correlate somewhat with solubility)
print( "Calculated logP: %.2f" % OEGetXLogP( mol ) )
print( "Number of aromatic rings: %s" % OEGetAromaticRingCount( mol ) )
#Calculate lots of other properties using molprop toolkit as per example in OE MolProp manual
#Handle the setup of 'filter', which computes lots of properties with the goal of filtering compounds. Here we'll not do any filtering
#and will use it solely for property calculation
filt = OEFilter()
ostr = oeosstream()
pwnd = False
filt.SetTable( ostr, pwnd)
headers = ostr.str().decode().split('\t')
ostr.clear()
filt(mol)
fields = ostr.str().decode().split('\t')
tmpdct = dict( zip(headers, fields) ) #Format the data we need into a dictionary for easy extraction
print("Polar surface area: %s" % tmpdct[ '2d PSA' ] )
print("Number of hbond donors: %s" % int(tmpdct['hydrogen-bond donors']) )
print("Number of hbond acceptors: %s" % int(tmpdct['hydrogen-bond acceptors']) )
print ("Number of rings: %s" % int(tmpdct['number of ring systems']) )
#print(tmpdct.keys())
#Quickly estimate hydration free energy, or a value correlated with that -- from ZAP manual
#Do ZAP setup for molecule
OEAssignBondiVdWRadii(mol)
OEMMFFAtomTypes(mol)
OEMMFF94PartialCharges(mol)
zap.SetMolecule( mol )
solv = zap.CalcSolvationEnergy()
aval = area.GetArea( mol )
#Empirically estimate solvation free energy (hydration)
solvation = 0.59*solv + 0.01*aval #Convert electrostatic part to kcal/mol; use empirically determined kcal/sq angstrom value times surface area term
print ("Calculated solvation free energy: %.2f" % solvation)
# -
# ## Linear models for solubility: Understanding your task
#
# Here, your first job is to construct some linear models for solubility and attempt to use them to predict solubilities for a test set of molecules.
# Many different models for solubilities would be possible. Here, however, we focus on linear models -- that is, models having the form:
# $y = mx + b$
#
# where $y$ is the solubility, $m$ and $b$ are constants, and $x$ is some descriptor of the molecule. Or with two variables:
# $y = mx + nz + b$
#
# Here we've added a second descriptor, $z$, and another constant, $n$. Still more generally, we could write:
#
# $y = b + \sum_i m_i x_i$
#
# In this case, we now have a constant, $b$, and a set of other constants, $m_i$, and descriptors, $x_i$; the sum runs over all values of $i$.
#
# What does this all mean? Basically, we are going to assume that we can predict solubilities out of some linear combination of descriptors or molecular properties. For example, (as a null model) I might assume that solubility can be predicted simply based on molecular weight -- perhaps heavier compounds will in general be less (or more) soluble. I might write:
#
# $y = m\times MW + b$
#
# This has the form $y=mx + b$ but I replaced $x$ with $MW$, the molecular weight. To fit this model, I would then need to find the coefficients $m$ and $b$ to give the best agreement with the actual solublity data.
#
# Here, I would first develop parameters $m$ and $b$ to fit my training set -- that is, I would fit $m$ and $b$ on the training set data, the (roughly 100) compounds provided in the first paper. Then, I would apply the same $m$ and $b$ to the test set data to see how well I can predict the 32 "new" compounds.
#
# In this project, you will test the "null model" I just described (which turns out actually to be not too bad, here!), as well as another model I built, which has the form
#
# $y = m\times MW + n\times F + b$
#
# where I've added a new descriptor, $F$, which is the calculated hydration free energy of the compound (calculated with a PB model). So my model predicts that solubility is a constant plus some factor times molecular weight and another factor times the calculated hydration free energy.
#
# Finding the parameters $m$, $n$, and $b$ is very simple via a least-squares fit. This is done for you within Python.
# Here you will need to develop several of your own linear solubility prediction models (as discussed below) and test their performance.
#
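# The fit itself can be sketched in a few lines with `np.linalg.lstsq`. The compound names, descriptor values, and logS values below are invented purely for illustration; the layout follows the `compounds[molname][descriptorname]` convention used later in this notebook:

```python
import numpy as np

# Hypothetical training data: descriptors per compound plus measured logS.
compounds = {
    'molA': {'mw': 180.2, 'hydration': -9.1},
    'molB': {'mw': 250.3, 'hydration': -6.4},
    'molC': {'mw': 310.4, 'hydration': -12.0},
    'molD': {'mw': 122.1, 'hydration': -5.0},
}
logS = {'molA': -2.1, 'molB': -3.5, 'molC': -4.8, 'molD': -1.2}

names = sorted(compounds)
descriptors = ['mw', 'hydration']

# Design matrix: one row per compound, one column per descriptor,
# plus a trailing column of ones for the intercept b.
X = np.array([[compounds[n][d] for d in descriptors] + [1.0] for n in names])
y = np.array([logS[n] for n in names])

coeffs, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ coeffs  # apply the same coefficients to the test set's matrix
print(predicted.shape)  # (4,)
```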
# ## Lingo-based solubility models
#
# In class, when we discussed the LINGO similarity approach, I mentioned in passing that this approach had been used to attempt to estimate solubilities based on functional group/LINGO fragment contributions. This was done in work by Vidal et al. (J. Chem. Inf. Model 45(2):386-393 (2005)).
#
# While the approach of Vidal et al. is outside the scope of this assignment, you should quickly implement a related idea (optional for undergraduates). Particularly, you should test what happens if, for each compound in your test set, you simply predict the value of the most similar (by LINGO) compound in the training set. This will allow you to quickly test how well you can predict solubilities based on pure molecular similarity to compounds in your training set. Obviously your training set is limited in size, but it’s still a worthwhile test.
#
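# A minimal sketch of this nearest-neighbor idea. Here LINGO similarity is approximated as a Tanimoto coefficient over the multiset of length-4 SMILES substrings, and the training SMILES and logS values are made up for illustration:

```python
from collections import Counter

def lingos(smiles, q=4):
    """Multiset of length-q substrings of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_similarity(a, b):
    """Tanimoto coefficient over LINGO multisets."""
    la, lb = lingos(a), lingos(b)
    common = sum((la & lb).values())
    total = sum((la | lb).values())
    return common / total if total else 0.0

def predict_by_nearest(smiles, training):
    """Predict the logS of the most LINGO-similar training compound."""
    best = max(training, key=lambda name: lingo_similarity(smiles, training[name][0]))
    return training[best][1]

# Hypothetical training set: name -> (SMILES, measured logS)
training = {
    'benzene': ('c1ccccc1', -1.6),
    'phenol': ('Oc1ccccc1', -0.04),
    'naphthalene': ('c1ccc2ccccc2c1', -3.6),
}
print(predict_by_nearest('Cc1ccccc1', training))  # toluene -> -1.6 (nearest: benzene)
```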
# ## Your assignment: Build and test at least five new solubility models plus (for graduate students) the Lingo-based approach
#
# This section deals with what you are trying to do. A separate section, below, deals with the “how to” aspect. Your goal in this project is to build and test five new solubility models plus the approach based on LINGO similarity.
#
# This section focuses primarily on building linear solubility models; I’ll assume the LINGO similarity idea is simple enough you can implement it yourself. (Though if you like, for extra credit, you can combine it with a linear solubility model to see if it can do better than either approach alone.)
#
# **Building solubility models**: Building a solubility model, here, amounts to selecting a set of descriptors (possible choices are listed below), getting their values, and then doing a least squares fit on the training set (the knowns) to find the parameters.
#
# **Testing solubility models**: Testing a solubility model, here, means taking the parameters that were found for a specific solubility model and applying that model to the test set, predicting solubility values and seeing how well the predicted values compare to experiment.
#
# **Descriptors**: Here, a variety of molecular descriptors are precalculated for you. These are stored below within a dictionary, `compounds`, such that `compounds[molname][descriptorname]` gives you the value of the descriptor named `descriptorname` for compound name `molname`. For example, `compounds['naloxone']['mw']` gives the molecular weight of naloxone. Here are the descriptors available to you below, by their abbreviation (i.e. "mw" for molecular weight) with a small amount of information about each:
# - `mw`: Molecular weight
# - `numatoms`: Number of atoms including hydrogens
# - `heavyatoms`: Number of heavy atoms
# - `ringatoms`: Number of atoms in rings
# - `halogens`: Number of halogens
# - `nitrogens`: Number of nitrogens
# - `oxygens`: Number of oxygens
# - `rotatable`: Number of rotatable bonds
# - `XlogP`: Calculated logP (water to octanol partitioning coefficient)
# - `aromaticrings`: Number of aromatic rings
# - `PSA`: Polar surface area of the compound
# - `SA`: Surface area of the compound
# - `hbond-donors`: Number of hydrogen bond donors
# - `hbond-acceptors`: Number of hydrogen bond acceptors
# - `rings`: Number of rings
# - `hydration`: Estimated hydration free energy (essentially a measure of the interactions with solvent)
#
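# As a quick sketch of this access pattern (using a tiny hand-built stand-in for the real `compounds` dictionary, which is only constructed further below; the descriptor values here are illustrative only):

```python
import numpy as np

# Tiny stand-in for the 'compounds' dictionary built later in the notebook;
# the descriptor values below are for illustration only.
compounds_demo = {
    'naloxone': {'mw': 327.37, 'rings': 5, 'hbond-donors': 2},
    'aspirin':  {'mw': 180.16, 'rings': 1, 'hbond-donors': 1},
}

# compounds[molname][descriptorname] gives one descriptor value
print(compounds_demo['naloxone']['mw'])

# Collecting one descriptor across several molecules -- the pattern
# used throughout the modeling code below
names = ['naloxone', 'aspirin']
mws = np.array([compounds_demo[name]['mw'] for name in names])
print(mws)
```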
#
# As you might guess, some of these probably ought to have more to do with solubility than others. The number of atoms in rings, for example, is perhaps not that closely related to solubility, nor is the number of rings. There may, however, be a general trend that larger compounds are somewhat less soluble -- not for chemical reasons, but for reasons of pharmaceutical interest (many drugs tend to be somewhat large and somewhat less soluble) -- so some of the descriptors correlated with molecular weight (such as the number of atoms, number of heavy atoms, etc.) may be better predictors of solubility than you might guess. On the other hand, hydration free energy is closely related to solubility (it's the solvation part of the solubility process), and some of the other descriptors may be informative as well.
#
# In any case, one of the goals here is to build a variety of different models to start seeing (a) how you typically can get better results in the training set as you keep adding more descriptors; (b) which descriptors tend to work better; and (c) how well your best model(s) can do on the test set. You may also gain some insight into (d), how to avoid overfitting.
#
# So, overall, you should select some specific descriptors you think are interesting, and build models involving those. Be sure to also test the approach based on LINGO similarity if you are a grad student (you will have to implement it based on the LINGO examples already seen earlier in the course).
#
# ### Solubility versus log S
#
# Solubilities potentially cover a huge range. In fact, this dataset tends to have a relatively large number of compounds which are not very soluble, and a small number which are extremely soluble. What this means is that if we aren’t careful, the few extremely soluble compounds will end up playing a huge role in the development of our models. Thus, here, it actually makes more sense to work with the logarithm of the solubility, which we’ll call logS. So, in our project, our real goal is going to be to calculate the logS, not the solubility itself. My code has been written to work with logS, so henceforth when I talk about solubility I’m really going to be talking about logS.
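# As a sketch, the transformation itself is a single NumPy call (the solubility values here are invented; note that the loading code below uses the natural log, via the `log` pulled in by `%pylab`):

```python
import numpy as np

# Invented solubilities spanning several orders of magnitude
solubilities = np.array([2.0e-5, 3.1e-3, 0.04, 1.2, 85.0])

# Taking the log compresses the range so the few very soluble compounds
# no longer dominate a least-squares fit
logS = np.log(solubilities)
print(logS)
```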
#
# ### How to achieve your goal: Some specific hints
#
# To get going on the problem, view the code below and find the sections dealing with building the first and second simple models. I provide the two initial models noted above -- one based on molecular weight as a descriptor, and one based on molecular weight plus hydration free energy. For your starting point, read through the code for ["Build a first simple model"](#Build-a-first-simple-model), which uses molecular weight alone. You will basically need to copy this code and modify it to handle your descriptors.
#
# Take a quick look at the code for ["Build a second simple model"](#Build-another-simple-model). There, the first step, before we can build a model, is to get values of our descriptors for the molecules of interest. We've already done that for the molecular weight in ["Build a first simple model"](#Build-a-first-simple-model) (refer there if you like), so this code begins by getting the hydration free energies for the knowns and the molecules we want to predict. The code is commented, but basically what you need to know is that if you want to switch to another metric, say, number of rings, you'd take the code like
#
# ```python
# known_hydr = [ compounds[mol]['hydration'] for mol in knownnames ]
# known_hydr = np.array(known_hydr)
# ```
# and switch it to
# ```python
# known_rings = [ compounds[mol]['rings'] for mol in knownnames ]
# known_rings = np.array(known_rings)
# ```
#
# This gets descriptor values for the number of rings for the knowns (training set molecules). You’d then need to do the same for changing `p_hydr` into `p_rings` (number of rings for the "prediction" or test set molecules).
#
# Then, in the next section, the least squares fit is done to actually get the parameters. The formatting here is a little tricky, but the main thing you need to know is that this code
#
# ```python
# A = np.vstack( [known_mw, known_hydr, np.ones(len(known_mw) ) ] ).T
# ```
# provides your descriptors in a list, followed by `np.ones...`. So if you wanted to switch this to use rings, molecular weight, and hydration, you'd do something like:
# ```python
# A = np.vstack( [known_rings, known_mw, known_hydr, np.ones(len(known_mw) ) ] ).T
# ```
#
# The actual least-squares fit is done by this:
# ```python
# m, n, b = np.linalg.lstsq( A, known_solubilities)[0]
# ```
#
# For the case where you'd fitted rings, molecular weight, and hydration, you would calculate the resulting fitted values using:
# ```python
# m, n, o, b = np.linalg.lstsq( A, known_solubilities)[0]
# fittedvals = m*known_rings + n*known_mw + o*known_hydr + b
# ```
#
# You'd make similar changes to the computation of `predictvals` to parallel those made calculating `fittedvals`. You can leave all of the statistics code below that unchanged, and just modify the print statements to indicate what model it is you are testing.
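# Putting those pieces together, here is a self-contained sketch of the whole fit-then-predict cycle on synthetic data (the descriptor values and the "true" relationship are invented; in the assignment you will use the arrays built from `compounds` instead):

```python
import numpy as np

rng = np.random.RandomState(0)

# Invented descriptor values for 30 'training' molecules
known_mw = rng.uniform(100, 500, 30)
known_rings = rng.randint(0, 5, 30).astype(float)
# Invented 'true' relationship plus noise, just so there is something to fit
known_logS = -0.01 * known_mw - 0.3 * known_rings + 1.0 + rng.normal(0, 0.2, 30)

# Stack descriptors column-wise, ending with a column of ones for the constant
A = np.vstack([known_mw, known_rings, np.ones(len(known_mw))]).T

# Least-squares fit: one coefficient per descriptor, plus the intercept
m, n, b = np.linalg.lstsq(A, known_logS, rcond=None)[0]

# Fitted values on the training set follow the same pattern as the model
fittedvals = m * known_mw + n * known_rings + b
print("coefficients:", m, n, b)
```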
#
# **Be sure to read the discussion below before getting too carried away on the problem**, as it provides some more information on assessing what is and what isn’t a good model.
#
#
# ### Performance metrics for your models
#
# As noted in class, one should always have metrics for judging the performance of a model. Here, my code (in `tools.py`, imported below) provides several. The Kendall tau value is a measure of the ability to rank-order pairs of compounds; it runs from -1 (every pair ranked in the opposite order) to 1 (every pair ranked perfectly), with a value of 0 corresponding to no rank correlation at all (pairs ranked correctly no more often than by chance). The RMS error measures a type of average error across the entire set of compounds relative to experiment; units here are logS, and lower values mean lower error on average. The $R^2$ (here called `R2` or `Rsquared`) value is the square of the correlation coefficient, and like the Kendall tau has to do with predictive power (in this case, how well the calculated values correlate with the experimental ones), though it has some limitations (such as sensitivity to extremes of the data). The underlying correlation coefficient runs from -1 (perfect anticorrelation) to 1 (perfect correlation), so $R^2$ itself runs from 0 to 1, with 1 meaning a perfect linear relationship and 0 meaning no linear correlation. Also, for the purposes of comparison with the Hopfinger paper, I have provided code to calculate the percentage of predictions within 0.5 log units, which will allow you to compare with the different methods listed there in terms of both RMS error and percentage correct.
#
# In addition to these metrics, the code also automatically compares to the null hypothesis that there is no correlation between the calculated and measured logS values, and provides the probability (based on the Kendall tau test) that you could get this Kendall tau accidentally when in fact there was no correlation. When this probability is extremely small, it means that your model almost certainly has at least some predictive power.
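# As a self-contained sketch of these metrics on toy numbers (`tools.py` provides `rmserr`, `correl`, and `percent_within_half`; equivalent formulas are written out directly here, so only the Kendall tau comes from `scipy`):

```python
import numpy as np
import scipy.stats

# Toy 'experimental' and 'predicted' logS values (invented for illustration)
experiment = np.array([-4.2, -3.1, -2.5, -1.0, 0.3])
predicted = np.array([-4.0, -3.3, -2.0, -1.2, 0.1])

# Kendall tau, plus the p-value for the no-correlation null hypothesis
ktau, pvalue = scipy.stats.kendalltau(experiment, predicted)

# RMS error (in logS units) and squared correlation coefficient
rms = np.sqrt(np.mean((experiment - predicted) ** 2))
R2 = np.corrcoef(experiment, predicted)[0, 1] ** 2

# Percentage of predictions within 0.5 log units of experiment
within_half = 100.0 * np.mean(np.abs(experiment - predicted) <= 0.5)

print(ktau, pvalue, rms, R2, within_half)
```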
#
# In general, what you should see is that as you make your models better, the Kendall tau and $R^2$ values should go up towards 1, and the RMS error should go down. You should also see the probability of getting the Kendall tau value by chance go down towards zero.
#
# ### Reminder concerning good versus bad models
#
# Remember that, as discussed in class, adding parameters to a model should always make it fit the training data better. That is to say, if you compare two models, one using one descriptor and another using two descriptors, in general the model with two descriptors should have a higher Kendall tau on the training set and a lower RMS error than the model with one descriptor. This doesn't necessarily mean the model with two descriptors is better -- it just means it has more parameters.
#
# So, as we noted in class, a good model is, in general, the simplest possible model that fits the data well enough. And a model with fewer parameters is generally preferable over one with more. Also, a good model should perform relatively similarly on the training set (the known compounds) versus the test set (those we are predicting). So, as you construct your models, you may want to keep this in mind. It also might be worth deliberately trying to construct a model which is overfitted, perhaps by including a whole lot of descriptors, until you reach the point where your performance is significantly worse in the test set than the training set.
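# A deliberately overfitted model is easy to manufacture by adding pure-noise "descriptors"; here is a sketch (everything in it is synthetic, so any apparent gain on the training set from the noise columns is spurious by construction):

```python
import numpy as np

rng = np.random.RandomState(1)
n_train, n_test = 25, 25

# One genuinely informative 'descriptor' plus noise in the observations
x_train = rng.uniform(0, 1, n_train)
x_test = rng.uniform(0, 1, n_test)
y_train = -2.0 * x_train + rng.normal(0, 0.3, n_train)
y_test = -2.0 * x_test + rng.normal(0, 0.3, n_test)

def r2(y, yhat):
    return np.corrcoef(y, yhat)[0, 1] ** 2

def fit_r2(n_noise):
    """Fit using the real descriptor plus n_noise pure-noise descriptors;
    return (training R^2, test R^2)."""
    noise_tr = rng.normal(size=(n_train, n_noise))
    noise_te = rng.normal(size=(n_test, n_noise))
    A_tr = np.column_stack([x_train, noise_tr, np.ones(n_train)])
    A_te = np.column_stack([x_test, noise_te, np.ones(n_test)])
    coef = np.linalg.lstsq(A_tr, y_train, rcond=None)[0]
    return r2(y_train, A_tr @ coef), r2(y_test, A_te @ coef)

train_r2_small, test_r2_small = fit_r2(0)
train_r2_big, test_r2_big = fit_r2(20)
print("1 descriptor:   train R2 %.2f, test R2 %.2f" % (train_r2_small, test_r2_small))
print("21 descriptors: train R2 %.2f, test R2 %.2f" % (train_r2_big, test_r2_big))
```

With 21 descriptors fitted to 25 points, the training $R^2$ climbs while the test $R^2$ does not keep up, which is exactly the training/test gap described above.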
#
# ### Statistical significance tests
#
# In general, we should also be calculating uncertainties for our different metrics, and applying statistical significance tests to test whether each new model is significantly different than the old model. For example, the t-test could be used to attempt to reject the null hypothesis that a new model is no better on average than the old model. Also, having error bars (calculated via bootstrapping, for example) on the RMS error, $R^2$, etc., could also help us know when two models are not significantly different. However, because this assignment must be done fairly quickly, these tests are not included as part of it.
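# For completeness, here is a sketch of bootstrapped error bars on the RMS error (toy data again: resample the (experiment, prediction) pairs with replacement and look at the spread of the recomputed statistic):

```python
import numpy as np

rng = np.random.RandomState(2)

# Toy experimental values with predictions that err by ~0.8 log units
experiment = rng.uniform(-6, 0, 40)
predicted = experiment + rng.normal(0, 0.8, 40)

def rmserr(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# Bootstrap: resample the 40 pairs with replacement, recompute RMS each time
boot_rms = []
for _ in range(1000):
    idx = rng.randint(0, len(experiment), len(experiment))
    boot_rms.append(rmserr(experiment[idx], predicted[idx]))
boot_rms = np.array(boot_rms)

rms = rmserr(experiment, predicted)
print("RMS = %.2f +/- %.2f (bootstrap std. dev.)" % (rms, boot_rms.std()))
```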
#
# ### What to do and what to turn in
#
# You need to build and test at least five different models. You should try at least one model that uses four or more descriptors, hopefully getting to the point where you see significantly worse performance on the test set than on the training set. Keep track of every set of descriptors you try.
#
# When you complete the assignment, turn in a brief report (entering it below following the code is fine) containing your discussion and any relevant statistics, etc. This should include:
# - Your Python code
# - The sets of descriptors you tried
# - The statistics describing performance of the model you believe is best, and a brief description of why you chose that model as best
# - A brief discussion comparing your best model with performance of the contestants in Hopfinger et al., as per the logS section of table 2 on the 28 compound test. Specifically, you should be able to compare your Rsquared and percentage within 0.5 log units with the values given in that table. Is your simple model beating many of the contestants? Why do you think that is? How much worse is it than the best models?
# - (If you did the LINGO section -- mandatory for graduate students) Comment on how well the LINGO similarity approach worked relative to other approaches you tried, and why you think it succeeded or failed.
#
#
# # Now here's the material you need to get going
#
# Here's the Python code I'm providing for you which will form the starting point for your assignment.
#
# ## Get some things set up
# +
#============================================================================
#IMPORTS OF PACKAGES NEEDED
#============================================================================
import tools
import pickle
from openeye.oechem import *
from openeye.oemolprop import *
from openeye.oezap import *
import glob
import numpy as np
import scipy.stats
# %pylab inline
#============================================================================
#LOAD OUR MOLECULES FOR WHICH WE ARE PREDICTING SOLUBILITIES
#============================================================================
#Load our molecules, storing lists of the names of the knowns and the ones to predict, and storing the actual molecules to a dictionary.
molecules = glob.glob('llinas_set/*.sdf')
molecules = molecules + glob.glob('llinas_predict/*.sdf')
compounds = {}
knownnames = [] #This will be a list of the molecules in our training set -- molecules with 'known' solubilities
predictnames = [] #This will be a list of molecules in the test set -- molecules with solubilities we are trying to 'predict'
#Loop over molecules and load files, storing them to a 'compounds' dictionary
for filename in molecules:
name = filename.split('/')[1].replace('.sdf','')
compounds[name] = {}
istream = oemolistream(filename)
mol = OEMol()
OEReadMolecule( istream, mol )
compounds[name]['mol'] = mol
istream.close()
if 'predict' in filename:
predictnames.append(name)
else:
knownnames.append(name)
#Make a list of all the molecule names
molnames = knownnames + predictnames
#============================================================================
#MISCELLANEOUS PREP
#============================================================================
#Get zap ready for electrostatics calculations
zap = OEZap()
zap.SetInnerDielectric( 1.0 )
zap.SetGridSpacing(0.5)
area = OEArea()
#Reduce verbosity
OEThrow.SetLevel(OEErrorLevel_Warning)
# -
# ## Compute some descriptors and store
# +
#============================================================================
#COMPUTE DESCRIPTORS FOR OUR MOLECULES -- VARIOUS PROPERTIES OF THE MOLECULES WHICH MIGHT BE USEFUL IN SOLUBILITY ESTIMATION
#============================================================================
#Compute a bunch of descriptors for our molecules. Descriptors will be stored in the compounds dictionary, by compound name.
#For example, compounds['terfenadine']['mw'] will give the 'mw' (molecular weight) of terfenadine.
#A full description of the descriptors calculated will be put in the homework writeup.
#Loop over molecules
for molname in molnames:
print("Calculating descriptors for %s (%s/%s)..." % (molname, molnames.index(molname)+1, len(molnames) )) #Print progress
#Load the OEMol representation of our molecule from where it's stored
mol = compounds[molname]['mol']
#Compute molecular weight and store
compounds[ molname ]['mw'] = OECalculateMolecularWeight( mol )
#Number of atoms -- store
compounds[molname]['numatoms'] = mol.NumAtoms()
#Number of heavy atoms
compounds[molname]['heavyatoms'] = OECount(mol, OEIsHeavy() )
#Number of ring atoms
compounds[molname]['ringatoms'] = OECount(mol, OEAtomIsInRing() )
#Number of halogens
compounds[molname]['halogens'] = OECount( mol, OEIsHalogen() )
#Number of nitrogens
compounds[molname]['nitrogens'] = OECount( mol, OEIsNitrogen() )
#Number of oxygens
compounds[molname]['oxygens'] = OECount( mol, OEIsOxygen() )
#Number of rotatable bonds
compounds[molname]['rotatable'] = OECount( mol, OEIsRotor() )
#Calculated logP
compounds[molname]['XlogP'] = OEGetXLogP( mol )
#Number of aromatic rings
compounds[molname]['aromaticrings'] = OEGetAromaticRingCount( mol )
#Calculate lots of other properties using molprop toolkit as per example in OE MolProp manual
#Handle the setup of 'filter', which computes lots of properties with the goal of filtering compounds. Here we'll not do any filtering
#and will use it solely for property calculation
filt = OEFilter()
ostr = oeosstream()
pwnd = False
filt.SetTable( ostr, pwnd)
headers = ostr.str().decode('UTF-8').split('\t')
ostr.clear()
filt(mol)
fields = ostr.str().decode('UTF-8').split('\t')
tmpdct = dict( zip(headers, fields) ) #Format the data we need into a dictionary for easy extraction
#Extract polar surface area, store
compounds[molname]['PSA'] = tmpdct[ '2d PSA' ]
#Number of hbond donors
compounds[molname]['hbond-donors'] = int(tmpdct['hydrogen-bond donors'])
#Number of hbond acceptors
compounds[molname]['hbond-acceptors'] = int(tmpdct['hydrogen-bond acceptors'])
#Number of rings
compounds[molname]['rings'] = int(tmpdct['number of ring systems'])
#Quickly estimate hydration free energy, or a value correlated with that -- from ZAP manual
#Do ZAP setup for molecule
OEAssignBondiVdWRadii(mol)
OEMMFFAtomTypes(mol)
OEMMFF94PartialCharges(mol)
zap.SetMolecule( mol )
solv = zap.CalcSolvationEnergy()
aval = area.GetArea( mol )
#Empirically estimate solvation free energy (hydration)
solvation = 0.59*solv + 0.01*aval #Convert electrostatic part to kcal/mol; use empirically determined kcal/sq angstrom value times surface area term
compounds[molname]['hydration'] = solvation
#Also store surface area
compounds[molname]['SA'] = aval
# -
# ## Load in the reference data from Llinas et al./Hopfinger et al.
# +
#============================================================================
# LOAD AND PREP THE ACTUAL SOLUBILITY DATA WE'LL BE USING
#============================================================================
#Load solubility data
import pickle
file = open('scripts/solubilities.pickle', 'rb')
solubilities = pickle.load(file)
file.close()
new_solubilities = {}
#Adjust some naming to match that from file names
for name in solubilities.keys():
newname = name.replace(',','').replace(' ','')
new_solubilities[newname] = solubilities[name]
solubilities = new_solubilities
#Build arrays of solubilities -- actually, work with logarithms of solubilities since they cover such a huge range
#Build a list of the solubilities for the molecules in the training set (knowns)
known_solubilities = [ solubilities[mol] for mol in knownnames]
#Convert to an array and take the log
known_solubilities = log(np.array( known_solubilities)) #Note conversion to log
#Build a list of the solubilities for molecules in the test set (unknowns)
predict_solubilities = [ solubilities[mol] for mol in predictnames]
#Convert to an array and take the log
predict_solubilities = log(np.array( predict_solubilities )) #Note conversion to log
# -
# ## Build a first simple model
# +
#============================================================================
# BUILD SOME SAMPLE MODELS TO PREDICT SOLUBILITY
# You will want to read this code and make sure you get it, as your task takes off from here
#============================================================================
#SIMPLE MODEL #1: Predict solubility based on molecular weight alone
#============================================================================
#Build a really really simple model -- predict solubility based on molecular weight
#To do this, start by obtaining molecular weights -- for both the knowns (training set) and unknowns (test set)
#Make a list of molecular weight for the knowns, convert to array
known_mw = [ compounds[mol]['mw'] for mol in knownnames ]
known_mw = np.array(known_mw)
#Make a list of molecular weights to predict (test set), convert to array
p_mw = [compounds[mol]['mw'] for mol in predictnames ]
p_mw = np.array(p_mw)
#Our model will have the form (using y for logS, the log of the solubility), y = m*(mw) + b, which we rewrite (to feed into numpy) as y = A * p where A is an array consisting of [ mw, 1] and p is [m, b].
A = np.vstack( [known_mw, np.ones( len(known_mw) )] ).T #Write the array -- first our x value, then a 1 for the constant term
#Solve for coefficients using least squares fit -- we just put the array A and the thing we want to fit (known_solubilities) into the least squares algorithm and get back the coefficients m and b
m, b = np.linalg.lstsq( A, known_solubilities)[0]
print("Fit coefficients: %.2f, %.2f" % (m, b))
#Compute the calculated y values, y = m*x + b, for the test set
fittedvals = m*known_mw + b
#Compute some statistics for our model -- Kendall tau, RMS error, correlation coefficient
ktau, pvalue = scipy.stats.kendalltau( known_solubilities, fittedvals)
rms = tools.rmserr( known_solubilities, fittedvals)
R2 = tools.correl( known_solubilities, fittedvals)**2
print("For initial (molecular weight) model training, Kendall tau is %.2f, RMS error is %.2f, and Rsquared is %.2f. Probability of getting this Kendall tau value when in fact there is no correlation (null hypothesis): %.2g" % (ktau, rms, R2, pvalue))
#Now test its predictive power by applying it to the test set
predictvals = m*p_mw + b
ktau, pvalue = scipy.stats.kendalltau( predict_solubilities, predictvals)
rms = tools.rmserr( predict_solubilities, predictvals)
R2 = tools.correl( predict_solubilities, predictvals)**2
halflog = tools.percent_within_half( predict_solubilities, predictvals ) #Figure out percentage within 0.5 log units
print("For initial (molecular weight) model test, Kendall tau is %.2f, RMS error is %.2f, and Rsquared is %.2f. Probability of getting this Kendall tau value when in fact there is no correlation (null hypothesis): %.2g. Percentage within 0.5 log units: %.2f" % (ktau, rms, R2, pvalue, halflog))
#Now, for fun, take all of the data (training and test set) and do a plot of the actual values versus molecular weight (for test and training set separately) and then an overlay of the predicted fit
plot( known_mw, known_solubilities, 'bo' ) #Plot knowns with blue circles
plot( p_mw, predict_solubilities, 'rs' ) #Plot test set with red squares
#Do a plot of the predicted fit
#First, figure out molecular weight range
minmw = min( known_mw.min(), p_mw.min() )
maxmw = max( known_mw.max(), p_mw.max() )
#Compute solubility estimates corresponding to the minimum and maximum
minsol = m*minmw+b
maxsol = m*maxmw+b
#Plot a line
plot( [ minmw, maxmw], [minsol, maxsol], 'k-' ) #Plot as a black line overlaid
xlabel('Molecular weight')
ylabel('logS')
# Show figure
show()
#Save figure
savefig('mw_model.pdf')
#Clear
figure()
# -
# ## Build another simple model
# +
#============================================================================
#SIMPLE MODEL #2: Predict based on hydration free energy (ought to have something to do with solubility) plus molecular weight
#============================================================================
#Build another model -- this time using hydration free energy plus molecular weight (should do better on training set, not clear if it will on test set)
print("\nHydration plus mw model:")
known_hydr = [ compounds[mol]['hydration'] for mol in knownnames] #Build a list of hydration free energies for the knowns, with names listed in knownnames (that is, hydration free energies for the training set)
known_hydr = np.array(known_hydr) #Convert this to a numpy array
p_hydr = [ compounds[mol]['hydration'] for mol in predictnames] #Build list of hydration free energies for the test set
p_hydr = np.array(p_hydr) #Convert to numpy array
#Prep for least squares fit and perform it
A = np.vstack( [known_mw, known_hydr, np.ones(len(known_mw) ) ] ).T #Write array for fit -- see more detailed discussion above in the molecular weight section
#Solve for coefficients
m, n, b = np.linalg.lstsq( A, known_solubilities)[0]
print("Fit coefficients: %.2f (mw), %.2f (hyd), %.2f (constant)" % (m, n, b))
fittedvals = m*known_mw + n*known_hydr + b #Calculate the values we 'predict' based on our model for the training set
#Computed test set results too
predictvals = m*p_mw + n*p_hydr + b
#Do stats -- training set
#Compute kendall tau and pvalue
ktau, pvalue = scipy.stats.kendalltau( known_solubilities, fittedvals)
#RMS error
rms = tools.rmserr( known_solubilities, fittedvals)
#Correlation coefficient
R2 = tools.correl( known_solubilities, fittedvals)**2
print("For mw+hydration model training, Kendall tau is %.2f, RMS error is %.2f, and Rsquared is %.2f. Probability of getting this Kendall tau value when in fact there is no correlation (null hypothesis): %.2g" % (ktau, rms, R2, pvalue))
#Do stats -- test set, so the print statement below reports test-set performance
ktau, pvalue = scipy.stats.kendalltau( predict_solubilities, predictvals)
rms = tools.rmserr( predict_solubilities, predictvals)
R2 = tools.correl( predict_solubilities, predictvals)**2
halflog = tools.percent_within_half( predict_solubilities, predictvals ) #Figure out percentage within 0.5 log units
print("For mw+hydration model test, Kendall tau is %.2f, RMS error is %.2f, and Rsquared is %.2f. Probability of getting this Kendall tau value when in fact there is no correlation (null hypothesis): %.2g. Percentage within 0.5 log units: %.2f" % (ktau, rms, R2, pvalue, halflog))
# -
# # Do your assignment below
# +
#============================================================================
#ADD YOUR MODELS HERE, FOLLOWING THE PATTERNS OF THE TWO SIMPLE MODELS ABOVE
#============================================================================
| uci-pharmsci/assignments/solubility/physprops_solubility.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: '''Python Interactive'''
# name: f3f1c6d5-6ceb-4fbe-a290-0df896ef3f8e
# ---
# This cell enables use of the module code referenced in this repo
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
# Imports for the exercise
import pandas as pd
import logging
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from arcus.ml.evaluation import classification as clev
# ## Initial model evaluation of a basic Logistic Regression classifier
#
# 1. Load a dataset
# 1. Fit a simple Logistic Regression classifier
# +
df = pd.read_csv('../tests/resources/datasets/student-admission.csv')
y = df.Admission.values
X = np.asarray(df.drop(['Admission'],axis=1))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# train the logistic regression classifier
logreg = linear_model.LogisticRegression(C=1e5,solver='liblinear')
logreg.fit(X_train, y_train)
# -
# ### Visualization of model evaluation, without ROC curve
_ = clev.evaluate_model(logreg, X_test, y_test)
# ### Adding the ROC curve
clev.evaluate_model(logreg, X_test, y_test, show_roc=True)
# +
df = pd.read_csv('../tests/resources/datasets/wine-makers.csv')
y = df.Cultivar.values
X = df.drop(['Cultivar'],axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# train the logistic regression classifier
logreg = linear_model.LogisticRegression(C=1,solver='lbfgs')
logreg.fit(X_train, y_train)
_ = clev.evaluate_model(logreg, X_test, y_test, False)
# -
#np.array(map(str, np.array(sorted(np.unique(y_test)))))
np.char.mod('%d', sorted(np.unique(y_train)))
| samples/model_evaluations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="zwBCE43Cv3PH"
# ##### Copyright 2019 The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# + cellView="form" colab_type="code" id="fOad0I2cv569" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="YQB7yiF6v9GR"
# # Load pandas dataframes with tf.data
# + [markdown] colab_type="text" id="Oqa952X4wQKK"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/pandas_dataframe"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="yvUZdtkue89v" colab_type="text"
# Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the
# [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the
# [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the
# [<EMAIL> Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).
# + [markdown] colab_type="text" id="UmyEaf4Awl2v"
# This tutorial shows how to load pandas dataframes into a `tf.data.Dataset`.
#
# This tutorial uses a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. The CSV contains several hundred rows; each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which is a binary classification problem.
# + [markdown] colab_type="text" id="iiyC7HkqxlUD"
# ## Read data with pandas
# + colab_type="code" id="5IoRbCA2n0_V" colab={}
from __future__ import absolute_import, division, print_function, unicode_literals
# !pip install tensorflow-gpu==2.0.0-rc0
import pandas as pd
import tensorflow as tf
# + [markdown] colab_type="text" id="-2kBGy_pxn47"
# Download the CSV file containing the heart dataset.
# + colab_type="code" id="VS4w2LePn9g3" colab={}
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')
# + [markdown] colab_type="text" id="6BXRPD2-xtQ1"
# Read the CSV file with pandas.
# + colab_type="code" id="UEfJ8TcMpe-2" colab={}
df = pd.read_csv(csv_file)
# + colab_type="code" id="8FkK6QIRpjd4" colab={}
df.head()
# + colab_type="code" id="_MOAKz654CT5" colab={}
df.dtypes
# + [markdown] colab_type="text" id="ww4lRDCS3qPh"
# Convert the `thal` column (an `object` in the dataframe) to a discrete numeric value.
# + colab_type="code" id="LmCl5R5C2IKo" colab={}
df['thal'] = pd.Categorical(df['thal'])
df['thal'] = df.thal.cat.codes
# + colab_type="code" id="s4XA1SNW2QyI" colab={}
df.head()
# + [markdown] colab_type="text" id="WWRhH6r4xxQu"
# ## Read data with `tf.data.Dataset`
# + [markdown] colab_type="text" id="GuqmVVH_yApQ"
# Use `tf.data.Dataset.from_tensor_slices` to read the values from the pandas dataframe.
#
# One of the advantages of using a `tf.data.Dataset` is that it lets you write simple, highly efficient data pipelines. See the [loading data guide](https://www.tensorflow.org/alpha/guide/data) to learn more.
# + colab_type="code" id="2wwhILm1ycSp" colab={}
target = df.pop('target')
# + colab_type="code" id="W6Yc-D3aqyBb" colab={}
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))
# + colab_type="code" id="chEnp_Swsf0a" colab={}
for feat, targ in dataset.take(5):
print ('Features: {}, Target: {}'.format(feat, targ))
# + [markdown] colab_type="text" id="GzwlAhX6xH9Q"
# Since `pd.Series` implements the `__array__` protocol, it can be used transparently nearly anywhere you would use an `np.array` or a `tf.Tensor`.
# + colab_type="code" id="GnpHHkpktl5y" colab={}
tf.constant(df['thal'])
# + [markdown] colab_type="text" id="9XLxRHS10Ylp"
# Shuffle and batch the dataset.
# + colab_type="code" id="R3dQ-83Ztsgl" colab={}
train_dataset = dataset.shuffle(len(df)).batch(1)
# + [markdown] colab_type="text" id="bB9C0XJkyQEk"
# ## Create and train a model
# + colab_type="code" id="FQd9PcPRpkP4" colab={}
def get_compiled_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(3, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
return model
# + colab_type="code" id="ybDzNUheqxJw" colab={}
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
# + [markdown] colab_type="text" id="d6V_6F_MBiG9"
# ## Alternative to feature columns
# + [markdown] colab_type="text" id="X63B9vDsD8Ly"
# Passing a dictionary as the input to a model is as easy as creating a matching dictionary of `tf.keras.layers.Input` layers, applying any preprocessing, and stacking them up using the [functional api](../../guide/keras/functional.ipynb). You can use this as an alternative to [feature columns](../keras/feature_columns.ipynb).
# + colab_type="code" id="FwQ47_WmOBnY" colab={}
inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x = tf.stack(list(inputs.values()), axis=-1)
x = tf.keras.layers.Dense(10, activation='relu')(x)
output = tf.keras.layers.Dense(3, activation='sigmoid')(x)
model_func = tf.keras.Model(inputs=inputs, outputs=output)
model_func.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# + [markdown] colab_type="text" id="qSCN5f_vUURE"
# When used with `tf.data`, the easiest way to preserve the column structure of a `pd.DataFrame` is to convert it to a `dict` and slice that dictionary.
# + colab_type="code" id="wUjRKgEhPZqK" colab={}
dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)
# + colab_type="code" id="WWRaiwxeyA9Z" colab={}
for dict_slice in dict_slices.take(1):
    print(dict_slice)
# + colab_type="code" id="8nTrfczNyKup" colab={}
model_func.fit(dict_slices, epochs=15)
| site/zh-cn/tutorials/load_data/pandas_dataframe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import csv
import os
import sklearn
import pandas
from pandas import read_csv as read
from sklearn import svm, preprocessing, ensemble
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from scipy.spatial import distance
import heapq
import matplotlib.pyplot as plt
path_train = "lab2data/arcene_train.data"
data_train = read(path_train, delimiter=" ")
data_train.columns = [i for i in range(1, 10001)] + ['class']
data_train = data_train.drop(['class'], axis=1)
data_train.head()
path_test = "lab2data/arcene_valid.data"
data_test = read(path_test, delimiter=" ")
data_test.columns = [i for i in range(1, 10001)] + ['class']
data_test = data_test.drop(['class'], axis=1)
data_test.head()
path_train_l = "lab2data/arcene_train.labels"
data_train_l = read(path_train_l, delimiter=" ")
path_test_l = "lab2data/arcene_valid.labels"
data_test_l = read(path_test_l, delimiter=" ")
X_train, X_test, y_train, y_test = data_train, data_test, np.ravel(data_train_l), np.ravel(data_test_l)
# # Base result
rf = ensemble.RandomForestClassifier(n_estimators=100, random_state=11)
rf.fit(X_train, y_train)
base_score = rf.score(X_test, y_test)
print(base_score)
# # Filter 1: feature importance
# +
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
print("Feature importances:")
for f, idx in enumerate(indices[:20]):
print("{:2d}. feature '{:5d}' ({:.4f})".format(f + 1, X_train.columns[idx], importances[idx]))
# -
best_features = indices[:135]
best_features_names = X_train.columns[best_features]
best_f = [int(i) for i in best_features_names]
print(best_f)
print(X_train[best_f])
rf2 = ensemble.RandomForestClassifier(n_estimators=100, random_state=11)
rf2.fit(X_train[best_f], y_train)
future_importance_score = rf2.score(X_test[best_f], y_test)
print(future_importance_score)
# # Filter 2: Euclidean distance
# +
# normalize the data attributes
normalized_X = preprocessing.normalize(X_train, axis=0)  # axis=0 normalizes per attribute (column); the default axis=1 would normalize per sample
rf4 = ensemble.RandomForestClassifier(n_estimators=100, random_state=11)
dst_array = []
for i in range(0, 10000):
dst_array.append(distance.euclidean(normalized_X[:,i], y_train))
dst_array = np.asarray(dst_array)
ind = heapq.nsmallest(20, range(len(dst_array)), dst_array.take)
cols = [i + 1 for i in ind]  # dataframe columns are labeled 1..10000, while ind is 0-based
rf4.fit(X_train[cols], y_train)
euclidean_score = rf4.score(X_test[cols], y_test)
print(euclidean_score)
# -
# # Filter 3: correlation coefficient
# +
rf5 = ensemble.RandomForestClassifier(n_estimators=100, random_state=11)
normalized_X = preprocessing.normalize(X_train, axis=0)  # normalize per attribute (column)
np.seterr(divide='ignore', invalid='ignore')
corr_array = []
for i in range(0, 10000):
corr_array.append(np.corrcoef(normalized_X[:,i], y_train)[0, 1])
corr_array = np.asarray(corr_array)
ind2 = heapq.nlargest(45, range(len(corr_array)), corr_array.take)
cols2 = [i + 1 for i in ind2]  # map 0-based indices back to the 1-based column labels
rf5.fit(X_train[cols2], y_train)
coef_corr_score = rf5.score(X_test[cols2], y_test)
print(coef_corr_score)
# -
# # Wrapper
import random
rf3 = ensemble.RandomForestClassifier(n_estimators=100, random_state=11)
c = list(range(1, 10001))
c = random.sample(c, 10000)
g = random.sample(c, 10)
rf_fit = rf3.fit(X_train[g], y_train)
wrapper_score = rf_fit.score(X_test[g], y_test)
for feature in c:  # Forward Selection wrapper
    if feature in g:
        continue
    g.append(feature)
    rf_fit = rf_fit.fit(X_train[g], y_train)
    score = rf_fit.score(X_test[g], y_test)
    if score < wrapper_score:
        break
    wrapper_score = score
print(wrapper_score)
# # Plot
# +
x = [1, 2, 3, 4, 5]
y = [base_score, future_importance_score, euclidean_score, coef_corr_score, wrapper_score]
labels = ['Base', 'Feature importance', 'Euclidean', 'Corr coef', 'Wrapper']
plt.plot(x, y, 'ro')
# You can specify a rotation for the tick labels in degrees or with keywords.
plt.xticks(x, labels, rotation='vertical')
# Pad margins so that markers don't get clipped by the axes
plt.margins(0.2)
# Tweak spacing to prevent clipping of tick-labels
plt.subplots_adjust(bottom=0.15)
plt.show()
# -
| code/Future_selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: gds
# language: python
# name: gds
# ---
# # The evolution of cooperation
#
# Why do members of the same species often cooperate? Why does it seem that
# cooperation is evolutionarily beneficial, and how might this have come
# about? The political scientist Robert Axelrod worked on these
# questions in the late 1970s and early 1980s. To investigate this question,
# he set up the following computer experiment. He investigated the performance
# of different strategies for playing the iterated prisoner's dilemma. He set
# up a tournament where he invited colleagues to submit strategies. He
# informed them that the strategies would play the iterated prisoner's dilemma
# for an unknown number of iterations. The aim was to design a strategy that
# would collect the most points while playing against all other submitted
# strategies. Some strategies that were submitted are
#
# * **tit-for-tat**; play nice, unless, in the previous move, the other player betrayed you.
# * **contrite tit for tat**; play nice, unless, in the previous two moves, the other player betrayed you.
# * **grim trigger**; play nice until the other player betrays you after which you will always defect as well.
# * **pavlov**; there is an entire family of pavlov strategies. The basic idea is that this strategy sticks to what was successful. The simplest version: if my move and my opponent's move were the same last time, stay; else, switch.
# * **always defect**;
# * **always cooperate**;
#
# More details and variations on this can be found in
#
# * [Axelrod & Hamilton (1981) The evolution of cooperation, Science, Vol. 211, No. 4489](http://www-personal.umich.edu/~axe/research/Axelrod%20and%20Hamilton%20EC%201981.pdf)
# * [Robert Axelrod (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-0](https://www.amazon.com/Evolution-Cooperation-Revised-Robert-Axelrod/dp/0465005640)
#
# In this assignment, step by step, we are going to build a model for
# exploring how these different strategies perform when playing the iterated
# prisoner's dilemma. In the simplest version, we can do this using the Python
# anaconda distribution. However, there are also more specialized libraries
# for developing these models. In this course, we will be using a library
# called [MESA](https://mesa.readthedocs.io/en/master/). MESA, and virtually
# all other tools for agent based modelling rely on a particular programming
# paradigm known as object oriented programming. Learning about agent based
# modeling thus also requires learning more about object oriented programming.
#
# The structure of this assignment is as follows. We start with some more
# background on MESA and object oriented programming. Next, we are going to
# apply this information by building our first agent based model of the
# evolution of cooperation. In this first version, a set of strategies plays
# the iterated prisoner's dilemma game against all other strategies, and we
# tally up the total scores. After that, we are going to slowly expand this
# model. First by adding some randomness to it. This is something that is very
# common in agent based modeling. Second, we are going to make the model
# dynamic by adding a small evolutionary mechanism to it. Third, we are going
# to combine the randomness and evolutionary dynamic.
# ## MESA
#
# Mesa is an agent-based modeling (ABM) framework in Python. It enables users
# to quickly develop ABMs. It provides a variety of convenience components
# often used in ABMs, like different kinds of spaces within which agents can
# interact, different kinds of schedulers for controlling which agents in what
# order are making their moves, and basic support for dealing with the
# intrinsic stochastic nature of ABMs. MESA is ideally suited for learning
# agent-based modeling. It is less suited for developing large-scale, computationally heavy ABMs. Given that MESA is a Python library focused on learning ABM, we have chosen to use it. The documentation of MESA can be found online: https://mesa.readthedocs.io/en/master/
#
# ## What is object oriented programming?
#
# There exist different programming paradigms. If you have some prior experience with more than one programming language, you might already have encountered different paradigms. Within python, it is even possible to mix and match different paradigms (up to a point) depending on your needs. Next to object oriented programming, a commonly encountered paradigm is procedural programming.
#
# In procedural programming, you describe, step by step, what should happen to solve a given task. Most programming you have been doing in Python in the previous quarter was of this style. Basically, in Python, you are using procedural programming if you use one or more functions to achieve your objectives.
#
# In contrast, in object oriented programming, you break down tasks into separate components which have well-defined behavior and state. Next, programming objectives are achieved by having these components, or objects, interact. In Python, you have been using this implicitly all the time because everything (including functions...) is actually an object. A pandas DataFrame, for example, is an object.
#
# There is some terminology involved in object-oriented programming. Exact definitions of these terms are tricky and a heated topic of debate within computer science. Below, I give loose characterizations which should be roughly right and sufficient for you to get started.
#
# * **class**; a template describing the state and behavior of a given type of objects
# * **object**; an instance of a class
# * **method**; a 'function' belonging to a class where the behavior of this 'function' is partly dependent on the state of the object (i.e. instance of the class).
# * **attribute**; a variable on a class where its value is unique to an
# object. Attributes are used for data that describes the state of the object and which influences the behavior of the object.
# * **inheritance**; a way of having a family of classes where some classes are subtypes of a more generic type
#
# Given that in agent-based modelling, we are interested in creating many agents and see how from their interaction aggregate patterns emerge, object-oriented programming is a natural fit. Agents are clearly objects, with state and behavior. In building models of the emergence of collaboration, you will be introduced step by step in the use of object-oriented programming in MESA (and by extension, Python).
#
# For a more thorough introduction of object oriented programming, the [Wikipedia entry](https://en.wikipedia.org/wiki/Object-oriented_programming) is quite good. For a more specific introduction of what Object-Oriented programming means in the context of python, please check: https://realpython.com/python3-object-oriented-programming/.
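# To make these five terms concrete, here is a minimal, self-contained sketch in plain Python (the ``Animal`` and ``Dog`` classes are made up purely for illustration):

```python
class Animal:                          # class: a template for state and behavior
    def __init__(self, name):
        self.name = name               # attribute: state unique to each object

    def speak(self):                   # method: behavior that uses the object's state
        return f"{self.name} makes a sound"


class Dog(Animal):                     # inheritance: Dog is a subtype of Animal
    def speak(self):                   # redefines the inherited method
        return f"{self.name} barks"


rex = Dog("Rex")                       # object: an instance of the Dog class
print(rex.speak())                     # Rex barks
print(isinstance(rex, Animal))         # True, thanks to inheritance
```

# Note how ``Dog`` inherits ``__init__`` from ``Animal`` and only redefines ``speak``.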
# ## Developing a first model
# Below, we give the initial code you will be expanding on while developing
# models of the emergence of cooperation. You can look at the code block
# below, or scroll further down where I explain this code in more detail.
#
# +
from collections import deque
from enum import Enum
from itertools import combinations
import iteround
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from mesa import Model, Agent
class Move(Enum):
COOPERATE = 1
DEFECT = 2
class AxelrodAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.points = 0
def step(self):
"""
the move and any logic for deciding
on the move goes here
Returns
-------
Move.COOPERATE or Move.DEFECT
"""
        raise NotImplementedError
def receive_payoff(self, payoff, my_move, opponent_move):
"""
Parameters
----------
payoff : int
my_move : {Move.COOPERATE, Move.DEFECT}
        opponent_move : {Move.COOPERATE, Move.DEFECT}
"""
self.points += payoff
def reset(self):
"""
called after playing N iterations against
another player
"""
raise NotImplementedError
class TitForTat(AxelrodAgent):
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.opponent_last_move = Move.COOPERATE
def step(self):
return self.opponent_last_move
def receive_payoff(self, payoff, my_move, opponent_move):
super().receive_payoff(payoff, my_move, opponent_move)
self.opponent_last_move = opponent_move
def reset(self):
self.opponent_last_move = Move.COOPERATE
class ContriteTitForTat(AxelrodAgent):
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.opponent_last_two_moves = deque([Move.COOPERATE, Move.COOPERATE], maxlen=2)
def step(self):
if (self.opponent_last_two_moves[0] == Move.DEFECT) and\
(self.opponent_last_two_moves[1] == Move.DEFECT):
return Move.DEFECT
else:
return Move.COOPERATE
def receive_payoff(self, payoff, my_move, opponent_move):
super().receive_payoff(payoff, my_move, opponent_move)
self.opponent_last_two_moves.append(opponent_move)
def reset(self):
self.opponent_last_two_moves = deque([Move.COOPERATE, Move.COOPERATE], maxlen=2)
class AxelrodModel(Model):
"""A model with some number of agents."""
def __init__(self, N, seed=None):
super().__init__(seed=seed)
self.num_iterations = N
self.agents = []
self.payoff_matrix = {}
self.payoff_matrix[(Move.COOPERATE, Move.COOPERATE)] = (2, 2)
self.payoff_matrix[(Move.COOPERATE, Move.DEFECT)] = (0, 3)
self.payoff_matrix[(Move.DEFECT, Move.COOPERATE)] = (3, 0)
self.payoff_matrix[(Move.DEFECT, Move.DEFECT)] = (1, 1)
# Create agents
for i, agent_class in enumerate(AxelrodAgent.__subclasses__()):
a = agent_class(i, self)
self.agents.append(a)
def step(self):
"""Advance the model by one step."""
for agent_a, agent_b in combinations(self.agents, 2):
for _ in range(self.num_iterations):
move_a = agent_a.step()
move_b = agent_b.step()
payoff_a, payoff_b = self.payoff_matrix[(move_a, move_b)]
agent_a.receive_payoff(payoff_a, move_a, move_b)
agent_b.receive_payoff(payoff_b, move_b, move_a)
agent_a.reset()
agent_b.reset()
# -
# The above code gives the basic setup for a first version of a model of the emergence of collaboration. Let's quickly walk through this code.
#
# We begin with a number of import statements. We import the ``deque`` class from the ``collections`` library. Deque is basically a pipeline of fixed length. We put stuff in at one end, and stuff drops off at the other end. We use the deque to create a memory of previous moves of a given length. See the ``ContriteTitForTat`` class for how we use it. Next, we import the ``Enum`` class from the ``enum`` library. Enums are a way of specifying a fixed number of unique names and associated values. We use it to standardize the 2 possible moves Agents can make. Next, we import the ``combinations`` function from the ``itertools`` library. We use this function to pair off all agents against one another. See the ``step`` method in the ``AxelrodModel`` class. The Python programming language comes with many useful libraries. It is well worth spending some time reading the detailed documentation for ``itertools`` and ``collections`` in particular. Many common tasks can readily be accomplished by using these libraries.
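# As a quick, self-contained illustration of these three imports (the concrete values are arbitrary):

```python
from collections import deque
from enum import Enum
from itertools import combinations

# deque with maxlen=2: a fixed-length memory; old items drop off the left
memory = deque([1, 2], maxlen=2)
memory.append(3)
print(list(memory))  # [2, 3]

# Enum: a fixed set of named constants
class Move(Enum):
    COOPERATE = 1
    DEFECT = 2

print(Move.COOPERATE is Move(1))  # True

# combinations: every unique unordered pair, each exactly once
print(list(combinations(["a", "b", "c"], 2)))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```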
#
# Next, we move to importing the ``Model`` and ``Agent`` classes from MESA. Agent is the base class from which all agents in a model have to be derived. Similarly, Model is the base class from which any given model is derived.
#
# Next, I have defined a generic ``AxelrodAgent``. Let's look at this class in a bit more detail starting with the first line
#
# ```python
# class AxelrodAgent(Agent):
# ```
#
# The word ``class`` is like ``def`` in that it indicates that we are describing something that can be used later. Here we are defining a class, which we can use by instantiating it as an object. We call this class ``AxelrodAgent`` and it extends (i.e. is a further detailing of) the base ``Agent`` class that we imported from Mesa.
#
# This AxelrodAgent has 4 methods
#
# ```python
# def __init__(self, unique_id, model):
# ...
#
# def step(self):
# ...
#
# def receive_payoff(self, payoff, my_move, opponent_move):
# ...
#
# def reset(self):
# ...
#
# ```
# Any method in Python has as its first variable ``self``. This is not something that you need to pass when calling the method. It is something automatically inserted by Python. Self refers to this specific instance of the class, and it allows you to assign values to it or call methods on itself.
#
# The first method, ``__init__``, is common to any Python class. This method is
# called when instantiating the class as an object. The two variables in the ``__init__``, ``unique_id`` and ``model``, are expected by MESA. The ``step`` method is also expected by Mesa. The other two methods, ``receive_payoff`` and ``reset``, have been chosen by me. Note how we are specifying, implicitly, a pattern of interaction. Each of these methods is called under specific conditions and does something to the state of the agent (``receive_payoff`` and ``reset``), or allows the agent to behave conditional on its state (``step``). ``receive_payoff`` is called after each interaction of the prisoner's dilemma. ``reset`` is called after having played the iterated prisoner's dilemma against another strategy.
#
# Of the 4 methods, 2 are implemented and 2 raise an error. Any specific
# strategy class that we are going to implement thus always needs to implement
# at least the ``step`` and ``reset`` methods, while it can rely on the behavior of ``__init__`` and ``receive_payoff``, extend this behavior, or overwrite it. Let's look at these three options in some more detail.
#
# If a subclass of AxelrodAgent does not implement ``__init__`` or ``receive_payoff``, it automatically falls back on the behavior specified in the AxelrodAgent class. We can also extend the behavior. For this, look at the ``TitForTat`` class:
#
# ```python
# class TitForTat(AxelrodAgent):
#
# def __init__(self, unique_id, model):
# super().__init__(unique_id, model)
# self.opponent_last_move = Move.COOPERATE
#
# def step(self):
# return self.opponent_last_move
#
# def receive_payoff(self, payoff, my_move, opponent_move):
# super().receive_payoff(payoff, my_move, opponent_move)
# self.opponent_last_move = opponent_move
#
# def reset(self):
# self.opponent_last_move = Move.COOPERATE
# ```
#
# Note how both ``__init__`` and ``receive_payoff`` start with ``super()``. This means that we first call the same method on the parent class (so ``AxelrodAgent``). Next we have some additional things we want to do. In ``__init__`` we create a novel attribute ``opponent_last_move``, which we set to ``Move.COOPERATE``. Note how we use the ``self`` variable. In receive_payoff, we update this attribute. Finally, we can overwrite the entire implementation of a method. For this, all we need to do is not call super.
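# The difference between extending and overwriting can be sketched with two toy subclasses (``Base``, ``Extender``, and ``Overwriter`` are hypothetical names, used only for this example):

```python
class Base:
    def greet(self):
        return "hello"


class Extender(Base):
    def greet(self):
        # extend: call the parent implementation first, then add to it
        return super().greet() + " world"


class Overwriter(Base):
    def greet(self):
        # overwrite: no super() call, so the parent behavior is fully replaced
        return "goodbye"


print(Extender().greet())    # hello world
print(Overwriter().greet())  # goodbye
```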
# # Assignment 1: implementing your first strategies
# Before looking at the model class more closely, implement the following strategies as classes (in order of easy to difficult)
#
# * **Defector**; always defect
# * **Cooperator**; always cooperate
# * **GrimTrigger**; cooperate until betrayed, after which it always defects
# * **Pavlov**; The basic idea is that this strategy sticks to what was successful. The simplest version: if my move and my opponent's move were the same last time, stay; else, switch. Pavlov always starts by assuming that in the previous move, both agents played ``Move.COOPERATE``
#
# To help you, I have given you the basic template and all you need to do is write some code replacing the dots.
# +
class Defector(AxelrodAgent):
def step(self):
return Move.DEFECT
def reset(self):
pass
class Cooperator(AxelrodAgent):
def step(self):
return Move.COOPERATE
def reset(self):
pass
class GrimTrigger(AxelrodAgent):
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.opponent_last_move = Move.COOPERATE
def step(self):
if self.opponent_last_move == Move.DEFECT:
return Move.DEFECT
else:
return Move.COOPERATE
def receive_payoff(self, payoff, my_move, opponent_move):
super().receive_payoff(payoff, my_move, opponent_move)
self.opponent_last_move = opponent_move
def reset(self):
self.opponent_last_move = Move.COOPERATE
class Pavlov(AxelrodAgent):
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.opponent_last_move = Move.COOPERATE
self.my_last_move = Move.COOPERATE
def step(self):
if self.opponent_last_move == self.my_last_move:
return self.my_last_move
else:
return self.opponent_last_move
def receive_payoff(self, payoff, my_move, opponent_move):
super().receive_payoff(payoff, my_move, opponent_move)
self.opponent_last_move = opponent_move
self.my_last_move = my_move
def reset(self):
self.opponent_last_move = Move.COOPERATE
self.my_last_move = Move.COOPERATE
# -
# Before running the model, let's quickly walk through the code of the model class.
#
# ```python
# class AxelrodModel(Model):
# ```
#
# So, here we define a new model class which extends the default model class from MESA. This class typically has at least 2 methods: ``__init__`` and ``step``. In the init we instantiate the model, while in step we specify what should happen in one step of the model. A step, or tick, is something particular to Agent Based Models. A step generally involves allowing all agents to take an action (i.e. you call step on all agents).
#
# Let's look more closely at the ``__init__``
#
# ```python
# def __init__(self, N, seed=None):
# super().__init__(seed=seed)
# self.num_iterations = N
# self.agents = []
# self.payoff_matrix = {}
#
# self.payoff_matrix[(Move.COOPERATE, Move.COOPERATE)] = (2, 2)
# self.payoff_matrix[(Move.COOPERATE, Move.DEFECT)] = (0, 3)
# self.payoff_matrix[(Move.DEFECT, Move.COOPERATE)] = (3, 0)
# self.payoff_matrix[(Move.DEFECT, Move.DEFECT)] = (1, 1)
#
# # Create agents
# for i, agent_class in enumerate(AxelrodAgent.__subclasses__()):
# a = agent_class(i, self)
# self.agents.append(a)
# ```
#
# We are extending the default ``__init__`` from ``Model``, so we call ``super`` first. Seed is a specific argument that will be explained in more detail later in this assignment. Next, we specify a number of attributes such as a list with all agents and the payoff matrix. Note that a high payoff is desirable. Next, we populate the model with one instance of each type of strategy.
#
# ```python
# for i, agent_class in enumerate(AxelrodAgent.__subclasses__()):
# ```
#
# The Python built-in function ``enumerate`` takes a collection and iterates over it. It will return the index and the item itself. This allows us to loop over something while keeping track of where we are in the collection at the same time. ``AxelrodAgent.__subclasses__()`` is another Python *magic* method as indicated by the leading and trailing double underscore. Moreover, this is a so-called class method. Remember, when introducing object oriented programming, I said that methods are tied to objects (i.e. instances of the class). This is 95% true, but it is possible to define methods (and attributes) at the class level as well. These are useful for doing tasks that don't rely on the state of the object but are relevant to the behavior of the class. Don't worry too much about getting your head around this; it is a rather advanced and esoteric topic. Here, ``__subclasses__`` returns a list with all classes that extend ``AxelrodAgent``. That is, all the different strategies we have been defining. Note that this also showcases a benefit of using Object Orientation. If we implement new strategies, as long as they extend ``AxelrodAgent``, we don't have to change the model itself. It will just work.
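# A minimal sketch of how ``__subclasses__`` behaves, using made-up class names:

```python
class Strategy:
    pass

class AlwaysNice(Strategy):
    pass

class AlwaysMean(Strategy):
    pass

# __subclasses__ lists every class that directly extends Strategy; a newly
# defined subclass shows up automatically, without changing this code
names = [cls.__name__ for cls in Strategy.__subclasses__()]
print(names)  # ['AlwaysNice', 'AlwaysMean']
```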
#
# Next, let's look at the step method. Basically, here we let all strategies play against all other strategies for N rounds of the prisoner's dilemma.
#
# ```python
# def step(self):
# '''Advance the model by one step.'''
# for agent_a, agent_b in combinations(self.agents, 2):
# for _ in range(self.num_iterations):
# move_a = agent_a.step()
# move_b = agent_b.step()
#
# payoff_a, payoff_b = self.payoff_matrix[(move_a, move_b)]
#
# agent_a.receive_payoff(payoff_a, move_a, move_b)
# agent_b.receive_payoff(payoff_b, move_b, move_a)
# agent_a.reset()
# agent_b.reset()
# ```
#
# First, we use ``combinations`` to generate all possible unique pairs of agents. Next, for each pair we play the game. We do this by first asking both agents for their move. Next, we look up the resulting payoff. Finally, we inform both agents of their payoff, their own move, and their opponent's move. It might seem redundant to inform agents of their own move, and in this basic case it is. But in a later version of the model, we will introduce the possibility of error, where the opposite move from the intended move is executed. Finally, after having played the game for N rounds, both agents are reset if necessary. This is to ensure that when the agents play again next, they start without any prior history based on previous games.
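# The pairing and payoff lookup can be sketched independently of MESA, with plain strings for moves (the payoff values mirror the model above, but the three players and their fixed moves are made up):

```python
from itertools import combinations

# payoff matrix keyed on (my_move, opponent_move); values are (my, opponent)
payoffs = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
           ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

moves = {"nice": "C", "mean": "D", "nice2": "C"}
points = {name: 0 for name in moves}

for a, b in combinations(moves, 2):          # every unique pair, exactly once
    pay_a, pay_b = payoffs[(moves[a], moves[b])]
    points[a] += pay_a
    points[b] += pay_b

print(points)  # {'nice': 2, 'mean': 6, 'nice2': 2}
```

# The lone defector collects the most points here because it exploits two unconditional cooperators; with retaliating strategies in the pool, this advantage disappears.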
#
# We can now run the model and get the scores out.
#
# ```python
# scores = [(agent.__class__.__name__, agent.points) for agent in model.agents]
# ```
# Here we iterate over all agents, and use magic attributes to get the class name and the points accumulated over playing against all other strategies. Next, we sort this list in place by the number of points, in descending order.
#
# ```python
# scores.sort(key=lambda x: x[1], reverse=True)
# ```
#
#
# +
model = AxelrodModel(200)
model.step()
scores = [(agent.__class__.__name__, agent.points) for agent in model.agents]
scores.sort(key=lambda x: x[1], reverse=True)
for entry in scores:
print(entry)
# -
# # Assignment 2: adding a random agent
#
# The strategies we have been looking at so far are deterministic. Let's make this story a bit more complicated. Below, implement an additional strategy whose moves are random with an equal chance of either cooperate or defect. How does this change the results? If you rerun this model multiple times, what do you see? Why does this happen?
#
# *tip: it is a best practice in MESA to use ``self.random``, which is available on any instance of a Mesa agent or model*
# +
class Random(AxelrodAgent):
def step(self):
decide = [Move.COOPERATE, Move.DEFECT]
return self.random.choice(decide)
def reset(self):
pass
for _ in range(10):
model = AxelrodModel(200)
model.step()
scores = [(agent.__class__.__name__, agent.points) for agent in model.agents]
scores.sort(key=lambda x: x[1], reverse=True)
for entry in scores:
print(entry)
print()
# -
# ## Pseudo random number generation
#
# By adding an agent which plays a random move, we introduce randomness in the outcomes of the model. Every time we run the model, the payoffs received by each strategy will be slightly different. This might create all kinds of issues. For example, what if you have an error in your code that only occurs under very specific conditions? How can you ensure that these conditions occur when debugging if randomness plays a role? Or, how can we draw conclusions from results that are not deterministic?
#
# Randomness is intrinsic to virtually all agent based models. Computers don't actually produce real random numbers, but rather rely on deterministic algorithms that produce numbers that appear very close to random. Such algorithms are known as pseudo-random number generators. As long as we know the initial state of this algorithm, we can reproduce the exact same random numbers. If you want to know more, the [Wikipedia entry on Random Number Generation](https://en.wikipedia.org/wiki/Random_number_generation) is a good starting point. So how can we control this state?
#
# It is here that the ``seed`` argument comes in. Remember our model ``__init__`` function had ``seed`` as an optional keyword argument set to ``None`` by default. By providing a specific value, we can actually make the random numbers deterministic. Have a look at the code below to see this in action.
import random
random.seed(123456789)
[random.random() for _ in range(10)]
random.seed(123456789)
[random.random() for _ in range(10)]
# By setting seed to the same value in both code blocks, we start the generation of random numbers from the same initial condition. Thus, the random numbers are the same. If seed is set to None, the computer will look at the time and use this as initial condition.
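# Internally, MESA gives each model its own pseudo-random number generator rather than relying on the global one. The same idea can be sketched with ``random.Random`` instances:

```python
import random

# two generators seeded identically produce identical streams,
# independent of each other and of the global random module
rng_a = random.Random(42)
rng_b = random.Random(42)

draws_a = [rng_a.random() for _ in range(3)]
draws_b = [rng_b.random() for _ in range(3)]
print(draws_a == draws_b)  # True
```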
# # Assignment 3: Noise
#
# In the foregoing, we have explored how well different strategies for playing
# the iterated prisoners' dilemma perform assuming that there is no noise.
# That is, the moves of agents are executed as intended. Next, let's
# complicate the situation. What happens if there is a small chance that an
# agent makes the opposite move from what it intended to do?
#
# For this, we can adapt the model itself. If you have implemented the
# strategies smartly, there is no need to change anything in the strategy
# classes. Modify the model in the following ways:
# * There is a user specifiable probability of making the opposite move. This
# probability is constant for all agents.
# * Both agents can simultaneously be affected by noise.
# * Agents are informed of the actual move they made.
#
# *tip: extend AxelrodModel rather than copy and paste all code by adding code at
# the dots below*
#
class AxelrodModelWithNoise(AxelrodModel):
"""A model with some number of agents."""
def __init__(self, N, noise_level, seed=None):
super().__init__(N, seed=seed)
self.noise = noise_level
    def step(self):
        """Advance the model by one step."""
        counter = 0
        counter_no_noise = 0
        for agent_a, agent_b in combinations(self.agents, 2):
            for _ in range(self.num_iterations):
                move_a = agent_a.step()
                move_b = agent_b.step()
                # each agent independently executes the opposite move with
                # probability self.noise, so both can be affected at once;
                # use self.random so results are reproducible via the seed
                if self.random.random() < self.noise:
                    counter += 1
                    move_a = Move.DEFECT if move_a == Move.COOPERATE else Move.COOPERATE
                else:
                    counter_no_noise += 1
                if self.random.random() < self.noise:
                    counter += 1
                    move_b = Move.DEFECT if move_b == Move.COOPERATE else Move.COOPERATE
                else:
                    counter_no_noise += 1
                payoff_a, payoff_b = self.payoff_matrix[(move_a, move_b)]
                agent_a.receive_payoff(payoff_a, move_a, move_b)
                agent_b.receive_payoff(payoff_b, move_b, move_a)
            agent_a.reset()
            agent_b.reset()
        print("Noise counter", counter)
        print("No noise counter", counter_no_noise)
        print()
# +
model_noise = AxelrodModelWithNoise(200, noise_level=0.01)
model_noise.step()
scores = [(agent.__class__.__name__, agent.points) for agent in model_noise.agents]
scores.sort(key=lambda x: x[1], reverse=True)
for entry in scores:
print(entry)
# -
# Experiment with different levels of noise, ranging from 1% to 10%. How does this affect the ranking of the strategies?
#
# You can use ``np.linspace`` to generate a range of evenly spaced values between 0.01 and 0.1. You can use the above code for printing the scores of a given run to the screen.
# +
for noise_level in np.linspace(0.01, 0.1, 10):
print("--------------------------------------------")
print()
print("Noise level:", noise_level)
model_noise_testing = AxelrodModelWithNoise(200, noise_level)
model_noise_testing.step()
scores = [(agent.__class__.__name__, agent.points) for agent in model_noise_testing.agents]
scores.sort(key=lambda x: x[1], reverse=True)
for entry in scores:
print(entry)
# -
# # Assignment 4: adding evolutionary dynamics
#
# Up till now, we have run the model for only one step. That is, all agents
# play against each other only once. Let's make the model more dynamic by adding an evolutionary component to it. We start by generating *M* agents of each strategy. These agents play against one another as before. Next, after each step, we tally up the total scores achieved by each strategy. We create a new population, proportional to how well each strategy performed. Over multiple steps, badly performing strategies will die out. However, with changing proportions of the different strategies, how well each strategy is performing will also change. Can you predict which strategies will come to dominate this population?
#
# 1. Implement the ``build_population`` method which creates a population given a dictionary with proportions
# 2. Calculate the new proportions as part of ``step``
#
# *hint: to keep track of the scores per agent type in a given generation, look at the ``Counter`` class in the ``collections`` module. You can get the type, or class, of an agent using the ``__class__`` attribute.*
#
# To help keep track of the changing proportions over time, I have added a small piece of code. In ``__init__`` we create an attribute with the scores. This attribute is a dictionary with a list for each class of agents. In ``step`` we append the new proportions to these lists.
#
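# As a quick sketch of the ``Counter`` hint above — the strategy classes and point
# values here are made up for illustration and are not taken from the model:

```python
from collections import Counter

class TitForTat:
    pass

class AlwaysDefect:
    pass

# Hypothetical (class, points) pairs standing in for the model's agents
agents = [(TitForTat, 10), (TitForTat, 12), (AlwaysDefect, 30)]

scores = Counter()
for agent_class, points in agents:
    scores[agent_class] += points  # tally points per strategy class

total = sum(scores.values())
proportions = {k: v / total for k, v in scores.items()}
print(proportions[AlwaysDefect])  # AlwaysDefect earned 30 of 52 points
```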
# +
from collections import Counter, defaultdict
class EvolutionaryAxelrodModel(Model):
"""A model with some number of agents."""
def __init__(self, num_agents, N, seed=None):
super().__init__(seed=seed)
self.num_iterations = N
self.agents = []
self.payoff_matrix = {}
self.population_size = len(AxelrodAgent.__subclasses__())*num_agents
self.payoff_matrix[(Move.COOPERATE, Move.COOPERATE)] = (2, 2)
self.payoff_matrix[(Move.COOPERATE, Move.DEFECT)] = (0, 3)
self.payoff_matrix[(Move.DEFECT, Move.COOPERATE)] = (3, 0)
self.payoff_matrix[(Move.DEFECT, Move.DEFECT)] = (1, 1)
strategies = AxelrodAgent.__subclasses__()
num_strategies = len(strategies)
proportions = {agent_class:1/num_strategies for agent_class in strategies}
self.scores = defaultdict(list)
for agent_class in strategies:
self.scores[agent_class].append(proportions[agent_class])
self.initial_population_size = num_agents * num_strategies
self.agent_id = 0
self.build_population(proportions)
def step(self):
"""Advance the model by one step."""
for agent_a, agent_b in combinations(self.agents, 2):
for _ in range(self.num_iterations):
move_a = agent_a.step()
move_b = agent_b.step()
payoff_a, payoff_b = self.payoff_matrix[(move_a, move_b)]
agent_a.receive_payoff(payoff_a, move_a, move_b)
agent_b.receive_payoff(payoff_b, move_b, move_a)
agent_a.reset()
agent_b.reset()
# calculate scores per class of agents
scores = Counter()
...
# normalize scores on unit interval
proportions = {}
...
# keep track of proportions over the generations
for agent_class in AxelrodAgent.__subclasses__():
self.scores[agent_class].append(proportions[agent_class])
self.build_population(proportions)
def build_population(self, proportions):
"""build the new population
Parameters
----------
proportions : dict
key is agent class, value is float
"""
# build new population
population = []
# create a number of agents proportional to the normalized scores
# ensure that the total size of the population (num_agents * num_strategies)
# stays as close as possible to the initial population size
# Create agents
for agent_class, share in proportions.items():
num_to_create = round(share * self.initial_population_size)
for _ in range(num_to_create):
a = agent_class(self.agent_id, self)
self.agent_id += 1
population.append(a)
self.agents = population
# +
strategies = AxelrodAgent.__subclasses__()
num_strategies = len(strategies)
proportions = {agent_class:1/num_strategies for agent_class in strategies}
for key, value in proportions.items():
print(key)
print(value)
# -
strategies
# +
import iteround  # third-party package for sum-safe rounding
initial_distribution = [1/len(AxelrodAgent.__subclasses__())]\
* len(AxelrodAgent.__subclasses__())
rounded_initial_distribution = iteround.saferound(initial_distribution, 2)
agent_dict = dict(zip(AxelrodAgent.__subclasses__(),
rounded_initial_distribution))
# -
initial_distribution = [1/len(AxelrodAgent.__subclasses__())]* len(AxelrodAgent.__subclasses__())
initial_distribution
# +
initial_distribution = [round(1/len(AxelrodAgent.__subclasses__()), 3)]\
* len(AxelrodAgent.__subclasses__())
agent_dict = dict(zip(AxelrodAgent.__subclasses__(), initial_distribution))
agent_dict
# -
# Instantiate the model with 10 agents per strategy and play 200 rounds of the game per pairing. Next, run the model for 100 steps and visualize how the relative proportions of the different strategies evolve over the steps.
# +
model = EvolutionaryAxelrodModel(10, 200)
for _ in range(100):
model.step()
# visualizing results using matplotlib
fig, ax = plt.subplots()
for k, v in model.scores.items():
ax.plot(v, label=k.__name__)  # use the class name in the legend
ax.legend()
plt.show()
# -
# # Assignment 5: Evolution with noise
#
# Building on the previous two versions of the model, as a final step we are going to explore how noise affects the evolutionary dynamics. To do this, you will extend the evolutionary model from the previous step with noise. Again, explore the dynamics of this model for 100 steps over noise levels ranging from 1% to 10%. What do you see? Can you explain what is happening in the model? Are you surprised by these results?
#
# Note how in ``NoisyEvolutionaryAxelrodModel`` we only needed to slightly modify the ``__init__`` and ``step`` methods from ``EvolutionaryAxelrodModel``. Thanks to inheritance, we could reuse almost all of our code. There is still some repetition, though: the two nested for loops in ``step`` are still the same. Can you think of one or more ways to further reduce this code duplication?
# +
from collections import Counter, defaultdict
class NoisyEvolutionaryAxelrodModel(EvolutionaryAxelrodModel):
def __init__(self, num_agents, N, noise_level=0.01, seed=None):
super().__init__(num_agents, N, seed=seed)
self.noise_level = noise_level
def step(self):
"""Advance the model by one step."""
for agent_a, agent_b in combinations(self.agents, 2):
for _ in range(self.num_iterations):
move_a = agent_a.step()
move_b = agent_b.step()
#insert noise in movement
...
payoff_a, payoff_b = self.payoff_matrix[(move_a, move_b)]
agent_a.receive_payoff(payoff_a, move_a, move_b)
agent_b.receive_payoff(payoff_b, move_b, move_a)
agent_a.reset()
agent_b.reset()
# calculate scores per class of agents
scores = Counter()
for agent in self.agents:
scores[agent.__class__] += agent.points
# normalize scores on unit interval
total = sum(scores.values())
proportions = {k:v/total for k,v in scores.items()}
# keep track of proportions over the generations
for agent_class in AxelrodAgent.__subclasses__():
self.scores[agent_class].append(proportions[agent_class])
self.build_population(proportions)
# -
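# One possible way to reduce the remaining duplication — a sketch only, using
# illustrative class and method names rather than the notebook's actual code —
# is the template-method pattern: put the pairwise play loop in a base class
# and let subclasses override only how moves are produced (e.g. to inject noise).

```python
from itertools import combinations

class PairwisePlayModel:
    """Illustrative base class; not the notebook's Model class."""

    def __init__(self, agents, payoff_matrix, num_iterations):
        self.agents = agents
        self.payoff_matrix = payoff_matrix
        self.num_iterations = num_iterations

    def get_moves(self, agent_a, agent_b):
        # A noisy subclass would override just this hook
        return agent_a.step(), agent_b.step()

    def play_all_pairs(self):
        # The shared nested loops now live in exactly one place
        for agent_a, agent_b in combinations(self.agents, 2):
            for _ in range(self.num_iterations):
                move_a, move_b = self.get_moves(agent_a, agent_b)
                payoff_a, payoff_b = self.payoff_matrix[(move_a, move_b)]
                agent_a.receive_payoff(payoff_a, move_a, move_b)
                agent_b.receive_payoff(payoff_b, move_b, move_a)
            agent_a.reset()
            agent_b.reset()
```

# Both ``step`` implementations could then call ``play_all_pairs`` and keep
# only their class-specific bookkeeping.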
# Source notebook: src/week_2/Assignment 1 cooperation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import tensorflow as tf
import numpy as np
import os
import pandas as pd
from sklearn.model_selection import train_test_split
# from tensorflow.keras import backend as K
import json
# -
# # Initial global var
# +
## Define the global variables up front: file names, file locations, directories, and so on.
DATA_IN_PATH = '../data_in/'
DATA_OUT_PATH = '../data_out/'
TRAIN_Q1_DATA_FILE = 'train_q1.npy'
TRAIN_Q2_DATA_FILE = 'train_q2.npy'
TRAIN_LABEL_DATA_FILE = 'train_label.npy'
NB_WORDS_DATA_FILE = 'data_configs.json'
## This section defines the parameters needed for training.
## On CPU, reducing the number of epochs is recommended.
BATCH_SIZE = 4096
EPOCH = 50
HIDDEN = 64
NUM_LAYERS = 3
DROPOUT_RATIO = 0.3
TEST_SPLIT = 0.1
RNG_SEED = 13371447
EMBEDDING_DIM = 128
MAX_SEQ_LEN = 31
# -
# # Load Dataset
# +
## Load the data. For efficient loading, we load data that was saved in NumPy format beforehand.
q1_data = np.load(open(DATA_IN_PATH + TRAIN_Q1_DATA_FILE, 'rb'))
q2_data = np.load(open(DATA_IN_PATH + TRAIN_Q2_DATA_FILE, 'rb'))
labels = np.load(open(DATA_IN_PATH + TRAIN_LABEL_DATA_FILE, 'rb'))
prepro_configs = None
with open(DATA_IN_PATH + NB_WORDS_DATA_FILE, 'r') as f:
prepro_configs = json.load(f)
# -
VOCAB_SIZE = prepro_configs['vocab_size']
BUFFER_SIZE = len(labels)
# # Split train and test dataset
q1_data_len = np.array([min(len(x), MAX_SEQ_LEN) for x in q1_data])
q2_data_len = np.array([min(len(x), MAX_SEQ_LEN) for x in q2_data])
# +
## Split and store the data. sklearn's train_test_split is useful here. However, the Quora data
## has two inputs rather than one, so we use np.stack to stack the two into a single array before splitting.
X = np.stack((q1_data, q2_data), axis=1)
y = labels
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=TEST_SPLIT, random_state=RNG_SEED)
train_Q1 = train_X[:,0]
train_Q2 = train_X[:,1]
test_Q1 = test_X[:,0]
test_Q2 = test_X[:,1]
# +
def rearrange(base, hypothesis, labels):
features = {"base": base, "hypothesis": hypothesis}
return features, labels
def train_input_fn():
dataset = tf.data.Dataset.from_tensor_slices((train_Q1, train_Q2, train_y))
dataset = dataset.shuffle(buffer_size=len(train_Q1))
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.map(rearrange)
dataset = dataset.repeat(EPOCH)
iterator = dataset.make_one_shot_iterator()
return iterator.get_next()
def eval_input_fn():
dataset = tf.data.Dataset.from_tensor_slices((test_Q1, test_Q2, test_y))
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.map(rearrange)
iterator = dataset.make_one_shot_iterator()
return iterator.get_next()
# -
# # Model setup
# +
def Malstm(features, labels, mode):
TRAIN = mode == tf.estimator.ModeKeys.TRAIN
EVAL = mode == tf.estimator.ModeKeys.EVAL
PREDICT = mode == tf.estimator.ModeKeys.PREDICT
def basic_bilstm_network(inputs, name):
with tf.variable_scope(name, reuse=tf.AUTO_REUSE):
# output_keep_prob is a *keep* probability, so pass 1 - DROPOUT_RATIO
lstm_fw = [
tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(HIDDEN), output_keep_prob=1 - DROPOUT_RATIO)
for layer in range(NUM_LAYERS)
]
lstm_bw = [
tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(HIDDEN), output_keep_prob=1 - DROPOUT_RATIO)
for layer in range(NUM_LAYERS)
]
multi_lstm_fw = tf.nn.rnn_cell.MultiRNNCell(lstm_fw)
multi_lstm_bw = tf.nn.rnn_cell.MultiRNNCell(lstm_bw)
(fw_outputs, bw_outputs), _ = tf.nn.bidirectional_dynamic_rnn(cell_fw=multi_lstm_fw,
cell_bw=multi_lstm_bw,
inputs=inputs,
dtype=tf.float32)
outputs = tf.concat([fw_outputs, bw_outputs], 2)
return outputs[:,-1,:]
embedding = tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM)
base_embedded_matrix = embedding(features['base'])
hypothesis_embedded_matrix = embedding(features['hypothesis'])
base_semantic_matrix = basic_bilstm_network(base_embedded_matrix, 'base')
hypothesis_semantic_matrix = basic_bilstm_network(hypothesis_embedded_matrix, 'hypothesis')
base_semantic_matrix = tf.keras.layers.Dropout(DROPOUT_RATIO)(base_semantic_matrix)
hypothesis_semantic_matrix = tf.keras.layers.Dropout(DROPOUT_RATIO)(hypothesis_semantic_matrix)
# merged_matrix = tf.concat([base_semantic_matrix, hypothesis_semantic_matrix], -1)
# logit_layer = tf.keras.layers.dot([base_semantic_matrix, hypothesis_semantic_matrix], axes=1, normalize=True)
# logit_layer = K.exp(-K.sum(K.abs(base_semantic_matrix - hypothesis_semantic_matrix), axis=1, keepdims=True))
logit_layer = tf.exp(-tf.reduce_sum(tf.abs(base_semantic_matrix - hypothesis_semantic_matrix), axis=1, keepdims=True))
logit_layer = tf.squeeze(logit_layer, axis=-1)
if PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'is_duplicate':logit_layer
})
# labels is None during prediction
if labels is not None:
labels = tf.to_float(labels)
# loss = tf.reduce_mean(tf.keras.metrics.binary_crossentropy(y_true=labels, y_pred=logit_layer))
loss = tf.losses.mean_squared_error(labels=labels, predictions=logit_layer)
# loss = tf.reduce_mean(tf.losses.sigmoid_cross_entropy(labels, logit_layer))
if EVAL:
accuracy = tf.metrics.accuracy(labels, tf.round(logit_layer))
eval_metric_ops = {'acc': accuracy}
return tf.estimator.EstimatorSpec(
mode=mode,
eval_metric_ops= eval_metric_ops,
loss=loss)
elif TRAIN:
global_step = tf.train.get_global_step()
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss, global_step)
return tf.estimator.EstimatorSpec(
mode=mode,
train_op=train_op,
loss=loss)
# -
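# The similarity used in ``Malstm`` above, exp(-||a - b||_1), maps identical
# encodings to 1 and increasingly different ones toward 0. A quick standalone
# NumPy check of that formula (illustrative only, not part of the estimator):

```python
import numpy as np

def malstm_similarity(a, b):
    # exp of the negative L1 (Manhattan) distance between two batches of encodings
    return np.exp(-np.sum(np.abs(a - b), axis=1))

a = np.array([[1.0, 2.0]])
print(malstm_similarity(a, a))        # identical vectors -> [1.]
print(malstm_similarity(a, a + 1.0))  # L1 distance 2 -> [exp(-2)]
```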
# # Training & Eval
# +
os.environ["CUDA_VISIBLE_DEVICES"]="7" #For GPU
model_dir = os.path.join(os.getcwd(), DATA_OUT_PATH + "/checkpoint/rnn2/")
os.makedirs(model_dir, exist_ok=True)
config_tf = tf.estimator.RunConfig()
lstm_est = tf.estimator.Estimator(Malstm, model_dir=model_dir)
# -
lstm_est.train(train_input_fn)
lstm_est.evaluate(eval_input_fn)
# # Load test dataset and create submission dataset for Kaggle
# +
TEST_Q1_DATA_FILE = 'test_q1.npy'
TEST_Q2_DATA_FILE = 'test_q2.npy'
TEST_ID_DATA_FILE = 'test_id.npy'
test_q1_data = np.load(open(DATA_IN_PATH + TEST_Q1_DATA_FILE, 'rb'))
test_q2_data = np.load(open(DATA_IN_PATH + TEST_Q2_DATA_FILE, 'rb'))
test_id_data = np.load(open(DATA_IN_PATH + TEST_ID_DATA_FILE, 'rb'))
# +
predict_input_fn = tf.estimator.inputs.numpy_input_fn(x={"base":test_q1_data,
"hypothesis":test_q2_data},
shuffle=False)
predictions = np.array([p['is_duplicate'] for p in lstm_est.predict(input_fn=
predict_input_fn)])
# +
print(len(predictions)) #2345796
output = pd.DataFrame( data={"test_id":test_id_data, "is_duplicate": list(predictions)} )
output.to_csv( "rnn_predict.csv", index=False, quoting=3 )
# Source notebook: 5.TEXT_SIM/Appendix/5.3.3_Quora_LSTM_Appendix.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python
'''
DESCRIPTION
-----------
LocalOutlierFactor with trained model
RETURN
------
{DATASET}_lof_seen.png : png file
Similarity scores of seen label
{DATASET}_lof_unseen.png : png file
Similarity scores of unseen label
EXPORTED FILE(s) LOCATION
-------------------------
./reports/retrieval/{EXPERIMENT}/{DATASET}_lof_seen.png
./reports/retrieval/{EXPERIMENT}/{DATASET}_lof_unseen.png
'''
# importing default libraries
import os, argparse, sys
# sys.path.append('./')
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath('__file__')))
os.chdir(ROOT_DIR)
sys.path.append(ROOT_DIR)
# importing scripts in scripts folder
from scripts import config as src
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.manifold import TSNE
# from sklearn.neighbors import LocalOutlierFactor
from tensorflow import keras
import warnings
warnings.filterwarnings('ignore')
import glob
TINY_SIZE = 8
SMALL_SIZE = 10
MEDIUM_SIZE = 16
BIGGER_SIZE = 20
plt.rc('font', size=MEDIUM_SIZE) # controls default text sizes
plt.rc('axes', titlesize=12) # fontsize of the axes title
plt.rc('axes', labelsize=12) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=TINY_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('legend', title_fontsize=SMALL_SIZE) # legend title fontsize
plt.rc('figure', titlesize=MEDIUM_SIZE) # fontsize of the figure title
# -
# ## MELANOMA
# +
loc_output = './reports/retrieval_lof/exper_melanoma'
df_query = pd.read_pickle('./data/processed/exper_melanoma/query_log1p.pck')
df_reference = pd.read_pickle('./data/processed/exper_melanoma/reference_log1p.pck')
X_query = df_query.iloc[:, :-1].values
y_ground_truth_query = df_query.iloc[:, -1:]
X_reference = df_reference.iloc[:, :-1].values
y_ground_truth_reference = df_reference.iloc[:, -1:]
# -
order_plot = list(np.unique(y_ground_truth_reference))
seen_label = dict(zip(order_plot, range(len(order_plot))))
order_plot.append('Neg.cell')
unseen_label = dict(zip(order_plot, range(len(order_plot))))
print(seen_label)
print(unseen_label)
# ## FULL MODEL
model_name = '1_layer_signaling'
model , model_encoding = src.loading_model(f'./models/exper_melanoma/train_test_split/design_{model_name}_reference_log1p_Adam_relu_0.h5', -1)
model_encoding.summary()
# +
encoding_query = model_encoding.predict(X_query)
encoding_reference = model_encoding.predict(X_reference)
print(encoding_query.shape)
print(encoding_reference.shape)
# -
threshold, df_lof_reference, df_lof_query = src.calculate_threshold(encoding_with_seen=encoding_reference
, encoding_with_unseen=encoding_query
, y_with_seen=y_ground_truth_reference
, y_with_unseen=y_ground_truth_query)
fig, axes = plt.subplots(ncols=2, sharey=True, figsize=(15,5))#, dpi=100)
sns.violinplot(x="cell_type", y="score", data=df_lof_reference, ax=axes[0], order=seen_label)
sns.violinplot(x="cell_type", y="score", data=df_lof_query, ax=axes[1], order=unseen_label)
axes[0].axhline(threshold, color='crimson')
axes[1].axhline(threshold, color='crimson')
axes[0].set_title('for seen label')
axes[1].set_title('with unseen label')
axes[0].set(xlabel='', ylabel='similarity score')
axes[1].set(xlabel='', ylabel='')
fig.suptitle('Distribution of similarity of each cell type - '+model_name+' design')
plt.tight_layout()
plt.savefig(os.path.join(loc_output, 'similarity_score_violin_'+model_name+'.png'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'similarity_score_violin_'+model_name+'.pdf'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'similarity_score_violin_'+model_name+'.svg'), dpi=300, bbox_inches = 'tight')
df_lof_query['threshold'] = 'above'
df_lof_query.loc[df_lof_query['score']<=threshold, 'threshold'] = 'below'
# (df_score_query.groupby(['threshold', 'cell_type']).size() / df_score_query.groupby('cell_type').size())*100
df_lof_query[df_lof_query['cell_type']=='Neg.cell'].groupby('threshold').size() / len(df_lof_query[df_lof_query['cell_type']=='Neg.cell'])
# +
## scibet comparison
df_pred_truth = src.scibet_compare(y_with_seen=y_ground_truth_reference
, y_with_unseen=y_ground_truth_query
, X_with_unseen=X_query
, lof_unseen=df_lof_query
, model=model
, threshold=threshold)
df_logo_crosstab = pd.crosstab(df_pred_truth['cell_type'], df_pred_truth['pred'])
df_logo_crosstab = df_logo_crosstab.reindex(list(unseen_label)[::-1])#.sort_index(ascending=False)
df_logo_crosstab
src.scibet_confusion_matrix(df_logo_crosstab)
plt.title(f'{model_name} with train_test_split')
plt.savefig(os.path.join(loc_output, 'confusion_matrix_scibet_'+model_name+'.png'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'confusion_matrix_scibet_'+model_name+'.pdf'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'confusion_matrix_scibet_'+model_name+'.svg'), dpi=300, bbox_inches = 'tight')
# -
# ## LOGO
# +
LOGO_encoding_q = pd.DataFrame()
LOGO_encoding_r = pd.DataFrame()
model_name = '1_layer_signaling'
for i_ in range(5):
_, model_encoding = src.loading_model(f'./models/exper_melanoma/LeaveOneGroupOut/design_{model_name}_reference_log1p_Adam_relu_'+str(i_)+'.h5', -1)
encoding_prediction_q = model_encoding.predict(X_query)
encoding_prediction_r = model_encoding.predict(X_reference)
LOGO_encoding_q = pd.concat([LOGO_encoding_q, pd.DataFrame(encoding_prediction_q)], axis=1)
LOGO_encoding_r = pd.concat([LOGO_encoding_r, pd.DataFrame(encoding_prediction_r)], axis=1)
# -
threshold, df_lof_reference, df_lof_query = src.calculate_threshold(encoding_with_seen=LOGO_encoding_r
, encoding_with_unseen=LOGO_encoding_q
, y_with_seen=y_ground_truth_reference
, y_with_unseen=y_ground_truth_query)
fig, axes = plt.subplots(ncols=2, sharey=True, figsize=(15,5))#, dpi=100)
sns.violinplot(x="cell_type", y="score", data=df_lof_reference, ax=axes[0], order=seen_label)
sns.violinplot(x="cell_type", y="score", data=df_lof_query, ax=axes[1], order=unseen_label)
axes[0].axhline(threshold, color='crimson')
axes[1].axhline(threshold, color='crimson')
axes[0].set_title('for seen label')
axes[1].set_title('with unseen label')
axes[0].set(xlabel='', ylabel='similarity score')
axes[1].set(xlabel='', ylabel='')
fig.suptitle('Distribution of similarity of LOGO models - '+model_name+' design')
plt.tight_layout()
plt.savefig(os.path.join(loc_output, 'similarity_score_violin_LOGO_'+model_name+'.png'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'similarity_score_violin_LOGO_'+model_name+'.pdf'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'similarity_score_violin_LOGO_'+model_name+'.svg'), dpi=300, bbox_inches = 'tight')
df_lof_query['threshold'] = 'above'
df_lof_query.loc[df_lof_query['score']<=threshold, 'threshold'] = 'below'
# (df_score_query.groupby(['threshold', 'cell_type']).size() / df_score_query.groupby('cell_type').size())*100
df_lof_query[df_lof_query['cell_type']=='Neg.cell'].groupby('threshold').size() / len(df_lof_query[df_lof_query['cell_type']=='Neg.cell'])
# +
## scibet comparison
df_pred_truth = src.scibet_compare(y_with_seen=y_ground_truth_reference
, y_with_unseen=y_ground_truth_query
, X_with_unseen=X_query
, lof_unseen=df_lof_query
, model=model
, threshold=threshold)
df_logo_crosstab = pd.crosstab(df_pred_truth['cell_type'], df_pred_truth['pred'])
df_logo_crosstab = df_logo_crosstab.reindex(list(unseen_label)[::-1])#.sort_index(ascending=False)
df_logo_crosstab
src.scibet_confusion_matrix(df_logo_crosstab)
plt.title(f'{model_name} with LOGO')
plt.savefig(os.path.join(loc_output, 'confusion_matrix_scibet_LOGO_'+model_name+'.png'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'confusion_matrix_scibet_LOGO_'+model_name+'.pdf'), dpi=300, bbox_inches = 'tight')
plt.savefig(os.path.join(loc_output, 'confusion_matrix_scibet_LOGO_'+model_name+'.svg'), dpi=300, bbox_inches = 'tight')
# -
# Source notebook: notebooks/7.1-pg-lof.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import cPickle
import h5py
import networkx as nx
h5 = h5py.File('data/para_long0_7644_traj.h5', 'r')
with open('all_fix_distances.pck', 'rb') as fdf:
fd = cPickle.load(fdf)
sp = [x for x in h5['/particles/atoms/species/value'][-1] if x != -1]
st = [x for x in h5['/particles/atoms/state/value'][-1] if x != -1]
pid2sp = {i: x for i, x in enumerate(sp, 1)}
pid2st = {i: x for i, x in enumerate(st, 1)}
fd_sp = [map(pid2sp.get, x) for x in fd]
cl = sorted([tuple(sorted(b)) for b in h5['/connectivity/chem_bonds_0/value'][-1] if -1 not in b])
stcl = [b for b in h5['/connectivity/bonds_0'] if -1 not in b]
g = nx.Graph()
g.add_edges_from(cl)
g.add_edges_from(stcl)
len({k: v for k,v in g.degree().items() if v == 2 and k <= 1000})
len(set([tuple(sorted(b)) for b in cl]))
pidsE = [i for i, x in enumerate(sp, 1) if x == 4]
pidsD = [i for i, x in enumerate(sp, 1) if x == 0]
pidsA = [i for i, x in enumerate(sp, 1) if x == 1]
pidsB = [i for i, x in enumerate(sp, 1) if x == 2]
pidsC = [i for i, x in enumerate(sp, 1) if x == 3]
print len(pidsE)
gcl = nx.Graph()
gcl.add_edges_from(cl)
gcl.degree().values().count(2)
len([i for i, v in pid2st.items() if i <= 1000 and v == 1])
num_zero = 0
pidsEleft = pidsE[:]
for b in cl:
b_degree = map(gcl.degree().get, b)
b_sp = map(pid2sp.get, b)
b_st = map(pid2st.get, b)
if b_sp[0] == 4:
num_zero += 1
if b[0] in pidsEleft:
pidsEleft.remove(b[0])
print num_zero
print len(pidsEleft)
map(pid2sp.get, pidsEleft)
map(pid2st.get, pidsEleft)
len([x for k, v in pid2st.items() if k <= 1000 and v == 0])
[b for b in cl if b[0] in pidsEleft or b[1] in pidsEleft]
pidsEleft
observe_pid = 68
clsubset = spsubset = stsubset = None
spstep = list(h5['/particles/atoms/species/step'])
print spstep
for tidx in range(h5['/connectivity/chem_bonds_0/value'].shape[0]):
clsubset = h5['/connectivity/chem_bonds_0/value'][tidx]
g = nx.Graph()
g.add_nodes_from(range(1, 6001))
g.add_edges_from(clsubset)
g.add_edges_from(stcl)
step = h5['/connectivity/chem_bonds_0/step'][tidx]
trjidx = tidx
spsubset = h5['/particles/atoms/species/value'][trjidx]
stsubset = h5['/particles/atoms/state/value'][trjidx]
print step, [b for b in clsubset if observe_pid in b],spsubset[observe_pid-1], stsubset[observe_pid-1], g.degree()[observe_pid], g.edge[observe_pid]
g.edge[observe_pid]
# Source notebook: movie_preparation/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # SymPy
#
#
# [SymPy](https://es.wikipedia.org/wiki/SymPy) is a Python library for symbolic computation.
# It offers computer algebra capabilities, and can be used online through [SymPy Live](http://live.sympy.org/) or [SymPy Gamma](http://www.sympygamma.com/); the latter is similar to
# [Wolfram Alpha](https://www.wolframalpha.com/).
#
# If you use Anaconda, this package comes installed by default, but with miniconda or pip it must be installed.
#
# ```python
# conda install sympy  # Using the conda manager from Anaconda/Miniconda
# pip install sympy    # Using pip (may require installing additional packages)
# ```
#
# The first thing we must do, before using it, is import the module, as with any
# other Python library.
#
# To use SymPy interactively we use
#
# ```python
# from sympy import *
# init_printing()
# ```
#
# For scripting it is better to import the library as follows
#
# ```python
# import sympy as sym
# ```
#
# And call its functions like this
#
# ```python
# x = sym.symbols("x")
# expr = sym.cos(x)**2 + 3*x
# deriv = expr.diff(x)
# ```
#
# where we compute the derivative of $\cos^2(x) + 3x$,
# which should be $-2\sin(x)\cos(x) + 3$.
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
init_printing()
# Let us define the variable $x$ as a mathematical symbol. This allows us to use
# this variable in SymPy.
x = symbols("x")
# Let's start with simple calculations. Below, we have a _code cell_ with a sum.
# Place the cursor on it and press SHIFT + ENTER to evaluate it.
#
1 + 3
# Let's do some computations.
factorial(5)
1 // 3
1 / 3
S(1) / 3
# We can evaluate this expression to its floating-point version
sqrt(2*pi)
float(sqrt(2*pi))
# We can also store expressions in variables, like any Python variable.
#
radius = 10
height = 100
area = pi * radius**2
volume = area * height
volume
float(volume)
# So far, we have used SymPy as a calculator. Let's try some more
# advanced computations. For example, some integrals.
#
integrate(sin(x), x)
integrate(sin(x), (x, 0, pi))
# We can define a function, and integrate it
f = lambda x: x**2 + 5
f(5)
integrate(f(x), x)
y = symbols("y")
integrate(1/(x**2 + y), x)
# If we assume that the denominator is positive, this expression can be simplified further
a = symbols("a", positive=True)
integrate(1/(x**2 + a), x)
# So far, we have covered the basics. Now let's try some more
# complicated examples.
#
# **Note:** If you want to know more about a specific function you can use
# the ``help()`` function or the IPython _magic_ command ``??``
help(integrate)
# +
# integrate??
# -
# ## Examples
# ### Solving algebraic equations
#
# To solve algebraic equations and systems we can use:
# [``solveset`` and ``solve``](http://docs.sympy.org/latest/tutorial/solvers.html).
# The preferred method is ``solveset``; however, there are systems that
# can be solved using ``solve`` but not ``solveset``.
#
# To solve equations using ``solveset``:
a, b, c = symbols("a b c")
solveset(a*x**2 + b*x + c, x)
# We must enter the expression equated to 0, or as an equation
solveset(Eq(a*x**2 + b*x, -c), x)
# ``solveset`` does not allow solving systems of nonlinear equations; for those we use ``solve``, for example
#
solve([x*y - 1, x - 2], x, y)
# ### Linear algebra
#
# We use ``Matrix`` to create matrices. Matrices can contain variables and mathematical expressions.
#
# We use the ``.inv()`` method to compute the inverse, and ``*`` to multiply matrices.
A = Matrix([
[1, -1],
[1, sin(c)]
])
display(A)
B = A.inv()
display(B)
A * B
# This expression should be the identity matrix; let's simplify it.
# There are several ways to simplify expressions, and ``simplify`` is the most general one.
simplify(A * B)
# ### Plotting
#
# SymPy can produce 2D and 3D plots
from sympy.plotting import plot3d
plot(sin(x), (x, -pi, pi));
monkey_saddle = x**3 - 3*x*y**2
p = plot3d(monkey_saddle, (x, -2, 2), (y, -2, 2))
# ### Derivatives and differential equations
#
# We can use the ``diff`` function or the ``.diff()`` method to compute derivatives.
f = lambda x: x**2
diff(f(x), x)
f(x).diff(x)
g = lambda x: sin(x)
diff(g(f(x)), x)
# And yes, SymPy knows about the chain rule!
#
# To finish, let's solve a second-order differential equation
#
# $$ u''(t) + \omega^2 u(t) = 0$$
t = symbols("t")
u = symbols("u", cls=Function)
omega = symbols("omega", positive=True)
ode = u(t).diff(t, 2) + omega**2 * u(t)
dsolve(ode, u(t))
# ## Turning SymPy expressions into NumPy functions
#
# ``lambdify`` converts SymPy expressions into functions for numerical computation with NumPy.
#
# Let's see how.
f = lambdify(x, x**2, "numpy")
f(3)
f(np.array([1, 2, 3]))
# Let's try a more complex example
fun = diff(sin(x)*cos(x**3) - sin(x)/x, x)
fun
fun_numpy = lambdify(x, fun, "numpy")
# and evaluate it on some interval, for example, $[0, 5]$.
pts = np.linspace(0, 5, 1000)
fun_pts = fun_numpy(pts + 1e-6)  # To avoid division by zero
plt.figure()
plt.plot(pts, fun_pts)
# ## Exercises
#
# 1. Compute the limit
#
# $$ \lim_{x \rightarrow 0} \frac{\sin(x)}{x}\, .$$
#
# 2. Solve the Bernoulli differential equation
#
# $$x \frac{\mathrm{d} u(x)}{\mathrm{d}x} + u(x) - u(x)^2 = 0\, .$$
#
# ## Additional resources
#
# - SymPy Development Team. [SymPy Tutorial](http://docs.sympy.org/latest/tutorial/index.html), (2018). Accessed: July 23, 2018
# - <NAME>. [Taming math and physics using SymPy](https://minireference.com/static/tutorials/sympy_tutorial.pdf), (2017). Accessed: July 23, 2018
# Source notebook: notebooks/herramientas/sympy_basico.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 (KnowNet)
# language: python
# name: knownet
# ---
# +
from pathlib import Path
import os
import sys
current_location = Path(os.getcwd())
parent_directory = current_location.parent
sys.path.append(str(parent_directory))
# -
from data_platform.datasource import SQLiteDS
from data_platform.config import ConfigManager
current_path = Path(os.getcwd())
data_path = current_path / 'data'
sqlite_path = data_path / 'sqlite' / 'data.db'
# +
sample_table1 = {
('table1', "row1"): {"col1": 1, "col2": 2, "col3": 3, "col4": 4, "col5": 5},
('table1', "row2"): {"col2": 2, "col3": 3, "col4": 4, "col5": 5, "col6": 6, 'foo': 'baz'},
('table1', "go_user1"): {'name': 'player', 'nickname': 'boo', 'from': 'pa', 'li': ['unique', 'foo']},
('table1', "matrix1"): {'content': [[1, 0, 0], [0, 1, 0], [0, 0, 1]]}
}
sample_table2 = {
('table2', "1"): {"col1": 1, "col2": 2, "col3": 3, "col4": 4, "col5": 5},
('table2', "22"): {"col2": 2, "col3": 3, "col4": 4, "col5": 5, "col6": 6, 'foo': 'baz'},
('table2', "333"): {'name': 'player', 'nickname': 'boo', 'from': 'pa', 'li': ['unique', 'foo']},
('table2', "4444"): {'content': [[1, 0, 0], [0, 1, 0], [0, 0, 1]]}
}
# -
config = ConfigManager({'init': {'location': sqlite_path}})
ds = SQLiteDS(config)
ds.create_table('table1')
ds.create_table('table2')
# +
for row_key, row_data in sample_table1.items():
print(ds.create_row(row_key, row_data))
for row_key, row_data in sample_table2.items():
print(ds.create_row(row_key, row_data))
# -
ds.read_row()
ds.update_row(('@*', 'row1'), {'new_column': 1})
ds.read_row(('table1', 'row1'))
ds.delete_row(('table1', '@*'))
ds.read_row()
ds.delete_table('table1')
ds.delete_table('table2')
# Source notebook: examples/Test_SQLiteDS.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Tce3stUlHN0L"
# ##### Copyright 2019 The TensorFlow Authors.
#
# + cellView="form" id="tuOe1ymfHZPu"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="MfBg1C5NB3X0"
# # Distributed training with TensorFlow
# + [markdown] id="r6P32iYYV27b"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/tr/r1/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/tr/r1/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] id="SNsdT6sJihFa"
# Note: These documents were translated by volunteer TensorFlow users. Since
# community translations are maintained on a best-effort basis, we cannot
# guarantee that they exactly match the
# [official English documentation](https://www.tensorflow.org/?hl=en).
# If you have suggestions to improve these translations, please send a pull
# request to the [tensorflow/docs](https://github.com/tensorflow/docs) repository.
# To volunteer to contribute translations, contact the
# [<EMAIL>](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-tr) mailing list.
# + [markdown] id="xHxb-dlhMIzW"
# ## Overview
# The `tf.distribute.Strategy` API provides an abstraction for distributing your TF program across multiple machines/GPUs. The goal is to let users distribute their training while keeping their existing models and code as unchanged as possible.
#
# This guide uses the `tf.distribute.MirroredStrategy` strategy, which performs synchronous training on multiple GPUs of a single machine using a single graph. In essence, this strategy copies the model's variables to all processors. It then combines the gradients from all processors with [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) and applies the combined value to every replica.
#
# `MirroredStrategy` is only one of the distribution strategies available in TensorFlow. To learn about the others, see the [distribution strategies guide](../../guide/distribute_strategy.ipynb).
#
# + [markdown] id="MUXex9ctTuDB"
# ### Keras API
# The models and training loops in this example are built with the `tf.keras` API. If you want to build your own training loop, see [this guide](training_loops.ipynb).
# + [markdown] id="Dney9v7BsJij"
# ## Import the dependencies
# + id="MpTqjus4CWOR"
# Import the TensorFlow library
import tensorflow.compat.v1 as tf
# + id="74rHkS_DB3X2"
import tensorflow_datasets as tfds
import os
# + [markdown] id="hXhefksNKk2I"
# ## Download the dataset
# + [markdown] id="OtnnUwvmB3X5"
# Download and load the MNIST dataset using [TensorFlow Datasets](https://www.tensorflow.org/datasets).
# + [markdown] id="lHAPqG8MtS8M"
# Setting the `with_info` argument to `True` includes the metadata for the entire dataset, which is stored in the `ds_info` variable.
# Among other things, this metadata object includes the number of train and test examples.
#
# + id="iXMJ3G9NB3X6"
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
# + [markdown] id="GrjVhv-eKuHD"
# ## Define the distribution strategy
#
# + [markdown] id="TlH8vx6BB3X9"
# Create a `MirroredStrategy` object. This handles the distribution and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model in.
# + id="4j0tdf4YB3X9"
strategy = tf.distribute.MirroredStrategy()
# + id="cY3KA_h2iVfN"
print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
# + [markdown] id="lNbPv0yAleW8"
# ## Set up the input pipeline
# + [markdown] id="psozqcuptXhK"
# If a model is trained on multiple GPUs, the batch size should be increased accordingly so that the extra computing power is used efficiently. The learning rate should also be adjusted to the number of GPUs.
# + id="p1xWxKcnhar9"
# You can also get the number of examples with 'ds_info.splits.total_num_examples'.
num_train_examples = ds_info.splits['train'].num_examples
num_test_examples = ds_info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
# + [markdown] id="0Wm5rsL2KoDF"
# Pixel values, which are originally in the 0-255 range, [must be normalized to values between 0 and 1](https://en.wikipedia.org/wiki/Feature_scaling). Define this scaling in a function.
# + id="Eo9a46ZeJCkm"
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
# + [markdown] id="WZCa5RLc5A91"
# Now apply this function to the training and test data, then shuffle the training data and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch).
#
# + id="gRZu2maChwdT"
train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
# + [markdown] id="4xsComp8Kz5H"
# ## Create the model
# + [markdown] id="1BnQYQTpB3YA"
# Create and compile the Keras model in the `strategy.scope` context.
# + id="IexhL_vIB3YA"
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
# + [markdown] id="8i6OU5W9Vy2u"
# ## Define the callbacks
#
# + [markdown] id="YOXO5nvvK3US"
# The callbacks used here are:
# * *TensorBoard*: This callback writes logs for TensorBoard, which let you visualize the training as graphs.
# * *Model Checkpoint*: This callback saves the model after every epoch.
# * *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch or batch.
#
# For illustration, add a custom callback that prints the learning rate in the notebook.
# + id="A9bwLCcXzSgy"
# Define a checkpoint directory to store the checkpoints.
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
# + id="wpU-BEdzJDbK"
# Function for a decaying learning rate.
# You can use any decay function you need.
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >= 3 and epoch < 7:
return 1e-4
else:
return 1e-5
# + id="jKhiMgXtKq2w"
# Callback for printing the LR (learning rate) at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print ('\nLearning rate for epoch {} is {}'.format(
epoch + 1, tf.keras.backend.get_value(model.optimizer.lr)))
# + id="YVqAbR6YyNQh"
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
# + [markdown] id="70HXgDQmK46q"
# ## Train and evaluate
# + [markdown] id="6EophnOAB3YD"
# Now train the model as usual: call `fit` on the model and pass in the dataset we created at the beginning of this guide. This step is the same whether or not you distribute the training.
#
# + id="7MVw_6CqB3YD"
model.fit(train_dataset, epochs=10, callbacks=callbacks)
# + [markdown] id="NUcWAUUupIvG"
# As you can see below, the checkpoints are being saved.
# + id="JQ4zeSTxKEhB"
# Check the checkpoint directory.
# !ls {checkpoint_dir}
# + [markdown] id="qor53h7FpMke"
# To see how the model performs, load the latest checkpoint and call 'evaluate' on the test data.
#
# Call 'evaluate' on the appropriate datasets, as you have done before.
# + id="JtEwxiTgpQoP"
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
# + [markdown] id="IIeF2RWfYu4N"
# You can view the program's results in the terminal by downloading the TensorBoard logs:
#
# ```
# $ tensorboard --logdir=path/to/log-directory
# ```
# + id="LnyscOkvKKBR"
# !ls -sh ./logs
# + [markdown] id="kBLlogrDvMgg"
# ## Exporting the model as a _SavedModel_
# + [markdown] id="Xa87y_A0vRma"
# If you want to use the graphs and variables outside of this program, `SavedModel` is the ideal format. It can be loaded without a scope and is platform-independent.
# + id="h8Q4MKOLwG7K"
path = 'saved_model/'
# + id="4HvcDmVsvQoa"
tf.keras.experimental.export_saved_model(model, path)
# + [markdown] id="vKJT4w5JwVPI"
# Load the model without 'strategy.scope'.
# + id="T_gT0RbRvQ3o"
unreplicated_model = tf.keras.experimental.load_from_saved_model(path)
unreplicated_model.compile(
loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
# + [markdown] id="8uNqWRdDMl5S"
# ## What's next?
#
# Read the [distribution strategy guide](../../guide/distribute_strategy_tf1.ipynb).
#
# Note: `tf.distribute.Strategy` is actively under development and new examples and guides will be added here soon. Please try it out; your feedback is always welcome. You can leave it via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
| site/tr/r1/tutorials/distribute/keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # RCS W2D2 Variables and Data Types
# ## Useful Jupyter Commands
# * Ctrl-Enter run cell inplace
# * Shift-Enter run cell, select cell below
# * Alt-Enter run cell, insert below (Option-Enter on Mac)
# * Ctrl-Click multiple cursors when editing
# * Setup custom keyboard shortcuts through Help -> Keyboard Shortcuts
# * ESC A - New Cell Above Current
# * ESC B - New Cell Below
# %lsmagic
# !dir
# +
# The following identifiers are used as reserved words, or keywords of the language,
# and cannot be used as ordinary identifiers. They must be spelled exactly as written here:
# False    class     finally   is        return
# None     continue  for       lambda    try
# True     def       from      nonlocal  while
# and      del       global    not       with
# as       elif      if        or        yield
# assert   else      import    pass
# break    except    in        raise
# -
import keyword ## this is how we will be importing various modules/libraries
keyword.kwlist
# ## Data types in Python 3.x
# * Integers type(42) int
# * Floating Point type(3.14) float
# * Boolean type(True),type(False) bool
# * String(ordered, **immutable** char sequence) type("OyCaramba") str
# * List type([1,2,63,"aha","youcanmixtypeinsidelist", ["even","nest"]]) list
# * Dictionary(key:value pairs) type({"foo":"bar", "favoriteday":"Friday"}) dict
# * Tuple - ordered immutable sequence type(("sup",7,"dwarves")) tuple
# * Set (unordered collection of uniques) type({"k","a","r","t","u","p","e","l","i","s"}) set
#
#
a=42
print(type(a), type(42))
b=20
print(b, id(b))
# ## id(object)
#
# Return the “identity” of an object.
#
# This is an integer which is guaranteed to be unique and constant for this object during its lifetime.
# Two objects with non-overlapping lifetimes may have the same id() value.
#
# #### CPython implementation detail: This is the address of the object in memory.
b+=22 # shortcut operator to add to existing value of a variable b = b + 22
# other shortcut operators: += -= *= /=
print(b)
id(a),id(b),id(42),id(3),id(4),id(5),id(5000),id(5001),id(5002) # notice how ids are incremented!
id(6)
id(-1),id(0),id(1)
id(7)
id(-6),id(-5),id(-4)
a is b
c = -6
d = -6
c,d
c == d
c is d
# ### Why is there 16 byte difference between id(3),id(4) ? Shouldn't it be 8 bytes?
# ### Extra credit! Find out which numbers have constant ids!
type(3.14)
type(False),type(True)
type("AyAyOy")
type([1,6,3,4,1,1,2,2])
type({"foo":"bar", 1:2})
type((1,6,3,3,5,"wow"))
type({2,"u","n","i","x"})
# # Exercise:
#
# Assign a and b new values 1918 and 2018 respectively and then swap values in these variables.
#
#
# print(a,b) should show 2018, 1918
a = 1918
b = 2018
tmp = a
print(tmp, a, b)
a = b
b = tmp
print(a,b)
c = 55
d = 77
print(c,d)
a, b, c, d = d, c, b, a
print(a,b,c,d)
# ## Floating Point
4/5 + 2/3 + 1/7
0.1+0.2+0.3-0.6
# More on floating point https://docs.python.org/2/tutorial/floatingpoint.html
import math
dir(math)
from math import tan as tangent
tangent(33)
tan(66)
sin(60)
sqrt = math.sqrt
sqrt(16)
print(16**0.5)
print(math.sqrt(16))
print(math.factorial(5))
math.sin()
# ## Strings
# * immutable
name = "Valdis"
print(name)
len(name)
name[0]
name[5],name[-1],name[-2]
name[0:3]
name[:4]
name[1:4]
name[-3]
name[0:6:1]
name[::1]
name[1:6:2]
name[::-1]
name[5:2:-1]
name[3]='b'
#name[3]="b" # Is that allowed?
name=name[0:3]+"d"+name[4:]
print(name)
n2=name[:]
print(n2)
print(id(n2),id(name))
print(name[:])
name = "Voldemars"
print(n2,name)
n2 = "Something"
name=name[::-1]
print(name)
multi_string = """Really
Long
And
Boring
String"""
print(multi_string)
t = "Talk "+10
print(t)
str(555)
id('25'),id(25)
int('345')
# ?str
# Automatic string concatenation without +
funny_string = ('Monty '
'Python '
'Spam '*5)
print(funny_string)
# Notice that string multiplication happens after concatenation
print("It's \ta \"wonderful\" \nlife")
print('Really really long line\
 that continues on the next line')
# ## Fstrings (new since Python 3.6)
# * faster way to format strings
# * can use variables in current scope
# * same idea as .format() but faster and easier to read
weekday="Tuesday"
month="June"
day=8
greeting = f'Today is {day}th of {month} which is {weekday}. Nice {month} isn\'t it ? '
print(greeting)
#nicer than format method
seconds=345.24556
width=22 # how much space our formatted value should take up
precision=8 # how many digits to use
print(f'Elapsed {seconds} seconds since start of new era')
print(f'Elapsed {seconds:{12}.{precision}} seconds since start of new era')
print(f'Elapsed {seconds:{7}.{5}} seconds since start of new era')
print(f'Number {5556.2222:{12}.{6}}')
greeting[10:33:1]
# ### Python 3.x strings are Unicode by default; no more u'' prefixes
# ## String Exercise:
# * Assign your favorite food to variable of your choosing
# * Output 3rd letter of your favorite
# * Output first 2 letters of your favorite
# * Output last 3 letters of your favorite
# * Output reverse of your favorite
# * Output every 2nd character of your favorite in reverse
#
food = ""
# ## Python Lists
#
# * Ordered
# * Mutable(can change individual members!)
# * Comma separated between brackets [1,3,2,5,6,2]
# * Can have duplicates
# * Can be nested
groceries=['Bread','Eggs','Milk']
print(groceries)
'Bread' in groceries
'Apples' in groceries
name_list = list("Valdis")
print(name_list)
name_list[3]
name_list[3]='b'
print(name_list)
glist=list(groceries)
print(glist, id(glist), id(groceries))
glist2=groceries
print(id(glist2),id(groceries),id(glist))
glist == groceries
glist[2] = 'Apples'
glist.append('Peaches')
glist
glist == groceries
dir(glist)
x = list(range(1,10))
print(x)
# Python 3.x range is an improved Python 2.x xrange!
y = [x*x for x in x]
print(y)
# List comprehensions! Pythonism! We'll talk about those again!
sum(x),sum(y)
len(x),len(y)
y
x+y
z=(x+y)*3
z
tmp = y*2
print(tmp)
len(tmp)
tmp[3:8:2]
tmp[3:98:2]
tmp[::-1]
len(y)
dir(tmp)
y.insert(0, 33)
y
55 in y
33 in y
y.index(16)
y.index(33)
# y.remove(33)
y.remove(33)
y
y+=[11, -22, 66]
y
y.sort()
y
y.reverse()
y
x=y[::-1]
x
z=x.pop(5)
print(x, z)
# ## Common list methods.
#
# * `list.append(elem)` -- adds a single element to the end of the list. Common error: does not return the new list, just modifies the original.
# * `list.insert(index, elem)` -- inserts the element at the given index, shifting elements to the right.
# * `list.extend(list2)` adds the elements in list2 to the end of the list. Using + or += on a list is similar to using extend().
# * `list.index(elem)` -- searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use "in" to check without a ValueError).
# * `list.remove(elem)` -- searches for the first instance of the given element and removes it (throws ValueError if not present)
# * `list.sort()` -- sorts the list in place (does not return it). (The sorted() function shown later is preferred.)
# * `list.reverse()` -- reverses the list in place (does not return it)
# * `list.pop(index)`-- removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()).
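A quick self-contained demonstration of the most common pitfall listed above: `append` mutates the list in place and returns `None` (the values here are just toy data):

```python
nums = [1, 2, 3]
result = nums.append(4)  # modifies nums in place...
print(result)            # ...and returns None, a common gotcha
print(nums)              # [1, 2, 3, 4]
```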
list(range(55,59))
tmp
tmp.count(64)
# We count occurrences!
tmp.count('Valdis')
print(tmp.index(64))
tmp[7]
# ?tmp.index
len(x)
[] is []
[] == []
# ## List exercise
# * create a list of numbers 5 to 15 (inclusive) and assign it to a variable
# * output first 4 numbers
# * output last 3 numbers
# * output last 3 numbers in reversed order
# * extra credit output average!
nums = list(range(5,16))
nums
nums[:4]
nums[-3:]
nums[-1:-4:-1]
avg = sum(nums) / len(nums)
avg
isCool = True
isCool
# ## Sets
#
# * unordered
# * uniques only
# * curly braces {3, 6, 7}
s={3,3,6,1,3,6,7}
print(s)
list(s)
s.issuperset(range(6))
s2={3,6,76,2,8,8}
s2
dir(s)
s3=s.union(s2)
s3
s4=s.difference(s2)
s4
s2.difference(s)
s.intersection(s2)
s3=s+s2
s3
print(tmp)
set(tmp)
# ## Tuples
#
# * ordered
# * immutable (cannot be changed!)
# * Can be used as a collection of fields
mytuple = 6, 4, 9
print(mytuple)
mytuple[2]
mytuple.count(9)
mytuple.index(9)
dir(mytuple)
# A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. For example:
# +
empty = ()
singleton = 'hello', # <-- note trailing comma
double= ('hello', 'super')
len(empty)      # 0
len(singleton)  # 1
print(empty,singleton,double)
#('hello',)
# The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also possible:
# -
lst=list(double)
lst
dir(singleton)
# ## Unpacking tuples
_, _, c = mytuple
print(c)
mytuple[2]
ytup = ([1,2,3], "Petr", 66)
ytup
ytup[1] = 'Valdis'
ytup[2] = 'Valdis'
ytup[0] ='Valdis'
ytup[0][1] = 'Valdis'
ytup
ytup[0].append('something')
ytup
_
_
a, b, c = mytuple
print(a, b, c)
55
_
set(tmp)
x= range(10)
x
type(x)
list(x)
x=range(1000000000)
y=list(x[45:66])
y
# ## In Python 3.x, `range` returns an iterable (lazy: values on demand, saving memory over long iterations)
# ### In Python 2.7, `range` returned a list! (`xrange` was the iterator version, not as fully featured as the current `range`)
# Full list of differences: http://treyhunner.com/2018/02/python-3-s-range-better-than-python-2-s-xrange/
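A small sketch of this laziness; the exact object size is CPython-specific, so treat the numbers as illustrative:

```python
import sys

r = range(10**9)          # no billion-element list is materialized
print(sys.getsizeof(r))   # the range object itself stays tiny
print(r[500_000_000])     # indexing is computed on demand
print(500_000_001 in r)   # membership tests are O(1) for ranges
```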
# ## Dictionaries
#
# * Collection of Key - Value pairs
# * also known as associative array
# * unordered
# * keys unique in one dictionary
# * storing, extracting
#
tel = {'jack': 4098, 'sape': 4139}
print(tel)
tel['guido'] = 4127
print(tel.keys())
print(tel.values())
tel['valdis'] = 4127
tel
tel['jack']
del tel['sape']
tel['sape']
'valdis' in tel.keys()
'karlis' in tel.keys()
4127 in tel.values()
type(tel.values())
dir(tel.values())
tel['irv'] = 4127
tel
list(tel.keys())
list(tel.values())
sorted([5,7,1,66], reverse=True)
# ?sorted
tel.keys()
sorted(tel.keys())
'guido' in tel
'Valdis' in tel
t2=dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
print(t2)
names = ['Valdis', 'valdis', 'Antons', 'Anna', 'Kārlis', 'karlis']
names
sorted(names)
# * `globals()` always returns the dictionary of the module namespace
# * `locals()` always returns a dictionary of the current namespace
# * `vars()` returns either a dictionary of the current namespace (if called with no argument) or the dictionary of the argument.
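A minimal, self-contained illustration of the difference (the variable names here are made up for the demo):

```python
x = 42  # module-level name, visible in globals()

def demo():
    local_var = 'inside'
    # locals() sees only names bound in this function's scope;
    # globals() still sees module-level names like x
    return 'local_var' in locals(), 'x' in locals(), 'x' in globals()

print(demo())  # (True, False, True)
```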
'print(a,b)' in globals()['In']
vars().keys()
| .ipynb_checkpoints/Python Variables and Data Types-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy
import pandas
import re
import os
season = 2011
# +
input_file_name = 'data/rankings/{}_composite_rankings.csv'.format(season)
def pattern_match(pattern, string):
return (re.search(pattern, string) is not None)
special_columns = ['Team', 'Rank', 'Conf', 'Record', 'Mean', 'Median', 'St.Dev']
# +
def get_fields(width, line, data_type, n=1):
data = list()
for i in range(n):
y = line[:width]
#print '"{}"'.format(y)
z = numpy.nan if y.strip() == '' else data_type(y.strip())
data.append(z)
line = line[width:]
return (data, line)
def parse_line(line):
ranker_width = 4
section_width = 2
rank_width = 5
team_width = 17
conf_width = 5
record_width = 7
team_short_width = 9
float_width = 6
float_2_width = 7
data = list()
temp_line = line
# First Block
temp_data, temp_line = get_fields(ranker_width, temp_line, int, 5)
data.extend(temp_data)
temp_data, temp_line = get_fields(section_width, temp_line, str)
temp_data, temp_line = get_fields(ranker_width, temp_line, int, 5)
data.extend(temp_data)
temp_data, temp_line = get_fields(rank_width, temp_line, int)
data.extend(temp_data)
temp_data, temp_line = get_fields(team_width, temp_line, str)
data.extend(temp_data)
temp_data, temp_line = get_fields(conf_width, temp_line, str)
data.extend(temp_data)
temp_data, temp_line = get_fields(record_width, temp_line, str)
data.extend(temp_data)
# Blocks 2 through 4
for i in range(2):
for j in range(3):
temp_data, temp_line = get_fields(section_width, temp_line, str)
temp_data, temp_line = get_fields(ranker_width, temp_line, int, 5)
data.extend(temp_data)
temp_data, temp_line = get_fields(rank_width, temp_line, int)
data.extend(temp_data)
temp_data, temp_line = get_fields(team_short_width, temp_line, str)
data.extend(temp_data)
# Block 5
for j in range(2):
temp_data, temp_line = get_fields(section_width, temp_line, str)
temp_data, temp_line = get_fields(ranker_width, temp_line, int, 5)
data.extend(temp_data)
temp_data, temp_line = get_fields(section_width, temp_line, str)
temp_data, temp_line = get_fields(ranker_width, temp_line, int, 2)
data.extend(temp_data)
temp_data, temp_line = get_fields(section_width, temp_line, str)
temp_data, temp_line = get_fields(float_width, temp_line, float, 2)
data.extend(temp_data)
temp_data, temp_line = get_fields(float_2_width, temp_line, float)
data.extend(temp_data)
# print zip(header[:len(data)], data)
# print temp_line
return data
# +
with open(input_file_name, 'r') as input_file:
for line_number, line in enumerate(input_file):
if line_number == 0:
header = map(lambda s: s.strip().strip(','), line.split())
df_header = list()
for f in header:
if f not in df_header:
df_header.append(f)
df_dict = dict([(f, list()) for f in df_header])
continue
# skip empty lines
if line.strip() == '':
continue
# Check for a duplicate header line
duplicate_header = map(lambda s: s.strip().strip(','), line.split())
if header == duplicate_header:
continue
data = parse_line(line)
recorded = list()
for f, x in zip(header, data):
if f not in recorded:
df_dict[f].append(x)
recorded.append(f)
df = pandas.DataFrame(df_dict)
ranker_list = sorted(list(set(df.columns) - set(special_columns)))
feature_list = list(special_columns) + ranker_list
for ranker in ranker_list:
df[ranker] = df[ranker].fillna(df['Median'])
df[feature_list][:5]
# -
output_file = 'data/rankings/{}_composite_rankings.clean.csv'.format(season)
df[feature_list].to_csv(output_file, sep='|')
| 2011_rankings_reader.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# # _*QISKIT for quantum machine learning*_
# This tutorial shows how to use QISKIT (created by IBM Q) to implement quantum machine learning algorithms on the quantum processors provided by IBM Q. The machine learning algorithms include the K-means method and support vector machines (SVM). We also discuss the Harrow-Hassidim-Lloyd (HHL) algorithm for solving systems of linear equations, which is useful for implementing the quantum SVM algorithm and quantum optimization algorithms such as gradient descent and Newton's method.
#
# ***
# ### Contributors
# <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME>$^{1}$
#
# 1. Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China, 610051
# ***
# ## Introduction
# Quantum machine learning is an interdisciplinary topic that arises from the intersection of quantum computing and machine learning. The development of this field is reflected in two aspects: on the one hand, machine learning provides more research methods and applications for understanding and controlling quantum systems; on the other hand, advances in quantum computing techniques make quantum acceleration of classical machine learning algorithms possible and greatly expand the scope and computational effectiveness of machine learning applications. We can say that in this vibrant area, quantum computing and machine learning promote each other and make progress together.<br/>
# Quantum machine learning applies quantum effects to machine learning algorithms, such as supervised and unsupervised learning algorithms, run on a quantum computer. Quantum machine learning algorithms can achieve significant speedups over classical machine learning algorithms on many tasks (clustering, classification, etc.).<br/>
# In a word, quantum machine learning is an emerging interdisciplinary research area at the intersection of quantum physics and machine learning. Quantum machine learning algorithms can use the advantages of quantum computation to improve classical machine learning methods, for example by developing efficient implementations of expensive classical algorithms on a quantum computer. In this tutorial, we implement some simple machine learning algorithms via QISKIT.
#
# This tutorial is organized into the following topics:
# 1. [Quantum K-Means algorithm](#section1)
# 2. [Quantum algorithm for linear system of equation](#section2)
# 3. [Quantum support vector machine](#section3)
# 4. [Future work](#section4)
# ## 1. Quantum K-Means algorithm<a id='section1'></a>
# In this section, we introduce an unsupervised learning algorithm called K-Means.
# #### Classical K-Means algorithm
# The K-Means algorithm is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. K-Means aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster; it tends to find clusters of comparable spatial extent. Given a set of observations $\left( x_1, x_2, \ldots, x_n \right)$, where each observation is a d-dimensional real vector, k-means clustering divides the n observations into k (k ≤ n) sets $S = \{ S_1, S_2, \ldots, S_k \}$ so as to minimize the within-cluster sum of squares. In other words, its goal is to find the clustering that satisfies:
# $\arg\min_S \sum_{i = 1}^{k} \sum_{X \in S_i} \left\| X - \mu_i \right\|^2$
# where $\mu_i$ is the mean of the points in $S_i$.
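# To make this objective concrete, here is a minimal NumPy sketch of the classical Lloyd's algorithm; the `kmeans` helper and the toy data are illustrative assumptions, not part of this tutorial:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and mean updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assignment step: label each point with its nearest center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # update step: move each center to the mean of its assigned points
        for i in range(k):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    return centers, labels

# two well-separated toy clusters
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 10.0)])
centers, labels = kmeans(X, k=2)
```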
# #### Quantum K-Means algorithm
# * [Quantum K-Means algorithm](1_K_Means/Quantum K-Means Algorithm.ipynb)
# ## 2. Quantum algorithm for linear systems of equations (HHL algorithm)<a id='section2'></a>
# In this section, we introduce the quantum algorithm for linear systems of equations. This algorithm is a useful tool for solving linear systems of equations and also a powerful method for implementing quantum machine learning algorithms such as SVM.
# * [Quantum algorithm for linear system of equation](2_HHL/Quantum Algorithm for Linear System of Equations.ipynb)
# ## 3.Quantum support vector machine (to be continued )<a id='section3'></a>
# This section is mainly about the quantum support vector machine (SVM).
#
# #### Support vector machine algorithm (SVM)
# Support Vector Machine (SVM) is a supervised learning method that is widely used in statistical classification and regression analysis. It was first proposed by <NAME> Vapnik in 1995. It shows many unique advantages in solving small-sample, nonlinear, and high-dimensional pattern recognition problems, and can be applied to function fitting and other machine learning problems. The characteristic of this classifier is that it minimizes the empirical error while maximizing the geometric margin, so the support vector machine is also referred to as the maximum-margin classifier.
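# Before the quantum version, here is a minimal classical sketch: a linear max-margin classifier trained by subgradient descent on the L2-regularized hinge loss. The `train_linear_svm` helper and the toy data are illustrative assumptions, not part of this tutorial.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize the L2-regularized hinge loss with full-batch subgradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # points inside or on the wrong side of the margin
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
        grad_b = -y[viol].sum() / len(X)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# linearly separable toy data: class -1 near the origin, class +1 shifted away
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [3.0, 3.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
w, b = train_linear_svm(X, y)
predictions = np.sign(X @ w + b)
```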
#
# #### Quantum support vector machine algorithm
# * [Quantum vector machine algorithm](3_SVM/Quantum Support Vector Machine.ipynb)
# ## 4.Future work<a id='section4'></a>
# Quantum machine learning has been widely applied in many fields. Many excellent results have been achieved in solving optimization problems such as gradient descent and Newton's method. There is also much excellent work in feature extraction, data reduction, and so on. Recently, heuristic results have been achieved in research on neural networks and deep learning. Quantum machine learning therefore has broad prospects and deserves much more in-depth research.
| community/awards/teach_me_qiskit_2018/quantum_machine_learning/QISKIT for quantum machine learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from functools import partial
from geopy.geocoders import Nominatim
from datetime import timedelta, datetime
from sklearn import datasets
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, dendrogram
import random, pandas as pd
import matplotlib.pyplot as plt
# +
orders = pd.read_excel('dataset_orders_final_new.xlsx')
clust_orders = orders[['latitude to','longtitude to','rapid']].copy()
clust_orders = clust_orders[clust_orders['rapid'] == 0]
clust_orders = clust_orders.drop(['rapid'], axis=1).reset_index()
clust_orders['latitude to'] = (clust_orders['latitude to'] - 55)*10
clust_orders['longtitude to'] = (clust_orders['longtitude to'] - 38)*10
# clust_orders
plt.plot(clust_orders['latitude to'].values, clust_orders['longtitude to'].values, 'o', color='black');
# +
# Extract the measurements as a NumPy array
samples = clust_orders.values
# Hierarchical clustering via the linkage function
mergings = linkage(samples, method='complete')
# Build the dendrogram with display-friendly parameters
dendrogram(mergings,
leaf_rotation=90,
leaf_font_size=1,
)
plt.show()
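# To go from the dendrogram to flat cluster labels, the merge tree can be cut with `scipy.cluster.hierarchy.fcluster` (a sketch on toy points, since the order spreadsheet is not bundled here; the toy data and cluster count are illustrative assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated toy point clouds stand in for the order coordinates.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
Z = linkage(pts, method='complete')

# Cut the tree so that at most 2 flat clusters remain.
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # e.g. [1 1 1 2 2 2]
```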
# +
plotlables = ['bo','go','ro','yo']
# Extract the measurements as a NumPy array
samples = clust_orders[['latitude to','longtitude to']].values
# Define the model
model = KMeans(n_clusters=3)
# Fit the model
model.fit(samples)
# Predict on the whole dataset
all_predictions = model.predict(samples)
all_predictions = pd.DataFrame(all_predictions, columns=['clusters'])
all_predictions['x'] = samples[:,0]
all_predictions['y'] = samples[:,1]
all_predictions['index'] = clust_orders['index']
all_predictions[all_predictions['clusters'] == 1]
for col in all_predictions.clusters.unique():
plt.plot(all_predictions[all_predictions['clusters'] == col]['x'].values,\
all_predictions[all_predictions['clusters'] == col]['y'].values,\
plotlables[int(col)])
# -
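# `n_clusters=3` is fixed above; a common sanity check is the elbow method — fit k-means for a range of k and look for the point where the inertia (within-cluster sum of squares) stops dropping sharply. A sketch on synthetic blobs standing in for the real coordinates:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)
# Three synthetic blobs standing in for the delivery coordinates.
pts = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                 for c in [(0, 0), (5, 5), (0, 5)]])

inertias = []
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts)
    inertias.append(km.inertia_)

# Inertia always decreases with k; the "elbow" near k=3 suggests 3 clusters.
print([round(i, 1) for i in inertias])
```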
output = all_predictions.copy()
output['x'] = output['x']/10+55
output['y'] = output['y']/10+38
output.to_excel('for_peresekator.xlsx',index=False)
| data/github.com/benzom/A-2/4f3cd74ad5a51a54c448031feaf16c75f6fba374/geo_clusters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import nibabel as nib
from tqdm import tqdm
import logging
from sklearn.model_selection import StratifiedKFold
import time
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix
import sys
import os
import matplotlib.pyplot as plt
from scipy.ndimage import zoom
from fastai2.basics import *
torch.cuda.set_device(6)
# data_path = Path('/home/akanksha/brainlabs/projects/brain-transfer')
data_path = Path('/home/akanksha/brainlabs/projects/brain-seg/data_folder/preprocessing/ADNIBrain')
files = get_files(data_path, extensions=['.gz'])
files = L([f for f in files if f.parent.name in ['AD','CN']])
classes = Counter([f.parent.name for f in files]); classes
# Class balance: 399 vs 424 files between the two classes
399/(399+424)
img = nib.load(files[0]).get_fdata()
plt.imshow(img[50])
img.shape
def norm(data):
return (data - data.min()) / (data.max() - data.min())
def resize(data, target_shape=[96, 128, 96]):
    factor = [float(t) / float(s) for t, s in zip(target_shape, data.shape)]
    return zoom(data, zoom=factor, order=1, prefilter=False)
n = norm(img)
plt.imshow(n[50, :])
# Redefined with the target shape actually used below, (96, 112, 96)
def resize(data, target_shape=[96, 112, 96]):
    factor = [float(t) / float(s) for t, s in zip(target_shape, data.shape)]
    return zoom(data, zoom=factor, order=1, prefilter=False)
n = norm(resize(img))
plt.imshow(n[50, :])
def get_nii(path):
img = nib.load(str(path)).get_fdata()
img_n = norm(resize(img))
return torch.from_numpy(img_n).view(1, *img_n.shape).float()
def get_lbl(path):
return int(path.parent.name == 'AD')
tfms = [[get_nii],[get_lbl]]
dsets = Datasets(files, tfms, splits=RandomSplitter()(files))
def get_data_gen(files, bs, sz=None, nw=8,
batch_xtra=None, after_item=None, with_aug=True, test=False, **kwargs):
tfms = [[get_nii],[get_lbl]]
dsets = Datasets(files, tfms, splits=RandomSplitter(seed=42)(files))
dls = dsets.dataloaders(bs=bs, num_workers=nw)
dls.c = 1
return dls
dls = get_data_gen(files,bs=4)
xb,yb = dls.one_batch()
from rsna_retro.imports import *
from rsna_retro.metadata import *
from rsna_retro.preprocess import *
from rsna_retro.train import *
from rsna_retro.train3d import *
from rsna_retro.trainfull3d_labels import *
# +
# m = get_3d_head()
# config=dict(custom_head=m)
# learn = get_learner(dls, xresnet18, get_loss(), config=config)
# hook = ReshapeBodyHook(learn.model[0])
# # learn.add_cb(RowLoss())
# -
#export
class Flat3d(Module):
def forward(self, x): return x.view(x.shape[0],-1)
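# `Flat3d` simply collapses every axis after the batch dimension; the same reshape in plain NumPy terms (a sketch independent of the fastai `Module` machinery):

```python
import numpy as np

# A batch of 2 pooled feature maps with shape (512, 1, 1, 1) each
x = np.zeros((2, 512, 1, 1, 1))
flat = x.reshape(x.shape[0], -1)  # mirrors Flat3d.forward
print(flat.shape)  # -> (2, 512)
```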
# +
#export
class XResNet3D(nn.Sequential):
@delegates(ResBlock)
def __init__(self, block, expansion, layers, p=0.0, c_in=3, c_out=1000, stem_szs=(32,32,64),
widen=1.0, sa=False, act_cls=defaults.activation, **kwargs):
store_attr(self, 'block,expansion,act_cls')
stem_szs = [c_in, *stem_szs]
stem = [ConvLayer(stem_szs[i], stem_szs[i+1], stride=2 if i==0 else 1, act_cls=act_cls, ndim=3)
for i in range(3)]
block_szs = [int(o*widen) for o in [64,128,256,512] +[256]*(len(layers)-4)]
block_szs = [64//expansion] + block_szs
blocks = [self._make_layer(ni=block_szs[i], nf=block_szs[i+1], blocks=l,
stride=1 if i==0 else 2, sa=sa and i==len(layers)-4, **kwargs)
for i,l in enumerate(layers)]
super().__init__(
*stem, nn.MaxPool3d(kernel_size=3, stride=2, padding=1),
*blocks,
*get_3d_head(c_out=c_out)
)
init_cnn(self)
def _make_layer(self, ni, nf, blocks, stride, sa, **kwargs):
return nn.Sequential(
*[self.block(self.expansion, ni if i==0 else nf, nf, stride=stride if i==0 else 1,
sa=sa and i==(blocks-1), act_cls=self.act_cls, **kwargs)
for i in range(blocks)])
# -
def get_3d_head(p=0.0, c_out=3, in_feat=512):
m = nn.Sequential(#Batchify(),
ConvLayer(in_feat,512,stride=2,ndim=3), # 8
# ConvLayer(512,1024,stride=2,ndim=3), # 4
# ConvLayer(1024,1024,stride=2,ndim=3), # 2
nn.AdaptiveAvgPool3d((1, 1, 1)), #Batchify(),
Flat3d(), nn.Dropout(p),
nn.Linear(512, c_out))
init_cnn(m)
return m
#export
def xres3d(c_out=6, **kwargs):
m = XResNet3D(ResBlock, expansion=1, layers=[2, 2, 2, 2], c_out=c_out, ndim=3, **kwargs)
init_cnn(m)
return m
m = xres3d(c_in=1, c_out=1).cuda()
learn = get_learner(dls, m, lf=BCEWithLogitsLossFlat(), metrics=accuracy_multi)
# +
with torch.no_grad():
out = m(xb)
learn.loss_func(out, yb)
# -
learn.lr_find()
# targs = [yb for xb,yb in dls.valid]
# 1 - torch.stack(targs).float().mean()
# Reference value from the commented-out baseline computation above:
0.414634
do_fit(learn, 10, 1e-4)
# ## 2d -> 3d head
learn.summary()
class ReshapeBodyHook():
def __init__(self, body):
super().__init__()
self.pre_reg = body.register_forward_pre_hook(self.pre_hook)
self.reg = body.register_forward_hook(self.forward_hook)
self.shape = None
def deregister(self):
self.reg.remove()
self.pre_reg.remove()
    def pre_hook(self, module, input):
        x = input[0]
        bs,nc,w,d,h = x.shape
        self.shape = x.shape
        self.bs = bs
        self.w = w
        # Fold the slice axis into the batch so the 2D body sees bs*96
        # single-channel 112x96 slices (shapes hard-coded to the volumes above)
        return x.view(bs*96, 1, 112, 96)
    def forward_hook(self, module, input, x):
        # Unfold the batch back into (bs, channels, slices, d', h')
        return x.view(self.bs, 512, self.w, *x.shape[2:])
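# Shape-wise, the hook folds the slice axis into the batch so a 2D body can process each slice, then unfolds the result. A NumPy sketch of the same `view` calls (512 is the assumed body output width, matching the hard-coded value above; the 16x spatial reduction of the stand-in body is also an assumption):

```python
import numpy as np

bs, nc, w, d, h = 2, 1, 96, 112, 96
x = np.zeros((bs, nc, w, d, h))

# pre_hook: fold the slice axis w into the batch -> bs*w single-channel slices
slices = x.reshape(bs * w, nc, d, h)

# stand-in for the 2D body: each slice maps to a (512, d//16, h//16) feature map
feats = np.zeros((bs * w, 512, d // 16, h // 16))

# forward_hook: unfold the batch back into (bs, 512, w, d', h')
out = feats.reshape(bs, 512, w, *feats.shape[2:])
print(out.shape)  # -> (2, 512, 96, 7, 6)
```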
# +
m = get_3d_head(c_out=1)
config=dict(custom_head=m)
arch = partial(xresnet18, c_in=1, c_out=1)
# learn = get_learner(dls, arch, get_loss(), config=config)
learn = get_learner(dls, arch, lf=BCEWithLogitsLossFlat(), metrics=accuracy_multi, config=config)
hook = ReshapeBodyHook(learn.model[0])
# learn.add_cb(RowLoss())
# -
do_fit(learn, 10, 1e-4)
| 07_adni_01.ipynb |