# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Import modules:
# +
import sys, os
sys.path.append(os.path.realpath("aima"))
# %matplotlib inline
import csp_TUMsubclass as csp
# -
# # Arc Consistency Algorithm
#
# Let us have a look at the pseudocode and the implementation; the pseudocode can be found on slide 34.
# <img src="attachment:AC-3_1.png" width="800">
#
# Note that the function csp.make_arc_consistent invoked in csp.AC3 performs the while loop described in the pseudocode.
psource(csp.AC3)
psource(csp.make_arc_consistent)
# 
#
# (the function called remove-inconsistent-values in our slides is named revise in the aima code)
psource(csp.AIMAcsp.revise)
# ## AC-3 and backtracking search
#
# When we perform the AC-3 algorithm during backtracking search, the algorithm is called after a value x has been assigned to a variable X. In this case, the queue in the algorithm does not contain all arcs of the CSP, but just the arcs ($Y_{i}, X$) where each $Y_{i}$ is a neighbor of the variable X.
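# The initial queue described above can be sketched as follows. This is a minimal
# illustration, not the aima implementation: the `neighbors` mapping and the helper
# `initial_queue_after_assignment` are hypothetical stand-ins for csp.neighbors.

```python
def initial_queue_after_assignment(neighbors, X):
    # Only the arcs (Y_i, X) for each neighbor Y_i of X need rechecking,
    # because only X's domain changed after the assignment.
    return [(Y, X) for Y in neighbors[X]]

# a small slice of the Australia constraint graph, for illustration
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'], 'SA': ['WA', 'NT', 'Q']}
print(initial_queue_after_assignment(neighbors, 'WA'))  # [('NT', 'WA'), ('SA', 'WA')]
```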
# # Arc Consistency (see slide 30)
#
# Let us instantiate a Map Coloring Australia CSP, which was already mentioned in the lecture. Note that we omit Tasmania in the CSP.
my_australia = csp.australia() #create a new Australia CSP
my_australia.support_pruning() #initialize domains
assignment = {} #the assignment is empty since we did not initialize any variable
# Consider the following start situation:
# 
# The following lines will assign values and domains to the variables according to the picture.
my_australia.assign( 'WA', 'R', assignment) #assign Red to WA
my_australia.curr_domains['WA'] = ['R'] #set domain of WA to R
my_australia.prune('NT', 'R', None) #remove Red from domain of NT
my_australia.prune('SA', 'R', None) #remove Red from domain of SA
my_australia.assign( 'Q', 'G', assignment) #assign Green to Q
my_australia.curr_domains['Q'] = ['G'] #set domain of Q to Green
my_australia.prune('NT', 'G', None) #remove Green from domain of NT
my_australia.prune('SA', 'G', None) #remove Green from domain of SA
my_australia.prune('NSW', 'G', None) #remove Green from domain of NSW
# We can verify the state of our CSP by displaying assignments and domains:
my_australia.display(assignment)
my_australia.display_domain()
# Let us start our example from here: we just removed green from the domain of NSW (e.g. due to forward checking after assigning green to Q), so we start the AC-3 algorithm from NSW. We first check if NSW (variable Xi) is arc-consistent with SA (variable Xj).
# 
queue = [('NSW', 'SA') ]
csp.make_arc_consistent(my_australia, queue)
queue
# 
# To make NSW arc-consistent with SA, we had to remove blue. Due to the
# removal we have to check arc-consistency of the neighbors of NSW (except SA) with NSW, for this reason we added the corresponding arcs to the queue.
csp.make_arc_consistent(my_australia, queue)
csp.make_arc_consistent(my_australia, queue)
queue
# 
# Q was arc-consistent with NSW, but we had to remove red from V. Due
# to the removal we have to check arc-consistency of the neighbors of V (except NSW) with V, for this reason we added the corresponding arcs to the queue.
csp.make_arc_consistent(my_australia, queue)
# Since SA is arc-consistent with V, the algorithm terminates.
# ## Review of arc-consistency
# The situation, where our AC-3 algorithm just ended, offers us a great opportunity to review the concept of arc-consistency.
# Consider the following situation:
# 
# As shown previously, SA is arc-consistent with V.
# However, V is not arc-consistent with SA. V would be arc-consistent with SA after removing blue from its domain.
#
# This is a good example to understand that every arc has a direction. The same arc can be arc-consistent in one direction (SA is arc-consistent with V) and not arc-consistent in the other (V is not arc-consistent with SA).
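# The directionality can be sketched with a generic remove-inconsistent-values
# (revise) function for the "not equal" map-coloring constraint. This is a minimal
# illustration, not the aima implementation; the domains below are a hypothetical
# snapshot mirroring the SA/V situation above: SA = {B}, V = {B, G}.

```python
def revise(domains, Xi, Xj):
    """Remove values of Xi that have no support in Xj. Returns True if any value was pruned."""
    removed = False
    for x in list(domains[Xi]):
        # for the map-coloring constraint, a supporting value y must differ from x
        if not any(x != y for y in domains[Xj]):
            domains[Xi].remove(x)
            removed = True
    return removed

domains = {'SA': ['B'], 'V': ['B', 'G']}
print(revise(domains, 'SA', 'V'))  # False: SA is already arc-consistent with V
print(revise(domains, 'V', 'SA'))  # True: blue is removed from V
print(domains['V'])                # ['G']
```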
# ## Notes on arc consistency for this example
#
# Note that in the above example during backtracking search, the AC-3 algorithm was invoked with a specific arc as initial queue. After the AC-3 algorithm terminated, the CSP was not arc-consistent.
#
# When the AC-3 algorithm is applied during backtracking search after every assignment of a value to a variable, the resulting CSP will always be arc-consistent if it has a solution. This is shown in the next example.
#
#
#
# # Arc Consistency Algorithm: Example (see slide 31)
#
# Let us apply the Arc Consistency Algorithm to the Australia CSP after the first assignment.
my_australia = csp.australia() #create a new Australia CSP
my_australia.support_pruning() #initialize domains
assignment = {} #the assignment is empty since we did not initialize any variable
# 
# We set WA = red. In order to check arc-consistency of the neighbours of WA with WA, we add the corresponding arcs to the queue.
my_australia.assign( 'WA', 'R', assignment) #assign Red to WA
my_australia.curr_domains['WA'] = ['R'] #set domain of WA to R
queue = [('NT', 'WA'), ('SA', 'WA')] #check arc-consistency of the neighbours of WA
# We check whether the first arc in the queue is arc-consistent.
print(queue[0]) #take a look at the first element of the queue
csp.make_arc_consistent(my_australia, queue)
queue
# 
# Red was removed from the domain of NT. We have to check arc-consistency of the neighbours of NT (except WA) with NT. For this reason, we added the arcs (SA, NT) and (Q, NT) to the queue.
print(queue[0]) #take a look at the first element of the queue
csp.make_arc_consistent(my_australia, queue)
queue
# 
# Red was removed from the domain of SA. We have to check arc-consistency of the neighbours of SA (except WA) with SA, for this reason we added the arcs (NT, SA), (Q, SA), (NSW, SA) and (V, SA) to the queue.
csp.make_arc_consistent(my_australia, queue)
queue
# The first arc in the queue is already arc-consistent, as well as the other arcs. Therefore, the algorithm terminates.
while queue:
    csp.make_arc_consistent(my_australia, queue)
queue
# # The AC3 function
# Note that in the previous example, we performed the internal steps of the AC-3 algorithm separately, but the function AC3 will perform them all automatically.
#
# Let us perform the same example as before with the function AC3.
my_australia = csp.australia() #create a new Australia CSP
my_australia.support_pruning() #initialize domains
# 
# We set WA = red. In order to check arc-consistency of the neighbours of WA with WA, we add the corresponding arcs to the queue.
my_australia.curr_domains['WA'] = ['R'] #assign red to WA
queue = [('NT', 'WA'), ('SA', 'WA')] #check arc-consistency of the neighbours of WA
# Perform AC-3 with the AC3 function:
csp.AC3(my_australia, queue) #returns True if my_australia can be made arc-consistent, False if an empty domain is produced
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizations using the sales estimates
# BASIC IMPORTS BEFORE WE BEGIN
import matplotlib
from matplotlib import pyplot as plt
# %matplotlib inline
import pandas as pd
import csv
import statsmodels.formula.api as smf
from scipy.stats import pearsonr
import numpy as np
import random
import scipy.stats as ss
authordata = pd.read_csv('salesdata/authordata.csv', index_col = 'author')
onlyboth = pd.read_csv('salesdata/pairedwithprestige.csv', index_col = 'author')
# +
def get_a_period(aframe, floor, ceiling):
    ''' Extracts a chronological slice of our data.
    '''
    subset = aframe[(aframe.midcareer >= floor) & (aframe.midcareer < ceiling)]
    x = subset.percentile
    y = subset.prestige
    return x, y, subset

def get_an_author(anauthor, aframe):
    ''' Gets coordinates for an author in a given space.
    '''
    if anauthor not in aframe.index:
        # return three values so callers can always unpack x, y, size
        return 0, 0, 0
    else:
        x = aframe.loc[anauthor, 'percentile']
        y = aframe.loc[anauthor, 'prestige']
        size = aframe.loc[anauthor, 'num_vols']
        return x, y, size

def plot_author(officialname, vizname, aperiod, ax):
    x, y, size = get_an_author(officialname, aperiod)
    if size < 25:
        offset = 0.01
    elif size < 100:
        offset = 0.02
    else:
        offset = 0.03
    ax.text(x + offset, y - 0.01, vizname, fontsize = 14)

def revtocolor(number):
    if number > 0.1:
        return '0.7'
    else:
        return 'white'

def revtomark(number):
    if number > 0.1:
        return '^'
    else:
        return 'o'

def revtobinary(number):
    if number > 0.1:
        return 1
    else:
        return 0
# Let's plot the mid-Victorians
xvals, yvals, victoriana = get_a_period(onlyboth, 1840, 1875)
victoriana = victoriana.assign(samplecolor = victoriana.reviews.apply(revtocolor))
victoriana = victoriana.assign(marker = victoriana.reviews.apply(revtomark))
fig, ax = plt.subplots(figsize = (10,9))
for m in ['o', '^']:
    thisgroup = victoriana[victoriana.marker == m]
    if m == 'o':
        face = '0.9'
        lwd = '1.5'
        edge = 'black'
    elif m == '^':
        lwd = '0.5'
        face = '0.1'
        edge = 'black'
    ax.scatter(thisgroup.percentile, thisgroup.prestige, s = thisgroup.num_vols * 3 + 7, alpha = 0.4,
               edgecolor = edge, facecolor = face, marker = m, linewidth = lwd)
authors_to_plot = {'Dickens, Charles': 'Charles Dickens', "Wood, Ellen": 'Ellen Wood',
'Ainsworth, William Harrison': 'W. H. Ainsworth',
'Lytton, Edward Bulwer Lytton': 'Edward Bulwer-Lytton',
'Eliot, George': 'George Eliot', 'Sikes, Wirt': 'Wirt Sikes',
'Collins, A. Maria': 'Maria A Collins',
'Hawthorne, Nathaniel': "Nathaniel Hawthorne",
'Southworth, Emma Dorothy Eliza Nevitte': 'E.D.E.N. Southworth',
'Helps, Arthur': 'Arthur Helps'}
for officialname, vizname in authors_to_plot.items():
    plot_author(officialname, vizname, victoriana, ax)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('percentile ranking, sales', fontsize = 16)
ax.set_ylabel('probability of review in selective journals', fontsize = 16)
ax.set_title('The literary field, 1850-74\n', fontsize = 20)
ax.text(0.5, 0.1, 'area = number of volumes in Hathi\ntriangles = reviewed sample', color = 'black', fontsize = 12)
ax.set_ylim((0,1))
ax.set_xlim((-0.05,1.1))
spines_to_remove = ['top', 'right']
for spine in spines_to_remove:
    ax.spines[spine].set_visible(False)
plt.savefig('images/field1850noline.png', bbox_inches = 'tight')
plt.show()
# +
# Let's plot modernism!
xvals, yvals, modernity2 = get_a_period(onlyboth, 1925,1950)
modernity2 = modernity2.assign(samplecolor = modernity2.reviews.apply(revtocolor))
modernity2 = modernity2.assign(marker = modernity2.reviews.apply(revtomark))
fig, ax = plt.subplots(figsize = (11,9))
for m in ['o', '^']:
    thisgroup = modernity2[modernity2.marker == m]
    if m == 'o':
        face = '0.9'
        lwd = '1.5'
        edge = 'black'
    elif m == '^':
        lwd = '0.5'
        face = '0.1'
        edge = 'black'
    ax.scatter(thisgroup.percentile, thisgroup.prestige, s = thisgroup.num_vols * 3 + 7, alpha = 0.4,
               edgecolor = edge, facecolor = face, marker = m, linewidth = lwd)
authors_to_plot = {'Cain, James M': 'James M Cain', 'Faulkner, William': 'William Faulkner',
'Stein, Gertrude': 'Gertrude Stein', 'Hemingway, Ernest': 'Ernest Hemingway',
'Joyce, James': 'James Joyce', 'Forester, C. S. (Cecil Scott)': 'C S Forester',
'Spillane, Mickey': 'Mickey Spillane',
'Howard, Robert E': 'Robert E Howard', 'Buck, Pearl S': 'Pearl S Buck',
'Shute, Nevil': 'Nevil Shute', 'Greene, Graham': 'Graham Greene',
'Christie, Agatha': 'Agatha Christie', 'Rhys, Jean': 'Jean Rhys',
'Wodehouse, P. G': 'P G Wodehouse',
'Hurston, Zora Neale': 'Zora Neale Hurston'}
for officialname, vizname in authors_to_plot.items():
    plot_author(officialname, vizname, modernity2, ax)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('percentile ranking, sales', fontsize = 16)
ax.set_ylabel('probability of review in selective journals', fontsize = 16)
ax.set_title('The literary field, 1925-49\n', fontsize = 20)
ax.text(0.5, 0.0, 'area = number of volumes in Hathi\ntriangles = reviewed sample', color = 'black', fontsize = 12)
ax.set_ylim((-0.05,1))
ax.set_xlim((-0.05, 1.17))
#modernity2 = modernity2.assign(binaryreview = modernity2.reviews.apply(revtobinary))
#y, X = dmatrices('binaryreview ~ percentile + prestige', data=modernity2, return_type='dataframe')
#crudelm = smf.Logit(y, X).fit()
#params = crudelm.params
#theintercept = params[0] / -params[2]
#theslope = params[1] / -params[2]
#newX = np.linspace(0, 1, 20)
#abline_values = [theslope * i + theintercept for i in newX]
#ax.plot(newX, abline_values, color = 'black')
spines_to_remove = ['top', 'right']
for spine in spines_to_remove:
    ax.spines[spine].set_visible(False)
plt.savefig('images/field1925noline.png', bbox_inches = 'tight')
plt.show()
#print(theslope)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Given an array with n objects colored red, white or blue, sort them in-place so that objects of the same color are adjacent, with the colors in the order red, white and blue.
#
# Here, we will use the integers 0, 1, and 2 to represent the color red, white, and blue respectively.
#
# Note: You are not supposed to use the library's sort function for this problem.
#
# Example:
#
# Input: [2,0,2,1,1,0]
# Output: [0,0,1,1,2,2]
# Follow up:
#
# - A rather straightforward solution is a two-pass algorithm using counting sort. First, iterate over the array counting the number of 0's, 1's, and 2's, then overwrite the array with all the 0's, followed by the 1's and then the 2's.
# - Could you come up with a one-pass algorithm using only constant space?
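# The two-pass counting-sort approach from the first bullet can be sketched as
# follows (a minimal illustration; the one-pass Dutch-flag solution is given below):

```python
def sort_colors_two_pass(nums):
    counts = [0, 0, 0]
    for n in nums:             # pass 1: count the 0's, 1's and 2's
        counts[n] += 1
    i = 0
    for color in (0, 1, 2):    # pass 2: overwrite the array in color order
        for _ in range(counts[color]):
            nums[i] = color
            i += 1
    return nums

print(sort_colors_two_pass([2, 0, 2, 1, 1, 0]))  # [0, 0, 1, 1, 2, 2]
```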
# 1. [Dutch national flag problem](https://en.wikipedia.org/wiki/Dutch_national_flag_problem)
#
# 2. [Quicksort deep dive: the Dutch national flag problem (in Chinese)](https://www.cnblogs.com/junyuhuang/p/4390780.html)
# +
# Dutch national flag problem
class Solution(object):
    def sortColors(self, nums):
        """
        :type nums: List[int]
        :rtype: None Do not return anything, modify nums in-place instead.
        """
        begin = 0
        end = len(nums) - 1
        cur = 0
        # 1 is the middle value of [0, 1, 2]
        mid = 1
        while cur <= end:
            if nums[cur] < mid:
                nums[cur], nums[begin] = nums[begin], nums[cur]
                cur += 1
                begin += 1
            elif nums[cur] > mid:
                nums[cur], nums[end] = nums[end], nums[cur]
                end -= 1
            else:
                cur += 1
        return nums # this line is not needed on LeetCode,
                    # which expects nums to be modified in place
# test
nums = [2,0,2,1,1,0]
Solution().sortColors(nums)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from captcha.image import ImageCaptcha
from captcha.audio import AudioCaptcha
import matplotlib.pyplot as plt
import random
import string
lower_case = list(string.ascii_lowercase)
upper_case = list(string.ascii_uppercase)
numbers = list(string.digits)
#This function creates a random captcha text based on the above three lists.
#The input parameter is the captcha text length.
def create_random_captcha_text(captcha_string_size=8):
    captcha_string_list = []
    base_char = lower_case + upper_case + numbers
    for i in range(captcha_string_size):
        #Select one character randomly
        char = random.choice(base_char)
        #Append the character to the list
        captcha_string_list.append(char)
    captcha_string = ''
    #Change the character list to a string
    for item in captcha_string_list:
        captcha_string += str(item)
    return captcha_string
#This function creates a digit-only captcha text
def create_random_digital_text(captcha_string_size=8):
    captcha_string_list = []
    #Loop over the number list and build a digit-only captcha string list
    for i in range(captcha_string_size):
        char = random.choice(numbers)
        captcha_string_list.append(char)
    captcha_string = ''
    #Convert the digit list to a string
    for item in captcha_string_list:
        captcha_string += str(item)
    return captcha_string
#Create an image captcha with the given text
def create_image_captcha(captcha_text):
    image_captcha = ImageCaptcha()
    #Create the captcha image
    image = image_captcha.generate_image(captcha_text)
    #Add a noise curve to the image
    image_captcha.create_noise_curve(image, image.getcolors())
    #Add noise dots to the image
    image_captcha.create_noise_dots(image, image.getcolors())
    #Save the image to a png file
    image_file = "./captcha_" + captcha_text + ".png"
    image_captcha.write(captcha_text, image_file)
    #Display the image in a matplotlib viewer
    plt.imshow(image)
    plt.show()
    print(image_file + " has been created.")
#Create an audio captcha file
def create_audio_captcha():
    #Create the audio captcha with a specified voice wav file library folder.
    #Each captcha character should have its own directory under the specified folder (such as ./voices);
    #for example, ./voices/a/a.wav will be played when the character is a.
    #If you do not specify your own voice file library folder, the default built-in voice
    #library, which has only digit voice files, will be used.
    # audio_captcha = AudioCaptcha(voicedir='./voices')
    #Create an audio captcha which uses digit voice files only.
    audio_captcha = AudioCaptcha()
    #Because we use the default module voice library, we can only generate digit text voices.
    captcha_text = create_random_digital_text()
    #Generate the audio captcha data.
    audio_data = audio_captcha.generate(captcha_text)
    #Save the audio captcha file
    audio_file = './captcha_' + captcha_text + '.wav'
    audio_captcha.write(captcha_text, audio_file)
    print(audio_file + " has been created.")
if __name__ == '__main__':
    #Create random text
    captcha_text = create_random_captcha_text()
    #Create image captcha
    create_image_captcha(captcha_text)
    #Create audio captcha
    create_audio_captcha()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 0.0. IMPORTS
#import libraries
import pandas as pd
import inflection
import math
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.core.display import HTML
# ### 0.1. Helper Functions
# ### 0.2. Load Datasets
# +
#read dataset
df_sales_raw = pd.read_csv('../datasets/train.csv', low_memory=False)
df_store_raw = pd.read_csv('../datasets/store.csv', low_memory=False)
#merge df's
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on="Store")
# -
#Show the sample of the data
df_raw.sample()
#
# # 1.0. DATA DESCRIPTION
#Copy the original data
df1 = df_raw.copy()
# ## 1.1. Rename Columns
# +
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek','Promo2SinceYear', 'PromoInterval']
#Snake-case helper function
snakecase = lambda x: inflection.underscore(x)
#Map the old names to snake case
cols_new = list(map(snakecase, cols_old))
#Rename the columns
df1.columns = cols_new
# -
# ## 1.2 Data Dimensions
print('Number of the rows: {}'.format(df1.shape[0]))
print('Number of the Cols: {}'.format(df1.shape[1]))
# ## 1.3 Data Types
#Convert the DATE column to DATETIME
df1['date'] = pd.to_datetime(df1['date'])
# ## 1.4 Check NA
#Check SUM of the NA
df1.isna().sum()
# ## 1.5. Fillout NA
# +
#competition_distance 2642
df1['competition_distance'] = df1['competition_distance'].apply(lambda x: 200000.0 if math.isnan(x) else x )
#competition_open_since_month 323348
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1)
#competition_open_since_year 323348
df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1)
#promo2_since_week 508031
df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1)
#promo2_since_year 508031
df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1)
#promo_interval 508031
month_map = {1: 'Jan', 2: 'Fev', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec' }
df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_map']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1)
# -
# ## 1.6. Change Types
# +
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype('int64')
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype('int64')
df1['promo2_since_week'] = df1['promo2_since_week'].astype('int64')
df1['promo2_since_year'] = df1['promo2_since_year'].astype('int64')
# -
# ## 1.7. Descriptive Statistics
# +
#Separate the numerics and categorics
#numerics
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
#categorics
cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] )
# -
# ### 1.7.1 Numerical Attributes
# +
# Central Tendency - mean, median
ct1 = pd.DataFrame(num_attributes.apply( np.mean )).T
ct2 = pd.DataFrame(num_attributes.apply( np.median )).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame(num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame(num_attributes.apply( lambda x: x.kurtosis() ) ).T
#concatenate the dispersions
metrics = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
# -
metrics.columns = ['Attributes', 'Min', 'Max', 'Range', 'Mean', 'Median', 'STD', 'Skew', 'Kurtosis']
metrics
sns.distplot( df1['competition_distance'] )
# ### 1.7.2 Categorical Attributes
#Number of unique values per categorical attribute
cat_attributes.apply( lambda x: x.unique().shape[0] )
# +
#Create BOX PLOT
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0 )]
plt.subplot( 1, 3 ,1)
sns.boxplot(x='state_holiday' , y='sales', data=aux1)
plt.subplot( 1, 3 ,2)
sns.boxplot(x='store_type' , y='sales', data=aux1)
plt.subplot( 1, 3 ,3)
sns.boxplot(x='assortment' , y='sales', data=aux1)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# https://www.tensorflow.org/get_started/estimator
import os
import urllib
import numpy as np
import tensorflow as tf
# +
# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TRAINING_URL = "http://download.tensorflow.org/data/iris_training.csv"
IRIS_TEST = "iris_test.csv"
IRIS_TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
# -
# If the training and test sets aren't stored locally, download them.
if not os.path.exists(IRIS_TRAINING):
    raw = urllib.request.urlopen(IRIS_TRAINING_URL).read()
    with open(IRIS_TRAINING, "wb") as f:
        f.write(raw)
if not os.path.exists(IRIS_TEST):
    raw = urllib.request.urlopen(IRIS_TEST_URL).read()
    with open(IRIS_TEST, "wb") as f:
        f.write(raw)
# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TEST,
    target_dtype=np.int,
    features_dtype=np.float32)
# Specify that all features have real-value data
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                        hidden_units=[10, 20, 10],
                                        n_classes=3,
                                        model_dir="/tmp/iris_model")
# Define the training inputs
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array(training_set.data)},
    y=np.array(training_set.target),
    num_epochs=None,
    shuffle=True)
# Train model.
classifier.train(input_fn=train_input_fn, steps=2000)
# Define the test inputs
test_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array(test_set.data)},
    y=np.array(test_set.target),
    num_epochs=1,
    shuffle=False)
# Evaluate accuracy.
accuracy_score = classifier.evaluate(input_fn=test_input_fn)["accuracy"]
print("\nTest Accuracy: {0:f}\n".format(accuracy_score))
# +
# Classify two new flower samples.
new_samples = np.array(
    [[6.4, 3.2, 4.5, 1.5],
     [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": new_samples},
    num_epochs=1,
    shuffle=False)
predictions = list(classifier.predict(input_fn=predict_input_fn))
predicted_classes = [p["classes"] for p in predictions]
print(
    "New Samples, Class Predictions: {}\n"
    .format(predicted_classes))
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
NETS = ['dilation', 'multi_lstm_init', 'unet', 'FF', 'multi_lstm']
import tensorflow as tf
import datasets
import pickle
import os
import copy
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
import itertools
import seaborn as sns
# +
results_data_generalization = []
for NET in NETS:
    if NET == 'dilation':
        import experiments.dilation as experiments
    elif NET == 'segnet':
        import experiments.segnet as experiments
    elif NET == 'lstm':
        import experiments.lstm as experiments
    elif NET == 'coloring':
        import experiments.coloring as experiments
    elif NET == 'crossing':
        import experiments.crossing as experiments
    elif NET == 'unet':
        import experiments.unet as experiments
    elif NET == 'multi_lstm':
        import experiments.multi_lstm as experiments
    elif NET == 'multi_lstm_init':
        import experiments.multi_lstm_init as experiments
    elif NET == 'FF':
        import experiments.FF as experiments
    elif NET == 'optimal_lstm':
        import experiments.optimal_lstm as experiments
    output_path = '/om/user/xboix/share/insideness/' + NET + '/'
    run_opt = experiments.get_best_of_the_family(output_path)
    opt_data = datasets.get_datasets(output_path)
    NUM_COMPLEXITIES = 5
    results_data_generalization.append([])
    for opt in run_opt:
        data_point = {}
        if NET == 'multi_lstm_init' and not opt.dataset.complexity == 4:
            opt.dataset.complexity_strict = True
        if opt.dataset.complexity < 4:
            continue
        if opt.dataset.complexity_strict == False:
            continue
        data_point["dataset_complexity"] = opt.dataset.complexity
        data_point["dataset_strict_complexity"] = opt.dataset.complexity_strict
        if not os.path.isfile(opt.log_dir_base + opt.name + '/results/generalization_accuracy.pkl'):
            data_point["results"] = "empty"
            print('EMPTY')
        else:
            with open(opt.log_dir_base + opt.name + '/results/generalization_accuracy.pkl', 'rb') as f:
                data_point["results"] = pickle.load(f)
        results_data_generalization[-1].append(copy.deepcopy(data_point))

datasets_idx = np.zeros([2, NUM_COMPLEXITIES])
for idx, opt in enumerate(opt_data):
    if not opt.num_images_training == 1e5:
        continue
    if opt.complexity > 4:
        continue
    if opt.complexity_strict:
        datasets_idx[0, opt.complexity] = idx
    else:
        datasets_idx[1, opt.complexity] = idx
# +
import seaborn
seaborn.set()
seaborn.set_style("whitegrid")
seaborn.set_context("poster")
dataset_train_labels = ['Polar','Spiral','Polar+Spiral']
dataset_train_compl = [4, 5, 6]
dataset_test_labels = ['Polar','Spiral','Digs']
dataset_train_idx = [49, 50, 53]
#fontbf = FontProperties()
#fontbf.set_weight('bold')
for idx_net, NET in enumerate(NETS):
    res = results_data_generalization[idx_net]
    if idx_net == 0:
        plt.figure(figsize=(4, 4))
        ylab = ['Polar', 'Spiral', 'Digs']
    else:
        plt.figure(figsize=(4, 4))
        ylab = ['', '', '']
    flag_v = False
    if idx_net == 4:
        plt.figure(figsize=(5, 4))
        flag_v = True
    if NET == 'dilation':
        nNET = 'Dilated'
    if NET == 'multi_lstm_init':
        nNET = "2-LSTM"
    if NET == 'unet':
        nNET = "UNet"
    if NET == 'FF':
        nNET = "Feed-Forward"
    if NET == 'multi_lstm':
        nNET = '2-LSTM,w/oInit.'
    res_table = np.zeros([3, 3])
    for idx_res, comp in enumerate(dataset_train_compl):
        for idx, lab in enumerate(dataset_train_idx):
            for r in res:
                if r["dataset_complexity"] == comp:
                    print(r["results"])
                    res_table[idx_res, idx] = (r["results"]["test_accuracy"][lab] - 0.005) * 100
    plt.title(r'$\bf ' + nNET + '$')
    sns.heatmap(res_table.T, annot=True, center=50, cmap='RdYlGn', xticklabels=['Polar', 'Spiral', 'Both'],
                yticklabels=ylab, vmin=0, vmax=100, cbar=flag_v)
    plt.yticks(rotation=0)
    if idx_net == 0:
        plt.ylabel(r'$\bf Test$ $\bf Set$')
    plt.xlabel(r'$\bf Train$ $\bf Set$')
    plt.savefig('./fig/quantitative/general_' + NET + '.pdf', format='pdf', bbox_inches='tight', dpi=1000)
# -
results_data_generalization[0][0]
results_data_generalization[1][1]
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (clean)
# language: python
# name: python3_clean
# ---
# # Export to file
# !pip install pandas openpyxl statsmodels seaborn
# # Import libraries -- a pink box with `FutureWarning` is normal and OK
import pandas as pd
import statsmodels.api as sm
import seaborn as sns
data = pd.read_excel("supermarket_marketing.xlsx")
data.sample(3).T
data.describe()
# # Creating a new variable based on existing variables
data['kids_teens_at_home'] = data['kids_at_home'] + data['teens_at_home']
data['kids_teens_at_home'].value_counts()
# # Drawing a scatterplot showing two (or three) variables
sns.scatterplot(data=data, x='sweets', y='kids_teens_at_home', alpha=.05)
# # Pivot tables
data.pivot(columns='kids_teens_at_home', values='sweets').mean()
# # Regressions
input_columns = ['wine','fruit', 'meat', 'fish','sweets',
'coupons_used','num_web_orders',
'num_phone_orders', 'num_store_orders',
'web_visits_monthly', 'days_since_purchase']
data[input_columns]
# +
data = data.dropna()
output = data['kids_teens_at_home']
inputs = data[input_columns]
# -
model = sm.OLS(output, inputs).fit()
model.summary()
predictions = model.predict(inputs)
predictions.hist()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mimsy Data Exploration
# and testing of config-driven local load function
# !pip install seaborn
# +
# %load_ext autoreload
# %autoreload 2
import sys
sys.path.append('..')
from heritageconnector.utils import data_loaders
from urllib.parse import urlsplit
import re
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
# -
loader = data_loaders.local_loader()
data = loader.load_all()
cat_df = data['mimsy_catalogue']
people_df = data['mimsy_people']
# ## Data samples
len(people_df), len(cat_df)
cat_df.sample(2)
people_df.sample(2)
# ## URL Extraction
extract_domain_from_url("http://www.iwm.org.uk/collections/item/object/1030031461")
# +
def extract_urls_domains(mimsy_df, column="DESCRIPTION"):
    """
    Create a dataframe of URLs from the Mimsy collection
    """
    # extract urls
    url_match = r"((?:https?:\/\/|www\.)[a-z0-9\.:].*?(?=[\s;]|$))"
    url_df = mimsy_df[column].str.extractall(url_match).rename(columns={0: 'url'})
    # extract domains
    url_df['domain'] = url_df['url'].apply(extract_domain_from_url)
    return url_df

def extract_domain_from_url(url):
    """
    Extract the domain from a URL
    """
    try:
        # if the url doesn't start with http, the main bit of the url ends up in path
        loc = urlsplit(url).netloc or urlsplit(url).path
        # remove www. and the slug if present
        loc = re.sub(r"^www\.", "", loc)
        loc = re.sub(r"/.*$", "", loc)
        return loc
    except Exception:
        print(f"PARSING FAILED: {url}")
        return ""
url_df = extract_urls_domains(people_df)
url_df
# -
domain_count = url_df['domain'].value_counts()
domain_count
filter_thresh = 50
domain_count_filtered = domain_count[domain_count >= filter_thresh]
domain_count_filtered = pd.concat([domain_count_filtered, pd.Series({"other": domain_count[domain_count < filter_thresh].sum()})])
domain_count_filtered
round(domain_count_filtered / domain_count_filtered.sum() * 100, 2)
# ## Create matrix to show distribution of URLs
people_url_df = extract_urls_domains(people_df)
cat_url_df = extract_urls_domains(cat_df)
len(people_url_df), len(cat_url_df)
top_people_domains = domain_count_filtered.index.tolist()
top_people_domains.remove('other')
people_url_df = people_url_df.reset_index()
people_url_df_filtered = people_url_df[people_url_df['domain'].isin(top_people_domains)]
people_matrix = pd.DataFrame(0, index=top_people_domains, columns=people_df.index)
# +
from tqdm import tqdm
for idx, row in tqdm(people_url_df_filtered.iterrows(), total=len(people_url_df_filtered)):
people_matrix.loc[row['domain'], row['level_0']] += 1
# -
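# The count matrix built by the loop above can also be produced in one step with `pd.crosstab`.
# A minimal sketch on toy data (the real `people_url_df_filtered` frame isn't reproduced here):

```python
import pandas as pd

# toy stand-in for people_url_df_filtered: one row per (record, domain) URL hit
url_hits = pd.DataFrame({
    'level_0': [0, 0, 1, 2, 2, 2],
    'domain': ['a.org', 'b.org', 'a.org', 'a.org', 'a.org', 'b.org'],
})
# rows: domains, columns: record ids, values: number of URL hits
matrix = pd.crosstab(url_hits['domain'], url_hits['level_0'])
print(matrix)
```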
matrix_plot = people_matrix.sort_values(top_people_domains, axis=1, ascending=False)
f, ax = plt.subplots(figsize=(20, 10))
nticks = 21
ax = sns.heatmap(matrix_plot, cbar=False, cmap="Blues", xticklabels=np.linspace(0,100,nticks))
ax.set_xticks(np.linspace(0, people_matrix.shape[1]-1, nticks));
ax.set_xlabel("% of records")
# ## Inspect URLs of individual domains
domain = 'en.wikipedia.org'
people_url_df[(people_url_df['domain'] == domain)]
people_df[people_df['DESCRIPTION'].astype(str).str.contains(domain)].head(5)
people_df.loc[14299]
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parse .srt files and insert in database
# The subtitle files are cleaned and then the movies, subtitles, countries, genres are added to the database
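# The genre and country inserts below repeatedly apply a get-or-create pattern
# (SELECT by a unique key, INSERT if missing). A minimal sketch of that pattern,
# using the standard-library sqlite3 instead of pymysql so it runs stand-alone
# (the `get_or_create` helper and `genre` table are illustrative, not from this notebook):

```python
import sqlite3

def get_or_create(cur, table, name):
    """Return the id of `name` in `table`, inserting it first if missing.
    `table` is assumed to be a trusted identifier, not user input."""
    cur.execute(f"SELECT id FROM {table} WHERE name = ?", (name,))
    row = cur.fetchone()
    if row is not None:
        return row[0]
    cur.execute(f"INSERT INTO {table} (name) VALUES (?)", (name,))
    return cur.lastrowid

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE genre (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
a = get_or_create(cur, 'genre', 'Drama')
b = get_or_create(cur, 'genre', 'Drama')  # second call finds the existing row
print(a, b)
```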
import pandas as pd
import pysrt
import os
import pymysql
import re
FOLDER_IMG = "/home/tanguy/data/movizz/img"
FOLDER_CSV = "/home/tanguy/data/movizz/csv"
FOLDER_SRT = '/home/tanguy/data/movizz/srt'
df_movies = pd.read_csv(os.path.join(FOLDER_CSV, 'metadata_movies.csv'), sep=';', index_col=0)
# ### Function to clean subtitles
# +
def striphtml(data):
p = re.compile(r'<.*?>')
return p.sub('', data)
def striphtml2(data):
p = re.compile(r'{.*?}')
return p.sub('', data)
def process_quote(s):
"""
Clean quotes
"""
s = s.replace("<i>","").replace("</i>","").replace("<b>","").replace("</b>","")
s = striphtml(s)
s = striphtml2(s)
return s
def get_filtered_subs(srt_file):
try:
subs = pysrt.open(srt_file)
except UnicodeDecodeError:
subs = pysrt.open(srt_file, encoding='iso-8859-1')
# Get text
subs_text = [process_quote(s.text) for s in subs]
# To remove "Translation by.." or movie title / director
subs_text = subs_text[3:-3]
# Filter opensubtitle ads
subs_text = [s for s in subs_text if not re.match('.*(www.OpenSubtitles.org).*', s)]
# Keep the 1/3 longest quotes
limit = int(len(subs_text)/3)
filtered_subs = sorted(subs_text, key=len, reverse=True)[:limit]
return filtered_subs
# -
# ### Insert in database
# +
# Connect to the database
connection = pymysql.connect(host='localhost',user='root',password='',db='quizz_db',
charset='utf8mb4',cursorclass=pymysql.cursors.DictCursor)
for ttmovie_id, row in df_movies.iterrows():
print(ttmovie_id, end = ' ')
# Check if subtitles are available
    list_folder_srt = os.listdir(FOLDER_SRT)
if f'{ttmovie_id}.srt' not in list_folder_srt:
print('[ABORT - subtitles not available]')
continue
##########################################
# Format subtitles
##########################################
srt_file = os.path.join(FOLDER_SRT, f'{ttmovie_id}.srt')
subs = get_filtered_subs(srt_file)
if len(subs) == 0:
# TODO : handle {} format
print('[ABORT - bad format subtitles]')
continue
##########################################
# Add movie in database
##########################################
name = row['movie_title']
director = row['director']
year = int(row['title_year'])
popularity = int(row['rating_count'])
with connection.cursor() as cursor:
        cursor.execute("SELECT id FROM quizz_movie WHERE imdb_id = %s", (ttmovie_id,))
res = cursor.fetchall()
connection.commit()
if len(res) == 0:
with connection.cursor() as cursor:
# Create a new record
sql = "INSERT INTO `quizz_movie` (`imdb_id`, `name`, `director`, `year`, `popularity`, `image`, `has_quote`) VALUES (%s, %s, %s, %s, %s, %s, %s)"
cursor.execute(sql, (ttmovie_id, name, director, year, popularity, f'covers/{ttmovie_id}.jpg', 0))
id_movie_db = cursor.lastrowid
connection.commit()
else:
id_movie_db = res[0]['id']
##########################################
# Add genres
##########################################
list_genre = row['list_genre'].split(',')
for genre in list_genre:
# Add genre on database if doesn't exist
with connection.cursor() as cursor:
            cursor.execute("SELECT id FROM quizz_genre WHERE name = %s", (genre,))
res = cursor.fetchall()
connection.commit()
if len(res) == 0:
with connection.cursor() as cursor:
sql = "INSERT INTO `quizz_genre` (`name`) VALUES (%s)"
                cursor.execute(sql, (genre,))
genre_id = cursor.lastrowid
connection.commit()
else:
genre_id = res[0]['id']
# Add on moviegenre
with connection.cursor() as cursor:
sql = "INSERT INTO `quizz_moviegenre` (`movie_id`, `genre_id`) VALUES (%s, %s)"
cursor.execute(sql, (id_movie_db, genre_id))
connection.commit()
##########################################
# Add countries
##########################################
list_country = row['list_country'].split(',')
for country in list_country:
# Add genre on database if doesn't exist
with connection.cursor() as cursor:
            cursor.execute("SELECT id FROM quizz_country WHERE name = %s", (country,))
res = cursor.fetchall()
connection.commit()
if len(res) == 0:
with connection.cursor() as cursor:
sql = "INSERT INTO `quizz_country` (`name`) VALUES (%s)"
                cursor.execute(sql, (country,))
country_id = cursor.lastrowid
connection.commit()
else:
country_id = res[0]['id']
# Add on moviegenre
with connection.cursor() as cursor:
sql = "INSERT INTO `quizz_moviecountry` (`movie_id`, `country_id`) VALUES (%s, %s)"
cursor.execute(sql, (id_movie_db, country_id))
connection.commit()
##########################################
# Add subtitles in database
##########################################
for s in subs:
with connection.cursor() as cursor:
# Create a new record
try:
sql = "INSERT INTO `quizz_quote` (`movie_id`, `quote_text`) VALUES (%s, %s)"
cursor.execute(sql, (id_movie_db, s))
            except pymysql.MySQLError:
                print('E', end='')
connection.commit()
##########################################
# Add has_quote field
##########################################
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) AS n FROM quizz_quote WHERE movie_id = %s", (id_movie_db,))
        res = cursor.fetchall()
        connection.commit()
    if res[0]['n'] != 0:
        with connection.cursor() as cursor:
            cursor.execute("UPDATE quizz_movie SET has_quote = 1 WHERE id = %s", (id_movie_db,))
            connection.commit()
print('[OK]')
connection.close()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Titanic exercise
#
# The Titanic dataset. It can be downloaded from here:
# https://www.kaggle.com/wsj/college-salaries
#
# #### Conclusions
#
# After the analyses of the features during the course lecture, we can arrive at the following observations and conclusions:
# - ...
# - ...
#
# We can also try some feature engineering, "cleaning", and processing of the data:
# - ...
# - ...
# #### Let's begin
#
# ...
# !pip install numpy scipy matplotlib ipython scikit-learn pandas pillow mglearn
# ...
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import mglearn
from IPython.display import display
# %matplotlib inline
# -
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
# ...
original = pd.read_csv('data/titanic/train.csv', index_col='PassengerId')
original
original.isnull().sum().sort_values()
# ...
data = original.copy()
data
# ...
data['Title'] = data.Name.str.extract(r'([A-Za-z]+)\.', expand=False)
data.loc[data.Title == 'Mlle', 'Title'] = 'Miss'
data.loc[data.Title == 'Mme', 'Title'] = 'Mrs'
data.loc[data.Title == 'Ms', 'Title'] = 'Miss'
# +
other_titles = ['Dr', 'Rev', 'Col', 'Major', 'Countess', 'Don', 'Jonkheer', 'Capt', 'Lady', 'Sir']
data.Title = data.Title.replace(other_titles, 'Other')
# -
data.Title.value_counts()
age_by_title = data.groupby('Title').Age.mean()
age_by_title
data.loc[(data.Age.isnull()) & (data.Title == 'Mr'), 'Age'] = age_by_title['Mr']
data.loc[(data.Age.isnull()) & (data.Title == 'Mrs'), 'Age'] = age_by_title['Mrs']
data.loc[(data.Age.isnull()) & (data.Title == 'Master'), 'Age'] = age_by_title['Master']
data.loc[(data.Age.isnull()) & (data.Title == 'Miss'), 'Age'] = age_by_title['Miss']
data.loc[(data.Age.isnull()) & (data.Title == 'Other'), 'Age'] = age_by_title['Other']
data.Age.isnull().any()
data.isnull().sum().sort_values()
data.Embarked.fillna('S', inplace=True)
data.Embarked.isnull().any()
# +
# data = data.drop('Cabin', axis=1)
# -
data.isnull().sum()
data['Age_group'] = 0
data.head(5)
data.loc[data['Age'] <= 16, 'Age_group'] = 0
data.loc[(data['Age'] > 16) & (data['Age'] <= 32), 'Age_group'] = 1
data.loc[(data['Age'] > 32) & (data['Age'] <= 48), 'Age_group'] = 2
data.loc[(data['Age'] > 48) & (data['Age'] <= 64), 'Age_group'] = 3
data.loc[data['Age'] > 64, 'Age_group'] = 4
data.head(5)
# +
data['Family_size'] = 0
data['Family_size'] = data['Parch'] + data['SibSp']
data['IsAlone'] = (data.Family_size == 0).astype(float)  # Family_size = Parch + SibSp, so alone means 0
data['IsSmallFamily'] = ((1 <= data.Family_size) & (data.Family_size < 5)).astype(float)
data['IsLargeFamily'] = (5 <= data.Family_size).astype(float)
data = data.drop('Family_size', axis=1)
# -
data.head(5)
data = data.drop(['Name', 'Ticket', 'Age'], axis=1)
data.head(5)
data['Fare_range'] = pd.qcut(data['Fare'], 4)
data.groupby(['Fare_range']).Survived.mean()
data.head(2)
data['Fare_group'] = 0
data.loc[data['Fare'] <= 7.91, 'Fare_group'] = 0
data.loc[(data['Fare'] > 7.91) & (data['Fare'] <= 14.454), 'Fare_group'] = 1
data.loc[(data['Fare'] > 14.454) & (data['Fare'] <= 31), 'Fare_group'] = 2
data.loc[(data['Fare'] > 31) & (data['Fare'] <= 513), 'Fare_group'] = 3
data.head(2)
data = data.drop(['Fare', 'Fare_range'], axis=1)
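# The hard-coded fare thresholds above (7.91, 14.454, 31) are the quartile edges
# that `pd.qcut` computes on the training fares. A sketch of how qcut yields such
# edges, on a small made-up fare series rather than the actual Titanic data:

```python
import pandas as pd

fares = pd.Series([5.0, 7.5, 8.0, 10.0, 14.0, 15.0, 30.0, 40.0, 80.0, 500.0])
# labels=False gives the bin index per row; retbins=True also returns the edges
bins, edges = pd.qcut(fares, 4, retbins=True, labels=False)
print(edges)  # 5 edges delimiting 4 equal-frequency bins
print(bins.tolist())
```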
# ...
data['Sex'].replace(['male', 'female'], [0, 1], inplace=True)
data['Embarked'].replace(['S', 'C', 'Q'], [0, 1, 2], inplace=True)
data['Title'].replace(['Mr', 'Mrs', 'Miss', 'Master', 'Other'], [0, 1, 2, 3, 4], inplace=True)
# +
# data['Alone'] = 0
# data.loc[data.Family_size == 0, 'Alone'] = 1
# -
# ...
# +
# data.loc[data['Age_group'] == 0, 'Sex'] = 2
data['IsChild'] = (data['Age_group'] == 0).astype(float)
data['IsAdult'] = (data['Age_group'] != 0).astype(float)
# -
data = data.drop(['SibSp', 'Parch'], axis=1)
# ... ... ...
# ...
# +
# data['Cabin'] = data['Cabin'].fillna('N')
# data['Cabin'] = data['Cabin'].astype(str)
# data['Cabin'] = data['Cabin'].astype(str).str[0]
# data.loc[data['Cabin'] == 'T', 'Cabin'] = 'N'
# data.groupby('Cabin').Survived.mean()
# data['Cabin'] = data.Cabin.apply(lambda x: ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'N'].index(x))
# -
data.head(5)
data.info()
data.columns
# +
# data = data.drop('Cabin', axis=1)
X = data.drop('Survived', axis=1)
y = data['Survived']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("train score: ", model.score(X_train, y_train))
print("test score: ", model.score(X_test, y_test))
search = GridSearchCV(model, {'n_estimators': [ 10, 30, 50, 70, 100, 150, 180, 190, 200, 210, 220, 250, 300],
'max_depth': [2, 4, 5, 6, 8, 10, 12, 15],
'criterion': ['gini','entropy']
})
search.fit(X, y)
print(search.best_score_)
print(search.best_estimator_)
pd.DataFrame(search.cv_results_)[['rank_test_score', 'mean_test_score', 'params']].sort_values(by='rank_test_score').head(5)
# -
model = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                               max_depth=10, max_features='sqrt', max_leaf_nodes=None,
                               min_impurity_decrease=0.0, min_samples_leaf=1,
                               min_samples_split=20, min_weight_fraction_leaf=0.0,
                               n_estimators=250, n_jobs=1, oob_score=False, random_state=0,
                               verbose=0, warm_start=False)
model.fit(X_train, y_train)
print("train score: ", model.score(X_train, y_train))
print("test score: ", model.score(X_test, y_test))
# +
# C = [0.005, 0.01, 0.05, 0.1, 0.2, 0.3, 0.25, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.15, 1.2, 1.3]
# gamma = [0.0005, 0.001, 0.005,0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1]
# kernel = ['rbf', 'linear']
# hyper = {'kernel': kernel, 'C': C, 'gamma': gamma}
# search = GridSearchCV(estimator=SVC(), param_grid=hyper, verbose=True)
# search.fit(X, y)
# print(search.best_score_)
# print(search.best_estimator_)
# pd.DataFrame(search.cv_results_)[['rank_test_score', 'mean_test_score', 'params']].sort_values(by='rank_test_score').head(5)
# +
# model = SVC(C=1.1, gamma=0.05, kernel='rbf')
# model.fit(X_train, y_train)
# print("train score: ", model.score(X_train, y_train))
# print("test score: ", model.score(X_test, y_test))
# -
# ...
# +
# test = pd.read_csv('data/titanic/test.csv', index_col=['PassengerId'])
# test['Title'] = test.Name.str.extract('([A-Za-z]+)\.', expand=False)
# test.loc[test.Title == 'Mlle', 'Title'] = 'Miss'
# test.loc[test.Title == 'Mme', 'Title'] = 'Mrs'
# test.loc[test.Title == 'Ms', 'Title'] = 'Miss'
# other_titles = ['Dr', 'Rev', 'Col', 'Major', 'Countess', 'Don', 'Jonkheer', 'Capt', 'Lady', 'Sir', 'Dona']
# test.Title = test.Title.replace(other_titles, 'Other')
# age_by_title = test.groupby('Title').Age.mean()
# test.loc[(test.Age.isnull()) & (test.Title == 'Mr'), 'Age'] = age_by_title['Mr']
# test.loc[(test.Age.isnull()) & (test.Title == 'Mrs'), 'Age'] = age_by_title['Mrs']
# test.loc[(test.Age.isnull()) & (test.Title == 'Master'), 'Age'] = age_by_title['Master']
# test.loc[(test.Age.isnull()) & (test.Title == 'Miss'), 'Age'] = age_by_title['Miss']
# test.loc[(test.Age.isnull()) & (test.Title == 'Other'), 'Age'] = age_by_title['Other']
# test.Embarked.fillna('S', inplace=True)
# test['Age_group'] = 0
# test.loc[test['Age'] <= 16, 'Age_group'] = 0
# test.loc[(test['Age'] > 16) & (test['Age'] <= 32), 'Age_group'] = 1
# test.loc[(test['Age'] > 32) & (test['Age'] <= 48), 'Age_group'] = 2
# test.loc[(test['Age'] > 48) & (test['Age'] <= 64), 'Age_group'] = 3
# test.loc[test['Age'] > 64, 'Age_group'] = 4
# test['Family_size'] = 0
# test['Family_size'] = test['Parch'] + test['SibSp']
# test['Fare_range'] = pd.qcut(test['Fare'], 4)
# test['Fare_group'] = 0
# test.loc[test['Fare'] <= 7.91, 'Fare_group'] = 0
# test.loc[(test['Fare'] > 7.91) & (test['Fare'] <= 14.454), 'Fare_group'] = 1
# test.loc[(test['Fare'] > 14.454) & (test['Fare'] <= 31), 'Fare_group'] = 2
# test.loc[(test['Fare'] > 31) & (test['Fare'] <= 513), 'Fare_group'] = 3
# test['Sex'].replace(['male', 'female'], [0, 1], inplace=True)
# test['Embarked'].replace(['S', 'C', 'Q'], [0, 1, 2], inplace=True)
# # test['Title'].replace(['Mr', 'Mrs', 'Miss', 'Master', 'Other'], [0, 1, 2, 3, 4], inplace=True)
# test['Title'] = test.Title.apply(lambda x: ['Mr', 'Mrs', 'Miss', 'Master', 'Other'].index(x))
# test['Alone'] = 0
# test.loc[test.Family_size == 0, 'Alone'] = 1
# test.loc[test['Age_group'] == 0, 'Sex'] = 2
# test['Cabin'] = test['Cabin'].fillna('N')
# test['Cabin'] = test['Cabin'].astype(str)
# test['Cabin'] = test['Cabin'].astype(str).str[0]
# test.loc[test['Cabin'] == 'T', 'Cabin'] = 'N'
# # test.groupby('Cabin').Survived.mean()
# test['Cabin'] = test.Cabin.apply(lambda x: ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'N'].index(x))
# test = test.drop(['Name', 'Ticket', 'Age', 'Fare', 'Fare_range', 'SibSp', 'Parch'], axis=1)
# test.info()
# test.head(2)
# # Best model for now, but without Cabin column
# # model = RandomForestClassifier(random_state=0, n_estimators=30, max_depth=4).fit(X_train, y_train)
# # print("train score: ", model.score(X_train, y_train))
# # print("test score: ", model.score(X_test, y_test))
# # model = RandomForestClassifier(random_state=0, n_estimators=300, max_depth=2).fit(X_train, y_train)
# +
# predictions = model.predict(test)
# frame = pd.DataFrame({
# 'PassengerId': pd.read_csv('data/titanic/test.csv').PassengerId,
# 'Survived': predictions
# })
# frame = frame.set_index('PassengerId')
# frame.to_csv('data/titanic/predictions.csv')
# frame.head()
# -
# ## Titanic leaderboard: #2435
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import kaggle
# !kaggle competitions download -c ds1-tree-ensembles
# !pwd
# "age";"job";"marital";"education";"default";"housing";"loan";"contact";"month";"day_of_week";"duration";"campaign";"pdays";"previous";"poutcome";"emp.var.rate";"cons.price.idx";"cons.conf.idx";"euribor3m";"nr.employed";"y
# 
#
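# The raw header shown above is semicolon-delimited, so loading a file like that
# needs an explicit separator. A sketch on an inline string (not the actual
# competition files, which this notebook reads elsewhere):

```python
import io
import pandas as pd

csv_text = 'age;"job";"marital"\n30;"admin.";"married"\n41;"technician";"single"\n'
# sep=';' handles the semicolon delimiter; the default quotechar strips the quotes
df = pd.read_csv(io.StringIO(csv_text), sep=';')
print(df)
```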
import pandas as pd
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 5000)
from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from sklearn.tree import DecisionTreeClassifier
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
import numpy as np
import category_encoders as ce
from sklearn.pipeline import Pipeline
from sklearn.pipeline import make_pipeline
from ipywidgets import interact
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import PCA
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
data_dir = '~/lambda/DS-Unit-4-Sprint-1-Tree-Ensembles/module1-decision-trees/'
emps = ['nurse', 'doctor', 'lead', 'plumber', 'coal', 'accountant', 'attorney']
manager = ['manager', 'vp', 'president', 'vice president', 'director', 'executive', 'superintendent', 'captain']
it = ['tech', 'computers', 'it', 'data', 'programmer', 'analyst']
df = pd.read_csv(data_dir + 'train_features.csv')
df.head()
earliest_cr_line = df.earliest_cr_line.apply(lambda d: datetime.strptime(d,'%b-%Y'))
df_earliest_cr_line_min = earliest_cr_line.min()
means = {}
c = 0.5
subgrade_means = {'A': c, 'B': c, 'C': c, 'D': c, 'E': c, 'F': c, 'G': c}
count_null_subgrade_and_grade = 0
# +
def wrangle(X, make_means=False):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
if make_means:
for i in range(65,72):
c = chr(i)
subgrade_means[c] = X[X.grade == c].sub_grade.apply(lambda s: float(s[1])).mean() / 10
print(subgrade_means)
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
    def wrangle_sub_grade(o):
        global count_null_subgrade_and_grade  # needed to mutate the module-level counter
        x = o[1]
        grade = o[0]
        if isinstance(x, float):
            if isinstance(grade, float):
                count_null_subgrade_and_grade += 1
                return float(3)
            return float(ord(grade[0]) - 64) + subgrade_means[grade[0]]
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
# X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
X.sec_app_earliest_cr_line.fillna(False, inplace=True)
X.earliest_cr_line = X.earliest_cr_line.apply(
lambda d: datetime.strptime(d, '%b-%Y'))
X.earliest_cr_line = (X.earliest_cr_line -
df_earliest_cr_line_min) / np.timedelta64(1, 'D')
X.term = X.term.apply(lambda t: int(t[1:3]))
X['int_rates'] = X.int_rate.apply(lambda r: float(r[:-1]))
X.drop(['int_rate'], axis=1, inplace=True)
X['ngrade'] = X[['grade', 'sub_grade']].apply(wrangle_sub_grade, axis=1)
X.drop(['grade', 'sub_grade'], axis=1, inplace=True)
def el(e):
if isinstance(e, float) or e[0] == '<':
return 0
return int(e[0:(2 if e[1] == '0' else 1)])
X.emp_length = X.emp_length.apply(el)
def wrangle_emp(x):
if isinstance(x, float):
return 'No'
for s in emps:
if x.find(s) >= 0:
return s
return 'Yes'
def wrangle_list(x, l):
if isinstance(x, float):
return False
for s in l:
if x.find(s) >= 0:
return True
return False
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_emp'] = X['emp_title'].apply(lambda x: wrangle_emp(x))
X['emp_title_manager'] = X['emp_title'].apply(
lambda x: wrangle_list(x, manager))
X['emp_title_it'] = X['emp_title'].apply(lambda x: wrangle_list(x, it))
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
        try:
            X[col] = X[col].apply(lambda x: 0 if pd.isnull(x) else x)
        except KeyError:
            print(col)
if make_means:
clist = X.select_dtypes(include=[np.number]).columns.tolist()
for col in clist:
means[col] = X[col].mean()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(means[col])
print('count_null_subgrade_and_grade', count_null_subgrade_and_grade)
# Return the wrangled dataframe
return X
# -
df_train_y = pd.read_csv(data_dir + 'train_labels.csv')
df_train_y.dtypes
df_ = wrangle(df, True)
df_.head()
clist = df_.select_dtypes(exclude=[np.number]).columns.tolist()
one_hot_columns = []
binary_columns = []
max_one = 15
for c in clist:
if len(df_[c].unique()) > max_one:
binary_columns.append(c)
else:
one_hot_columns.append(c)
testdf = pd.read_csv(data_dir + 'test_features.csv')
testdf.head()
testdf_ = wrangle(testdf)
df_train_y = pd.read_csv(data_dir + 'train_labels.csv')
y = df_train_y.charged_off.values
X_train = df_
y_train = y
# +
# pca = PCA()
pipe = Pipeline(steps = [
('be', ce.BinaryEncoder(cols=binary_columns)),
('one', ce.OneHotEncoder(use_cat_names=True,cols=one_hot_columns)),
# ('pca', pca),
('gb', GradientBoostingClassifier())]
)
cross_val_score(pipe, X_train, y_train, cv=5, scoring='accuracy', verbose=10, n_jobs=-1)
# -
# array([0.85403974, 0.85388793, 0.853623 , 0.85468274, 0.85426603])
param_grid = {
# 'pca__n_components': [28],
"gb__loss" : ['exponential'],
"gb__learning_rate" : [0.1],
"gb__n_estimators": [180],
"gb__min_samples_leaf": [3],
"gb__min_impurity_decrease": [1.2],
"gb__max_depth": [3]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=8,
scoring='roc_auc',
verbose=1)
gsf = gs.fit(X_train, y_train)
print('Best Parameter (roc_auc score=%0.3f):' % gsf.best_score_)
print(gsf.best_params_)
# Best Parameter (roc_auc score=0.734):
# {'gb__learning_rate': 0.1, 'gb__loss': 'exponential', 'gb__min_samples_leaf': 3, 'gb__n_estimators': 180}
#
# Best Parameter (CV score=0.734):
# {'gb__learning_rate': 0.1, 'gb__loss': 'exponential', 'gb__n_estimators': 150}
print(count_null_subgrade_and_grade)
# +
# Plot the PCA spectrum
# NB: this cell assumes the PCA step (commented out above) was enabled and that
# a GridSearchCV over 'pca__n_components' was fit and stored as `search`.
pca.fit(X)
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(6, 6))
ax0.plot(pca.explained_variance_ratio_, linewidth=2)
ax0.set_ylabel('PCA explained variance')
print(pca.explained_variance_ratio_)
ax0.axvline(search.best_estimator_.named_steps['pca'].n_components,
linestyle=':', label='n_components chosen')
ax0.legend(prop=dict(size=12))
# For each number of components, find the best classifier results
results = pd.DataFrame(search.cv_results_)
components_col = 'param_pca__n_components'
best_clfs = results.groupby(components_col).apply(
lambda g: g.nlargest(1, 'mean_test_score'))
# -
xgb = XGBClassifier()
xgb
# ?xgb.gamma
# +
# pca = PCA()
pipe = Pipeline(steps = [
('be', ce.BinaryEncoder(cols=binary_columns)),
('one', ce.OneHotEncoder(use_cat_names=True,cols=one_hot_columns)),
# ('pca', pca),
('xgb', XGBClassifier())]
)
cross_val_score(pipe, X_train, y_train, cv=5, scoring='accuracy', verbose=10, n_jobs=-1)
# -
param_grid = {
# 'pca__n_components': [28],
"xgb__booster": ["dart"],
"xgb__gamma": [8],
"xgb__learning_rate": [0.1],
"xgb__n_estimators": [130],
# "gb__min_samples_leaf": [3],
# "gb__min_impurity_decrease": [1.2],
"xgb__max_depth": [4]
}
# Fit on the train set, with grid search cross-validation
gs = GridSearchCV(pipe, param_grid=param_grid, cv=60, n_jobs=-1,
scoring='roc_auc',
verbose=1)
gsf = gs.fit(df_, y)
print('Best Parameter (roc_auc score=%0.3f):' % gsf.best_score_)
print(gsf.best_params_)
py = gsf.predict_proba(testdf_)[:, 1]
with open('submit.csv', 'w') as file:
file.write('id,charged_off\n')
for id, charged_off in zip(testdf.id,py):
# if charged_off > 0:
# print('a charge off')
file.write(f"{id},{charged_off}")
file.write('\n')
X = df_
y = y
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y)
gsf = gs.fit(X_train, y_train)
print('Best Parameter train (roc_auc score=%0.3f):' % gsf.best_score_)
print(gsf.best_params_)
py = gsf.predict_proba(X_train)[:,1]
predicted_y_test = gsf.predict_proba(X_test)
roc_auc_score(y_test,predicted_y_test[:,1])
testdf = pd.read_csv(data_dir + 'test_features.csv')
testdf_ = wrangle(testdf)
py = gsf.predict_proba(testdf_)[:, 1]
with open('submit.csv', 'w') as file:
file.write('id,charged_off\n')
for id, charged_off in zip(testdf.id,py):
# if charged_off > 0:
# print('a charge off')
file.write(f"{id},{charged_off}")
file.write('\n')
# +
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import cross_val_predict
y_pred_proba = cross_val_predict(pipe, X_train, y_train, cv=5, n_jobs=-1,
method='predict_proba')[:, 1]
fpr, tpr, thresholds = roc_curve(y_train, y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
print('Area under the Receiver Operating Characteristic curve:',
roc_auc_score(y_train, y_pred_proba))
# -
fprdf = pd.DataFrame({'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds})
fprdf.head()
# +
# Filenames of your submissions you want to ensemble
files = ['dart', 'xgb', 'gbc']
target_name = 'charged_off'
submissions = (pd.read_csv(f"submit.{file}.csv")[
[target_name]] for file in files)
ensemble = pd.concat(submissions, axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
sample_submission = pd.read_csv(data_dir + 'sample_submission.csv')
submission = sample_submission.copy()
submission[target_name] = majority_vote
submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
# -
df.columns.tolist()
df_ = wrangle(df, True)
encoders = Pipeline([
('binary', ce.BinaryEncoder(cols=binary_columns)),
('onehot', ce.OneHotEncoder(use_cat_names=True,cols=one_hot_columns))
# ('DecisionTree', DecisionTreeClassifier(max_depth=17, class_weight='balanced'))
])
dff = encoders.fit(df_)
dft_ = dff.transform(df_)
X_train, X_test, y_train, y_test = train_test_split(
dft_, y, test_size=0.2, random_state=42, stratify=y)
# +
import xgboost as xgb
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
# +
params = {
    'booster': 'dart',
'gamma': 14,
"n_estimators": 140,
# 'monotone_constraints': '(0)', # no constraint
'max_depth': 4,
'eta': 0.1,
    'verbosity': 0,
'n_jobs': -1,
'seed': 0,
'eval_metric': 'rmse'}
# +
# # With early stopping
# bst_cv = xgb.cv(params, dtrain, num_boost_round=1000, nfold=5, early_stopping_rounds=10, as_pandas=True)
# +
# len(bst_cv)
# -
r = 8.5
X_train['y'] = y_train
X_train.y.iloc[0:2]
def weight_row(_, row):
    # up-weight positive examples that had a delinquency within the last year
    if 0 < row.mths_since_last_delinq < 12:
        return r if row.y == 1 else 0
    return 1
dtrain.set_weight([weight_row(i, row) for i,row in X_train.iterrows()])
bst = xgb.train(params, dtrain, num_boost_round=326)
# +
# print(bst.eval(dtest))
# -
X_train.drop(columns='y', inplace=True)
y_prob = bst.predict(dtrain)
# +
# y_prob[0:5]
# -
roc_auc_score(y_train, y_prob)
y_prob = bst.predict(dtest)
roc_auc_score(y_test, y_prob)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in Python Mini-Projects series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/91_Python_Mini_Projects)**
# </i></small></small>
# # Python Program to Create AudioBook from PDF
# +
'''
Python Program to Create AudioBook from PDF
'''
# Import the necessary modules!
import pyttsx3
import PyPDF2
# Open our file in binary reading format and store it into book
book = open('demo.pdf', 'rb')  # `rb` stands for read-binary mode
# Call PyPDF2's PdfFileReader method on book and store it into pdf_reader
pdf_reader = PyPDF2.PdfFileReader(book)
# Get the number of pages in our PDF from the numPages attribute
num_pages = pdf_reader.numPages
# Initialize pyttsx3 using the init method, and print that the audiobook is playing
play = pyttsx3.init()
print('Playing Audio Book')
# Run a loop for the number of pages in our pdf file.
# A page will get retrieved at each iteration.
for num in range(0, num_pages):
page = pdf_reader.getPage(num)
    # Extract the text from the page using the extractText method and store it into data.
data = page.extractText()
    # Call the say method on data, then runAndWait to play it.
play.say(data)
play.runAndWait()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Sixth exercise: Non-Cartesian spiral under-sampling
#
# In this notebook, you can play with the design parameters to regenerate different spiral in-out patterns (we draw as many spiral arches as there are shots). You can change the number of shots through the under-sampling factor.
#
# - Authors: Philippe Ciuciu (philippe.ciuciu@cea.fr)
# - Date: 04/02/2019
# - Target: [ISBI'19 tutorial](https://biomedicalimaging.org/2019/tutorials/) on **Recent advances in acquisition and reconstruction for Compressed Sensing MRI**
# - **Revision**: 01/06/2021 for ATSI MSc hands-on session at Paris-Saclay University.
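# The relation between the under-sampling factor and the number of shots used
# below is simply `ceil(img_size / rfactor)`. A quick sketch (the helper name
# `num_spiral_shots` is mine, not from the original code):

```python
import math

def num_spiral_shots(img_size, rfactor):
    """Number of spiral in-out shots for an image of size img_size,
    under-sampled by a factor rfactor (one shot per rfactor k-space lines)."""
    return math.ceil(img_size / rfactor)

# doubling the under-sampling factor halves the number of shots
for rfactor in (2, 4, 8, 16):
    print(rfactor, num_spiral_shots(512, rfactor))
```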
# +
#DISPLAY T2* MR IMAGE
# %matplotlib inline
import numpy as np
import os.path as op
import os
import math ; import cmath
import matplotlib.pyplot as plt
import sys
from mri.operators import NonCartesianFFT
from mri.operators.utils import convert_locations_to_mask, \
gridded_inverse_fourier_transform_nd
from pysap.data import get_sample_data
from skimage import data, img_as_float, io, filters
from modopt.math.metrics import ssim
mri_img = get_sample_data('2d-mri')
img_size = mri_img.shape[0]
plt.figure()
plt.title("T2* axial slice, size = {}".format(img_size))
if mri_img.ndim == 2:
plt.imshow(mri_img, cmap=plt.cm.gray)
else:
plt.imshow(mri_img)
plt.show()
# +
# set up the first shot
rfactor = 8
num_shots = math.ceil(img_size/rfactor)
print("number of shots: {}".format(num_shots))
# define the regularly spaced samples on a single shot
#nsamples = (np.arange(0,img_size) - img_size//2)/(img_size)
num_samples = img_size
num_samples = (num_samples + 1) // 2
print("number of samples: {}".format(num_samples))
num_revolutions = 1
shot = np.arange(0, num_samples, dtype=np.complex_)
radius = shot / num_samples * 1 / (2 * np.pi) * (1 - np.finfo(float).eps)
angle = np.exp(2 * 1j * np.pi * shot / num_samples * num_revolutions)
# first half of the spiral
single_shot = np.multiply(radius, angle)
# add second half of the spiral
#single_shot = np.append(np.flip(single_shot, axis=0), -single_shot[1:])
single_shot = np.append(np.flip(single_shot, axis=0), -single_shot)
#print(single_shot)
print("number of samples per shot: {}".format(np.size(single_shot)))
# vectorize the nb of shots
#vec_shots = np.arange(0,nb_shots + 1)
k_shots = np.array([], dtype = np.complex_)
#for i in vec_shots:
for i in np.arange(0, num_shots):
shot_rotated = single_shot * np.exp(1j * 2 * np.pi * i / (num_shots * 2))
k_shots = np.append(k_shots, shot_rotated)
#np.append(k_shots, complex_to_2d(shot_rotated))
print(k_shots.shape)
kspace_loc = np.zeros((len(k_shots),2))
kspace_loc[:,0] = k_shots.real
kspace_loc[:,1] = k_shots.imag
#Plot full initialization
kspace = plt.figure(figsize = (8,8))
#plot shots
plt.scatter(kspace_loc[::4,0], kspace_loc[::4,1], marker = '.')
plt.title("Spiral undersampling R = %d" %rfactor)
axes = plt.gca()
plt.grid()
# -
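# The half-arc above is an Archimedean spiral built as radius * exp(i*angle) in
# the complex plane. A simplified stdlib-only sketch of that construction
# (the function name spiral_arc is hypothetical; the real cell additionally
# mirrors the arc for the in-out pattern and scales by 1 - eps):

```python
import cmath, math

def spiral_arc(num_samples, num_revolutions=1):
    """One half-arc of an Archimedean spiral in the complex plane."""
    pts = []
    for k in range(num_samples):
        radius = k / num_samples / (2 * math.pi)
        angle = 2 * math.pi * k / num_samples * num_revolutions
        pts.append(radius * cmath.exp(1j * angle))
    return pts

arc = spiral_arc(256)
# the radius grows monotonically outward from the centre of k-space
print(abs(arc[0]), abs(arc[-1]))
```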
print(np.arange(0, num_shots))
data=convert_locations_to_mask(kspace_loc, mri_img.shape)
fourier_op = NonCartesianFFT(samples=kspace_loc, shape=mri_img.shape,
implementation='cpu')
kspace_obs = fourier_op.op(mri_img.data)
grid_space = np.linspace(-0.5, 0.5, num=mri_img.shape[0])
grid2D = np.meshgrid(grid_space, grid_space)
grid_soln = gridded_inverse_fourier_transform_nd(kspace_loc, kspace_obs,
tuple(grid2D), 'linear')
plt.imshow(np.abs(grid_soln), cmap='gray')
# Calculate SSIM
base_ssim = ssim(grid_soln, mri_img)
plt.title('Gridded Solution\nSSIM = ' + str(base_ssim))
plt.show()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import pandas as pd
from itertools import combinations
with open('datajson.json', 'r') as f:
spdata = json.load(f)
df = pd.DataFrame(spdata)
# +
habitat_cols = ['habitat_ce','habitat_of','habitat_fh']
tcp_cols = ['tcp_te','tcp_lb','tcp_tr','tcp_hb','tcp_gr']
features = [ [habitat,tcp] for habitat in habitat_cols for tcp in tcp_cols ]
pre_df = dict()
for f in features:
pre_df[f[0].split('_')[1]+ '_' +f[1].split('_')[1]]=df[f].all(axis=1).astype(int)
df_habitat = pd.concat( [df['id'],pd.DataFrame(pre_df)],axis=1 )
# -
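# The cell above forms every habitat x tcp pair and ANDs the two flag columns
# via df[f].all(axis=1). The same logic with the standard library only, on toy
# records with hypothetical 0/1 flags (column names mirror the ones above):

```python
from itertools import product

records = [
    {"habitat_ce": 1, "habitat_of": 0, "tcp_te": 1, "tcp_lb": 0},
    {"habitat_ce": 1, "habitat_of": 1, "tcp_te": 0, "tcp_lb": 1},
]

habitat_cols = ["habitat_ce", "habitat_of"]
tcp_cols = ["tcp_te", "tcp_lb"]

combined = []
for rec in records:
    row = {}
    for h, t in product(habitat_cols, tcp_cols):
        # a pair column is 1 only when both source flags are 1 (like df[f].all(axis=1))
        row[h.split("_")[1] + "_" + t.split("_")[1]] = int(rec[h] and rec[t])
    combined.append(row)

print(combined)
```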
df_habitat
# Relative positions
df_habitat.columns
pos = {
'ce_te': (80,60),
'ce_lb': (73,57),
'ce_tr': (80,60),
'ce_hb': (68,63),
'ce_gr': (80,60),
'of_te': (75,71),
'of_lb': (56,76.5),
'of_tr': (75,71),
'of_hb': (58,73),
'of_gr': (75,71),
'fh_te': (82,22),
'fh_lb': (78,18),
'fh_tr': (82,22),
'fh_hb': (72,17),
'fh_gr': (82,22)
}
# +
spdict = dict()
for i,row in df_habitat.iterrows():
spdict[row['id']]= []
for k,v in row.items():
if v==1:
spdict[row['id']].append( list(pos[k]) )
# -
spdict
with open('../_data/species_habitat_pos.json', 'w') as f:
json.dump(spdict, f,indent=1, ensure_ascii=False)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import syft as sy
import numpy as np
from sqlalchemy import create_engine
from syft.core.node.common.node_table import Base
from nacl.encoding import HexEncoder
from nacl.signing import SigningKey
from syft.core.adp.scalar.gamma_scalar import GammaScalar
from syft.core.tensor.autodp.initial_gamma import InitialGammaTensor
from syft.core.tensor.autodp.intermediate_gamma import IntermediateGammaTensor
from syft.core.adp.entity import Entity
from syft.core.adp.entity import DataSubjectGroup
from syft.core.adp.adversarial_accountant import AdversarialAccountant
from syft.core.io.virtual import create_virtual_connection
from syft.core.adp.vm_private_scalar_manager import VirtualMachinePrivateScalarManager
# Data
child = np.array([1,2,3,4], dtype=np.int32)
# +
# Entities
traskmaster = Entity(name="Andrew")
amber = Entity(name="Amber")
puppy = Entity(name="Their puppy whose name I forgot")
DSG = DataSubjectGroup([traskmaster, amber])
SM = VirtualMachinePrivateScalarManager()
entity_trask = np.array([traskmaster, traskmaster, traskmaster, traskmaster], dtype=object)
entity_amber = np.array([amber, amber, amber, amber], dtype=object)
entity_puppies = np.array([puppy, puppy, puppy, puppy], dtype=object)
dsg_ent = np.array([DSG, DSG, DSG, DSG], dtype=object)
# -
# Various InitialGammaTensors to play with
tensor_trask = InitialGammaTensor(values=child, min_vals=np.zeros_like(child), max_vals=np.ones_like(child) * 5, entities=entity_trask, scalar_manager=SM)
tensor_amber = InitialGammaTensor(values=child, min_vals=np.zeros_like(child), max_vals=np.ones_like(child)* 5, entities=entity_amber, scalar_manager=SM)
tensor_dsg = InitialGammaTensor(values=child * 2, min_vals=np.zeros_like(child), max_vals=np.ones_like(child) * 10, entities=dsg_ent, scalar_manager=SM)
tensor_puppy = InitialGammaTensor(values=child * 3, min_vals = np.zeros_like(child), max_vals=np.ones_like(child) * 15, entities=entity_puppies, scalar_manager=SM)
# Create our reference tensor using Andrew's and Amber's tensors
reference_tensor = tensor_trask + tensor_amber
assert isinstance(reference_tensor, IntermediateGammaTensor)
print(reference_tensor._entities())
# +
# Test to see if this works for 2D arrays too
child_2d = np.random.randint(low=1, high=5, size=(3, 3), dtype=np.int32)
entities_2d = np.array([[traskmaster, traskmaster, traskmaster], [traskmaster, traskmaster, traskmaster], [traskmaster, traskmaster, traskmaster]], dtype=object)
tensor_2d = InitialGammaTensor(values=child_2d, min_vals=np.zeros_like(child_2d), max_vals=np.ones_like(child_2d) * 5, entities=entities_2d, scalar_manager=SM)
assert tensor_2d.shape == child_2d.shape
IGT_2D = tensor_2d + tensor_2d
assert IGT_2D.shape == IGT_2D._values().shape
assert IGT_2D.shape == IGT_2D._entities().shape
# +
# Use the reference tensor to check private_private equality, starting with gt
comparison_tensor = tensor_puppy + tensor_puppy
assert isinstance(comparison_tensor, IntermediateGammaTensor)
assert comparison_tensor.shape == reference_tensor.shape
comparison_result = reference_tensor > comparison_tensor
# -
# Now let's see if we were right.
print(reference_tensor._values())
print(comparison_tensor._values())
print(comparison_result._values())
comparison_result._entities()
lt_comparison_result = reference_tensor < comparison_tensor
print(lt_comparison_result._values())
eq_result = reference_tensor == comparison_tensor
eq_result._values()
eq_result2 = comparison_tensor == (reference_tensor * 3)
eq_result2._values()
(comparison_tensor*3)._values()
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Scala 2.12
// language: scala
// name: scala212
// ---
val replClassPathObj = os.Path("replClasses", os.pwd)
if (!os.exists(replClassPathObj)) os.makeDir(replClassPathObj)
val replClassPath = replClassPathObj.toString()
interp.configureCompiler(_.settings.outputDirs.setSingleOutput(replClassPath))
interp.configureCompiler(_.settings.Yreplclassbased)
interp.load.cp(replClassPathObj)
// +
// Import the Snowpark library from Maven.
import $ivy.`com.snowflake:snowpark:1.1.0`
import $ivy.`org.apache.logging.log4j:log4j-core:2.17.1`
import com.snowflake.snowpark._
import com.snowflake.snowpark.functions._
import com.snowflake.snowpark.functions.{col, lit}
import com.snowflake.snowpark.{DataFrame, RelationalGroupedDataFrame, Session}
import java.io.{File, InputStream}
import java.nio.file.Paths
import java.util.Properties
import scala.collection.JavaConverters._
import scala.collection.mutable
// +
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.Level;
Configurator.setRootLevel(Level.WARN);
// -
val session = Session.builder.configs(Map(
"URL" -> "",
"USER" -> "",
"PASSWORD" -> "",
"ROLE" -> "",
"WAREHOUSE" -> "",
"DB" -> "",
"SCHEMA" -> ""
)).create
def createDF(tableName : String) : DataFrame ={
val df: DataFrame = session.table(tableName)
df
}
def runProcess(tableName : String): Unit ={
val df: DataFrame = session.table(tableName)
val dfs = mutable.ArrayBuffer[DataFrame]()
val fieldCount = df.schema.fields.size
var iterCount = 0
println(s"Total records in table : "
+ df.count().toString)
df.schema.fields.foreach(i => {
iterCount = iterCount + 1
println(s"-- Processing column ${iterCount}/${fieldCount}: ${i.name}")
println(s"Datatype : "+i.dataType.toString())
val colAnalysis = createAnalysisDataframe(session,tableName, i.name, i.dataType.toString())
dfs += colAnalysis
})
val analysisOutput = dfs.reduce(_ union _)
analysisOutput.show(fieldCount)
analysisOutput.write.mode("overwrite").saveAsTable(s"SNOWPARK_DATA_PROFILING_TPCDS_STORE")
}
// Rules definition
def colValueDuplicates(df: DataFrame, columnName: String): String = {
val dataframe1 = df.groupBy(columnName).count()
val dataframe2 = dataframe1.filter(col("count") === 1)
val duplicates = (dataframe1.count() - dataframe2.count()).toString
println(s"No of duplicate records in column : ${duplicates}")
duplicates
}
def colValueUniqueness(df: DataFrame, columnName: String): Int = {
val dataframe1 = df.groupBy(col(columnName)).count()
val dataframe2 = dataframe1.filter(col("count") === 1).count()
println(s"No of unique values in column : "
+ dataframe2.toString)
dataframe2.toInt
}
def colNullCheck(df: DataFrame, columnName: String): String = {
val nulls = df.where(df.col(columnName).is_null).count()
println(s"No of null values in column : "
+ nulls.toString)
nulls.toString
}
def colValueCompleteness(df: DataFrame, columnName: String): String = {
val nulls : Double = df.where(df.col(columnName).is_null).count()
val count : Double = df.count().toDouble
val completeness = 1-(nulls / count)
println(s"Completeness of column : "
+ completeness.toString)
completeness.toString
}
def colStatistics(df: DataFrame, columnName: String, dataType : String): String = {
var stats = ""
if (dataType != "String" && dataType != "Date") {
val max = df.select(functions.max(df.col(columnName))).collect()(0)(0)
val min = df.select(functions.min(df.col(columnName))).collect()(0)(0)
val stdev = df.select(functions.stddev(df.col(columnName))).collect()(0)(0)
val mean = df.select(functions.mean(df.col(columnName))).collect()(0)(0)
stats = min + "/" + max + "/" + stdev + "/" + mean
println(s"Statistics of column : " + stats)
stats
} else{
stats
}
}
// Note: despite its name, this currently computes completeness like colValueCompleteness (it is unused below)
def colDatatype(df: DataFrame, columnName: String): String = {
val nulls : Double = df.where(df.col(columnName).is_null).count()
// val completeness = if (nulls>0) nulls/df.count() else 1.0
val count : Double = df.count().toDouble
val completeness = 1-(nulls / count)
println(s"Completeness of column : " + completeness.toString)
completeness.toString
}
def createAnalysisDataframe(session: Session, tableName : String, columnName : String, dataType : String): DataFrame = {
val df: DataFrame = session.table(tableName)
val dfSeq = session.createDataFrame(Seq(columnName)).toDF("columnName")
val dfOut = dfSeq.withColumn("completeness", lit(colValueCompleteness(df, columnName)))
.withColumn("uniqueValues", lit(colValueUniqueness(df, columnName)))
.withColumn("duplicates",lit(colValueDuplicates(df, columnName)))
.withColumn("nulls",lit(colNullCheck(df, columnName)))
.withColumn("datatype",lit(dataType))
.withColumn("MIN_MAX_MEAN_STDEV",lit(colStatistics(df, columnName, dataType)))
dfOut
}
def createPermanentUdf(session: Session): Unit = {
session.sql("drop FUNCTION if EXISTS udfPerm(Int)").show()
session.udf.registerPermanent("udfPerm", permanentUdfHandler _, "STG")
session.sql("select *, udfPerm(*) from values (10)").show()
}
def permanentUdfHandler(i: Int): Int = {
i * i
}
def createInlinePermanentUdf(session: Session): Unit = {
session.sql("drop FUNCTION if EXISTS udfInlinePerm(Int)").show()
session.udf.registerPermanent("udfInlinePerm", (i: Int) => {
i * 2
}, "STG")
session.sql("select *, udfInlinePerm(*) from values (10)").show()
}
private def addDeps(session: Session): Unit = {
val PATH = Paths.get(".", "target", "dependency").toAbsolutePath.toString
val lst = getListOfFiles(PATH)
val filteredLst = lst.filterNot(_.matches("^.*snowpark(_original)?-[0-9.]+\\.jar$"))
for (f <- filteredLst) {
System.out.println("Adding dep:" + f)
session.addDependency(f)
}
}
def getListOfFiles(dir: String): List[String] = {
val d = new File(dir)
if (d.exists && d.isDirectory) {
d.listFiles.filter(_.isFile).toList.map(_.getPath)
} else {
List[String]()
}
}
val df = createDF("SNOWFLAKE_SAMPLE_DATA.TPCDS_SF100TCL.STORE")
df.schema.fields.foreach(i => {println(i)})
runProcess("SNOWFLAKE_SAMPLE_DATA.TPCDS_SF100TCL.STORE")
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Master Data Science for Business - Data Science Consulting - Session 2
#
# # Notebook 3:
#
# # Web Scraping with Scrapy: Getting reviews from TripAdvisor
# ## 1. Importing packages
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
import sys
from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor
import json
import logging
import pandas as pd
# ## 2. Some classes and functions
# +
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
class HotelreviewsItem(scrapy.Item):
# define the fields for your item here like:
rating = scrapy.Field()
review = scrapy.Field()
title = scrapy.Field()
trip_date = scrapy.Field()
trip_type = scrapy.Field()
published_date = scrapy.Field()
image_url = scrapy.Field()
hotel_type = scrapy.Field()
hotel_name = scrapy.Field()
hotel_adress = scrapy.Field()
price_range = scrapy.Field()
reviewer_id = scrapy.Field()
review_id = scrapy.Field()
review_language = scrapy.Field()
pid = scrapy.Field()
locid = scrapy.Field()
# -
def user_info_splitter(raw_user_info):
"""
Split a raw user-info string into a dict of its convertible elements.
:param raw_user_info: raw user-info text scraped from a review page
:return: dict of parsed user-info fields
"""
user_info = {}
splited_info = raw_user_info.split()
for element in splited_info:
converted_element = get_convertible_elements_as_dic(element)
if converted_element:
user_info[converted_element[0]] = converted_element[1]
return user_info
# ## 2. Creating the JSon pipeline
#JSON Lines pipeline; you can rename "tripadvisor.jl" to the name of your choice
class JsonWriterPipeline(object):
def open_spider(self, spider):
self.file = open('tripadvisor.jl', 'w')
def close_spider(self, spider):
self.file.close()
def process_item(self, item, spider):
line = json.dumps(dict(item)) + "\n"
self.file.write(line)
return item
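# The pipeline above writes one JSON object per line (the `.jl` "JSON Lines"
# format). A quick stdlib sketch of that round trip, independent of Scrapy,
# with made-up items:

```python
import json
from io import StringIO

items = [{"title": "Great stay", "rating": 5}, {"title": "Too noisy", "rating": 2}]

# write: one JSON document per line, exactly as process_item does
buf = StringIO()
for item in items:
    buf.write(json.dumps(item) + "\n")

# read back line by line
restored = [json.loads(line) for line in buf.getvalue().splitlines()]
print(restored == items)
```

# Because each line is a complete JSON document, the file can be appended to and read back incrementally.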
# ## 3. Spider
#
# Now that you know how to get data from one page, we want to automate the spider so it crawls through all pages of reviews, ending with a full spider able to scrape every review of the selected parc. You will modify here the parse function, since this is where you tell the spider to get the links and to follow them. <br>
# <b>To Do</b>: Complete the following code, to scrape all the reviews of one parc.
class MySpider(CrawlSpider):
name = 'BasicSpider'
domain_url = "https://www.tripadvisor.com"
# allowed_domains = ["https://www.tripadvisor.com"]
start_urls = [
"https://www.tripadvisor.fr/Hotel_Review-g5555792-d7107948-Reviews-Center_Parcs_Le_Bois_aux_Daims-Les_Trois_Moutiers_Vienne_Nouvelle_Aquitaine.html"]
#Custom settings to modify settings usually found in the settings.py file
custom_settings = {
'LOG_LEVEL': logging.WARNING,
'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1}, # Used for pipeline 1
'FEED_FORMAT':'json', # Used for pipeline 2
'FEED_URI': 'tripadvisor3.json' # Used for pipeline 2
}
def parse(self, response):
next_reviews_page_url = "https://www.tripadvisor.com" + response.xpath(
"//a[contains(@class,'nav') and contains(@class,'next') and contains(@class,'primary')]/@href").extract()[0]
all_review_pages = response.xpath(
"//a[contains(@class,'pageNum') and contains(@class,'last')]/@data-offset").extract()
next_page_number = int(response.xpath(
"//a[contains(@class,'nav') and contains(@class,'next') and contains(@class,'primary')]/@data-page-number").extract()[0])
if next_page_number < 10:
yield scrapy.Request(next_reviews_page_url, callback=self.parse)
review_urls = []
for partial_review_url in response.xpath("//div[contains(@class,'quote')]/a/@href").extract():
review_url = response.urljoin(partial_review_url)
if review_url not in review_urls:
review_urls.append(review_url)
yield scrapy.Request(review_url, callback=self.parse_review_page)
def parse_review_page(self, response):
item = HotelreviewsItem()
item["reviewer_id"] = next(iter(response.xpath(
"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-memberid").extract()),
None)
item["review_language"] = next(iter(response.xpath(
"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-language").extract()),
None)
item["review_id"] = next(iter(response.xpath(
"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-reviewid").extract()),
None)
item["pid"] = next(iter(response.xpath(
"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-pid").extract()),
None)
item["locid"] = next(iter(response.xpath(
"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-locid").extract()),
None)
review_id = item["review_id"]
review_url_on_page = response.xpath('//script[@type="application/ld+json"]/text()').extract()
review = json.loads(review_url_on_page[0]) # parse the JSON-LD safely instead of eval
item["review"] = review["reviewBody"].replace("\\n", "")
item["title"] = review["name"]
item["rating"] = review["reviewRating"]["ratingValue"]
item["image_url"] = review["image"]
item["hotel_type"] = review["itemReviewed"]["@type"]
item["hotel_name"] = review["itemReviewed"]["name"]
item["price_range"] = review["itemReviewed"]["priceRange"]
item["hotel_adress"] = review["itemReviewed"]["address"]
try:
item["published_date"] = review["datePublished"]
except KeyError:
item["published_date"] = next(iter(response.xpath(
f"//div[contains(@id,'review_{review_id}')]/div/div/span[@class='ratingDate']/@title""").extract()),
None)
item["trip_type"] = next(iter(response.xpath("//div[contains(@class,"
"'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div"
"/div/div/div[contains(@class,'noRatings')]/text()").extract()),
None)
try:
item["trip_date"] = next(iter(response.xpath("//div[contains(@class,"
"'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div["
"contains(@class,'prw_reviews_stay_date_hsx')]/text()").extract(
)), None)
except Exception:
item["trip_date"] = next(iter(response.xpath(
"//div[contains(@id,'review_538163624')]/div/div/div[@data-prwidget-name='reviews_stay_date_hsx']/text()").extract()),
None)
# user_info = response.xpath("//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]").extract()[0]
# item["unstructured"] = user_info_splitter(user_info)
yield item
# ## 4. Crawling
# +
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(MySpider)
process.start()
# -
# ## 5. Importing and reading data scraped
#
# If you've succeeded, you should see here a dataframe with 248 entries corresponding to the 248 reviews of the Center Parc you scraped. Congratulations!
dfjson = pd.read_json('tripadvisor3.json')
#Previewing DF
dfjson.head()
dfjson.info()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="html"
# <!--Script block to left align Markdown Tables-->
# <style>
# table {margin-left: 0 !important;}
# </style>
# -
# #### Notes
# Leave script block above in place to left justify the table.
# This problem can also be used as laboratory exercise in `matplotlib` lesson.
# Dependencies: `matplotlib` and `math`; could also be solved using `numpy` and/or `pandas`
# ## Problem X:
# A tracer gas is used to determine the air exchange rate in a room. By injecting a stable gas into the room, then monitoring the decay in concentration with time one can estimate the air exchange rate $I$ (air exchanges per unit time).
#
# The governing equation is given by
#
# \begin{equation}
# C(t)=C_0 e^{-I t}
# \label{eqn:air}
# \end{equation}
#
# A plot of $\ln(C(t))$ versus $t$ should produce a straight line whose negative slope gives the exchange rate $I$.
#
# Suppose a particular experiment produces the data below, estimate the exchange rate $I$ for these data.
#
# |Time (hours)|Concentration (ppm)|
# |---:|---:|
# |0.0|10.0|
# |0.5|8.0|
# |1.0|6.0|
# |1.5|5.0|
# |2.0|3.3|
# +
# Produce a labeled plot (title,xaxis,yaxis) of the experimental results in the table, e.g. plot C versus t
def plotAline(list1,list2,strx,stry,strtitle): # plot list1 on x, list2 on y, xlabel, ylabel, title
from matplotlib import pyplot as plt # import the plotting library from matplotlibplt.show()
plt.plot( list1, list2, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis
plt.title(strtitle)# add a title
plt.ylabel(stry)# add a label to the x and y-axes
plt.xlabel(strx)
plt.show() # display the plot
return #null return
list1 = [0.0,0.5,1.0,1.5,2.0]
list2 = [10.0,8.0,6.0,5.0,3.3]
plotAline(list1,list2,'Time (hours)','Concentration (ppm)','Room Air Tracer Gas Concentration')
# -
# Produce a labeled plot (title,xaxis,yaxis) of the experimental results in the table, e.g. plot ln(C) versus t
import math
loglist2 = [] # null list to append
for i in range(0,len(list2)):
loglist2.append(math.log(list2[i]))
plotAline(list1,loglist2,'Time (hours)','Log Concentration (ppm)','Room Air Tracer Gas Concentration')
# +
# Define a function conc(t,c0,I) for the governing equation, test for values of t =0, c0=10, I=0.4
def conc(time,conc0,rate):
import math
conc = conc0*math.exp(-1.0*rate*time)
return conc
conc(0,10,0.4)
# -
# Make a plot using the function, with c0=10, I=0.4 for the same time values as the experiment set
modlist2 =[]
for i in range(0,len(list1)):
modlist2.append(conc(list1[i],10,0.4))
plotAline(list1,modlist2,'Time (hours)','Concentration (ppm)','Room Air Tracer Gas Model')
# Make a plot using the log of the function, with c0=10, I=0.5 for the same time values as the experiment set
logmod2 =[]
for i in range(0,len(list1)):
logmod2.append(math.log(conc(list1[i],10,0.5)))
plotAline(list1,logmod2,'Time (hours)','Log Concentration (ppm)','Room Air Tracer Gas Model')
# +
# Make a plot of both the model (markers and line) and the observed values (markers only) on the same plot,
# using trial and error find the value of I that produces best visual fit
def plot2lines(list11,list21,list12,list22,strx,stry,strtitle): # plot list1 on x, list2 on y, xlabel, ylabel, title
from matplotlib import pyplot as plt # import the plotting library from matplotlibplt.show()
plt.plot( list11, list21, color ='green', marker ='o', linestyle ='none' , label = "Observed" ) # create a line chart, years on x-axis, gdp on y-axis
plt.plot( list12, list22, color ='red', marker ='o', linestyle ='solid' , label = "Model") # create a line chart, years on x-axis, gdp on y-axis
plt.legend()
plt.title(strtitle)# add a title
plt.ylabel(stry)# add a label to the x and y-axes
plt.xlabel(strx)
plt.show() # display the plot
return #null return
plot2lines(list1,loglist2,list1,logmod2,'Time (hours)','Log Concentration (ppm)','Room Air Tracer, Green == Observed, Red == Model')
# -
#
# What is your best visual fit for the room air tracer data ?
#
# C0 = 10 ppm
#
# I = 0.5 Room Volumes per hour
#
# gives best fit.
#
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <img src="../../images/banners/python-basics.png" width="600"/>
# # <img src="../../images/logos/python.png" width="23"/> Python Program Lexical Structure
#
# You have now covered Python variables, operators, and data types in depth, and you’ve seen quite a bit of example code. Up to now, the code has consisted of short individual statements, simply assigning objects to variables or displaying values.
#
# But you want to do more than just define data and display it! Let’s start arranging code into more complex groupings.
# ## <img src="../../images/logos/toc.png" width="20"/> Table of Contents
# * [Python Statements](#python_statements)
# * [Line Continuation](#line_continuation)
# * [Implicit Line Continuation](#implicit_line_continuation)
# * [Parentheses](#parentheses)
# * [Curly Braces](#curly_braces)
# * [Square Brackets](#square_brackets)
# * [Explicit Line Continuation](#explicit_line_continuation)
# * [Multiple Statements Per Line](#multiple_statements_per_line)
# * [Comments](#comments)
# * [Whitespace](#whitespace)
# * [Whitespace as Indentation](#whitespace_as_indentation)
# * [Conclusion](#conclusion)
#
# ---
# <a class="anchor" id="python_statements"></a>
# ## Python Statements
#
# Statements are the basic units of instruction that the Python interpreter parses and processes. In general, the interpreter executes statements sequentially, one after the next as it encounters them. (You will see in the next tutorial on conditional statements that it is possible to alter this behavior.)
# In a REPL session, statements are executed as they are typed in, until the interpreter is terminated. When you execute a script file, the interpreter reads statements from the file and executes them until end-of-file is encountered.
# Python programs are typically organized with one statement per line. In other words, each statement occupies a single line, with the end of the statement delimited by the newline character that marks the end of the line. The majority of the examples so far in this tutorial series have followed this pattern:
print('Hello, World!')
x = [1, 2, 3]
print(x[1:2])
# <a class="anchor" id="line_continuation"></a>
# ## Line Continuation
#
# Suppose a single statement in your Python code is especially long. For example, you may have an assignment statement with many terms:
person1_age = 42
person2_age = 16
person3_age = 71
someone_is_of_working_age = (person1_age >= 18 and person1_age <= 65) or (person2_age >= 18 and person2_age <= 65) or (person3_age >= 18 and person3_age <= 65)
someone_is_of_working_age
# Or perhaps you are defining a lengthy nested list:
a = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20], [21, 22, 23, 24, 25]]
a
# You’ll notice that these statements are too long to fit in your browser window, and the browser is forced to render the code blocks with horizontal scroll bars. You may find that irritating. (You have our apologies—these examples are presented that way to make the point. It won’t happen again.)
# It is equally frustrating when lengthy statements like these are contained in a script file. Most editors can be configured to wrap text, so that the ends of long lines are at least visible and don’t disappear out the right edge of the editor window. But the wrapping doesn’t necessarily occur in logical locations that enhance readability:
# <img src="./images/line-wrap.webp" alt="line-wrap" width=500 align="center" />
# Excessively long lines of code are generally considered poor practice. In fact, there is an official [Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008) put forth by the Python Software Foundation, and one of its stipulations is that the [maximum line length](https://www.python.org/dev/peps/pep-0008/#maximum-line-length) in Python code should be 79 characters.
# > **Note:** The **Style Guide for Python Code** is also referred to as **PEP 8**. PEP stands for Python Enhancement Proposal. PEPs are documents that contain details about features, standards, design issues, general guidelines, and information relating to Python. For more information, see the Python Software Foundation [Index of PEPs](https://www.python.org/dev/peps).
# As code becomes more complex, statements will on occasion unavoidably grow long. To maintain readability, you should break them up into parts across several lines. But you can’t just split a statement whenever and wherever you like. Unless told otherwise, the interpreter assumes that a newline character terminates a statement. If the statement isn’t syntactically correct at that point, an exception is raised:
someone_is_of_working_age = person1_age >= 18 and person1_age <= 65 or
# In Python code, a statement can be continued from one line to the next in two different ways: implicit and explicit line continuation.
# <a class="anchor" id="implicit_line_continuation"></a>
# ### Implicit Line Continuation
#
# This is the more straightforward technique for line continuation, and the one that is preferred according to PEP 8.
# Any statement containing opening parentheses (`'('`), brackets (`'['`), or curly braces (`'{'`) is presumed to be incomplete until all matching parentheses, brackets, and braces have been encountered. Until then, the statement can be implicitly continued across lines without raising an error.
# For example, the nested list definition from above can be made much more readable using implicit line continuation because of the open brackets:
a = [
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25]
]
a
# A long expression can also be continued across multiple lines by wrapping it in grouping parentheses. PEP 8 explicitly advocates using parentheses in this manner when appropriate:
someone_is_of_working_age = (
(person1_age >= 18 and person1_age <= 65)
or (person2_age >= 18 and person2_age <= 65)
or (person3_age >= 18 and person3_age <= 65)
)
someone_is_of_working_age
# If you need to continue a statement across multiple lines, it is usually possible to use implicit line continuation to do so. This is because parentheses, brackets, and curly braces appear so frequently in Python syntax:
# <a class="anchor" id="parentheses"></a>
# #### Parentheses
#
# - Expression grouping
x = (
1 + 2
+ 3 + 4
+ 5 + 6
)
x
# - Function call (functions will be covered later)
print(
'foo',
'bar',
'baz'
)
# - Method call (methods will be covered later)
'abc'.center(
9,
'-'
)
# - Tuple definition
t = (
'a', 'b',
'c', 'd'
)
# <a class="anchor" id="curly_braces"></a>
# #### Curly Braces
# - Dictionary definition
d = {
'a': 1,
'b': 2
}
# - Set definition
x1 = {
'foo',
'bar',
'baz'
}
# <a class="anchor" id="square_brackets"></a>
# #### Square Brackets
# - List definition
a = [
'foo', 'bar',
'baz', 'qux'
]
# - Indexing
a[
1
]
# - Slicing
a[
1:2
]
# - Dictionary key reference
d[
'b'
]
# > **Note:** Just because something is syntactically allowed, it doesn’t mean you should do it. Some of the examples above would not typically be recommended. Splitting indexing, slicing, or dictionary key reference across lines, in particular, would be unusual. But you can consider it if you can make a good argument that it enhances readability.
# Remember that if there are multiple parentheses, brackets, or curly braces, then implicit line continuation is in effect until they are all closed:
a = [
[
['foo', 'bar'],
[1, 2, 3]
],
{1, 3, 5},
{
'a': 1,
'b': 2
}
]
a
# Note how line continuation and judicious use of indentation can be used to clarify the nested structure of the list.
# <a class="anchor" id="explicit_line_continuation"></a>
# ### Explicit Line Continuation
# In cases where implicit line continuation is not readily available or practicable, there is another option. This is referred to as explicit line continuation or explicit line joining.
# Ordinarily, a newline character (which you get when you press _Enter_ on your keyboard) indicates the end of a line. If the statement is not complete by that point, Python will raise a SyntaxError exception:
s =
x = 1 + 2 +
# To indicate explicit line continuation, you can specify a backslash (`\`) character as the final character on the line. In that case, Python ignores the following newline, and the statement is effectively continued on the next line:
s = \
'Hello, World!'
s
x = 1 + 2 \
+ 3 + 4 \
+ 5 + 6
x
# **Note that the backslash character must be the last character on the line. Not even whitespace is allowed after it:**
# You can't see it, but there is a space character following the \ here:
s = \
# Again, PEP 8 recommends using explicit line continuation only when implicit line continuation is not feasible.
# <a class="anchor" id="multiple_statements_per_line"></a>
# ## Multiple Statements Per Line
#
# Multiple statements may occur on one line, if they are separated by a semicolon (`;`) character:
x = 1; y = 2; z = 3
print(x); print(y); print(z)
# Stylistically, this is generally frowned upon, and [PEP 8 expressly discourages it](https://www.python.org/dev/peps/pep-0008/?#other-recommendations). There might be situations where it improves readability, but it usually doesn’t. In fact, it often isn’t necessary. The following statements are functionally equivalent to the example above, but would be considered more typical Python code:
x, y, z = 1, 2, 3
print(x, y, z, sep='\n')
# > The term **Pythonic** refers to code that adheres to generally accepted common guidelines for readability and “best” use of idiomatic Python. When someone says code is not Pythonic, they are implying that it does not express the programmer’s intent as well as might otherwise be done in Python. Thus, the code is probably not as readable as it could be to someone who is fluent in Python.
# If you find your code has multiple statements on a line, there is probably a more Pythonic way to write it. But again, if you think it’s appropriate or enhances readability, you should feel free to do it.
# <a class="anchor" id="comments"></a>
# ## Comments
#
# In Python, the hash character (`#`) signifies a comment. The interpreter will ignore everything from the hash character through the end of that line:
a = ['foo', 'bar', 'baz'] # I am a comment.
a
# If the first non-whitespace character on the line is a hash, the entire line is effectively ignored:
# I am a comment.
# I am too.
# Naturally, a hash character inside a string literal is protected, and does not indicate a comment:
a = 'foobar # I am *not* a comment.'
a
# A comment is just ignored, so what purpose does it serve? Comments give you a way to attach explanatory detail to your code:
# Calculate and display the area of a circle.
pi = 3.1415926536
r = 12.35
area = pi * (r ** 2)
print('The area of a circle with radius', r, 'is', area)
# Up to now, your Python coding has consisted mostly of short, isolated REPL sessions. In that setting, the need for comments is pretty minimal. Eventually, you will develop larger applications contained across multiple script files, and comments will become increasingly important.
# Good commenting makes the intent of your code clear at a glance when someone else reads it, or even when you yourself read it. Ideally, you should strive to write code that is as clear, concise, and self-explanatory as possible. But there will be times that you will make design or implementation decisions that are not readily obvious from the code itself. That is where commenting comes in. Good code explains how; good comments explain why.
# Comments can be included within implicit line continuation:
x = (1 + 2 # I am a comment.
+ 3 + 4 # Me too.
+ 5 + 6)
x
a = [
'foo', 'bar', # Me three.
'baz', 'qux'
]
a
# But recall that explicit line continuation requires the backslash character to be the last character on the line. Thus, a comment can’t follow afterward:
x = 1 + 2 + \ # I wish to be comment, but I'm not.
# What if you want to add a comment that is several lines long? Many programming languages provide a syntax for multiline comments (also called block comments). For example, in C and Java, comments are delimited by the tokens `/*` and `*/`. The text contained within those delimiters can span multiple lines:
#
# ```c
# /*
# [This is not Python!]
#
# Initialize the value for radius of circle.
#
# Then calculate the area of the circle
# and display the result to the console.
# */
# ```
# Python doesn’t explicitly provide anything analogous to this for creating multiline block comments. To create a block comment, you would usually just begin each line with a hash character:
# +
# Initialize value for radius of circle.
#
# Then calculate the area of the circle
# and display the result to the console.
pi = 3.1415926536
r = 12.35
area = pi * (r ** 2)
print('The area of a circle with radius', r, 'is', area)
# -
# However, for code in a script file, there is technically an alternative.
# You saw above that when the interpreter parses code in a script file, it ignores a string literal (or any literal, for that matter) if it appears as a statement by itself. More precisely, a literal isn’t ignored entirely: the interpreter sees it and parses it, but doesn’t do anything with it. Thus, a string literal on a line by itself can serve as a comment. Since a triple-quoted string can span multiple lines, it can effectively function as a multiline comment.
# Consider this script file (name it `foo.py` for example):
# +
"""Initialize value for radius of circle.
Then calculate the area of the circle
and display the result to the console.
"""
pi = 3.1415926536
r = 12.35
area = pi * (r ** 2)
print('The area of a circle with radius', r, 'is', area)
# -
# When this script is run, the output appears as follows:
#
# ```bash
# python foo.py
# The area of a circle with radius 12.35 is 479.163565508706
# ```
# The triple-quoted string is not displayed and doesn’t change the way the script executes in any way. It effectively constitutes a multiline block comment.
# Although this works (and was once put forth as a Python programming tip by Guido himself), PEP 8 actually recommends against it. The reason for this appears to be because of a special Python construct called the **docstring**. A docstring is a special comment at the beginning of a user-defined function that documents the function’s behavior. Docstrings are typically specified as triple-quoted string comments, so PEP 8 recommends that other [block comments](https://www.python.org/dev/peps/pep-0008/?#block-comments) in Python code be designated the usual way, with a hash character at the start of each line.
# However, as you are developing code, if you want a quick and dirty way to comment out a section of code temporarily for experimentation, you may find it convenient to wrap the code in triple quotes.
# > You will learn more about docstrings in the upcoming tutorial on functions in Python.
# <a class="anchor" id="whitespace"></a>
# ## Whitespace
#
# When parsing code, the Python interpreter breaks the input up into tokens. Informally, tokens are just the language elements that you have seen so far: identifiers, keywords, literals, and operators.
# Typically, what separates tokens from one another is whitespace: blank characters that provide empty space to improve readability. The most common whitespace characters are the following:
#
# |Character| ASCII Code |Literal Expression|
# |:--|:--|:--|
# |space| `32` `(0x20)` |`' '`|
# |tab| `9` `(0x9)` |`'\t'`|
# |newline| `10` `(0xa)` |`'\n'`|
# There are other somewhat outdated ASCII whitespace characters such as line feed and form feed, as well as some very esoteric Unicode characters that provide whitespace. But for present purposes, whitespace usually means a space, tab, or newline.
x = 3
x=2
# Whitespace is mostly ignored, and mostly not required, by the Python interpreter. When it is clear where one token ends and the next one starts, whitespace can be omitted. This is usually the case when special non-alphanumeric characters are involved:
x=3;y=12
x+y
(x==3)and(x<y)
a=['foo','bar','baz']
a
d={'foo':3,'bar':4}
d
x,y,z='foo',14,21.1
(x,y,z)
z='foo'"bar"'baz'#Comment
z
# Every one of the statements above has no whitespace at all, and the interpreter handles them all fine. That’s not to say that you should write them that way though. Judicious use of whitespace almost always enhances readability, and your code should typically include some. Compare the following code fragments:
value1=100
value2=200
v=(value1>=0)and(value1<value2)
value1 = 100
value2 = 200
v = (value1 >= 0) and (value1 < value2)
# Most people would likely find that the added whitespace in the second example makes it easier to read. On the other hand, you could probably find a few who would prefer the first example. To some extent, it is a matter of personal preference. But there are standards for [whitespace in expressions and statements](https://www.python.org/dev/peps/pep-0008/?#whitespace-in-expressions-and-statements) put forth in PEP 8, and you should strongly consider adhering to them as much as possible.
x = (1,)  # PEP 8 style: no whitespace immediately before a closing parenthesis
# > Note: You can juxtapose string literals, with or without whitespace:
# >
# ```python
# >>> s = "foo"'bar''''baz'''
# >>> s
# 'foobarbaz'
#
# >>> s = 'foo' "bar" '''baz'''
# >>> s
# 'foobarbaz'
# ```
# > The effect is concatenation, exactly as though you had used the + operator.
# In Python, whitespace is generally only required when it is necessary to distinguish one token from the next. This is most common when one or both tokens are an identifier or keyword.
# For example, in the following case, whitespace is needed to separate the identifier `s` from the keyword `in`:
s = 'bar'
s in ['foo', 'bar', 'baz']
sin ['foo', 'bar', 'baz']
# Here is an example where whitespace is required to separate the keyword `is` from the numeric constant `20`:
y = 20
y is 20
y is20
# In this example, whitespace is needed between two keywords:
'qux' not in ['foo', 'bar', 'baz']
'qux' notin ['foo', 'bar', 'baz']
# Running identifiers or keywords together fools the interpreter into thinking you are referring to a different token than you intended: `sin`, `is20`, and `notin`, in the examples above.
# All this tends to be rather academic because it isn’t something you’ll likely need to think about much. Instances where whitespace is necessary tend to be intuitive, and you’ll probably just do it by second nature.
# You should use whitespace where it isn’t strictly necessary as well to enhance readability. Ideally, you should follow the guidelines in PEP 8.
# > **Deep Dive: Fortran and Whitespace**
# >
# >The earliest versions of Fortran, one of the first programming languages created, were designed so that all whitespace was completely ignored. Whitespace characters could be optionally included or omitted virtually anywhere—between identifiers and reserved words, and even in the middle of identifiers and reserved words.
# >
# >For example, if your Fortran code contained a variable named total, any of the following would be a valid statement to assign it the value 50:
# >
# ```fortran
# total = 50
# to tal = 50
# t o t a l=5 0
# ```
# >This was meant as a convenience, but in retrospect it is widely regarded as overkill. It often resulted in code that was difficult to read. Worse yet, it potentially led to code that did not execute correctly.
# >
# >Consider this tale from NASA in the 1960s. A Mission Control Center orbit computation program written in Fortran was supposed to contain the following line of code:
# >
# ```fortran
# DO 10 I = 1,100
# ```
# >In the Fortran dialect used by NASA at that time, the code shown introduces a loop, a construct that executes a body of code repeatedly. (You will learn about loops in Python in two future tutorials on definite and indefinite iteration).
# >
# >Unfortunately, this line of code ended up in the program instead:
# >
# ```fortran
# DO 10 I = 1.100
# ```
# > If you have a difficult time seeing the difference, don’t feel too bad. It took the NASA programmer a couple weeks to notice that there is a period between `1` and `100` instead of a comma. Because the Fortran compiler ignored whitespace, `DO 10 I` was taken to be a variable name, and the statement `DO 10 I = 1.100` resulted in assigning `1.100` to a variable called `DO10I` instead of introducing a loop.
# >
# > Some versions of the story claim that a Mercury rocket was lost because of this error, but that is evidently a myth. It did apparently cause inaccurate data for some time, though, before the programmer spotted the error.
# >
# >Virtually all modern programming languages have chosen not to go this far with ignoring whitespace.
# <a class="anchor" id="whitespace_as_indentation"></a>
# ## Whitespace as Indentation
#
# There is one more important situation in which whitespace is significant in Python code. Indentation—whitespace that appears to the left of the first token on a line—has very special meaning.
# In most interpreted languages, leading whitespace before statements is ignored. For example, consider this Windows Command Prompt session:
#
# ```bash
# $ echo foo
# foo
# $ echo foo
# foo
# ```
# > **Note:** In a Command Prompt window, the echo command displays its arguments to the console, like the `print()` function in Python. Similar behavior can be observed from a terminal window in macOS or Linux.
# In the second statement, four space characters are inserted to the left of the `echo` command. But the result is the same. The interpreter ignores the leading whitespace and executes the same command, `echo foo`, just as it does when the leading whitespace is absent.
# Now try more or less the same thing with the Python interpreter:
#
# ```python
# >>> print('foo')
# foo
# >>> print('foo')
#
# SyntaxError: unexpected indent
# ```
# > **Note:** Running the above code in a Jupyter notebook does not raise an error, because the notebook strips leading whitespace from a single-line command before executing it.
# Say what? Unexpected indent? The leading whitespace before the second `print()` statement causes a `SyntaxError` exception!
# In Python, indentation is not ignored. Leading whitespace is used to compute a line’s indentation level, which in turn is used to determine grouping of statements. As yet, you have not needed to group statements, but that will change in the next tutorial with the introduction of control structures.
# Until then, be aware that leading whitespace matters.
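# As a small preview (the `if` statement is introduced in the next tutorial), here is a sketch of how indentation groups statements into a block:

```python
x = 5
if x > 3:
    # these two indented lines form one block, executed only when x > 3
    print('x is big')
    print('still inside the if block')
print('back at indentation level zero, always executed')
```

# Outdenting back to the original level ends the block; the final `print()` runs regardless of the condition.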
# <a class="anchor" id="conclusion"></a>
# ## <img src="../../images/logos/checkmark.png" width="20"/> Conclusion
#
# This tutorial introduced you to Python program lexical structure. You learned what constitutes a valid Python **statement** and how to use **implicit** and **explicit line continuation** to write a statement that spans multiple lines. You also learned about commenting Python code, and about use of whitespace to enhance readability.
# Next, you will learn how to group statements into more complex decision-making constructs using **conditional statements**.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
text = "Hello World!"
chars = list(set(text))
indexer = {char: index for (index, char) in enumerate(chars)}
print(indexer)
encoded = []
for c in text:
encoded.append(indexer[c])
encoded = np.array(encoded).reshape(2,-1)
encoded
def index2onehot(batch):
    # Flatten the (batch, seq_len) matrix of character indices, scatter a 1
    # into each row of a zero matrix at that row's index position, then
    # restore the batch dimension: result has shape (batch, seq_len, vocab_size)
    batch_flatten = batch.flatten()
    onehot_flat = np.zeros((batch.shape[0] * batch.shape[1], len(indexer)))
    onehot_flat[range(len(batch_flatten)), batch_flatten] = 1
    onehot = onehot_flat.reshape((batch.shape[0], batch.shape[1], -1))
    return onehot
one_hot = index2onehot(encoded)
one_hot
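# As a sanity check, the encoding can be inverted with `argmax` to recover the original text. This self-contained sketch rebuilds the small example above and uses `np.eye` row-indexing as an equivalent way to construct the one-hot tensor:

```python
import numpy as np

text = "Hello World!"
chars = list(set(text))
indexer = {char: index for index, char in enumerate(chars)}
encoded = np.array([indexer[c] for c in text]).reshape(2, -1)

# Each integer index selects the matching row of the identity matrix,
# so np.eye(vocab_size)[encoded] is the one-hot tensor in one step
onehot = np.eye(len(chars))[encoded]        # shape (2, 6, len(chars))

# Invert: argmax recovers the indices, the reversed dict recovers characters
index2char = {index: char for char, index in indexer.items()}
recovered = ''.join(index2char[i] for i in onehot.argmax(axis=2).flatten())
print(recovered)  # -> Hello World!
```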
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from glob import glob
from collections import Counter
import pandas as pd
from IPython.display import display
import re
import os
# +
DATA_DRIVE_PATH = "G:/AzureBackup/"
NER_FILES = {
"Finin": {
"train": "/datadrive/Datasets/lowlands-data/LREC2014/twitter_ner/data/finin.train.tsv",
"test": "/datadrive/Datasets/lowlands-data/LREC2014/twitter_ner/data/finin.test.tsv.utf8",
},
"Hege": {
"test": "/datadrive/Datasets/lowlands-data/LREC2014/twitter_ner/data/hege.test.tsv",
},
"Ritter": {
"lowlands-test": "/datadrive/Datasets/lowlands-data/LREC2014/twitter_ner/data/ritter.test.tsv",
"train": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/ner.train.txt",
"dev": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/ner.dev.txt",
"test": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/ner.test.txt",
},
"YODIE": {
"train": "/datadrive/Datasets/Twitter/YODIE/data/training.conll",
"test": "/datadrive/Datasets/Twitter/YODIE/data/testing.conll"
},
"WNUT_2016": {
"train": "/datadrive/Codes/multi-task-nlp-keras/data/WNUT_NER/train.tsv",
"test": "/datadrive/Codes/multi-task-nlp-keras/data/WNUT_NER/test.tsv",
"dev": "/datadrive/Codes/multi-task-nlp-keras/data/WNUT_NER/dev.tsv",
},
"WNUT_2017": {
"train": "/datadrive/Codes/multi-task-nlp-keras/data/WNUT_2017/wnut17train.conll",
"dev": "/datadrive/Codes/multi-task-nlp-keras/data/WNUT_2017/emerging.dev.conll",
"test": "/datadrive/Codes/multi-task-nlp-keras/data/WNUT_2017/emerging.test.annotated",
},
"MSM_2013": {
"train": "/datadrive/Datasets/Twitter/MSM2013/data/msm2013-ce_challenge_gs/TweetsTrainingSetCH.tsv.conll",
"test": "/datadrive/Datasets/Twitter/MSM2013/data/msm2013-ce_challenge_gs/goldStandard.tsv.conll",
},
"NEEL2016": {
"train": "/datadrive/Datasets/Twitter/microposts-NEEL/processed/2016/microposts2016-neel-training_neel.gs.conll",
"dev": "/datadrive/Datasets/Twitter/microposts-NEEL/processed/2016/microposts2016-neel-dev_neel.gs.conll",
"test": "/datadrive/Datasets/Twitter/microposts-NEEL/processed/2016/microposts2016-neel-test_neel.gs.conll",
},
"BROAD": {
"a": "/datadrive/Datasets/Twitter/broad_twitter_corpus/a.conll",
"b": "/datadrive/Datasets/Twitter/broad_twitter_corpus/b.conll",
"e": "/datadrive/Datasets/Twitter/broad_twitter_corpus/e.conll",
"f": "/datadrive/Datasets/Twitter/broad_twitter_corpus/f.conll",
"g": "/datadrive/Datasets/Twitter/broad_twitter_corpus/g.conll",
"h": "/datadrive/Datasets/Twitter/broad_twitter_corpus/h.conll",
"split-train": "/datadrive/Datasets/Twitter/broad_twitter_corpus/data_splits/train.conll",
"split-dev": "/datadrive/Datasets/Twitter/broad_twitter_corpus/data_splits/dev.conll",
"split-test": "/datadrive/Datasets/Twitter/broad_twitter_corpus/data_splits/test.conll",
},
"MultiModal": {
"train": "/datadrive/Datasets/Twitter/NERmultimodal/data/train.conll",
"dev": "/datadrive/Datasets/Twitter/NERmultimodal/data/dev.conll",
"test": "/datadrive/Datasets/Twitter/NERmultimodal/data/test.conll",
}
}
POS_FILES = {
"Owoputi_2013": {
"train": "/datadrive/Datasets/Twitter/TweeboParser/ark-tweet-nlp-0.3.2/data/twpos-data-v0.3/oct27.splits/oct27.train",
"traindev": "/datadrive/Datasets/Twitter/TweeboParser/ark-tweet-nlp-0.3.2/data/twpos-data-v0.3/oct27.splits/oct27.traindev",
"dev": "/datadrive/Datasets/Twitter/TweeboParser/ark-tweet-nlp-0.3.2/data/twpos-data-v0.3/oct27.splits/oct27.dev",
"test": "/datadrive/Datasets/Twitter/TweeboParser/ark-tweet-nlp-0.3.2/data/twpos-data-v0.3/oct27.splits/oct27.test",
"daily547": "/datadrive/Datasets/Twitter/TweeboParser/ark-tweet-nlp-0.3.2/data/twpos-data-v0.3/daily547.conll"
},
#"LexNorm_Li_2015": {
# "dev": "/datadrive/Datasets/Twitter/wnut-2017-pos-norm/data/1.dev.gold.noUserWWW",
# "test": "/datadrive/Datasets/Twitter/wnut-2017-pos-norm/data/test_L.gold",
# "test-Owoputi": "/datadrive/Datasets/Twitter/wnut-2017-pos-norm/data/test_O.gold",
# "train": "/datadrive/Datasets/Twitter/wnut-2017-pos-norm/data/train_pos.noUserWWW"
#},
## Next 3 use Universal POS mappings:
"Foster_2011": {
"test": "/datadrive/Datasets/lowlands-data/ACL2014/crowdsourced_POS/data/foster-twitter.test",
"twitie-dev": "/datadrive/Datasets/Twitter/twitter-pos-bootstrap/data/foster_dev.conll",
"twitie-test": "/datadrive/Datasets/Twitter/twitter-pos-bootstrap/data/foster_eval.conll"
},
"Ritter_2011": {
"lowlands-test": "/datadrive/Datasets/lowlands-data/ACL2014/crowdsourced_POS/data/ritter.test",
"train": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/pos.cleaned.train.txt",
"dev": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/pos.cleaned.dev.txt",
"test": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/pos.cleaned.test.txt",
# Save as above
#"twitie-train": "/datadrive/Datasets/Twitter/twitter-pos-bootstrap/data/ritter_train.conll",
#"twitie-dev": "/datadrive/Datasets/Twitter/twitter-pos-bootstrap/data/ritter_dev.conll",
#"twitie-test": "/datadrive/Datasets/Twitter/twitter-pos-bootstrap/data/ritter_eval.conll"
},
"lowlands": {
"test": "/datadrive/Datasets/lowlands-data/ACL2014/crowdsourced_POS/data/lowlands.test"
},
"Gimple_2012": {
"test": "/datadrive/Datasets/lowlands-data/ACL2014/crowdsourced_POS/data/gimpel.GOLD"
},
"Bootstrap_2013": {
# Full PTB tagset, plus four custom tags (USR, HT, RT, URL)
# Skipping because not relevant
#"train": "/datadrive/Datasets/Twitter/twitter-pos-bootstrap/data/bootstrap.conll"
},
"Tweetbankv2": {
"dev": "/datadrive/Datasets/Twitter/Tweebank/pos/en-ud-tweet-dev.txt",
"train": "/datadrive/Datasets/Twitter/Tweebank/pos/en-ud-tweet-train.txt",
"test": "/datadrive/Datasets/Twitter/Tweebank/pos/en-ud-tweet-test.txt",
}
}
CHUNKING_FILES = {
"Ritter": {
"train": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/chunk.train.conll",
"dev": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/chunk.dev.conll",
"test": "/datadrive/Datasets/Twitter/RitterNER/twitter_processed/chunk.test.conll",
}
}
SENTIMENT_FILES = {
"SMILE": {
"train": "/datadrive/Datasets/Twitter/SMILE/smile-annotations-final.csv",
},
}
SUPERSENSE_TAGGING_FILES = {
"Ritter": {
"train": "/datadrive/Datasets/Twitter/supersense-data-twitter/ritter-train.tsv",
"dev": "/datadrive/Datasets/Twitter/supersense-data-twitter/ritter-dev.tsv",
"test": "/datadrive/Datasets/Twitter/supersense-data-twitter/ritter-eval.tsv"
},
"Johannsen_2014": {
"test": "/datadrive/Datasets/Twitter/supersense-data-twitter/in-house-eval.tsv"
}
}
FRAME_SEMANTICS_FILE = {
"Sogaard_2015": {
"gavin": "/datadrive/Datasets/lowlands-data/AAAI15/conll/all.gavin",
"maria": "/datadrive/Datasets/lowlands-data/AAAI15/conll/all.maria",
"sara": "/datadrive/Datasets/lowlands-data/AAAI15/conll/all.sara"
}
}
DIMSUM_FILES = {
# Following data is already part of dimsum
#"Lowlands": {
# "test": "/datadrive/Datasets/Twitter/dimsum-data/conversion/original/lowlands.UPOS2.tsv"
#},
#"Ritter": {
# "test": "/datadrive/Datasets/Twitter/dimsum-data/conversion/original/ritter.UPOS2.tsv"
#},
#"Streusle": {
# "test": "/datadrive/Datasets/Twitter/dimsum-data/conversion/original/streusle.upos.tags"
#},
"DiMSUM_2016": {
# Made in combination with ritter, streusle, lowlands
# 55579 ewtb
# 3062 lowlands
# 15185 ritter
"train": "/datadrive/Datasets/Twitter/dimsum-data/conll/dimsum16.train",
# 3516 ted
# 6357 trustpilot
# 6627 tweebank
"test": "/datadrive/Datasets/Twitter/dimsum-data/conll/dimsum16.test"
}
}
PARSING_FILES = {
"Kong_2014": {
"train": "/datadrive/Datasets/Twitter/TweeboParser/Tweebank/Train_Test_Splited/train",
"test": "/datadrive/Datasets/Twitter/TweeboParser/Tweebank/Train_Test_Splited/test",
}
}
WEB_TREEBANK = {
"DenoisedWebTreebank": {
"dev": "/datadrive/Datasets/Twitter/DenoisedWebTreebank/data/DenoisedWebTreebank/dev.conll",
"test": "/datadrive/Datasets/Twitter/DenoisedWebTreebank/data/DenoisedWebTreebank/test.conll"
}
}
NORMALIZED = {
"DenoisedWebTreebank": {
"dev": "/datadrive/Datasets/Twitter/DenoisedWebTreebank/data/DenoisedWebTreebank/dev.normalized",
"test": "/datadrive/Datasets/Twitter/DenoisedWebTreebank/data/DenoisedWebTreebank/test.normalized"
}
}
PARAPHRASE_SEMANTIC_FILES = {
"SemEval-2015 Task 1": {
# Topic_Id | Topic_Name | Sent_1 | Sent_2 | Label | Sent_1_tag | Sent_2_tag |
# Map labels as follows
# paraphrases: (3, 2) (4, 1) (5, 0)
# non-paraphrases: (1, 4) (0, 5)
# debatable: (2, 3) which you may discard if training binary classifier
"train": "/datadrive/Datasets/Twitter/SemEval-PIT2015-github/data/train.data",
"dev": "/datadrive/Datasets/Twitter/SemEval-PIT2015-github/data/dev.data",
"test": "/datadrive/Datasets/Twitter/SemEval-PIT2015-github/data/test.data"
}
}
# -
SEQ_SPLITTER = re.compile(r'\n\s*\n', flags=re.M)
SEQ_SPLITTER.split("""shubh POS
mish pos
she pos
shubh POS
mish pos
she pos
shubh POS
mish pos
she pos
""")
def read_conll_data(filename, ncols=2):
with open(filename, encoding='utf-8') as fp:
for seq in SEQ_SPLITTER.split(fp.read()):
seq_ = []
for line in seq.splitlines():
line = line.rstrip()
if not line:
continue
values = line.split("\t")
if len(values) < ncols:
# Skip invalid lines
continue
seq_.append(values)
            if not seq_:
                # Skip empty sequences (e.g. trailing blank lines)
                continue
yield seq_
# +
def get_ner_label(label, idx=1):
label = label.strip().upper()
if label == "O":
return label
if idx is None:
return label
return label.split('-', 1)[idx].strip()
def get_simple_label(label):
if label:
return label
return "O"
def get_file_stats(
filename,
label_processor=None,
label_col_id=-1,
skip_other=True,
ncols=2
):
if label_processor is None:
label_processor = lambda x: x
total_seq = 0
total_tokens = 0
token_types = Counter()
for i, seq in enumerate(read_conll_data(filename, ncols=ncols)):
total_seq += 1
total_tokens += len(seq)
try:
for item in seq:
label = label_processor(item[label_col_id])
if skip_other and label == "O":
continue
token_types.update([
label
])
except IndexError:
print(i, seq)
raise
return total_seq, total_tokens, token_types
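# A quick illustration of how these label helpers behave on typical BIO-style tags. The tag strings below are made up for illustration, and the definitions are repeated from the cell above so this example runs on its own:

```python
def get_ner_label(label, idx=1):
    # (repeated from the cell above so this example is self-contained)
    label = label.strip().upper()
    if label == "O":
        return label
    if idx is None:
        return label
    return label.split('-', 1)[idx].strip()

def get_simple_label(label):
    if label:
        return label
    return "O"

print(get_ner_label('B-person'))         # entity type only: PERSON
print(get_ner_label('I-geo-loc'))        # splits on the first '-' only: GEO-LOC
print(get_ner_label('B-person', idx=0))  # BIO position only: B
print(get_ner_label('O'))                # the outside tag passes through: O
print(get_simple_label(''))              # empty labels are mapped to O
```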
# +
def make_conll_dataset_tables(files, **kwargs):
all_stats = []
for datakey in files:
for datatype, filepath in files[datakey].items():
print("{}-{}: {}".format(datakey, datatype, filepath))
# replace datadrive path with current data drive
filepath = filepath.replace("/datadrive/", "")
filepath = os.path.join(DATA_DRIVE_PATH, filepath)
total_seq, total_tokens, token_types = get_file_stats(filepath, **kwargs)
print(total_seq, total_tokens, token_types)
all_stats.append((datakey, datatype, total_seq, total_tokens, token_types))
return all_stats
def generate_tables(files, display_df=False, show_labels=True, **kwargs):
all_stats = make_conll_dataset_tables(files, **kwargs)
df = pd.DataFrame(all_stats, columns=[
"datakey", "datatype", "total_seq", "total_tokens", "labels"])
if show_labels:
df = df.assign(
all_labels=df["labels"].apply(lambda x: (", ".join(sorted(x.keys()))).upper())
)
df = df.assign(
num_labels=df["labels"].apply(len),
).sort_values(["datakey", "datatype"])
if display_df:
display(df)
    # Use None (not the deprecated -1) for an unlimited column width, and the
    # columns= keyword instead of a positional axis for drop()
    with pd.option_context("display.max_colwidth", None):
        print(df.drop(columns="labels").set_index(["datakey", "datatype"]).to_latex())
    display(df.drop(columns="labels").set_index(["datakey", "datatype"]))
# -
generate_tables(NER_FILES, display_df=True, label_processor=lambda x: get_ner_label(x, idx=1))
# ## POS datasets
generate_tables(POS_FILES, display_df=False)
# ## Chunking
generate_tables(CHUNKING_FILES, label_processor=lambda x: get_ner_label(x, idx=1))
# ## Supersense tagging
generate_tables(SUPERSENSE_TAGGING_FILES, display_df=False)
generate_tables(SUPERSENSE_TAGGING_FILES, label_processor=lambda x: get_ner_label(x, idx=1))
# ## DimSUM
#
# https://dimsum16.github.io/
generate_tables(DIMSUM_FILES, label_col_id=7, label_processor=get_simple_label, skip_other=True)
# ## Frame Semantics
#
#
#
# ```
# @paper{AAAI159349,
# author = {Anders Søgaard and Barbara Plank and Hector Alonso},
# title = {Using Frame Semantics for Knowledge Extraction from Twitter},
# conference = {AAAI Conference on Artificial Intelligence},
# year = {2015},
# keywords = {frame semantics; knowledge bases; twitter},
# abstract = {Knowledge bases have the potential to advance artificial intelligence, but often suffer from recall problems, i.e., lack of knowledge of new entities and relations. On the contrary, social media such as Twitter provide abundance of data, in a timely manner: information spreads at an incredible pace and is posted long before it makes it into more commonly used resources for knowledge extraction. In this paper we address the question whether we can exploit social media to extract new facts, which may at first seem like finding needles in haystacks. We collect tweets about 60 entities in Freebase and compare four methods to extract binary relation candidates, based on syntactic and semantic parsing and simple mechanism for factuality scoring. The extracted facts are manually evaluated in terms of their correctness and relevance for search. We show that moving from bottom-up syntactic or semantic dependency parsing formalisms to top-down frame-semantic processing improves the robustness of knowledge extraction, producing more intelligible fact candidates of better quality. In order to evaluate the quality of frame semantic parsing on Twitter intrinsically, we make a multiply frame-annotated dataset of tweets publicly available.},
#
# url = {https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9349}
# }
#
# ```
generate_tables(FRAME_SEMANTICS_FILE, show_labels=False, label_col_id=3, label_processor=get_simple_label, skip_other=True)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cost Function
#
# Let's first define a few variables that we will need to use:
#
# - L = total number of layers in the network
# - $s_l$ = number of units (not counting bias unit) in layer l
# - K = number of output units/classes
#
# Recall that in neural networks, we may have many output nodes. We denote $h_\Theta(x)_k$ as being a hypothesis that results in the $k^{th}$ output. Our cost function for neural networks is going to be a generalization of the one we used for logistic regression. Recall that the cost function for regularized logistic regression was:
#
# $$J(\theta) = - \frac{1}{m} \sum_{i=1}^m [ y^{(i)}\ \log (h_\theta (x^{(i)})) + (1 - y^{(i)})\ \log (1 - h_\theta(x^{(i)}))] + \frac{\lambda}{2m}\sum_{j=1}^n \theta_j^2$$
#
# For neural networks, it is going to be slightly more complicated:
#
# $$\begin{gather*} J(\Theta) = - \frac{1}{m} \sum_{i=1}^m \sum_{k=1}^K \left[y^{(i)}_k \log ((h_\Theta (x^{(i)}))_k) + (1 - y^{(i)}_k)\log (1 - (h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} ( \Theta_{j,i}^{(l)})^2\end{gather*}$$
#
# We have added a few nested summations to account for our multiple output nodes. In the first part of the equation, before the square brackets, we have an additional nested summation that loops through the number of output nodes.
#
# In the regularization part, after the square brackets, we must account for multiple theta matrices. The number of columns in our current theta matrix is equal to the number of nodes in our current layer (including the bias unit). The number of rows in our current theta matrix is equal to the number of nodes in the next layer (excluding the bias unit). As before with logistic regression, we square every term.
#
# ### Note:
# - the double sum simply adds up the logistic regression costs calculated for each cell in the output layer
# - the triple sum simply adds up the squares of all the individual Θs in the entire network.
# - the i in the triple sum does **not** refer to training example i
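# The double and triple sums above can be sketched directly in NumPy. Below is a minimal, self-contained illustration on a tiny made-up network (the sizes, data, and variable names are invented for the example, not taken from the exercise):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, n_in, n_hid, K = 5, 3, 4, 2          # tiny made-up layer sizes
X = rng.standard_normal((m, n_in))
Y = np.eye(K)[rng.integers(0, K, m)]    # one-hot labels, shape (m, K)
Theta1 = rng.standard_normal((n_hid, n_in + 1)) * 0.1
Theta2 = rng.standard_normal((K, n_hid + 1)) * 0.1
lam = 1.0

# forward pass to get h_Theta(x) for every example
a1 = np.hstack([np.ones((m, 1)), X])                     # prepend bias unit
a2 = np.hstack([np.ones((m, 1)), sigmoid(a1 @ Theta1.T)])
h = sigmoid(a2 @ Theta2.T)                               # (m, K)

# double sum: logistic cost over every example and output unit
J = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m
# triple sum: squares of all Thetas, bias columns excluded
J += lam / (2 * m) * (np.sum(Theta1[:, 1:] ** 2) + np.sum(Theta2[:, 1:] ** 2))
```

Note that the bias columns (`[:, 0]`) are left out of the regularization term, matching the index ranges in the formula above.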
# # Backpropagation Algorithm
#
# "Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and linear regression. Our goal is to compute:
#
# $\min_\Theta J(\Theta)$
#
# That is, we want to minimize our cost function $J$ using an optimal set of parameters in theta. In this section we'll look at the equations we use to compute the partial derivative of $J(\Theta)$:
#
#
# $\dfrac{\partial}{\partial \Theta_{i,j}^{(l)}}J(\Theta)$
#
# To do so, we use the following algorithm:
#
# 
#
# ### Back propagation Algorithm
#
# Given training set $\lbrace (x^{(1)}, y^{(1)}) \cdots (x^{(m)}, y^{(m)})\rbrace$
#
# - Set $ \Delta^{(l)}_{i,j}:= 0$ for all $(l,i,j)$, (hence you end up having a matrix full of zeros)
#
# For training example $t =1$ to $m$:
#
# 1. Set $a^{(1)}:=x^{(t)}$
#
# 2. Perform forward propagation to compute $a^{(l)}$ for $l=2,3,…,L$
# 
# 3. Using $y^{(t)}$, compute $\delta^{(L)} = a^{(L)} - y^{(t)}$
#
# Where $L$ is our total number of layers and $a^{(L)}$ is the vector of outputs of the activation units for the last layer. So our "error values" for the last layer are simply the differences of our actual results in the last layer and the correct outputs in $y$. To get the delta values of the layers before the last layer, we can use an equation that steps us back from right to left:
#
# 4. Compute $\delta^{(L-1)}, \delta^{(L-2)},\dots,\delta^{(2)}$ using $\delta^{(l)} = ((\Theta^{(l)})^T \delta^{(l+1)})\ .*\ a^{(l)}\ .*\ (1 - a^{(l)})$
#
# The delta values of layer l are calculated by multiplying the delta values in the next layer with the theta matrix of layer l. We then element-wise multiply that with a function called g', or g-prime, which is the derivative of the activation function g evaluated with the input values given by $z^{(l)}$.
#
# The g-prime derivative terms can also be written out as:
#
# $g'(z^{(l)}) = a^{(l)}\ .*\ (1 - a^{(l)})$ (a proof is given below)
#
# 5. $\Delta^{(l)}_{i,j} := \Delta^{(l)}_{i,j} + a_j^{(l)} \delta_i^{(l+1)}$ or with vectorization, $\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)}(a^{(l)})^T$
#
# Hence we update our new Δ matrix.
# - $D^{(l)}_{i,j} := \dfrac{1}{m}\left(\Delta^{(l)}_{i,j} + \lambda\Theta^{(l)}_{i,j}\right)$, if $j \neq 0$
# - $D^{(l)}_{i,j} := \dfrac{1}{m}\Delta^{(l)}_{i,j}$, if $j = 0$
#
# The capital-delta matrix $\Delta$ is used as an "accumulator" to add up our values as we go along; dividing by $m$ (and adding the regularization term for $j \neq 0$) then gives the partial derivatives $\frac \partial {\partial \Theta_{ij}^{(l)}} J(\Theta) = D^{(l)}_{i,j}$
# ### Proof that $g'(z) = g(z)(1 - g(z))$
# where $g(z) = \frac{1}{1+e^{-z}}$
# Let $g(z) = f(u) = \frac{1}{u}$ where $u = 1+e^{-z}$
# From the chain rule
#
# $$\frac{dy}{dx} = \frac{dy}{du}\cdot \frac{du}{dx}$$
# Therefore
#
# $$\frac{dg(z)}{dz} = \frac{dg(z)}{df(u)}\cdot\frac{df(u)}{du}\cdot\frac{du}{dz}$$
# $$\frac{dg(z)}{dz} = 1\cdot(\frac{-1}{u^2})\cdot(-e^{-z})$$
# $$\frac{dg(z)}{dz} = 1\cdot\frac{e^{-z}}{(1+e^{-z})^2}$$
# $$g'(z) = \frac{1}{(1+e^{-z})}\cdot\frac{e^{-z}}{(1+e^{-z})}$$
# $$g'(z) = \frac{1}{(1+e^{-z})}\cdot\frac{1+e^{-z}-1}{(1+e^{-z})}$$
# $$g'(z) = \frac{1}{(1+e^{-z})}\cdot(1 - \frac{1}{(1+e^{-z})})$$
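# This identity can also be sanity-checked numerically against a central finite difference; a quick self-contained sketch (the test points are arbitrary):

```python
import numpy as np

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)
analytic = g(z) * (1 - g(z))                       # g'(z) from the identity above
eps = 1e-6
numeric = (g(z + eps) - g(z - eps)) / (2 * eps)    # central finite difference
max_err = np.max(np.abs(analytic - numeric))       # should be tiny
```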
# # Backpropagation Intuition
#
# Recall that the cost function for a neural network is:
#
# $$\begin{gather*}J(\Theta) = - \frac{1}{m} \sum_{t=1}^m\sum_{k=1}^K \left[ y^{(t)}_k \ \log (h_\Theta (x^{(t)}))_k + (1 - y^{(t)}_k)\ \log (1 - h_\Theta(x^{(t)})_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} ( \Theta_{j,i}^{(l)})^2\end{gather*}$$
#
# If we consider simple non-multiclass classification ($k = 1$) and disregard regularization, the cost is computed with:
#
# $$cost(t) =y^{(t)} \ \log (h_\Theta (x^{(t)})) + (1 - y^{(t)})\ \log (1 - h_\Theta(x^{(t)}))$$
#
# Intuitively, $\delta^{(l)}_j$ is the "error" for $a^{(l)}_j$ (unit $j$ in layer $l$). More formally, the delta values are actually the derivative of the cost function:
#
# $\delta_j^{(l)} = \dfrac{\partial}{\partial z_j^{(l)}} cost(t)$
#
# Recall that our derivative is the slope of a line tangent to the cost function, so the steeper the slope the more incorrect we are. Let us consider the following neural network below and see how we could calculate some $\delta_j^{(l)}$:
#
# 
#
# In the image above, to calculate $\delta_2^{(2)}$, we multiply the weights $\Theta_{12}^{(2)}$ and $\Theta_{22}^{(2)}$ by their respective $\delta$ values found to the right of each edge. So we get $\delta^{(2)}_2 = \Theta_{12}^{(2)}*\delta^{(3)}_1 + \Theta_{22}^{(2)}*\delta^{(3)}_2 $. To calculate every single possible $\delta_j^{(l)}$, we could start from the right of our diagram. We can think of our edges as our $\Theta_{ij}$. Going from right to left, to calculate the value of $\delta_j^{(l)}$, you can just take the over all sum of each weight times the $\delta$ it is coming from. Hence, another example would be $\delta^{(3)}_2 = \Theta_{12}^{(3)}*\delta^{(4)}_1$.
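# The weighted-sum picture (which deliberately ignores the $g'$ factor, as the intuition above does) can be written out with made-up numbers for $\Theta^{(2)}$ and $\delta^{(3)}$:

```python
import numpy as np

# made-up weights from layer 2 (bias + 2 units) to layer 3 (2 units);
# rows index units of layer 3, columns index [bias, unit 1, unit 2] of layer 2
Theta2 = np.array([[0.1, 0.4, -0.2],
                   [0.3, -0.5, 0.7]])
delta3 = np.array([0.2, -0.1])      # made-up "errors" of layer 3

# deltas of layer 2 (bias position included), g' factor ignored for intuition
delta2 = Theta2.T @ delta3
# delta for unit 2 of layer 2, written out as the sum in the text:
# Theta_{12} * delta3_1 + Theta_{22} * delta3_2
d2_2 = Theta2[0, 2] * delta3[0] + Theta2[1, 2] * delta3[1]
```

The matrix product `Theta2.T @ delta3` computes every such weighted sum at once, which is exactly what step 4 of the algorithm vectorizes.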
# # Implementation Note: Unrolling Parameters
#
# With neural networks, we are working with sets of **matrices**:
#
# $\begin{align*} \Theta^{(1)}, \Theta^{(2)}, \Theta^{(3)}, \dots \newline D^{(1)}, D^{(2)}, D^{(3)}, \dots \end{align*}$
#
# In order to use optimizing functions such as `"fminunc()"`, we will want to "unroll" all the elements and put them into **one long vector**:
#
# `thetaVector = [ Theta1(:); Theta2(:); Theta3(:); ]
# deltaVector = [ D1(:); D2(:); D3(:) ]`
#
# If the dimensions of Theta1 is 10x11, Theta2 is 10x11 and Theta3 is 1x11, then we can get back our original matrices from the "unrolled" versions as follows:
#
# `Theta1 = reshape(thetaVector(1:110),10,11)
# Theta2 = reshape(thetaVector(111:220),10,11)
# Theta3 = reshape(thetaVector(221:231),1,11)`
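# The same unroll/reshape round trip can be sketched in NumPy (the matrix contents are random placeholders). One caveat: Octave's `(:)` unrolls column-major while NumPy's `ravel`/`reshape` default to row-major; as long as both directions use the same order, the round trip is exact:

```python
import numpy as np

rng = np.random.default_rng(1)
Theta1 = rng.standard_normal((10, 11))
Theta2 = rng.standard_normal((10, 11))
Theta3 = rng.standard_normal((1, 11))

# unroll all parameters into one long vector (row-major, NumPy's default)
thetaVector = np.concatenate([Theta1.ravel(), Theta2.ravel(), Theta3.ravel()])

# recover the original matrices from the unrolled vector
T1 = thetaVector[0:110].reshape(10, 11)
T2 = thetaVector[110:220].reshape(10, 11)
T3 = thetaVector[220:231].reshape(1, 11)
```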
#
# To summarize:
#
# 
# # Gradient Checking
#
# This is used to verify the output of a `Gradient` implementation such as backpropagation (or anything similarly complex). Since backpropagation is so involved, this method is a great way to debug our code.
#
# Gradient checking will assure that our backpropagation works as intended. We can approximate the derivative of our cost function with:
#
# $$\dfrac{\partial}{\partial\Theta}J(\Theta) \approx \dfrac{J(\Theta + \epsilon) - J(\Theta - \epsilon)}{2\epsilon}$$
#
# With multiple theta matrices, we can approximate the derivative with respect to $\Theta_j$ as follows:
#
# $$\dfrac{\partial}{\partial\Theta_j}J(\Theta) \approx \dfrac{J(\Theta_1, \dots, \Theta_j + \epsilon, \dots, \Theta_n) - J(\Theta_1, \dots, \Theta_j - \epsilon, \dots, \Theta_n)}{2\epsilon}$$
#
# A small value for ${\epsilon}$ (epsilon) such as ${\epsilon}=10^{-4}$, guarantees that the math works out properly. If the value for ${\epsilon}$ is too small, we can end up with numerical problems.
#
# Hence, we are only adding or subtracting epsilon to the $\Theta_j$ matrix. In octave we can do it as follows:
#
# `epsilon = 1e-4;
# for i = 1:n,
# thetaPlus = theta;
# thetaPlus(i) += epsilon;
# thetaMinus = theta;
# thetaMinus(i) -= epsilon;
# gradApprox(i) = (J(thetaPlus) - J(thetaMinus))/(2*epsilon)
# end;
# `
#
# We previously saw how to calculate the deltaVector. So once we compute our gradApprox vector, we can check that gradApprox ≈ deltaVector.
#
# Once you have verified **once** that your backpropagation algorithm is correct, you don't need to compute gradApprox again. The code to compute gradApprox can be very slow.
#
# That is, once the result of the backpropagation algorithm has been checked, disable the gradApprox code entirely, because it is very slow.
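# The same check can be sketched in NumPy against a cost whose gradient is known in closed form. The quadratic cost below is a made-up stand-in (in the real exercise it would be the NN cost), purely to show the mechanics:

```python
import numpy as np

def J(theta):                       # toy cost with known gradient 2*theta
    return np.sum(theta ** 2)

theta = np.array([0.5, -1.0, 2.0])
grad_exact = 2 * theta              # analytic gradient (plays the role of deltaVector)

eps = 1e-4
gradApprox = np.zeros_like(theta)
for i in range(theta.size):
    thetaPlus, thetaMinus = theta.copy(), theta.copy()
    thetaPlus[i] += eps             # perturb only component i
    thetaMinus[i] -= eps
    gradApprox[i] = (J(thetaPlus) - J(thetaMinus)) / (2 * eps)
```

After this loop, `gradApprox` should agree with `grad_exact` to many decimal places, which is exactly the `gradApprox ≈ deltaVector` check described above.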
# # Random Initialization
#
# Initializing all **theta weights to zero does not work with neural networks**. When we backpropagate, all nodes will update to the same value repeatedly.
#
# Instead we can randomly initialize our weights for our $\Theta$ matrices using the following method:
#
# 
#
# **Hence, we initialize each $\Theta^{(l)}_{ij}$ to a random value between $[-\epsilon,\epsilon]$. Using the above formula guarantees that we get the desired bound**. The same procedure applies to all the Θ's. Below is some working code you could use to experiment.
#
# `#If the dimensions of Theta1 is 10x11, Theta2 is 10x11 and Theta3 is 1x11.
# Theta1 = rand(10,11) * (2 * INIT_EPSILON) - INIT_EPSILON;
# Theta2 = rand(10,11) * (2 * INIT_EPSILON) - INIT_EPSILON;
# Theta3 = rand(1,11) * (2 * INIT_EPSILON) - INIT_EPSILON;`
#
# rand(x,y) is just a function in octave that will initialize a matrix of random real numbers between 0 and 1.
#
# **(Note: the epsilon used above is unrelated to the epsilon from Gradient Checking)**
# # Putting it Together (summary of the NN workflow)
#
# First, pick a network architecture; **choose the layout** of your neural network, including **how many hidden units** in each layer and **how many layers** in total you want to have.
#
# - Number of input units = dimension of features $x^{(i)}$ (**already known**)
# - Number of output units = number of classes (**already known**)
# - Number of hidden units per layer = usually more the better (must balance with cost of computation as it increases with more hidden units)
# - Defaults: 1 hidden layer. If you have more than 1 hidden layer, then it is recommended that you have the **same number of units in every hidden layer**.
#
# ### Training a Neural Network
#
# 1. Randomly **initialize** the weights
# 2. Implement **forward propagation** to get $h_\Theta(x^{(i)})$ for any $x^{(i)}$
# 3. Implement the **cost function**
# 4. Implement **backpropagation** to compute partial derivatives
# 5. Use **gradient checking** to confirm that your backpropagation works. Then disable gradient checking.
# 6. Use **gradient descent** or a **built-in optimization function** to minimize the cost function with the weights in theta.
#
# When we perform forward and back propagation, we loop on every training example:
#
# `
# for i = 1:m,
# Perform forward propagation and backpropagation using example (x(i),y(i))
# (Get activations a(l) and delta terms d(l) for l = 2,...,L)
# `
#
# The following image gives us an intuition of what is happening as we are implementing our neural network:
#
# 
#
# Ideally, you want $h_\Theta(x^{(i)}) \approx y^{(i)}$. This will **minimize our cost function**. However, keep in mind that $J(\Theta)$ is **not convex** and thus we can end up in a **local minimum instead**. (To check whether the point we found is near-global, or at least an acceptable local minimum, re-run with fresh random initializations and keep the run where $h_\Theta(x^{(i)}) \approx y^{(i)}$ holds best.)
# # ================= CODE ====================
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.io import loadmat
data = loadmat('programing/machine-learning-ex4/ex4/ex4data1.mat')
# 5000 images of 20x20 pixels each, with each image's grayscale pixel values stored in one row of X (400 pixels),
# and the label of each image (1-10, where 10 represents the digit 0) stored in y
X = data['X']
y = data['y']
# -
X.shape, y.shape
# # Display some sample images
# Randomly pick just 100 of the 5000 images
number_of_img = 100
def displayData(img_row,img_number):
    rand_indices = np.random.permutation(len(img_row)) # rand 0-5000
    sample = img_row[rand_indices[0:img_number]]       # get sample
    side = img_number**(1/2)
    if (side).is_integer() :
        side = (int)(side)
        space_3d = np.zeros((side*20,side*20))
        for i in range(side):
            row_num = side*i
            for j in range(side):
                space_3d[20*i:20*i+20,j*20:20*j+20] = sample[row_num+j].reshape((20,20)).T
        plt.imshow(space_3d, cmap='gray')
        plt.show()
    else:
        print("Sqrt of number should be integer")
displayData(X, number_of_img)
# # Problem
#
# **Use a neural network to classify the given images of the digits 0-9**
#
# #### Hint
#
# In the previous exercise we already tried feedforward propagation for neural networks, but the parameters there were provided for us. In this exercise we will learn them ourselves with the **backpropagation algorithm**.
#
# Each given image is 20x20 pixels = 400 pixels, where each pixel is one input variable $x_1,x_2,x_3,\cdots,x_{400}$
#
# With 5000 samples stored in a matrix, we get $X \in \mathbb{R}^{5000x400}$
# $$X = \begin{bmatrix}x_1^{(1)} & x_2^{(1)} & \cdots & x_{400}^{(1)} \newline
# x_1^{(2)} & x_2^{(2)} & \cdots & x_{400}^{(2)} \newline
# \vdots & \vdots & \cdots & \vdots \newline
# x_1^{(5000)} & x_2^{(5000)} & \cdots & x_{400}^{(5000)} \newline
# \end{bmatrix}$$
# and $y$ is the vector storing the label of each image, taking values 1-10 (with 10 representing the digit 0), so $y \in \mathbb{R}^{5000x1}$
# $$y = \begin{bmatrix}10 \newline 10 \newline 1 \newline \vdots \newline 9 \end{bmatrix}$$
# We will work through the steps taught above, one at a time.
# *A helper function we will need throughout:*
def sigmoid(z):
    g = 1/(1+np.exp(-z))
    return g
# ## 0 - Pick a network architecture
# - Number of hidden layers: here we choose 1 hidden layer; together with the input and output layers, the network has **3 layers** in total.
# - Number of input units = dimension of features $x^{(i)}$ : here 400 units + 1 bias unit = **401 units**
# - Number of output units = number of classes : the possible outputs here are 0-9, so **10 units**
# - Number of hidden units per layer = usually more the better (more hidden units also make the model more expensive to compute) : **25 units**
#
# The final architecture looks roughly like this
# 
input_layer_size = 400 # 20x20 Input Images of Digits
hidden_layer_size = 25 # 25 hidden units
num_labels = 10 # 10 labels, from 1 to 10 (note that we have mapped "0" to label 10)
# ## 1 - Randomly initialize the weights
# Create a function that randomly initializes the starting $\Theta$ for the training function
#
# where $\Theta^{(l)} \in \mathbb{R}^{(S^{l+1})x(S^{l}+1)}$
#
# and $S^{l}$ is the number of units at layer $l$ (the $+1$ accounts for the bias unit)
#
# This problem involves two mappings, so we have $\Theta^{(1)}, \Theta^{(2)}$
#
# For the first development pass, we fix $\Theta^{(1)}, \Theta^{(2)}$ (loaded from the exercise file) in order to verify that the functions are correct, which gives
# +
nn_params = loadmat('programing/machine-learning-ex4/ex4/ex4weights.mat')
# Unrolling nn_params to Vector
ThetaVec1 = nn_params['Theta1'].reshape(-1)
ThetaVec2 = nn_params['Theta2'].reshape(-1)
nn_params = np.concatenate((ThetaVec1,ThetaVec2))
# -
ThetaVec1.shape , ThetaVec2.shape, nn_params.shape
# For actual use, we must randomly initialize $\Theta$ before substituting it into the optimization function, as follows
# where L_in is the number of incoming units and L_out is the number of outgoing units;
# the randomized Theta values should be at most epsilon and at least -epsilon
def randInitializeWeights(L_in, L_out):
    # Randomly initialize the weights to small values
    epsilon_init = 0.12
    W = np.random.random_sample((L_out,1+L_in))*2*epsilon_init - epsilon_init
    return W
ThetaVecRand1 = randInitializeWeights(400,25).reshape(-1)
ThetaVecRand2 = randInitializeWeights(25,10).reshape(-1)
nn_paramsRand = np.concatenate((ThetaVecRand1,ThetaVecRand2))
ThetaVecRand1.shape , ThetaVecRand2.shape, nn_paramsRand.shape
# These values will be used for the actual training.
# ## 2 - Implement forward propagation to get $h_\Theta(x^{(i)})$ for any $x^{(i)}$
#
#
# Compute $z^{(l)}$ and $a^{(l)}$ for every layer, as in the previous exercise
# $\begin{bmatrix}x_0 \newline x_1 \newline x_2 \newline x_3\end{bmatrix}\rightarrow\begin{bmatrix}a_1^{(2)} \newline a_2^{(2)} \newline a_3^{(2)} \newline \end{bmatrix}\rightarrow h_\theta(x)$
# $z^{(2)} = \Theta^{(1)}(x + \text{bias unit})$
#
# $a^{(2)} = g(z^{(2)})$
#
# $z^{(3)} = \Theta^{(2)}(a^{(2)} + \text{bias unit})$
#
# $a^{(3)} = g(z^{(3)})$
# 
#
# This code will go inside the cost function below.
# ## 3 - Implement the Cost function (feedforward)
#
# Recall that the cost function for a neural network is:
#
# $$\begin{gather*}J(\Theta) = - \frac{1}{m} \sum_{t=1}^m\sum_{k=1}^K \left[ y^{(t)}_k \ \log (h_\Theta (x^{(t)}))_k + (1 - y^{(t)}_k)\ \log (1 - h_\Theta(x^{(t)})_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} ( \Theta_{j,i}^{(l)})^2\end{gather*}$$
# where $y^{(t)} = \begin{bmatrix} y^{(t)}_1 \newline y^{(t)}_2 \newline \vdots \newline y^{(t)}_K \end{bmatrix}$ and $K$ is the total number of output classes. In this problem $K = 10$, so the output can take one of 10 possible values (vectors whose entries are 0 or 1), as follows
#
# $$y^{(t)} = \begin{bmatrix} 1 \newline 0 \newline 0 \newline \vdots \newline 0 \end{bmatrix} , \begin{bmatrix} 0 \newline 1 \newline 0 \newline \vdots \newline 0 \end{bmatrix}, \cdots or \begin{bmatrix} 0 \newline 0 \newline 0 \newline \vdots \newline 1 \end{bmatrix} \in \mathbb{R}^{10} $$
# So we also need a function that converts $y^{(t)} \in \{1,2,\cdots,10\}$ into the corresponding one-hot vector $y^{(t)} \in \begin{bmatrix} 1 \newline 0 \newline 0 \newline \vdots \newline 0 \end{bmatrix} , \begin{bmatrix} 0 \newline 1 \newline 0 \newline \vdots \newline 0 \end{bmatrix}, \cdots or \begin{bmatrix} 0 \newline 0 \newline 0 \newline \vdots \newline 1 \end{bmatrix}$
# The conversion function: for example, $y^{(t)} = 5$ becomes the vector whose 5th entry is 1 and all other entries are 0. This gives
def createYmat(y,num_labels):
    m = len(y)
    mat = np.zeros((m,num_labels))
    for index in range(len(y)):
        mat[index,y[index]-1] = 1
    return mat
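# As an aside, the same one-hot conversion can be done in a single vectorized step with NumPy indexing. `createYmatVec` below is an invented name for this alternative sketch, equivalent to `createYmat` for labels in 1..K:

```python
import numpy as np

def createYmatVec(y, num_labels):
    # labels are 1..num_labels; label k maps to a 1 in column k-1
    y = np.asarray(y).reshape(-1)
    mat = np.zeros((y.size, num_labels))
    mat[np.arange(y.size), y - 1] = 1   # one assignment instead of a loop
    return mat

Y = createYmatVec([5, 10, 1], 10)
```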
# As for the term
#
# $$\sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} ( \Theta_{j,i}^{(l)})^2$$
#
# this is just taking the squared entries of $\Theta$ in every layer and summing them all up.
# In this problem we have $\Theta^{(1)} \in \mathbb{R}^{25x401}$ and $\Theta^{(2)} \in \mathbb{R}^{10x26}$, where
#
# $$\Theta^{(1)} = \begin{bmatrix}\Theta_{1,0}^{(1)} & \Theta_{1,1}^{(1)} & \cdots & \Theta_{1,400}^{(1)} \newline
# \Theta_{2,0}^{(1)} & \Theta_{2,1}^{(1)} & \cdots & \Theta_{2,400}^{(1)} \newline
# \vdots & \vdots & \cdots & \vdots \newline
# \Theta_{25,0}^{(1)} & \Theta_{25,1}^{(1)} & \cdots & \Theta_{25,400}^{(1)} \newline
# \end{bmatrix} \in \mathbb{R}^{25x401}$$
# $$\Theta^{(2)} = \begin{bmatrix}\Theta_{1,0}^{(2)} & \Theta_{1,1}^{(2)} & \cdots & \Theta_{1,25}^{(2)} \newline
# \Theta_{2,0}^{(2)} & \Theta_{2,1}^{(2)} & \cdots & \Theta_{2,25}^{(2)} \newline
# \vdots & \vdots & \cdots & \vdots \newline
# \Theta_{10,0}^{(2)} & \Theta_{10,1}^{(2)} & \cdots & \Theta_{10,25}^{(2)} \newline
# \end{bmatrix} \in \mathbb{R}^{10x26}$$
# So $\sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} ( \Theta_{j,i}^{(l)})^2$ means: square each entry of $\Theta^{(1)},\Theta^{(2)}$ (the code below excludes the bias columns), sum within each matrix, and then add the matrix sums together.
# This gives the cost function with the effect of regularization included, as follows
def nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y,nlambda):
    # Implements the NN cost function for a two layer NN which performs classification
    # [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels,X, y, lambda)
    # computes the cost and gradient of the neural network. The
    # parameters for the neural network are "unrolled" into the vector
    # nn_params and need to be converted back into the weight matrices.
    # Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
    # for our 2 layer neural network
    Theta1 = nn_params[0:hidden_layer_size*(input_layer_size + 1)].reshape((hidden_layer_size,input_layer_size + 1))
    Theta2 = nn_params[hidden_layer_size*(input_layer_size+1):len(nn_params)].reshape((num_labels,(hidden_layer_size + 1)))
    # Setup some useful variables
    m = X.shape[0]
    # You need to return the following variables correctly
    J = 0
    Theta1_grad = np.zeros(Theta1.shape)
    Theta2_grad = np.zeros(Theta2.shape)
    # ====================== YOUR CODE HERE ======================
    # Instructions: You should complete the code by working through the following parts.
    # Part 1: Feedforward the neural network and return the cost in the variable J.
    #         After implementing Part 1, you can verify that your
    #         cost function computation is correct by verifying the cost
    #         computed in ex4.m
    # Part 3: Implement regularization with the cost function and gradients.
    #
    #         Hint: You can implement this around the code for
    #               backpropagation. That is, you can compute the gradients for
    #               the regularization separately and then add them to Theta1_grad
    #               and Theta2_grad from Part 2.
    # Part 1 : Feedforward { Theta1(25x401), Theta2(10x26), X(m,401), a2(m,26), h(m,10) }
    # -- Compute h
    ones_m = np.ones((m,1))     # mx1
    X = np.hstack((ones_m,X))   # mx401
    z2 = X.dot(Theta1.T)        # mx25
    a2 = sigmoid(z2)            # mx25
    a2 = np.hstack((ones_m,a2)) # mx26
    z3 = a2.dot(Theta2.T)       # mx10
    h = sigmoid(z3)             # mx10
    y = createYmat(y,num_labels)# mx10
    left = np.sum(np.multiply(-y,np.log(h)))
    right = np.sum(np.multiply(-(1-y),np.log(1-h)))
    J = J + (1/m)*(left+right)  # Correct!!!
    # Part 3: Implement regularization to Cost Function
    costReg = (nlambda/(2*m))*(np.sum(np.power(Theta1[:,1:input_layer_size+1],2)) + np.sum(np.power(Theta2[:,1:hidden_layer_size+1],2)))
    J = J + costReg
    return J
# Test the value of J with the input the exercise provides (without regularization yet)
nlambda = 0
nnCostFunction(nn_params,input_layer_size,hidden_layer_size,num_labels,X,y,nlambda)
# Matches the expected answer
# Test the value of J with the input the exercise provides (with regularization)
nlambda = 1
nnCostFunction(nn_params,input_layer_size,hidden_layer_size,num_labels,X,y,nlambda)
# Matches the expected answer
# ## 4 - Compute partial derivatives (backpropagation)
# Normally we compute the gradient of the cost in order to find the $\theta$ that minimizes $J(\theta)$. Suppose $\theta = \begin{bmatrix} \theta_0 \newline \theta_1 \newline \theta_2 \end{bmatrix}$; then the gradient is $\dfrac{d}{d\theta}J(\theta) = \begin{bmatrix} \dfrac{\partial}{\partial \theta_{0}}J(\theta) \newline \dfrac{\partial}{\partial \theta_{1}}J(\theta) \newline \dfrac{\partial}{\partial \theta_{2}}J(\theta) \end{bmatrix}$
# For the NN problem we want $\min_\Theta J(\Theta)$. In this exercise we have $\Theta^{(1)} \in \mathbb{R}^{25x401}$ and $\Theta^{(2)} \in \mathbb{R}^{10x26}$, so the gradient for the NN problem mirrors the gradient above, namely
# $\dfrac{d}{d\Theta}J(\Theta) = \begin{bmatrix} \dfrac{\partial}{\partial \Theta^{(1)}}J(\Theta) \newline \dfrac{\partial}{\partial \Theta^{(2)}}J(\Theta) \end{bmatrix} $
#
# where
# $\Theta^{(1)} = \begin{bmatrix}\Theta_{1,0}^{(1)} & \Theta_{1,1}^{(1)} & \cdots & \Theta_{1,400}^{(1)} \newline
# \Theta_{2,0}^{(1)} & \Theta_{2,1}^{(1)} & \cdots & \Theta_{2,400}^{(1)} \newline
# \vdots & \vdots & \cdots & \vdots \newline
# \Theta_{25,0}^{(1)} & \Theta_{25,1}^{(1)} & \cdots & \Theta_{25,400}^{(1)} \newline
# \end{bmatrix}$ gives $\dfrac{\partial}{\partial \Theta^{(1)}}J(\Theta) = \begin{bmatrix} \dfrac{\partial}{\partial \Theta_{1,0}^{(1)}}J(\Theta) & \dfrac{\partial}{\partial \Theta_{1,1}^{(1)}}J(\Theta) & \cdots & \dfrac{\partial}{\partial \Theta_{1,400}^{(1)}}J(\Theta) \newline
# \dfrac{\partial}{\partial \Theta_{2,0}^{(1)}}J(\Theta) & \dfrac{\partial}{\partial \Theta_{2,1}^{(1)}}J(\Theta) & \cdots & \dfrac{\partial}{\partial \Theta_{2,400}^{(1)}}J(\Theta) \newline
# \vdots & \vdots & \cdots & \vdots \newline
# \dfrac{\partial}{\partial \Theta_{25,0}^{(1)}}J(\Theta) & \dfrac{\partial}{\partial \Theta_{25,1}^{(1)}}J(\Theta) & \cdots & \dfrac{\partial}{\partial \Theta_{25,400}^{(1)}}J(\Theta) \newline
# \end{bmatrix} \in \mathbb{R}^{25x401}$
# and
# $\Theta^{(2)} = \begin{bmatrix}\Theta_{1,0}^{(2)} & \Theta_{1,1}^{(2)} & \cdots & \Theta_{1,25}^{(2)} \newline
# \Theta_{2,0}^{(2)} & \Theta_{2,1}^{(2)} & \cdots & \Theta_{2,25}^{(2)} \newline
# \vdots & \vdots & \cdots & \vdots \newline
# \Theta_{10,0}^{(2)} & \Theta_{10,1}^{(2)} & \cdots & \Theta_{10,25}^{(2)} \newline
# \end{bmatrix}$ gives $\dfrac{\partial}{\partial \Theta^{(2)}}J(\Theta) = \begin{bmatrix} \dfrac{\partial}{\partial \Theta_{1,0}^{(2)}}J(\Theta) & \dfrac{\partial}{\partial \Theta_{1,1}^{(2)}}J(\Theta) & \cdots & \dfrac{\partial}{\partial \Theta_{1,25}^{(2)}}J(\Theta) \newline
# \dfrac{\partial}{\partial \Theta_{2,0}^{(2)}}J(\Theta) & \dfrac{\partial}{\partial \Theta_{2,1}^{(2)}}J(\Theta) & \cdots & \dfrac{\partial}{\partial \Theta_{2,25}^{(2)}}J(\Theta) \newline
# \vdots & \vdots & \cdots & \vdots \newline
# \dfrac{\partial}{\partial \Theta_{10,0}^{(2)}}J(\Theta) & \dfrac{\partial}{\partial \Theta_{10,1}^{(2)}}J(\Theta) & \cdots & \dfrac{\partial}{\partial \Theta_{10,25}^{(2)}}J(\Theta) \newline
# \end{bmatrix} \in \mathbb{R}^{10x26}$
# Note that each gradient matrix has the same shape as the corresponding $\Theta$ matrix
# ### backpropagation
# Define $\dfrac{\partial}{\partial \Theta_{i,j}^{(l)}}J(\Theta) = D^{(l)}_{ij}$
#
# From
#
# $D^{(l)}_{ij} = \frac{1}{m}\Delta^{(l)}_{ij}$
# $\Delta^{(l)}_{ij} = a^{(l)}_j\delta^{(l+1)}_i$ where the bias entry $\delta^{(l)}_0$ must also be removed, e.g. `delta2 = delta2(2:end)`
# $a^{(l)}_j$ is obtained from the feedforward pass of step 3 above, where $j$ is the index of a unit in layer $l$
# At the output layer: $l = L$
# $\delta^{(L)}_{i} = a_{i}^{L}-y_i \in \mathbb{R}^{m \times \text{number of output classes}}$
# At a hidden layer: $1 <l < L$
# $\delta^{(l)} = (\Theta^{(l)})^T\delta^{l+1}.*g^{'}(z^{(l)}) \in \mathbb{R}^{m \times \text{number of columns of }\Theta^{(l)}}$
# (Think of $\delta^{(l)}$ as a label for $a^{(l)}$, and of the gradient as a label for $\Theta$)
# So in actual use we start by
# - computing $a^{(l)},\delta^{(l)}$ for every layer
# - then computing $\Delta^{(l)}$
# - then computing $D^{(l)}$
# - which gives $\dfrac{d}{d\Theta}J(\Theta)$
# #### Initial Function
# From $g^{'}(z) = \frac{d}{dz}g(z) = g(z)(1-g(z))$
#
# where $\text{sigmoid}(z) = g(z) = \frac{1}{1+e^{-z}}$
def sigmoidGradient(z):
    gradG = (1/(1+np.exp(-z)))*(1-1/(1+np.exp(-z)))
    return gradG
sigmoidGradient(0)
sigmoidGradient(np.array([-1,-0.5,0,0.5,1]))
# Matches the expected answer
# Gradient Function
def nnGradFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y,nlambda):
    # The returned parameter grad should be a "unrolled" vector of the
    # partial derivatives of the neural network.
    # Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
    # for our 2 layer neural network
    Theta1 = nn_params[0:hidden_layer_size*(input_layer_size + 1)].reshape((hidden_layer_size,input_layer_size + 1))
    Theta2 = nn_params[hidden_layer_size*(input_layer_size+1):len(nn_params)].reshape((num_labels,(hidden_layer_size + 1)))
    # Setup some useful variables
    m = X.shape[0]
    # You need to return the following variables correctly
    J = 0
    Theta1_grad = np.zeros(Theta1.shape)
    Theta2_grad = np.zeros(Theta2.shape)
    # ====================== YOUR CODE HERE ======================
    # Part 2: Implement the backpropagation algorithm to compute the gradients
    #         Theta1_grad and Theta2_grad. You should return the partial derivatives of
    #         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
    #         Theta2_grad, respectively. After implementing Part 2, you can check
    #         that your implementation is correct by running checkNNGradients
    #
    #         Note: The vector y passed into the function is a vector of labels
    #               containing values from 1..K. You need to map this vector into a
    #               binary vector of 1's and 0's to be used with the neural network
    #               cost function.
    #
    #         Hint: We recommend implementing backpropagation using a for-loop
    #               over the training examples if you are implementing it for the
    #               first time.
    #
    # Part 3: Implement regularization with the cost function and gradients.
    #
    #         Hint: You can implement this around the code for
    #               backpropagation. That is, you can compute the gradients for
    #               the regularization separately and then add them to Theta1_grad
    #               and Theta2_grad from Part 2.
    # Part 1 : Feedforward { Theta1(25x401), Theta2(10x26), X(m,401), a2(m,26), h(m,10) }
    # -- Compute z2, a2, z3, a3, y
    ones_m = np.ones((m,1))     # mx1
    X = np.hstack((ones_m,X))   # mx401
    z2 = X.dot(Theta1.T)        # mx25
    a2 = sigmoid(z2)            # mx25
    a2 = np.hstack((ones_m,a2)) # mx26
    z3 = a2.dot(Theta2.T)       # mx10
    h = sigmoid(z3)             # mx10
    y = createYmat(y,num_labels)# mx10
    # Part 2: Implement the backpropagation algorithm
    # --- delta acts like a label for a
    delta3 = h - y              # mx10
    delta2 = delta3.dot(Theta2) # [(mx10)x(10x26)] bias column not yet removed
    delta2 = np.multiply(delta2[:,1:hidden_layer_size+1],sigmoidGradient(z2)) # (mx25).*(mx25), bias column now dropped
    # --- Delta (the triangle) acts like a label for Theta
    Del2 = (delta3.T).dot(a2)   # (10xm)x(mx26)  = 10x26
    Del1 = (delta2.T).dot(X)    # (25xm)x(mx401) = 25x401
    Theta1_grad = Theta1_grad + (1/m)*Del1
    Theta2_grad = Theta2_grad + (1/m)*Del2
    # Part 3: Implement regularization with the gradients.
    gradReg1 = np.zeros(Theta1.shape)
    gradReg1 = gradReg1+(nlambda/m)*Theta1
    gradReg1[:,0] = 0
    gradReg2 = np.zeros(Theta2.shape)
    gradReg2 = gradReg2+(nlambda/m)*Theta2
    gradReg2[:,0] = 0
    Theta1_grad = Theta1_grad + gradReg1
    Theta2_grad = Theta2_grad + gradReg2
    # Unrolling gradients to vector
    Theta1_grad = Theta1_grad.reshape(-1)
    Theta2_grad = Theta2_grad.reshape(-1)
    grad = np.concatenate((Theta1_grad,Theta2_grad))
    return grad
nnGradFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y,nlambda).shape
nnGradFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y,nlambda)
# No errors, but we do not yet know whether the values are correct; we must verify them with gradient checking
# ## 5 - Use gradient checking
# This function is used to check the output of `nnGradFunction(..)`
# From
# $$f_i(\theta) \approx \frac{J(\theta^{(i+)})-J(\theta^{(i-)})}{2\epsilon}$$
# typically with $\epsilon = 10^{-4}$
# where $\theta$ is the "unrolled" vector of $\Theta$
# First, build a function that returns $J(\theta)$
# theta is the unrolled vector of Theta1 and Theta2 (= nn_paramsRand)
def J(theta):
    return nnCostFunction(theta,input_layer_size,hidden_layer_size,num_labels,X,y,nlambda)
# หาค่าประมาณของ Gradient
def computeNumericalGradient(theta):
    # -----------------------
    #COMPUTENUMERICALGRADIENT Computes the gradient using "finite differences"
    #and gives us a numerical estimate of the gradient.
    #   numgrad = COMPUTENUMERICALGRADIENT(J, theta) computes the numerical
    #   gradient of the function J around theta. Calling y = J(theta) should
    #   return the function value at theta.
    # Notes: The following code implements numerical gradient checking, and
    #        returns the numerical gradient. It sets numgrad(i) to (a numerical
    #        approximation of) the partial derivative of J with respect to the
    #        i-th input argument, evaluated at theta. (i.e., numgrad(i) should
    #        be (approximately) the partial derivative of J with respect
    #        to theta(i).)
    #
    numgrad = np.zeros(theta.shape)
    perturb = np.zeros(theta.shape)
    e = 1e-4
    for p in range(theta.size):
        # Set perturbation vector
        perturb[p] = e
        loss1 = J(theta - perturb)
        loss2 = J(theta + perturb)
        # Compute Numerical Gradient
        numgrad[p] = (loss2 - loss1)/(2*e)
        perturb[p] = 0
    return numgrad
computeNumericalGradient(nn_paramsRand)
# We have a result; now compare it against the exercise's provided parameters to see whether the values really are close (difference less than $10^{-9}$), giving
sum(nnGradFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y,nlambda)-computeNumericalGradient(nn_params))
# The difference really is less than $10^{-9}$, so `nnGradFunction(...)` is correct; we can move on to the next step.
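The same central-difference check can be demonstrated on a tiny standalone function, independent of the network above (the names `J_demo` and `numerical_gradient` are illustrative, not from this notebook):

```python
import numpy as np

# Gradient checking on a simple quadratic cost J(theta) = 0.5 * theta . theta,
# whose exact gradient is theta itself.
def J_demo(theta):
    return 0.5 * np.dot(theta, theta)

def numerical_gradient(J, theta, e=1e-4):
    # Central difference: (J(theta + e_p) - J(theta - e_p)) / (2e) per coordinate
    numgrad = np.zeros_like(theta)
    perturb = np.zeros_like(theta)
    for p in range(theta.size):
        perturb[p] = e
        numgrad[p] = (J(theta + perturb) - J(theta - perturb)) / (2 * e)
        perturb[p] = 0.0
    return numgrad

theta = np.array([1.0, -2.0, 3.0])
numgrad = numerical_gradient(J_demo, theta)
analytic = theta  # exact gradient of 0.5 * ||theta||^2
print(np.max(np.abs(numgrad - analytic)))  # tiny, dominated by floating-point error
```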
# ## 6 - Use gradient descent or a built-in optimization function
#
# Use an optimization function to train the NN; try it with TensorFlow and Scikit-Learn.
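Before reaching for a built-in optimizer, the plain gradient descent loop can be sketched in a few lines (the quadratic gradient `lambda t: 2 * t` is a stand-in for the network's gradient function):

```python
import numpy as np

# Bare-bones gradient descent on an unrolled parameter vector.
# grad is assumed to return the gradient of the cost at theta.
def gradient_descent(grad, theta0, lr=0.1, n_iter=100):
    theta = theta0.copy()
    for _ in range(n_iter):
        theta -= lr * grad(theta)  # step against the gradient
    return theta

# Stand-in gradient of J(theta) = ||theta||^2, whose minimum is at 0
theta = gradient_descent(lambda t: 2 * t, np.array([4.0, -2.0]))
print(np.round(theta, 6))  # approaches [0, 0]
```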
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gradient Boosting
# Let's set up our problem: we have a training set $X$, corresponding targets $y$, and a differentiable error function $E$, which we treat as a function $E(f(x_i), y_i)$ for some model $f$, training example $x_i \in X$, and corresponding target $y_i \in y$.
# The total error is then $\frac{1}{n} \sum_i E(f(x_i), y_i)$.
# The idea is to have an iterative approximation to a perfect function, mimicking gradient descent in the function space.
# So what does this look like? We start with some initial model $f_0$, which could be, for example, the training mean for a regression problem or the majority class for a classification problem.
# We then consider the error of that model. Since $E$ is differentiable, we can compute $$r_i = -\frac{\partial E(f_0(x_i), y_i)}{\partial f_0(x_i)},$$ the residuals for each training example. Let's give a concrete example: squared error.
#
# $$E(f(x_i), y_i) = \tfrac{1}{2}(f(x_i)-y_i)^2$$
#
# We differentiate $E$ considering it a function of $f(x_i)$, so compute
#
# $$-\frac{\partial E(f(x_i), y_i)}{\partial f(x_i)} = -(f(x_i)-y_i) = y_i-f(x_i)$$
# So we have computed these residuals for each example in our training set. But we need to extend this to a function defined on every input, so we fit some function to these values. This can be any function; in the case of XGBoost and LightGBM it is a decision tree.
# So we fit some function (a so-called "weak learner") to these residuals, obtaining some $h_0$. The last step is to combine this with our current model. We take some real number $\alpha>0$, the learning rate, and compute
#
# $$f_1(x_i) = f_0(x_i) + \alpha h_0(x_i)$$
#
# Note this learning rate can be fixed, but we can also choose it in other ways; for example, choose the $\alpha$ that minimizes the total error (a one-dimensional optimization problem, so a variety of standard solvers can find a solution).
#
# The key point is that this looks like gradient descent in function space, as
#
# $$ f_0(x_i) + \alpha h_0(x_i) \approx f_0(x_i) - \alpha \frac{\partial E(f_0(x_i), y_i)}{\partial f_0(x_i)} $$
# Then we continue: notice we have used no special properties of $f_0$, so we do the same thing starting from $f_1$ to get $f_2$, and so on until some sensible stopping criterion. This criterion could be a fixed number of iterations, or early stopping based on the error on a validation set.
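The loop described above can be sketched in plain NumPy, using exhaustively-searched decision stumps as weak learners under squared error. This is a toy illustration, not how XGBoost or LightGBM are implemented:

```python
import numpy as np

def fit_stump(X, r):
    """Fit a one-split regression stump to the residuals r by exhaustive search."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all():
                continue  # degenerate split: everything on one side
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = ((r - pred) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, r[left].mean(), r[~left].mean())
    _, j, t, lmean, rmean = best
    return lambda Z: np.where(Z[:, j] <= t, lmean, rmean)

def gradient_boost(X, y, n_rounds=200, alpha=0.1):
    f0 = y.mean()                       # initial model f_0: training mean
    pred = np.full(len(y), f0)
    learners = []
    for _ in range(n_rounds):
        r = y - pred                    # negative gradient of (1/2)(f - y)^2
        h = fit_stump(X, r)             # fit weak learner h_t to residuals
        learners.append(h)
        pred = pred + alpha * h(X)      # f_{t+1} = f_t + alpha * h_t
    return lambda Z: f0 + alpha * sum(h(Z) for h in learners)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
model = gradient_boost(X, y)
print(np.max(np.abs(model(X) - y)))  # tiny: the ensemble fits the step function
```

With a line-searched $\alpha$ instead of the fixed value, each round would additionally solve the one-dimensional minimization mentioned above.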
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Implementation of Anomaly detection using Autoencoders
# Dataset used here is Credit Card Fraud Detection from Kaggle.
#
# ### Import required libraries
# +
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,normalize, MinMaxScaler
from sklearn.metrics import confusion_matrix, recall_score, accuracy_score, precision_score
from keras import backend as K
print("-------------------------------------------")
print("GPU available: ", tf.config.list_physical_devices('GPU'))
print("Keras backend: ", K.backend())
print("-------------------------------------------")
# Load layers from keras
from keras.layers import Dense, Input, Concatenate, Flatten, BatchNormalization, Dropout, LeakyReLU
from keras.models import Sequential, Model
from keras.losses import binary_crossentropy
from Disco_tf import distance_corr
from keras.optimizers import Adam
from sklearn.metrics import roc_auc_score
RANDOM_SEED = 2021
TEST_PCT = 0.3
LABELS = ["Normal","Fraud"]
# -
# Autoencoder hyperparameters (example values, commonly used for this Kaggle dataset; adjust to your data)
input_dim = 29        # number of input features
encoding_dim = 14
hidden_dim_1 = 7
hidden_dim_2 = 4
learning_rate = 1e-7  # used below as the l2 activity-regularization strength
#input Layer
input_layer = tf.keras.layers.Input(shape=(input_dim, ))
#Encoder
encoder = tf.keras.layers.Dense(encoding_dim, activation="tanh",
activity_regularizer=tf.keras.regularizers.l2(learning_rate))(input_layer)
encoder=tf.keras.layers.Dropout(0.2)(encoder)
encoder = tf.keras.layers.Dense(hidden_dim_1, activation='relu')(encoder)
encoder = tf.keras.layers.Dense(hidden_dim_2, activation=tf.nn.leaky_relu)(encoder)
# Decoder
decoder = tf.keras.layers.Dense(hidden_dim_1, activation='relu')(encoder)
decoder=tf.keras.layers.Dropout(0.2)(decoder)
decoder = tf.keras.layers.Dense(encoding_dim, activation='relu')(decoder)
decoder = tf.keras.layers.Dense(input_dim, activation='tanh')(decoder)
#Autoencoder
autoencoder = tf.keras.Model(inputs=input_layer, outputs=decoder)
autoencoder.summary()
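Once trained (e.g. compiled with MSE loss and fit on normal transactions only), the autoencoder scores anomalies by reconstruction error. The thresholding step can be illustrated with a NumPy stand-in for the model's output:

```python
import numpy as np

# Once an autoencoder is trained on normal samples, each sample is scored by
# its reconstruction error, and samples above a threshold (e.g. a high
# percentile of the training errors) are flagged. A toy "reconstruction"
# stands in for the model's output here.
rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(1000, 8))
X_recon = X_normal + rng.normal(0, 0.1, size=X_normal.shape)  # good reconstructions
errors = np.mean((X_normal - X_recon) ** 2, axis=1)
threshold = np.percentile(errors, 99)  # flag the worst 1%

x_anomaly = np.full(8, 5.0)            # far from the training distribution
x_anomaly_recon = np.zeros(8)          # the autoencoder fails to reconstruct it
anomaly_error = np.mean((x_anomaly - x_anomaly_recon) ** 2)
print(anomaly_error > threshold)  # True: flagged as an anomaly
```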
# +
# build one block for each dense layer
def get_block(L, size):
L = BatchNormalization()(L)
L = Dense(size)(L)
L = Dropout(0.5)(L)
L = LeakyReLU(0.2)(L)
return L
# baseline correlation function
def binary_cross_entropy(y_true, y_pred):
return binary_crossentropy(y_true, y_pred)
# define new loss with distance decorrelation
def decorr(var_1, var_2, weights,kappa):
def loss(y_true, y_pred):
#return binary_crossentropy(y_true, y_pred) + distance_corr(var_1, var_2, weights)
#return distance_corr(var_1, var_2, weights)
return binary_crossentropy(y_true, y_pred) + kappa * distance_corr(var_1, var_2, weights)
#return binary_crossentropy(y_true, y_pred)
return loss
# -
allX = { feat : np.genfromtxt('%s' % feat,delimiter = ',')[1:522467,:] for feat in ["Input_Background_1.csv",
"Input_Signal_1.csv"] }
# +
X = list(allX.values())
y = np.ones((522466))
y[0:2000] = 0
from sklearn.model_selection import train_test_split
split = train_test_split(*X, y, test_size=0.1, random_state=42)
train = [ split[ix] for ix in range(0,len(split),2) ]
test = [ split[ix] for ix in range(1,len(split),2) ]
X_train, y_train = train[0:2], train[-1]
X_test, y_test = test[0:2], test[-1]
X_train.append(np.ones(len(y_train)))
X_test.append(np.ones(len(y_test)))
# Setup network
# make inputs
jets = Input(shape=X_train[0].shape[1:])
f_jets = Flatten()(jets)
leps = Input(shape=X_train[1].shape[1:])
f_leps = Flatten()(leps)
i = Concatenate(axis=-1)([f_jets, f_leps])
sample_weights = Input(shape=(1,))
#setup trainable layers
d1 = get_block(i, 1024)
d2 = get_block(d1, 1024)
d3 = get_block(d2, 512)
d4 = get_block(d3, 256)
d5 = get_block(d4, 128)
o = Dense(1, activation="sigmoid")(d5)
model = Model(inputs=[jets,leps, sample_weights], outputs=o)
model.summary()
# -
# Compile model
from keras.optimizers import Adam
opt = Adam(learning_rate=0.001)  # older Keras versions use lr= instead of learning_rate=
model.compile(optimizer=opt, loss=decorr(jets[:,0], o[:,0], sample_weights[:,0],0.5))
#model.compile(optimizer=opt, loss="binary_crossentropy")
# +
# Train model
model.fit(x=X_train, y=y_train, epochs=20, batch_size=10000, validation_split=0.1)
# Evaluate model
y_train_predict = model.predict(X_train, batch_size=10000)
y_test_predict = model.predict(X_test, batch_size=10000)
from sklearn.metrics import roc_auc_score
auc_train = roc_auc_score(y_train, y_train_predict)
auc_test = roc_auc_score(y_test, y_test_predict)
print("area under ROC curve (train sample): ", auc_train)
print("area under ROC curve (test sample): ", auc_test)
# plot correlation
x = X_test[0][:,0,0]
y = y_test_predict[:,0]
corr = np.corrcoef(x, y)
print("correlation ", corr[0][1])
# -
# Note: for the sliced-tensor loss above to work in graph mode, eager execution
# must be disabled before the model is built, so this cell is intended to run first.
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generative Adversarial Network (GAN)
#
# GANs are based on an adversarial process in which two models are trained simultaneously: a generator network G, which learns to synthesize samples from the distribution of the provided data, and a discriminator network D, which estimates the probability that a sample came from the original data rather than from the generator. During training the two models compete with each other:
# G tries to synthesize samples so realistic that D can no longer differentiate between real and fake.
# The implementation presented in this notebook also includes a classifier as a third model, which penalizes the generator for generating samples from the wrong class. This addition is particularly useful because it later allows the user to indicate from which class (distribution) to synthesize samples.
# +
import os
import torch
import numpy as np
from torch import nn
from keras.models import load_model
from keras.callbacks import EarlyStopping, ModelCheckpoint
from torch.utils.data import DataLoader
from python_research.augmentation.GAN.WGAN import WGAN
from python_research.augmentation.GAN.classifier import Classifier
from python_research.augmentation.GAN.discriminator import Discriminator
from python_research.augmentation.GAN.generator import Generator
from python_research.keras_models import build_1d_model
from python_research.dataset_structures import OrderedDataLoader, \
HyperspectralDataset, BalancedSubset
from python_research.augmentation.GAN.samples_generator import SamplesGenerator
DATA_DIR = os.path.join('..', '..', 'hypernet-data')
RESULTS_DIR = os.path.join('..', '..', 'hypernet-data', 'results', 'gan_augmentation')
DATASET_PATH = os.path.join(DATA_DIR, '')
GT_PATH = os.path.join(DATA_DIR, '')
os.makedirs(RESULTS_DIR, exist_ok=True)
# -
# # Prepare the data
#
# Extract the training, validation and test sets. The training set will be balanced (each class will have an equal number of samples)
# +
# Number of samples to be extracted from each class as training samples
SAMPLES_PER_CLASS = 100
# Percentage of the training set to be extracted as validation set
VAL_PART = 0.1
# Load dataset
test_data = HyperspectralDataset(DATASET_PATH, GT_PATH)
test_data.normalize_labels()
# Extract training and validation sets
train_data = BalancedSubset(test_data, SAMPLES_PER_CLASS)
val_data = BalancedSubset(train_data, VAL_PART)
# -
# # Data normalization
#
# Data is normalized using Min-Max feature scaling. Min and max values are extracted from train and test sets.
# Normalize data
max_ = train_data.max if train_data.max > val_data.max else val_data.max
min_ = train_data.min if train_data.min < val_data.min else val_data.min
train_data.normalize_min_max(min_=min_, max_=max_)
val_data.normalize_min_max(min_=min_, max_=max_)
test_data.normalize_min_max(min_=min_, max_=max_)
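The shared-bounds min-max scaling done above can be sketched on toy arrays: the same `min_` and `max_` (taken across splits) are applied to every split so all sets land on a common scale.

```python
import numpy as np

# Min-max feature scaling with bounds shared across splits, mirroring the
# normalize_min_max calls above.
def normalize_min_max(X, min_, max_):
    return (X - min_) / (max_ - min_)

train = np.array([0.0, 2.0, 4.0])
val = np.array([1.0, 5.0])
min_ = min(train.min(), val.min())   # 0.0
max_ = max(train.max(), val.max())   # 5.0
print(normalize_min_max(train, min_, max_))  # [0.  0.4 0.8]
print(normalize_min_max(val, min_, max_))    # [0.2 1. ]
```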
# # Data loaders and models initialization
#
# GAN is composed of three models: generator, discriminator and classifier. All of them have an identical topology (2 hidden layers with 512 neurons each).
# +
# Number of epochs without improvement on validation set after which the
# training will be terminated for the GAN classifier
CLASSIFIER_PATIENCE = 30
# GAN learning rate
LEARNING_RATE = 0.00001
# Number of classes in the dataset
CLASSES_COUNT = 16
BATCH_SIZE = 64
# Initialize pytorch data loaders
custom_data_loader = OrderedDataLoader(train_data, BATCH_SIZE)
data_loader = DataLoader(train_data, batch_size=BATCH_SIZE,
shuffle=True, drop_last=True)
cuda = True if torch.cuda.is_available() else False
input_shape = bands_count = train_data.shape[-1]
classifier_criterion = nn.CrossEntropyLoss()
# Initialize generator, discriminator and classifier
generator = Generator(input_shape, CLASSES_COUNT)
discriminator = Discriminator(input_shape)
classifier = Classifier(classifier_criterion, input_shape, CLASSES_COUNT,
use_cuda=cuda, patience=CLASSIFIER_PATIENCE)
# Optimizers
optimizer_G = torch.optim.Adam(generator.parameters(),
lr=LEARNING_RATE,
betas=(0, 0.9))
optimizer_D = torch.optim.Adam(discriminator.parameters(),
lr=LEARNING_RATE,
betas=(0, 0.9))
optimizer_C = torch.optim.Adam(classifier.parameters(),
lr=LEARNING_RATE,
betas=(0, 0.9))
# Use GPU if possible
if cuda:
generator = generator.cuda()
discriminator = discriminator.cuda()
classifier = classifier.cuda()
classifier_criterion = classifier_criterion.cuda()
# -
# # Classifier pre-training
#
# The classifier has to be trained beforehand, so it gives valuable feedback to the generator regarding the classes of samples that it generates.
# +
# Number of classifier training epochs
CLASSIFIER_EPOCHS = 200
# Train classifier
classifier.train_(data_loader, optimizer_C, CLASSIFIER_EPOCHS)
# -
# # GAN training
# +
# Number of epochs without improvement on discriminator loss after
# after which the GAN training will be terminated
GAN_PATIENCE = 200
# Gradient penalty
LAMBDA_GP = 10
# Number of GAN epochs
GAN_EPOCHS = 2000
# Initialize GAN
gan = WGAN(generator, discriminator, classifier, optimizer_G, optimizer_D,
use_cuda=cuda, lambda_gp=LAMBDA_GP, patience=GAN_PATIENCE)
# Train GAN
gan.train(custom_data_loader, GAN_EPOCHS, bands_count,
BATCH_SIZE, CLASSES_COUNT, os.path.join(RESULTS_DIR, "generator_model"))
# -
# # Generating samples
#
# When the training is complete, the generator is used to synthesize new samples. Generation process is performed by the **`SamplesGenerator`** class, using the `generate` method. It accepts training set (in order to calculate number of samples in each class) and the pre-trained generator model.
# +
# Generate samples using trained Generator
generator = Generator(input_shape, CLASSES_COUNT)
generator_path = os.path.join(RESULTS_DIR, "generator_model")
generator.load_state_dict(torch.load(generator_path))
if cuda:
generator = generator.cuda()
train_data.convert_to_numpy()
device = 'gpu' if cuda else 'cpu'
samples_generator = SamplesGenerator(device=device)
generated_x, generated_y = samples_generator.generate(train_data,
generator)
# Convert generated Tensors back to numpy
generated_x = np.reshape(generated_x.detach().cpu().numpy(),
generated_x.shape + (1, ))
# Add one dimension to convert row vectors to column vectors (keras
# requirement)
train_data.expand_dims(axis=-1)
test_data.expand_dims(axis=-1)
val_data.expand_dims(axis=-1)
# Add generated samples to original dataset
train_data.vstack(generated_x)
train_data.hstack(generated_y)
# -
# # Training and evaluation
# +
# Number of epochs without improvement on validation set after which the
# training will be terminated
PATIENCE = 15
# Number of kernels in the first convolutional layer
KERNELS = 200
# Size of the kernel in the first convolutional layer
KERNEL_SIZE = 5
# Number of training epochs
EPOCHS = 200
# Keras Callbacks
early = EarlyStopping(patience=PATIENCE)
checkpoint = ModelCheckpoint(os.path.join(RESULTS_DIR, "GAN_augmentation") +
"_model",
save_best_only=True)
# Build 1d model
model = build_1d_model((test_data.shape[1:]), KERNELS,
KERNEL_SIZE, CLASSES_COUNT)
# Train model
history = model.fit(x=train_data.get_data(),
y=train_data.get_one_hot_labels(CLASSES_COUNT),
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
callbacks=[early, checkpoint],
validation_data=(val_data.get_data(),
val_data.get_one_hot_labels(CLASSES_COUNT)))
# Load best model
model = load_model(os.path.join(RESULTS_DIR, "GAN_augmentation") + "_model")
# Calculate test set score with GAN augmentation
test_score = model.evaluate(x=test_data.get_data(),
y=test_data.get_one_hot_labels(CLASSES_COUNT))
print("Test set score with GAN offline augmentation: {}".format(test_score[1]))
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from numba import jit
def pairwise_distance(X):
"""
Returns the pairwise distance matrix for a matrix X using negative squared Euclidean distance.
"""
sum_X = (X ** 2).sum(axis=1)
return -np.add(np.add(-2 * np.dot(X, X.T), sum_X).T, sum_X)
def prob_matrix(dist_X, sigmas):
"""
Returns the matrix of conditional probabilities p_j|i.
:param dist_X: the pairwise distance matrix
:param sigmas: a vector of sigma values corresponding to each row of the distance matrix
"""
two_sig_sq = np.asarray(2*np.square(sigmas)).reshape((-1, 1))
x = dist_X / two_sig_sq
expx = np.exp(x)
# Since we do not sum over k = l, we set the diagonals to zero
np.fill_diagonal(expx, 0.)
# Avoid division by 0 errors
expx = expx + 1e-8
# Calculate Normalized Exponential
rowsums = expx.sum(axis=1).reshape((-1, 1))
normalized_exp = expx / rowsums
return normalized_exp
def perplexity(prob_X):
"""
Calculates the perplexity of each row of the probability matrix.
:param prob_X: the conditional probability matrix
:return: a vector of perplexity values
"""
entropy = -np.sum(prob_X * np.log2(prob_X), axis=1)
return 2**entropy
# add jit decorators
pairwise_distance_nb = jit(pairwise_distance, nopython=True, cache=True)
prob_matrix_nb = jit(prob_matrix)
perplexity_nb = jit(perplexity, nopython=True, cache=True)
def binary_search(f, target_perplexity, lower=1e-10, upper=1000, tol=1e-8, max_iter=10000):
"""
Performs a binary search for the value of sigma that corresponds to the target perplexity.
:param f: function to calculate perplexity
:param target_perplexity: the specified perplexity value
:param lower: initial lower bound
:param upper: initial upper bound
:param tol: tolerance to determine if the perplexity value is close enough to the target
:param max_iter: maximum number of iterations of the loop
:return: the optimal value of sigma
"""
for i in range(max_iter):
sigma = (lower + upper) / 2
perp = f(sigma)
if abs(perp - target_perplexity) < tol:
return sigma
if perp > target_perplexity:
upper = sigma
else:
lower = sigma
return sigma
def get_sigmas(dist_X, target_perplexity):
"""
Finds the sigma for each row of the distance matrix based on the target perplexity.
:param dist_X: the pairwise distance matrix
:param target_perplexity: the specified perplexity value
:return: a vector of sigma values corresponding to each row of the distance matrix
"""
nrows = dist_X.shape[0]
sigmas = np.zeros(nrows)
for i in range(nrows):
f = lambda sigma: perplexity_nb(prob_matrix_nb(dist_X[i:i + 1, :], np.asarray(sigma)))
best_sigma = binary_search(f, target_perplexity)
sigmas[i] = best_sigma
return sigmas
def get_pmatrix(X, perplexity):
"""
Calculates the final probability matrix using the pairwise affinities/conditional probabilities.
    :param X: the matrix of data to be converted
:param perplexity: the specified perplexity
:return: the joint probability matrix p_ij
"""
# get the pairwise distances
dist = pairwise_distance_nb(X)
# get the sigmas
sigmas = get_sigmas(dist, perplexity)
# get the matrix of conditional probabilities
prob = prob_matrix_nb(dist, sigmas)
p = (prob + prob.T) / (2*prob.shape[0])
return p
def get_qmatrix(Y):
"""
Calculates the low dimensional affinities joint matrix q_ij.
:param Y: low dimensional matrix representation of high dimensional matrix X
:return: the joint probability matrix q_ij
"""
q = 1 / (1 - pairwise_distance_nb(Y))
np.fill_diagonal(q, 0)
return q / q.sum()
def gradient(P, Q, Y):
"""
Calculates the gradient of the Kullback-Leibler divergence between P and Q
:param P: the joint probability matrix
:param Q: the Student-t based joint probability distribution matrix
:param Y: low dimensional matrix representation of high dimensional matrix X
:return: a 2d array of the gradient values with the same dimensions as Y
"""
pq_diff = np.expand_dims(P - Q, axis=2)
y_diff = np.expand_dims(Y, axis=1) - np.expand_dims(Y, axis=0)
dist = 1 - pairwise_distance_nb(Y)
inv_distance = np.expand_dims(1 / dist, axis=2)
# multiply and sum over each row
grad = 4 * (pq_diff * y_diff * inv_distance).sum(axis=1)
return grad
def TSNE(X, perplexity=40, num_iter=1000, learning_rate=100, momentum_initial=0.5, momentum_final=0.8):
"""
Performs t-SNE on a given matrix X.
:param X: matrix of high dimensional data
:param perplexity: cost function parameter
:param num_iter: number of iterations
:param learning_rate: learning rate
:param momentum_initial: initial momentum for first 250 iterations
:param momentum_final: final momentum for remaining iterations
:return: matrix of 2-dimensional data representing X
"""
# calculate joint probability matrix
joint_p = get_pmatrix(X, perplexity)
# early exaggeration to improve optimization
joint_p = 4 * joint_p
# initialize Y by sampling from Gaussian
Y_t = np.random.RandomState(1).normal(0, 10e-4, [X.shape[0], 2])
# initialize past iteration Y_{t-1}
Y_t1 = Y_t.copy()
# initialize past iteration Y_{t-2}
Y_t2 = Y_t.copy()
for i in range(num_iter):
# compute low dimensional affinities matrix
joint_q = get_qmatrix(Y_t)
# compute gradient
grad = gradient(joint_p, joint_q, Y_t)
# update momentum
if i < 250:
momentum = momentum_initial
else:
momentum = momentum_final
# update current Y
Y_t = Y_t1 - learning_rate * grad + momentum * (Y_t1 - Y_t2)
# update past iterations
Y_t1 = Y_t.copy()
Y_t2 = Y_t1.copy()
# conclude early exaggeration and revert joint probability matrix back
if i == 50:
joint_p = joint_p / 4
return Y_t
# -
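Two quick sanity checks of the helpers defined above, re-implemented here so the cell is self-contained: the perplexity of a uniform distribution over N outcomes is exactly N, and the bisection search recovers the input where a monotone function hits its target.

```python
import numpy as np

# Check 1: uniform distribution over N outcomes has entropy log2(N),
# so perplexity 2**entropy recovers N exactly.
def perplexity(prob_X):
    entropy = -np.sum(prob_X * np.log2(prob_X), axis=1)
    return 2 ** entropy

uniform = np.full((1, 8), 1.0 / 8)
print(perplexity(uniform))  # [8.]

# Check 2: for a monotonically increasing f, the bisection homes in on
# the value where f hits the target: f(s) = s**2, target 9 gives s = 3.
def binary_search(f, target, lower=1e-10, upper=1000, tol=1e-8, max_iter=10000):
    for _ in range(max_iter):
        sigma = (lower + upper) / 2
        if abs(f(sigma) - target) < tol:
            return sigma
        if f(sigma) > target:
            upper = sigma
        else:
            lower = sigma
    return sigma

print(round(binary_search(lambda s: s ** 2, 9.0), 6))  # 3.0
```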
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sn  # used below as sn.FacetGrid
# %matplotlib inline
X = np.loadtxt("mnist2500_X.txt")
labels = np.loadtxt("mnist2500_labels.txt")
# ## Basic TSNE Test (5 Dimensions)
grp1 = np.random.normal(loc=-10, scale=0.25, size=(12, 5))
grp2 = np.random.normal(loc=10, scale=0.25, size=(12, 5))
fake = np.r_[grp1, grp2]
lab = np.concatenate((np.full(12, fill_value = 0), np.full(12, fill_value = 1)), axis=None)
resfake = TSNE(fake, perplexity = 10)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
# ### Changing the perplexity
resfake = TSNE(fake, perplexity = 20)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
resfake = TSNE(fake, perplexity = 15)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
resfake = TSNE(fake, perplexity = 25)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
# ### 3 Dimensions
grp1 = np.random.normal(loc=-10, scale=0.25, size=(12, 3))
grp2 = np.random.normal(loc=10, scale=0.25, size=(12, 3))
fake = np.r_[grp1, grp2]
lab = np.concatenate((np.full(12, fill_value = 0), np.full(12, fill_value = 1)), axis=None)
resfake = TSNE(fake, perplexity = 10)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
grp1 = np.random.normal(loc=-10, scale=0.25, size=(12, 3))
grp2 = np.random.normal(loc=10, scale=0.25, size=(12, 3))
fake = np.r_[grp1, grp2]
lab = np.concatenate((np.full(12, fill_value = 0), np.full(12, fill_value = 1)), axis=None)
resfake = TSNE(fake, perplexity = 20)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
grp1 = np.random.normal(loc=-10, scale=0.25, size=(12, 3))
grp2 = np.random.normal(loc=10, scale=0.25, size=(12, 3))
fake = np.r_[grp1, grp2]
lab = np.concatenate((np.full(12, fill_value = 0), np.full(12, fill_value = 1)), axis=None)
resfake = TSNE(fake, perplexity = 30)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
grp1 = np.random.normal(loc=-10, scale=0.25, size=(12, 3))
grp2 = np.random.normal(loc=10, scale=0.25, size=(12, 3))
fake = np.r_[grp1, grp2]
lab = np.concatenate((np.full(12, fill_value = 0), np.full(12, fill_value = 1)), axis=None)
resfake = TSNE(fake, perplexity = 40)
df2 = pd.DataFrame(data=np.c_[resfake, lab], columns=("Coord1", "Coord2", "label"))
sn.FacetGrid(df2, hue="label", height=6).map(plt.scatter, "Coord1", "Coord2").add_legend()
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
# +
train_features = pd.read_csv('dataset1_train_features.csv')
test_features = pd.read_csv('dataset1_test_features.csv')
# Binary-encode the Type column (1 for 'Class', 0 otherwise)
train_features['Type_encode'] = (train_features['Type'] == 'Class').astype(int)
test_features['Type_encode'] = (test_features['Type'] == 'Class').astype(int)
X_train = train_features.loc[:, 'Ngram1_Entity':'Type_encode']
y_train = train_features['Match']
X_test = test_features.loc[:, 'Ngram1_Entity':'Type_encode']
y_test = test_features['Match']
df_train = train_features.loc[:, 'Ngram1_Entity':'Type_encode']
df_train['Match'] = train_features['Match']
df_test = test_features.loc[:, 'Ngram1_Entity':'Type_encode']
df_test['Match'] = test_features['Match']
X_train = X_train.fillna(value=0)
X_test = X_test.fillna(value=0)
# +
n_estimators_s = [10, 100, 200, 500, 1000]
max_features_s = ['auto', 'sqrt', 'log2', None]
max_depth_s = [2, 3, 4]
class_weight_s = ['balanced', 'balanced_subsample', None]
# Override the full grid with single values for a quick test run;
# comment these out to sweep the full grid defined above
n_estimators_s = [10]
max_features_s = [None]
max_depth_s = [2]
class_weight_s = [None]
data = []
for n_estimator in n_estimators_s:
for max_features in max_features_s:
for max_depth in max_depth_s:
for class_weight in class_weight_s:
print(n_estimator, max_features, max_depth, class_weight)
rfc = RandomForestClassifier(n_jobs=-1,
n_estimators=n_estimator,
max_features=max_features,
max_depth=max_depth,
class_weight=class_weight)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
f1 = f1_score(y_test, y_pred)
data.append((n_estimator, max_features, max_depth, class_weight, f1))
dataset = pd.DataFrame(data, columns=['n_estimator', 'max_features', 'max_depth',
'class_weight', 'f1'])
dataset.to_csv('dataset1_randomforest.csv', index=False)
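The four nested loops above can be flattened with `itertools.product`; shown here on a small stand-in grid (the classifier fit itself is omitted):

```python
from itertools import product

# Enumerate all hyperparameter combinations in one flat loop.
n_estimators_s = [10, 100]
max_depth_s = [2, 3]
combos = list(product(n_estimators_s, max_depth_s))
print(combos)  # [(10, 2), (10, 3), (100, 2), (100, 3)]
```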
# +
rfc = RandomForestClassifier(n_jobs=-1)
param_grid = {
'n_estimators': [10, 100, 200, 500, 1000],
'max_features': ['auto', 'sqrt', 'log2', None],
'max_depth': [2, 3, 4],
'class_weight': ['balanced', 'balanced_subsample', None]
}
from sklearn.model_selection import GridSearchCV
CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv=5)
CV_rfc.fit(X_train, y_train)
print(CV_rfc.best_params_)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# !ls
# # Loading the data
# !ls data
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
gender_submission = pd.read_csv('data/gender_submission.csv')
train.head()
test.head()
# Example of a submission file
gender_submission.head()
# # Data preparation
y = train.Survived
X_raw = train.copy()
X_raw.drop('Survived', axis=1, inplace=True)
X_raw.columns == test.columns
X_raw.info()
X_raw.describe()
from sklearn.impute import SimpleImputer  # sklearn.preprocessing.Imputer was removed in sklearn 0.22
from sklearn.preprocessing import StandardScaler
# +
imputer = None
def extract_features(data):
global imputer, scaler
X = data.copy()
X["isMale"] = X.Sex.replace({"male": 1, "female":0})
X.drop(["Sex", "Cabin", "Ticket", "Name", "PassengerId"], axis=1, inplace=True)
X = pd.get_dummies(X, columns=['Pclass', 'Embarked'])
columns = X.columns
    if imputer is None:
        imputer = SimpleImputer(missing_values=np.nan, strategy='mean', copy=True)
        imputer.fit(X)
    X = pd.DataFrame(imputer.transform(X), columns=columns)
return X
# -
X = extract_features(X_raw)
X.head()
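The two encoding steps inside `extract_features` can be illustrated on a tiny frame: a binary replace for `Sex` and one-hot columns for `Pclass`.

```python
import pandas as pd

# Mirror of the encoding used in extract_features above, on toy data.
df = pd.DataFrame({"Sex": ["male", "female"], "Pclass": [3, 1]})
df["isMale"] = df.Sex.replace({"male": 1, "female": 0})
df = pd.get_dummies(df.drop(columns="Sex"), columns=["Pclass"])
print(sorted(df.columns))  # ['Pclass_1', 'Pclass_3', 'isMale']
```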
# # Training
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
grid = {'max_depth': np.arange(5, 200, 5)}
gridsearch = GridSearchCV(DecisionTreeClassifier(), grid, scoring='accuracy', cv=5)
# %%time
gridsearch.fit(X, y)
gridsearch.best_params_
clf = DecisionTreeClassifier(max_depth=gridsearch.best_params_['max_depth'])
clf.fit(X, y)
# # Classifier tree graph
# http://www.webgraphviz.com
# +
from sklearn.tree import export_graphviz
print(export_graphviz(clf, out_file=None, filled=True, feature_names=list(X.columns)))
# -
# 
# # Predictions
X_test = extract_features(test)
X_test.head()
predictions = clf.predict(X_test)
results = pd.DataFrame(list(zip(test.PassengerId, predictions)), columns=['PassengerId', 'Survived'])
results.head()
results.to_csv('submission.csv', index=False)
# 
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
# <div>
# <strong>Inputs for circuit</strong>
# <p>
# To use this circuit, you simply specify people and their skills as a dictionary below called <i>person_to_skill</i>. The skills needed in the groups are specified as the list <i>skill_in_group</i>.
# </p>
# <p>
# There are also certain other parameters you can tweak. For one, you can control the number of iterations the Grover search performs. If your results do not yield any amplified groups, you might consider raising or lowering <i>iterations</i>. Note that a too-high number of iterations can also cause the algorithm to overshoot.
# </p>
# <p>
# Also note that currently this problem is not run on a quantum computer but is simulated, which means the speed-up advantages of quantum computers will not be seen here. Results for large inputs may take very long, so try not to add too many people or too many skills, as this causes exponential slowdown (unless, of course, you have a very fast computer or don't mind waiting).
# </p>
# <p>
# If the number of inputs gets too high, it might also be an idea to turn <i>print_circuit</i> off as well, as this can take a long time too.
# </p>
# </div>
# +
print_circuit = False
iterations = 2
person_to_skill = {
#'A': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'],
#'B': ['d', 'e', 'f', 'g', 'h', 'i', 'j', 'k'],
'C': ['g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'],
'D': ['k', 'l', 'm', 'n', 'o', 'p', 'q', 'r'],
'E': ['n', 'o', 'p', 'q', 'r', 's', 't', 'u'],
'F': ['a', 'b', 'c', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'],
#'G': ['a', 'e', 'i', 'o', 'u', 'y'],
'H': ['b', 'c', 'e', 'g', 'k', 'm', 'q', 's', 'x'],
}
skill_in_group = ['l', 'q', 's']
# -
person_order = list(person_to_skill.keys())  # list so it can be indexed in comb_to_group
def invert(qc, qubits):
for qubit in qubits:
qc.x(qubit)
def collect(qc, creg, output, person_to_skill, person_to_qubit, skill_in_group, exclude):
for position, skill in enumerate(skill_in_group):
qubit_with_skill = [ qubit for person, qubit in person_to_qubit.items() if skill in person_to_skill[person] and person != exclude ]
if qubit_with_skill:
qc.mct(qubit_with_skill, creg[position])
else:
qc.x(creg[position])
# Collect on output
collect_qubits = [ qubit for qubit in creg ]
if exclude:
exclude_qubit = person_to_qubit[exclude]
qc.x(exclude_qubit)
collect_qubits += [ exclude_qubit ]
qc.mct(collect_qubits, output)
if exclude:
qc.x(exclude_qubit)
# Uncompute on constraints
for position, skill in enumerate(skill_in_group):
qubit_with_skill = [ qubit for person, qubit in person_to_qubit.items() if skill in person_to_skill[person] and person != exclude ]
if qubit_with_skill:
qc.mct(qubit_with_skill, creg[position])
else:
qc.x(creg[position])
# Create the selective function - This will tie together all the skills and such
def oracle(qc, registers, person_to_skill, person_order, skill_in_group):
# Extract registers
preg = registers[0]
creg = registers[1]
ereg = registers[2]
oreg = registers[3]
# We surround all pregs with reverses
invert(qc, preg)
qc.barrier()
# Create initial validation
person_to_qubit = { person: qubit for qubit, person in zip(preg, person_order)}
collect(qc, creg, oreg[0], person_to_skill, person_to_qubit, skill_in_group, None)
qc.barrier()
# Create excess validations
for position, person in enumerate(person_order):
collect(qc, creg, ereg[position], person_to_skill, person_to_qubit, skill_in_group, person)
qc.barrier()
# Collect excess to output
invert(qc, ereg)
qc.x(oreg[0])
qc.mct(ereg, oreg[0])
invert(qc, ereg)
qc.barrier()
# Uncompute excess validations
for position, person in enumerate(person_order):
collect(qc, creg, ereg[position], person_to_skill, person_to_qubit, skill_in_group, person)
qc.barrier()
# Uncompute initial inversion
invert(qc, preg)
def diffuser(num_person):
qc = QuantumCircuit(num_person)
# Apply hadamards and nots
for qubit in range(num_person):
qc.h(qubit)
qc.x(qubit)
# Multi controlled Z
qc.h(num_person - 1)
qc.mct(list(range(num_person - 1)), num_person - 1)
qc.h(num_person - 1)
#Apply nots and hadamards
for qubit in range(num_person):
qc.x(qubit)
qc.h(qubit)
U_s = qc.to_gate()
U_s.name = "U$_s$"
return U_s
def extract_result(counts):
average = sum(counts.values()) / len(counts.values())
# Extract everyone over the average
wins = [ comb for comb in counts.keys() if counts[comb] > average ]
return wins
def comb_to_group(comb, person_order):
return ''.join([ person_order[position] for position, letter in enumerate(reversed(str(comb))) if int(letter) ])
# +
# Setup Circuit
p_len = len(person_order)
c_len = len(skill_in_group)
e_len = len(person_order)
preg = QuantumRegister(p_len, 'p') # For storing person - will be superpositioned
creg = QuantumRegister(c_len, 'c') # For combination storage
ereg = QuantumRegister(e_len, 'e') # For computing excess people
oreg = QuantumRegister(1, 'out')
#mreg = ClassicalRegister(num_person + num_excess + 1, 'm')
mreg = ClassicalRegister(p_len, 'm')
qc = QuantumCircuit(preg, creg, ereg, oreg, mreg)
# +
#Initialize Super positions
for qubit in preg:
qc.h(qubit)
# Set our combination qubits
for qubit in creg:
qc.x(qubit)
# Initialize output state in |->
qc.initialize([1, -1]/np.sqrt(2), oreg[0])
for _ in range(iterations):
oracle(qc, [preg, creg, ereg, oreg], person_to_skill, person_order, skill_in_group)
qc.append(diffuser(p_len), preg)
qc.barrier()
#qc.measure([qubit for qubit in preg] + [ qubit for qubit in ereg] + [ oreg[0] ], [cbit for cbit in mreg])
qc.measure([qubit for qubit in preg], [cbit for cbit in mreg])
if print_circuit:
qc.draw('mpl')
# +
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Execute and get counts
result = execute(qc, simulator, shots=256).result()
counts = result.get_counts(qc)
plot_histogram(counts, title='Group Shots')
# -
comb_res = extract_result(counts)
group_res = [ comb_to_group(comb, list(person_order)) for comb in comb_res ]
print(group_res)
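# For a problem this small, the Grover result can also be cross-checked classically. The sketch below (with a hypothetical helper `covering_groups`, not part of the notebook) restates the active person/skill data and enumerates every subset of people, keeping the groups that cover all required skills with no redundant member, which is my reading of the condition the oracle's excess check encodes.

```python
from itertools import combinations

person_to_skill = {
    'C': ['g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'],
    'D': ['k', 'l', 'm', 'n', 'o', 'p', 'q', 'r'],
    'E': ['n', 'o', 'p', 'q', 'r', 's', 't', 'u'],
    'F': ['a', 'b', 'c', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'],
    'H': ['b', 'c', 'e', 'g', 'k', 'm', 'q', 's', 'x'],
}
skill_in_group = ['l', 'q', 's']

def covering_groups(person_to_skill, skills):
    """All groups whose combined skills cover every required skill and
    where no member is redundant (dropping anyone breaks coverage)."""
    people = sorted(person_to_skill)
    covers = lambda group: all(
        any(s in person_to_skill[p] for p in group) for s in skills)
    result = []
    for r in range(1, len(people) + 1):
        for group in combinations(people, r):
            if covers(group) and not any(
                    covers(tuple(p for p in group if p != q)) for q in group):
                result.append(''.join(group))
    return result

print(covering_groups(person_to_skill, skill_in_group))
```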
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro - Web Scraping
# In the following notebooks, we will write some code in order to obtain the latest news from the following newspapers:
#
# * El País
# * The Guardian
# * Daily Mail
# * The Mirror
#
# Since each newspaper has its own page structure, we'll have to cover them one by one.
# The objective of this step is to obtain, for each newspaper:
#
# * A dataframe with the news content (input)
# * A dataframe with the article title, link and predicted category
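# As a minimal, dependency-free sketch of the idea (the actual notebooks will need site-specific logic; the HTML below is a stand-in, not any newspaper's real markup), headline extraction boils down to walking the page and collecting link text and hrefs:

```python
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Collect (title, link) pairs from every <a> tag that has an href."""
    def __init__(self):
        super().__init__()
        self.headlines = []   # list of (title, link) tuples
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self._href = dict(attrs).get('href')
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == 'a' and self._href:
            title = ''.join(self._text).strip()
            if title:
                self.headlines.append((title, self._href))
            self._href = None

html = ('<ul><li><a href="/news/1">First story</a></li>'
        '<li><a href="/news/2">Second story</a></li></ul>')
parser = HeadlineParser()
parser.feed(html)
print(parser.headlines)  # [('First story', '/news/1'), ('Second story', '/news/2')]
```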
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _uuid="be4bd8642b9a4c19bccfa7b7e64e9581d64708ad"
# # Super Heroes Dataset
#
# The goal of the task is to **predict whether a superhero is Human or not** based on their characteristics and super powers.
# + [markdown] _uuid="b2cce87bc25fe8fdd468aef10bb04bf73cafc3f2"
# ## Outline
#
# - [Import Libraries and Data]
# - [Feature Engineering]
# - [Data Overview]
# - [Repeated Heroes]
# - [Handling Null Values]
# - [Categorical Variables and One-hot encoding]
# - [All data together]
# - [Modeling]
# - [Plan]
# - [Training Classifiers]
# - [Feature Importance]
# - [Dimensionality Reduction]
# + [markdown] _uuid="5af9727789caf6c8eafe17a89f333d02dd48e401"
# ----------------
#
# ## Import Libraries and Data
# + _uuid="aaa63d45975c7e8a22083c71dece107a53bad3ce"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import pyplot
from matplotlib import cm
import seaborn as sns
import tqdm
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
sns.set(rc={"figure.figsize": (10, 12)})
np.random.seed(sum(map(ord, "palettes")))
# + [markdown] _uuid="cb43f8e50cbb7707bb9633316868755f2db46d23"
# Let's call the datasets in the following way throughout the Notebook:
#
# - Heroes Information : **`metadata`**
# - Heroes Superpowers : **`powers`**
# + _uuid="c4a24a3dcc519115773883490bdd7dc109bb0b80"
metadata = pd.read_csv("../data/heroes_information.csv", index_col=0)
powers = pd.read_csv("../data/super_hero_powers.csv")
# + [markdown] _uuid="573f18a6000ee1331f4944df83f2b877f28d0ec4"
# -------------------
#
# ## Feature Engineering
#
# ### Data Overview
# + _uuid="6dd1600dac09cb0c3ef14966b69be8a75d57ff11"
print("Heroes information data shape: ", metadata.shape)
print("Hero super powers data shape: ", powers.shape)
# + _uuid="af24f1783c95e1bc5467aeb4c775e6ec034f8362"
metadata.head()
# + _uuid="573fefc1e14bf4f393b1eb090df3677eca7794c3"
powers.head()
# + [markdown] _uuid="6763b30ff957fbfedaee44b27d32c92cc4d9fe2c"
# ### Repeated Heroes
# + [markdown] _uuid="26c166beb729218e6e65bdf4160ab3e927487485"
# Since cleaning the repeated heroes is quite involved, the function in the cell below does all the work in a single step. If you would like to understand the reasoning behind each action, keep scrolling; otherwise, jump to the [next section](#Handling-Null-values).
# + _uuid="97a910233c3f63f88e4a840bde898672e8f27017"
def clean_repeated_heroes(metadata, powers):
print("Initial shape of metadata and powers: ")
print("Powers:", powers.shape)
print("Metadata", metadata.shape)
print("\nStart cleaning...")
powers.drop_duplicates(inplace=True)
metadata.drop_duplicates(inplace=True)
# Handle Goliath
goliath_idxs_to_drop = [100, 289, 290] # not dropping Goliath IV, it will be used to join powers
metadata.drop(goliath_idxs_to_drop, inplace=True)
metadata.loc[metadata.name == "Goliath IV", "Race"] = "Human"
    # Drop entries that appear in only one of the two dataframes (in metadata but not powers, and vice versa)
metadata = metadata[metadata.name.isin(powers.hero_names)]
powers = powers[powers.hero_names.isin(metadata.name)]
# Spider-Man
metadata.loc[metadata.name.str.contains("Spider-Man")] = metadata[metadata.name.str.contains("Spider-Man")].mode().values[0]
metadata.drop(623, inplace=True)
metadata.drop(624, inplace=True)
# Nova
metadata.drop(497, inplace=True)
# Angel
metadata.loc[metadata.name == "Angel", "Race"] = "Vampire"
metadata.drop(23, inplace=True)
# Blizzard
metadata.loc[metadata.name == "Blizzard"] = metadata.loc[metadata.name == "Blizzard II"].values
metadata.at[115, 'name'] = "Blizzard"
metadata.at[116, 'Race'] = "Human"
metadata.at[115, 'Race'] = "Human"
metadata.drop(117, inplace=True)
# Black Canary
metadata.drop(97, inplace=True)
# Captain Marvel
metadata.at[156, 'Race'] = "Human"
metadata.drop(155, inplace=True)
    # Blue Beetle
metadata.at[122, 'Race'] = "Human"
metadata.at[124, 'Race'] = "Human"
metadata.at[122, 'Height'] = 183.0
metadata.at[125, 'Height'] = 183.0
metadata.at[122, 'Weight'] = 86.0
metadata.at[125, 'Weight'] = 86.0
metadata.drop(123, inplace=True)
# Vindicator
metadata.drop(696, inplace=True)
# Atlas
metadata.drop(48, inplace=True)
# Speedy
metadata.drop(617, inplace=True)
# Firestorm
metadata.drop(260, inplace=True)
# Atom
metadata.drop(50, inplace=True)
metadata.at[49, 'Race'] = "Human"
metadata.at[53, 'Race'] = "Human"
metadata.at[54, 'Race'] = "Human"
metadata.at[49, 'Race'] = "Human"
metadata.at[54, 'Height'] = 183.0
metadata.at[49, 'Height'] = 183.0
metadata.at[53, "Weight"] = 72.0
# Batman
metadata.drop(69, inplace=True)
# Toxin
metadata.drop(673, inplace=True)
# Namor
metadata.drop(481, inplace=True)
# Batgirl
metadata.drop(62, inplace=True)
print("Final shape of metadata and powers: ")
print("Powers:", powers.shape)
print("Metadata", metadata.shape)
print("\nCleaning done")
return metadata, powers
# + _uuid="143c3df3b26f74d8783555e482dcf960f1cfde12"
# if you run this twice it will fail, because the hard-coded row indexes will no longer match;
# reload the data and run it again
metadata, powers = clean_repeated_heroes(metadata, powers)
# + [markdown] _uuid="051cfc32a052959497b5c2209cc120ee536252d2"
# -------------------------
# Let's try to quickly get a glance of this by
#
# - Getting repeated values from `metadata`, or `powers`, in case of existing.
# - Try to make sense of the difference between `metadata` and `powers` number of rows by summing the entries of the repeated rows from `metadata`.
# + _uuid="2633906e2760c4b486f250554fcf0dce240f17f1"
powers.drop_duplicates(inplace=True)
metadata.drop_duplicates(inplace=True)
# + _uuid="e73aa40e5fce995e4d9543b79bb238d8de555c94"
print("Number of rows with more than 1 entry per hero name in metadata ", (metadata.name.value_counts() > 1).sum() )
print("Number of rows with more than 1 entry per hero name in powers ", (powers.hero_names.value_counts() > 1).sum() )
# + _uuid="4c12b3e451ebbb7349b3f4dbfed76b2547f6d525"
mask = metadata.name.value_counts() > 1
metadata.name.value_counts()[mask].sum() - mask.sum() # get excessive number of rows from repeated names
# + _uuid="df8c73d9348cb0e3ca85994f5d53ca1ee0442f19"
# Do we have the same shape in the two datasets now?
metadata.shape[0] - powers.shape[0]
if metadata.shape[0] - powers.shape[0] == 0:
print("Yes.")
# + [markdown] _uuid="f5495f2d865f97ff154bf61f5babca75928a416e"
# Checking all repeated heroes in powers dataset
# + _uuid="16ecb7d266a0551645d2921b005cf86577a91088"
repeated_heroes = mask.index[mask]
repeated_heroes[repeated_heroes.isin(powers.hero_names)]
# + _uuid="54c6c2fdb0a4aad8241f6813fea536396e608851"
powers[powers.hero_names.str.contains("Goliath")]
# + _uuid="8d924eef612e510f015aee700d15537925eb9d98"
metadata[metadata.name.str.contains("Goliath")]
# + [markdown] _uuid="bbb0597fa7d1451c45a0bf8b2b69490160b236bd"
# The data contains many heroes that should be merged because they are either the same hero or one of its evolutions, so we will unify them all into "Goliath".
# + _uuid="a5955a3459e20a4ab3803f6615ede44dd65494b1"
metadata[metadata.name.str.contains("Goliath")]
# + [markdown] _uuid="4e3a4218c76e3fc795deb29a0d28762aef091677"
# Looks like there are some superheroes that need to be removed from both tables as we won't have their entire information.
# + _uuid="1d766ef83f1d2b585ab76afc37db9b4ddca18fcb"
metadata = metadata[metadata.name.isin(powers.hero_names)]
powers = powers[powers.hero_names.isin(metadata.name)]
# + _uuid="607cc6344bd79ac273fc53c73de0da17af133dba"
metadata.shape
# + _uuid="d3a61862d9a4e0190b1958269a51adb2d8383c3d"
powers.shape
# + [markdown] _uuid="fa3f04c50a9f4ec4d3e3cd76b194195fc5aa3fc5"
# Going in the right direction.
# + [markdown] _uuid="0b8c801f6d4facfbc6bce861b489422417e87cd5"
# #### Merge repeated heroes into one
# + _uuid="b4e5f6770bf1b3d7e80b0e8d59d375eee07b5d46"
metadata.shape, powers.shape
# + [markdown] _uuid="d8c31f188f2a011ea6aff72e77fde0461edb6e63"
# *Voila!*, 1-1 correspondence between both dataframes.
# + [markdown] _uuid="ee82bf21fd02eb7382df2a41072c007c7cfb062a"
# ### Handling Null values
# + [markdown] _uuid="f91b003a0fe2c909ea697309824c13aff45aaa20"
# `metadata` has null values, represented as either '-' for all columns except for height and weight, which is `-99.0`.
#
# Let's take a look at how many null values there are per each column.
# + _uuid="913831e7cbfb2ac8a24ca54493e3adb95cac8a3c"
metadata = metadata.replace('-', np.nan)
metadata = metadata.replace(-99, np.nan)
metadata.isnull().sum()
# + [markdown] _uuid="b0a834374cd72a699c9fa948da4f55f7534bc142"
# Race, which is our target in this exercise, is null in 232 of the 634 rows. Since those rows carry no label, let's drop them and then see how many null values remain in the dataframe.
# + _uuid="34cb383394ec2c50020dae2b372edd60f0791248"
metadata.dropna(subset=['Race'], inplace=True)
# + _uuid="57c5899aa0835a3dc5ce7b968df5689ec70cf617"
metadata.isnull().sum()
# + [markdown] _uuid="399fa71cdc46c70d130ae3e349828dd8fd0707bd"
# The count decreased drastically, but:
#
# - Eye color and hair color are still missing for a fair share of rows.
# - Skin color is null far too often to be a useful feature, so let's simply remove it.
# - Height and weight have quite a few nulls, but these can be filled by techniques such as the median/mean of the same race and gender.
#
# To decide, we will:
#
# - Look at the distribution of each variable to see whether any quick wins apply to the features with many null values.
# + _uuid="367a64d4790d74f92a618c4be7b82c07b86ab5a6"
# drop Skin color because it has too many null values
metadata.drop("Skin color", axis=1, inplace=True)
# + [markdown] _uuid="6172542198b57181af4a2718cee34ab4c1b502ce"
# #### Handle Weight and Height null values
#
# The idea is to set, for those rows which height and weight are null, the mean of the same gender and race (human / no-human). That way we will provide a good value yet still having more training data.
#
# But first, **convert Race into label column with -> Human / No-Human**. The steps followed are:
#
# - Everything that is Human, will be considered human.
# - Those Races that are Human-\* will also be considered human.
# - All the rest, No-Human.
# + _uuid="dd66a263777cfe755b06adf9aebe50916b381481"
# transform Human- race into Human (as they are not mutations)
metadata.loc[:, "Race"] = metadata.apply(lambda x: "Human" if(x.Race.startswith("Human-")) else x.Race, axis=1)
# add label for modeling
metadata['label'] = metadata.apply(lambda x: "No-Human" if(x.Race != "Human") else x.Race, axis=1)
# + [markdown] _uuid="8858293a70796f7e9c88bb60ec392746b7821e86"
# As seen, most height values lie in the same range; for weight the deviation is a bit higher, but using the mean still seems valid.
# + _uuid="c70a016928c45ea11598be19462a927980212a1f"
# height and weight can be replaced by the mean of the same gender and race
w_means = metadata.groupby(["label", "Gender"])["Weight"].mean().unstack()
h_means = metadata.groupby(["label", "Gender"])["Height"].mean().unstack()
w_fh = w_means.loc["Human","Female"]
w_mh = w_means.loc["Human","Male"]
w_fn = w_means.loc["No-Human","Female"]
w_mn = w_means.loc["No-Human","Male"]
h_fh = h_means.loc["Human","Female"]
h_mh = h_means.loc["Human","Male"]
h_fn = h_means.loc["No-Human","Female"]
h_mn = h_means.loc["No-Human","Male"]
# Fill null values with means
metadata.loc[(metadata.label == "Human") & (metadata.Gender == "Female") & (metadata.Weight.isnull()), "Weight"] = w_fh
metadata.loc[(metadata.label == "Human") & (metadata.Gender == "Male") & (metadata.Weight.isnull()), "Weight"] = w_mh
metadata.loc[(metadata.label == "No-Human") & (metadata.Gender == "Female") & (metadata.Weight.isnull()), "Weight"] = w_fn
metadata.loc[(metadata.label == "No-Human") & (metadata.Gender == "Male") & (metadata.Weight.isnull()), "Weight"] = w_mn
metadata.loc[(metadata.label == "Human") & (metadata.Gender == "Female") & (metadata.Height.isnull()), "Height"] = h_fh
metadata.loc[(metadata.label == "Human") & (metadata.Gender == "Male") & (metadata.Height.isnull()), "Height"] = h_mh
metadata.loc[(metadata.label == "No-Human") & (metadata.Gender == "Female") & (metadata.Height.isnull()), "Height"] = h_fn
metadata.loc[(metadata.label == "No-Human") & (metadata.Gender == "Male") & (metadata.Height.isnull()), "Height"] = h_mn
# plot to see clearer differences
fig, (ax1,ax2) = pyplot.subplots(1,2, figsize=(12,6))
ax1.set_title("Weight")
ax2.set_title("Height")
w_means.plot(kind="bar", ax=ax1)
h_means.plot(kind="bar", ax=ax2)
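# Aside: the eight explicit fills above can be collapsed into a single grouped transform. A small self-contained sketch of the same idea (toy data, not the heroes dataframe):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'label':  ['Human', 'Human', 'No-Human', 'No-Human', 'Human'],
    'Gender': ['Male', 'Male', 'Female', 'Female', 'Male'],
    'Weight': [80.0, np.nan, 55.0, np.nan, 90.0],
})

# fill each null with the mean of its (label, Gender) group
df['Weight'] = (df.groupby(['label', 'Gender'])['Weight']
                  .transform(lambda s: s.fillna(s.mean())))
print(df['Weight'].tolist())  # [80.0, 85.0, 55.0, 55.0, 90.0]
```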
# + [markdown] _uuid="34229f3ba12c30f2321a2a3ac6811791ca791baf"
# Check if we made a difference in terms of null values.
# + _uuid="cc5698f834c1eca85ca2d3bed45c4daffc130db1"
metadata.isnull().sum()
# + [markdown] _uuid="44708452ea395b204cbb4b128389d1173f179c61"
# As we can see, we went from 72 and 89 null values for height and weight, respectively, to 5 and 6. Most likely, those remaining nulls are due to missing Gender.
#
# We can probably drop the rows where Gender, Height, and Weight are null, as they are likely the same rows. Let's check quickly:
# + _uuid="63f5190d9ca2687ac314c78873af3497a535e47a"
metadata[metadata.Gender.isnull()]
# + [markdown] _uuid="a10ea268fb00418120e41c391b094f6aafb931b9"
# Indeed, just as expected. Let's delete those rows.
# + _uuid="349888b4dde3ca504b63f2ed0e76d87d65ec57e5"
metadata.drop(metadata[metadata.Gender.isnull()].index, axis=0, inplace=True)
# + [markdown] _uuid="fd7b3531b7319d2c00457d87219f6048ef5b09f3"
# Let's check how many null values remain:
# + _uuid="554a82ab18b6f03a0f28d953fe82e6f44dc591f1"
metadata.isnull().sum()
# + [markdown] _uuid="2dcf8dd549726c260406fcd29223830d023bc035"
# Let's get a final check of how the values of height and weight correlate to each other, to see if there are still some weird things happening:
# + _uuid="00b8547e7f14c7ac2450fda64e5744ced1245ae9"
metadata[(metadata.Height > 400)]
# + [markdown] _uuid="2e8a078371640ec6d52b30814d2f442f2319cbc6"
# Some non-humans are really tall but do not weigh much. Since they are all non-humans, we can leave them in; they won't confuse the classifier.
#
# What might confuse the classifier, though, is a weight above 450 kg, as humans and non-humans appear there in equal numbers. Let's take those rows out of the data.
# + _uuid="46f7134c89ddb7eef3352756194dfcf373c5a9e7"
metadata = metadata[(metadata.Weight < 450)]
# + [markdown] _uuid="ddc5c5ea2d5c36d0084731995a0a75205a2bc78e"
# #### Handling Eye and Hair color
#
# The idea is similar to Height and Weight: fill the missing values by sampling from the most common eye and hair colors.
# + _uuid="0f3ab67ac24eb8fc0c1dea7f1b2f870e55987f20"
aux_eyes_colors = ["blue", "brown", "green"]
aux_hair_colors = ["Black", "Brown", "Blond", "Red", "No Hair"]
len_aux_eyes_colors = len(aux_eyes_colors)
len_aux_hair_colors = len(aux_hair_colors)
for ix in metadata[metadata["Eye color"].isnull()].index:
metadata.at[ix, "Eye color"] = aux_eyes_colors[np.random.choice(len_aux_eyes_colors)]
for ix in metadata[metadata["Hair color"].isnull()].index:
metadata.at[ix, "Hair color"] = aux_hair_colors[np.random.choice(len_aux_hair_colors)]
# + [markdown] _uuid="b6da4b8c69e29a9da64864139ea7dc7371037782"
# The barplots look practically the same before and after this imputation.
#
# Let's now finally see what is left to handle in terms of null values:
# + _uuid="8b680f51c86b022d04e8efff9274de7829245679"
metadata.isnull().sum()
# + [markdown] _uuid="3496a75340aac4153513c2d9559c485deaa5392f"
# Let's just drop the remaining rows, as their number is insignificant.
# + _uuid="0982f11eca59510d484b4f21a13360d3d09724dd"
metadata.drop(metadata[metadata.Publisher.isnull()].index, axis=0, inplace=True)
metadata.drop(metadata[metadata.Alignment.isnull()].index, axis=0, inplace=True)
# + _uuid="476f8c2122b9964f96869586fb2b2d476ffdcbeb"
metadata.isnull().sum()
# + [markdown] _uuid="6c10898bbb43b77f04cacd8800fcdd085e496597"
# **Finally**, let's merge the two datasets now that they are clean.
# + [markdown] _uuid="7dda9c4de68ba805ee3ba9ab52150fbb832dd9b3"
# ### Categorical Variables and One-Hot Encoding
#
# Low-cardinality categorical features will be one-hot encoded:
#
# - Gender
# - Alignment
#
# Higher-cardinality features will simply be converted to their corresponding integer code (0, 1, 2, ...):
#
# - Eye color
# - Hair color
# - Publisher
#
# + _uuid="dfc83aae49fa970c36aacf63e232dd95f19f44e8"
metadata = metadata.drop(['Race'], axis=1)
onehot_cols = ["Gender", "Alignment"]                  # low cardinality: one-hot encode
code_cols = ["Eye color", "Hair color", "Publisher"]   # higher cardinality: integer codes
for col in onehot_cols:
    one_hot = pd.get_dummies(metadata[col])
    metadata.drop(col, axis=1, inplace=True)
    metadata = metadata.join(one_hot)
for col in code_cols:
    metadata[col] = metadata[col].astype('category').cat.codes
# transform label into 0 (Human) or 1 (No-Human)
metadata['label'] = metadata['label'].astype('category').cat.codes
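# A toy illustration of the two encodings used above (self-contained sketch, not the heroes data):

```python
import pandas as pd

df = pd.DataFrame({'Gender': ['Male', 'Female', 'Male'],
                   'Publisher': ['Marvel', 'DC', 'Image']})

# one-hot: one binary column per category
onehot = pd.get_dummies(df['Gender'])
print(onehot.columns.tolist())          # ['Female', 'Male']

# integer codes: a single column of category indices (alphabetical order)
codes = df['Publisher'].astype('category').cat.codes
print(codes.tolist())                   # [2, 0, 1]
```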
# + _uuid="e9fcd76328ba3da99de9ed284258d3a73bd6eec4"
# transform powers data into 0,1 binary features
cols = powers.select_dtypes(['bool']).columns
for col in cols:
powers[col] = powers[col].astype(int)
# + _uuid="759d5144af1004c50b2361cddaeeed58fd54e4cb"
metadata.head()
# + _uuid="ed6115ccc25d88e9ba4aeafe0336dbe60ce9f68f"
powers.head()
# + [markdown] _uuid="1f863eeacf85c81cf1a0b6bee3cfa058e8e357c5"
# ### All data together
# + _uuid="c30e4e5acc90c819ed29e04acb931bd78a5ad542"
heroes = pd.merge(metadata, powers, how='inner', left_on = 'name', right_on = 'hero_names')
heroes.drop(["hero_names","name"], axis=1, inplace=True)
powers_cols = powers.columns.drop("hero_names")
metadata_cols = metadata.columns.drop("name")
# + _uuid="1252c8b4ef2fc9cc64c29a97d1f8433631d277d9"
heroes.shape
# + _uuid="83823aa4c63c3cc5af2faa0deda2152036c6825a"
heroes.head()
# -
# And finally here, we achieved one dataset that represents both of our datasets into one.
#
# Let's save it in case we needed it.
# Saving the data into a csv file
heroes.to_csv("../data/heroes-data.csv")
# + [markdown] _uuid="e8fb838fda8d3085258e07ea9f3eabbc803cacc9"
# ## Modeling
#
# ### Plan
#
# #### Target
#
# The idea is to use several classifiers to predict whether the hero is Human or No-Human.
#
# #### Input Data
#
# As **input data**, I will use the processed and cleaned `heroes` dataframe, which contains 388 superheroes and 179 different characteristics of each. The target is balanced between rows.
#
#
# #### Models
#
# As a baseline, we use **Logistic Regression** as it is easy to implement, understand, and should provide an already decent predictive power.
#
# We will continue with using **SVM**, **Random Forest Classifier**, and **XGBoost**. The reason behind using those is because they are known and proven to be the most powerful conventional machine learning algorithms.
#
# #### Feature Reduction
#
# Our dataset has few examples relative to its number of features, so it is worth reducing its complexity to see whether prediction improves. The main assumption behind dimensionality reduction is that high-dimensional datasets carry a lot of redundant information that can be compressed into a chosen number of principal components explaining most of the variance, ideally leaving the model with more relevant information. To get a glimpse of whether dimensionality reduction will work here, we could use the t-SNE algorithm to project the data into two dimensions, colored by the label (Human / No-Human), and see whether the classes already look separable.
#
# To reduce the dimensionality itself, we will use classic PCA with different numbers of components and study the impact on our dataset.
# + _uuid="3a8458dda9204cd7c641e07d8334dd604a798abe"
from sklearn.preprocessing import MinMaxScaler
X = heroes.drop(["label"], axis=1).values
y = heroes["label"].values
scaler = MinMaxScaler()
# transform data
X = scaler.fit_transform(X)
# X = preprocessing.scale(X)
# + [markdown] _uuid="d7778ef070c431dc7a86ba32a6b4e90ef0682424"
# ### Training classifiers
# + _uuid="43edf10da64c22b80c533f7094c33a45835b6aee"
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold, GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
# + _uuid="27811e27b57731e0fea58af9e1a2474a31092f12"
# Initialize a stratified split of our dataset for the validation process
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# + _uuid="7ff0a93db2524e7cdb9482b2bdb05bb3a47ae89b"
models = ["LogReg", "SVM", "RF"]
for model in models:
print( "Training ", model)
if model == "LogReg":
clf = LogisticRegression(random_state=0, solver='liblinear')
elif model == "SVM":
clf = svm.SVC(kernel='linear',C=1)
elif model == 'RF':
clf = RandomForestClassifier(n_estimators=50, max_depth=8, random_state=1)
    results = cross_val_score(clf, X, y, cv=5)
print( model, " CV accuracy score: {:.2f}%".format(results.mean()*100) )
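# Note that the `skf` splitter initialized earlier is never actually passed to `cross_val_score` above (`cv=5` builds its own unshuffled stratified split for classifiers). A self-contained sketch of wiring it in, on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.RandomState(0)
X_demo = rng.randn(100, 5)
y_demo = (X_demo[:, 0] + 0.1 * rng.randn(100) > 0).astype(int)  # easy synthetic labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(solver='liblinear'),
                         X_demo, y_demo, cv=skf)  # shuffled, seeded stratified folds
print(len(scores))  # 5
```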
# + [markdown] _uuid="bdb2249fd8c16475675a2cb458cbd52461388469"
# ### Dimensionality Reduction
# + _uuid="e98b968e478d558984d20830e5fcec38b9b83f7a"
from sklearn.decomposition import PCA
# -
heroes.columns
# + _uuid="0a3d4f175a706c37530ea794219f274ce54db6c4"
models = ["LogReg", "SVM", "RF"]
reductions = [30, 40]
for red in reductions:
print( "Applying PCA on ", red, " components")
pca = PCA(n_components=red)
X_reduced = pca.fit_transform(X)
for model in models:
print( "Training ", model )
if model == "LogReg":
clf = LogisticRegression(random_state=0, solver='liblinear')
elif model == "SVM":
clf = svm.SVC(kernel='linear',C=1)
elif model == 'RF':
clf = RandomForestClassifier(n_estimators=50, max_depth=8, random_state=1)
        results = cross_val_score(clf, X_reduced, y, cv=5)
print( model, " CV accuracy score: {:.2f}%".format(results.mean()*100) )
print( "\n\n" )
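# Rather than hard-coding 30 and 40 components, the cumulative explained variance ratio can guide the choice. A self-contained sketch on synthetic stand-in data (the 90% threshold is an arbitrary example):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 50)

pca = PCA().fit(X_demo)
cumvar = np.cumsum(pca.explained_variance_ratio_)
# smallest number of components explaining at least 90% of the variance
n_components = int(np.searchsorted(cumvar, 0.90) + 1)
print(n_components)
```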
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.0
# language: julia
# name: julia-0.4
# ---
cd("/Users/elfflorin/Documents/Projects/julia.hw/jpie-v0.4")
include("setup.jl")
using brml
Gx = 5 # two dimensional grid size
Gy = 5
H = Gx * Gy # number of states on grid
# matrix representing the possible states of the system
st = reshape(1:H, Gx, Gy) # assign each grid point a state
# make a deterministic state transition matrix HxH on a 2D grid:
phgh = zeros(H, H) # transition from state j to state i
for x = 1:Gx
for y = 1:Gy
# from the current state-cell (j coord in state transition matrix)
# to next state-cell on (i coord in state transition matrix):
# the next row, same column
if validgridposition(x + 1, y, Gx, Gy) # sample for x = 1, y = 2
phgh[st[x + 1, y], st[x, y]] = 1 # 2,2=7 1,2=6
end
# the previous row, same column
if validgridposition(x - 1, y, Gx, Gy)
phgh[st[x - 1, y], st[x, y]] = 1 # 0,2 1,2
end
# the same row, next column
if validgridposition(x, y + 1, Gx, Gy)
phgh[st[x, y + 1], st[x, y]] = 1 # 1,3=11 1,2=6
end
# the same row, previous column
if validgridposition(x, y - 1, Gx, Gy)
phgh[st[x, y - 1], st[x, y]] = 1 # 1,1=1 1,2=6
end
end
end
# conditional distribution from state transition matrix
phghm = condp(phgh) # matrix with sum(phghm, 1) = 1 with phghm[i, j] = p(hg=i | hm=j)
ph1=condp(ones(H,1)) # initialise probabilities for the states of the hidden variable at timestep 1
pvgh=zeros(4,H) # initialise emission matrix
pv1gh = 0.01 * ones(1,H); r = randperm(H); pv1gh[r[1:10]] = 0.9; # Creaks in 10 randomly chosen cells
pv2gh = 0.01 * ones(1,H); r = randperm(H); pv2gh[r[1:10]] = 0.9; # Bumps in 10 randomly chosen cells
setNoTicksLabels(x) = function(axesfig)
PyPlot.setp(axesfig[:get_xticklines](), visible=false)
PyPlot.setp(axesfig[:get_xticklabels](), visible=false)
PyPlot.setp(axesfig[:get_yticklines](), visible=false)
PyPlot.setp(axesfig[:get_yticklabels](), visible=false)
end
PyPlot.figure()
axc = PyPlot.subplot(2, 1, 1)
axc[:set_title]("creaks layout")
PyPlot.imshow(reshape(pv1gh, Gx, Gy), cmap="bone");
axb = PyPlot.subplot(2, 1, 2)
axb[:set_title]("bumps layout")
PyPlot.imshow(reshape(pv2gh, Gx, Gy), cmap="bone");
map([axb, axc]) do axesfig
PyPlot.setp(axesfig[:get_xticklines](), visible=false)
PyPlot.setp(axesfig[:get_xticklabels](), visible=false)
PyPlot.setp(axesfig[:get_yticklines](), visible=false)
PyPlot.setp(axesfig[:get_yticklabels](), visible=false)
end
# Form the joint distribution p(v|h)=p(v1|h)p(v2|h)
# v = (v1, v2) and v1 and v2 are independent given h
vv = zeros(4, 2)
pvgh[1, :] = pv1gh .* pv2gh;         vv[1, :] = [1 1]; # p(v1=1|h)*p(v2=1|h)
pvgh[2, :] = pv1gh .* (1-pv2gh);     vv[2, :] = [1 2]; # p(v1=1|h)*p(v2=2|h)
pvgh[3, :] = (1-pv1gh) .* pv2gh;     vv[3, :] = [2 1]; # p(v1=2|h)*p(v2=1|h)
pvgh[4, :] = (1-pv1gh) .* (1-pv2gh); vv[4, :] = [2 2]; # p(v1=2|h)*p(v2=2|h)
# +
# draw some random samples:
T=10
h = zeros(Integer, 1, T) # holds the state value for the hidden variable at a specific timestep
v = zeros(Integer, 1, T) # holds the state value for the visible variable at a specific timestep
h[1]=randgen(ph1) # initialize the hidden variable @t=1 with a random state based on ph1 distribution
v[1]=randgen(pvgh[:, h[1]]) # initialize the visible variable @t=1 with a random state based on pvgh( vg | h@t=1)
for t=2:T
h[t]=randgen(phghm[:, h[t-1]]) # set the hidden variable state @t based on h@t-1 using the transition matrix
v[t]=randgen(pvgh[:, h[t]]) # set the visible variable state @t based on h@t using the emission matrix
end
# -
# Perform inference based on the observed v:
(alpha, loglik) = HMMforward(v, phghm, ph1, pvgh) # filtering
phtgV1t = alpha # filtered posterior - infer the current hidden state p(ht | v_1:t)
function HMMgamma(alpha,phghm)
#HMMGAMMA HMM Posterior smoothing using the Rauch-Tung-Striebel correction method
# gamma=HMMbackward(alpha,phghm)
#
# Inputs:
# alpha : alpha forward messages (see HMMforward.m)
# phghm : transition distribution in a matrix
#
# Outputs: gamma(i,t) is p(h(t)=i|v(1:T))
# See also HMMbackward.m, HMMviterbi.m, demoHMMinference.m
T=size(alpha,2); H=size(phghm, 1);
# gamma recursion
gamma=zeros(size(alpha))
gamma[:,T]=alpha[:,T];
for t=T-1:-1:1
phghp=condp(phghm'.*repmat(alpha[:,t],1,H));
gamma[:,t]=condp(phghp*gamma[:,t+1]);
end
if 1==0 # gamma recursion: More human readable
gamma[:, T]=alpha[:, T]./sum(alpha[:, T])
for t = T-1:-1:1
phghp=phghm'.*repmat(alpha[:,t],1,H)
phghp=phghp./repmat(sum(phghp, 1),H,1)
gamma[:,t]=phghp*gamma[:,t+1]
end
end
return gamma
end
phtgV1T = HMMgamma(alpha, phghm) # Smoothed Burglar distribution
maxstate, logprob = HMMviterbi(v, phghm, ph1, pvgh) # Most likely Burglar path
PyPlot.figure()
for t = 1:T
axg = PyPlot.subplot(5, T, t); PyPlot.imshow(repmat(vv[v[t], :], 2, 1), cmap="bone");
if t == 2 # used t == 1 or t == 2 for title alignment only
axg[:set_title]("Creaks and Bumps")
end
# add Filtering data row of T images from the previous row offset
axf = PyPlot.subplot(5, T, T+t); PyPlot.imshow(reshape(phtgV1t[:, t], Gx, Gy), cmap="bone");
if t == 1
axf[:set_title]("Filtering")
end
# add Smoothing data row of T images from the previous row offset
axs = PyPlot.subplot(5, T, 2*T+t); PyPlot.imshow(reshape(phtgV1T[:, t], Gx, Gy), cmap="bone");
if t == 1
axs[:set_title]("Smoothing")
end
z=zeros(H,1); z[maxstate[t]]=1;
# add Viterbi data row of T images from the previous row offset
axv = PyPlot.subplot(5,T,3*T+t); PyPlot.imshow(reshape(z,Gx,Gy), cmap="bone")
if t == 1
axv[:set_title]("Viterbi")
end
z = zeros(H,1); z[h[t]] = 1;
# add true data row of T images from the previous row offset
axt = PyPlot.subplot(5,T,4*T+t); PyPlot.imshow(reshape(z,Gx,Gy), cmap="bone")
if t == 2
axt[:set_title]("True Burglar position")
end
map([axg, axf, axs, axv, axt]) do axesfig
PyPlot.setp(axesfig[:get_xticklines](), visible=false)
PyPlot.setp(axesfig[:get_xticklabels](), visible=false)
PyPlot.setp(axesfig[:get_yticklines](), visible=false)
PyPlot.setp(axesfig[:get_yticklabels](), visible=false)
end
end
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: iMetricGAN
# language: python
# name: imetricgan
# ---
# +
import os
import io
import sys
import glob
import numpy as np
import librosa
from matplotlib import pyplot as plt
from torch.utils.tensorboard import SummaryWriter
repos_dir = r'/home/takkan/repos'
sys.path.append(repos_dir)
sys.path.append(os.path.join(repos_dir, 'Intelligibility-MetricGAN'))
import audio_util as au
main_dir = r'/home/takkan/experiments/jr'
log_dir = os.path.join(main_dir, 'log')
tensor_log_dir = os.path.join(log_dir, 'test_logs')
import PIL.Image
# -
train_dir = r'/home/common/db/audio_corpora/nele/imgan/all/train'
train_clean_dir = os.path.join(train_dir, 'clean')
train_noise_dir = os.path.join(train_dir, 'noise')
train_enhanced_dir = os.path.join(train_dir, 'enhanced')
# +
wav_clean_paths = glob.glob(os.path.join(train_clean_dir, '*.wav'))
wav_clean_paths.sort()
for i, wav_clean_path in enumerate(wav_clean_paths[0:1], start=1):
wav_basename = os.path.basename(wav_clean_path)
print(wav_basename)
signal, fs = librosa.load(wav_clean_path, sr=44100)
t = np.arange(0, len(signal)) / fs
fig = plt.figure()
ax1 = fig.add_subplot(2, 1, 1)
ax2 = fig.add_subplot(2, 1, 2)
ax1.plot(t, signal)
ax2.specgram(signal,Fs=fs)
#plt.plot(t, signal)
#plt.specgram(signal,Fs=fs)
plt.show()
# +
tensor_writer = SummaryWriter(log_dir=tensor_log_dir)
for i, wav_clean_path in enumerate(wav_clean_paths[0:3], start=1):
wav_basename = os.path.basename(wav_clean_path)
print(wav_basename)
signal, fs = librosa.load(wav_clean_path, sr=44100)
t = np.arange(0, len(signal))/fs
fig = plt.figure()
ax1 = fig.add_subplot(2, 1, 1)
ax2 = fig.add_subplot(2, 1, 2)
ax1.plot(t, signal)
ax2.specgram(signal,Fs=fs)
tensor_writer.add_figure(wav_basename, fig, i)
tensor_writer.close()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import glob
import os
from scipy import signal
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
# +
mpl.rcParams['axes.linewidth'] = 0.5 #set the value globally
mpl.rcParams['xtick.major.width'] = 0.5
mpl.rcParams['ytick.major.width'] = 0.5
mpl.rcParams['axes.titlesize'] = 10
mpl.rcParams['axes.labelsize'] = 8
mpl.rcParams["lines.linewidth"] = 0.5
mpl.rc('font',**{'family':'sans-serif','sans-serif':['Arial']})
mpl.rcParams['pdf.fonttype'] = 42
# +
def GetAllUsableData(data, v):
visit = v
df = pd.DataFrame([])
for eachfile in data:
tail = os.path.basename(eachfile)
segments = tail.split("_")
name = segments[0] + '_' + segments[1]
temp = pd.read_csv(eachfile)
# if np.any(subset.names == name):
p1 = pd.Series(data = [name] * len(temp), name = 'name')
p2 = pd.Series(data = [visit] * len(temp), name = 'visit')
temp1 = pd.concat([temp, p1, p2], axis = 1)
df = pd.concat([df, temp1])
df = df[(df.radial_distance_normalized.notnull()) & (df.angle.notnull())]
return(df)
# -
def FitGaussainKde(radialDist, RRO):
m1 = radialDist
m2 = RRO
xmin = m1.min()
xmax = m1.max()
ymin = m2.min()
ymax = m2.max()
X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
values = np.vstack([m1, m2])
kernel = gaussian_kde(values)
return(X,Y,kernel)
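# As a minimal standalone illustration of what `FitGaussainKde` does (with synthetic stand-in data, so the shapes are easy to check), the fitted kernel is evaluated on the flattened 100x100 grid and reshaped back:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
m1 = rng.normal(1.0, 0.2, 500)    # stand-in for normalized radial distance
m2 = rng.normal(45.0, 10.0, 500)  # stand-in for approach angle
X, Y = np.mgrid[m1.min():m1.max():100j, m2.min():m2.max():100j]
kernel = gaussian_kde(np.vstack([m1, m2]))
Z = kernel(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)  # density on the grid
```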
def getFlowerCurvature(curve, x):
r = 1
R = 25
L = 15
y = L*(((x - r)/R) ** np.exp(curve))
return y
# ## draw the radial and angle distribution together
# ## plot each data axes separately
def accesorise(axes, tickY, tickX):
axes.spines['left'].set_visible(True)
axes.spines['bottom'].set_visible(True)
axes.spines['right'].set_visible(False)
axes.spines['top'].set_visible(False)
axes.spines['left'].set_smart_bounds(True)
axes.spines['bottom'].set_smart_bounds(True)
if tickY:
axes.set_yticks([0, 45, 90])
axes.set_yticklabels([0, 45, 90])
else:
axes.set_yticks([])
if tickX:
axes.set_xticks([0, 1])
axes.set_xticklabels([0, 1])
else:
axes.set_xticks([])
w = (3.5/4) # square-ish figure: 4 rows 4 columns - one column width for paper
f1, ax1 = plt.subplots(figsize = (w,w), num = 'hexbin')
f2, ax2 = plt.subplots(figsize = (w,w), num = 'pde')
outpath = r"../dataFolders/PaperPipelineOutput/Figures/v3/Paper/"
shapes = ['c-1_', 'c-2_', 'c-3_', 'c-10_']
visitnum = ['FirstVisit/','Later7thVisit/' , 'Later20thVisit/']
# +
for vv, visit in enumerate(visitnum):
data_path = os.path.join(r"../dataFolders/PaperPipelineOutput/v3/RadiusAndAngle/", visit)
data = glob.glob(data_path +'*.csv')
# videoselection = pd.read_csv(os.path.join(r"../dataFolders/PaperPipelineOutput/FilteredTracks/",visit) +
# "AllVideoNames.csv")
# subset = videoselection.loc[videoselection.AutomatatedTracking == 'TRUE', :]
# df = GetAllUsableData(data, subset, visit)
df = GetAllUsableData(data, visit)
# remove anything greater than 1.5 and less than 0.1
df = df[(df.radial_distance_normalized < 1.5)
& (df.radial_distance_normalized > 0.06)]
# print stats of numbers for paper
print(visit)
for ss, shape in enumerate(shapes):
r = df.loc[(df.name.str.contains(shape)) &
(df.visit == visit), 'radial_distance_normalized']
angle = df.loc[(df.name.str.contains(shape)) &
(df.visit == visit), 'angle']
# print stats for paper
print(shape)
print('num of frames: {:d}'.format(len(r)))
numMoths = len(df[(df.name.str.contains(shape)) & (df.visit == visit)].name.unique())
print('num of moths: {:d}'.format(numMoths))
# ax1.hexbin(r, angle)
# ax1.axvline(x = 1.0, ls = '--', linewidth = 1, color = 'fuchsia')
# plt.savefig(outpath + 'test.pdf')
# perform a kernel density estimation
X,Y,kernel = FitGaussainKde(r, angle)
#reset the kernel bandwidth to make it smaller
kernel.set_bandwidth(bw_method=kernel.factor / 1.5)
positions = np.vstack([X.ravel(), Y.ravel()])
Z = np.reshape(kernel(positions).T, X.shape)
tt = ax2.pcolormesh(X, Y, Z.reshape(X.shape)
, cmap=plt.cm.cividis
, shading = 'gouraud')
ax2.contour(X, Y, Z.reshape(X.shape), levels = 4
,cmap = plt.cm.Purples_r
, linewidths = 0.5)
ax2.axvline(x = 1.0, ls = '--', linewidth = 1, color = 'fuchsia')
ax2.set_xlim(0, 1.5)
# set up variables to accesorize
if vv == 2:
tickX = True
else:
tickX = False
if ss == 0:
tickY = True
else:
tickY = False
accesorise(ax1, tickY, tickX)
accesorise(ax2, tickY, tickX)
figname = shape + visit[:-1]
# f1.savefig(outpath + 'hexbin_' + figname + '.pdf')
# ax1.clear()
f2.savefig(outpath + 'pde_' + figname + '_Sub3_cividis.pdf')
ax2.clear()
# +
# draw the curvatures
f3, ax3 = plt.subplots(figsize = (w,w))
curvatures = [-1, -2, -3, -10]
x = np.arange(0, 25, 0.1)
for i, c in enumerate(curvatures):
y = getFlowerCurvature(c, x)
ax3.plot(x/np.max(x), y, color = 'k', linewidth = 1.0)
ax3.set_ylim(0, 16)
ax3.set_xlim(0,1.5)
f3.savefig(outpath + 'profile_c' + str(c) + '.pdf')
ax3.clear()
# -
# ## plot the raw data for the same graph as a separate figure
# (the axes lists `first`, `later7` and `later` are created by the grid-spec setup that is commented out further below)
for axes in first + later7 + later:
axes.clear()
for visit, axes in zip(visitnum, [first, later7, later]):
data_path = os.path.join(r"../dataFolders/PaperPipelineOutput/RadiusAndAngle_v2/", visit)
data = glob.glob(data_path +'*.csv')
videoselection = pd.read_csv(os.path.join(r"../dataFolders/PaperPipelineOutput/FilteredTracks_v2/",visit) +
"AllVideoNames.csv")
subset = videoselection.loc[videoselection.AutomatatedTracking == 'TRUE', :]
df = GetAllUsableData(data, visit)
# remove anything greater than 1.5 and less than 0.1
df = df[(df.radial_distance_normalized < 1.5)
& (df.radial_distance_normalized > 0.06) ]
for i, shape in enumerate(shapes):
r = df.loc[(df.name.str.contains(shape)) &
(df.visit == visit), 'radial_distance_normalized']
angle = df.loc[(df.name.str.contains(shape)) &
(df.visit == visit), 'angle']
axes[i].hexbin(r, angle)
axes[i].axvline(x = 1.0, ls = '--', linewidth = 0.5, color = 'silver')
axes[i].set_xlim(0, 1.5)
i+=1
# +
# accesorize
for axes in first + later7 + later:
axes.spines['left'].set_visible(True)
axes.spines['bottom'].set_visible(True)
axes.spines['right'].set_visible(False)
axes.spines['top'].set_visible(False)
axes.spines['left'].set_smart_bounds(True)
axes.spines['bottom'].set_smart_bounds(True)
# for axes in [ax30, ax31]:
for axes in [first[0], later7[0], later[0]]:
axes.set_yticks([0, 45, 90])
axes.set_yticklabels([0, 45, 90])
for axes in later:
axes.set_xticks([0, 1])
axes.set_xticklabels([0, 1])
for axes in first + later7 + profiles:
axes.set_xticks([])
for axes in first[1:] + later7[1:] + later[1:] + profiles[1:]:
axes.set_yticks([])
# +
# colorbars
import matplotlib as mpl
from matplotlib import cm
import matplotlib.pyplot as plt
points = 100
cmap_hexbin = cm.get_cmap('viridis')
cmap_pdf = cm.get_cmap('Greens')
fig, ax = plt.subplots(figsize=(w/2, w/10))
# fig.subplots_adjust(bottom=0.5)
cmap = cmap_hexbin
# norm = mpl.colors.Normalize(vmin=(framestrt - lagPoints)/100, vmax= framestrt/100)
cb1 = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
# norm=norm,
orientation='horizontal')
cb1.set_label('Normalized count')
fig.show()
plt.savefig('../dataFolders/PaperPipelineOutput/Figures/v3/Paper/colorbar_hexbin-v1.pdf')
cmap = cmap_pdf
cb2 = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
# norm=norm,
orientation='horizontal')
cb2.set_label('Probability Density')
plt.savefig('../dataFolders/PaperPipelineOutput/Figures/v3/Paper/colorbar_2Dpdf-v1.pdf')
# +
# # accesorize
# for axes in first + later:
# axes.spines['left'].set_visible(True)
# axes.spines['bottom'].set_visible(True)
# axes.spines['right'].set_visible(False)
# axes.spines['top'].set_visible(False)
# axes.spines['left'].set_smart_bounds(True)
# axes.spines['bottom'].set_smart_bounds(True)
# for axes in [ax30, ax31]:
# axes.set_xticks([0, 45, 90])
# axes.set_xticklabels([0, 45, 90])
# for axes in first:
# axes.set_yticks([0, 1])
# axes.set_yticklabels([0, 1])
# for axes in first[:-1] + later[:-1] + profiles[:-1]:
# axes.set_xticks([])
# for axes in later + profiles:
# axes.set_yticks([])
# -
f
f.savefig(r"../dataFolders/PaperPipelineOutput/Figures/v2/Paper/Figure4-angleVsRRO_rawDatav0-3.pdf")
# +
# ## draw Fig3 in its entirety - vertically aligned for shape
# shapes = ['c-1', 'c-2','c-3', 'c-10']
# w = 3.5 # half width
# h = 4.67 # square-ish figure
# # gridspec inside gridspec
# f = plt.figure(figsize = (w,h))
# gs0 = plt.GridSpec(4, 5, figure=f, hspace = 0.05, wspace=0.05)
# ax00 = f.add_subplot(gs0[0,0:2])
# ax10 = f.add_subplot(gs0[1,0:2])
# ax20 = f.add_subplot(gs0[2,0:2])
# ax30 = f.add_subplot(gs0[3,0:2])
# ax01 = f.add_subplot(gs0[0,2:4])
# ax11 = f.add_subplot(gs0[1,2:4])
# ax21 = f.add_subplot(gs0[2,2:4])
# ax31 = f.add_subplot(gs0[3,2:4])
# ax02 = f.add_subplot(gs0[0,4])
# ax12 = f.add_subplot(gs0[1,4])
# ax22 = f.add_subplot(gs0[2,4])
# ax32 = f.add_subplot(gs0[3,4])
# +
# ## draw Fig3 in its entirety - horizontally aligned for shape
# shapes = ['c-1', 'c-2','c-3', 'c-10']
# w = 3.5 # half width
# h = (3.5/4)*4 # square-ish figure
# # gridspec inside gridspec
# f = plt.figure(figsize = (w,h))
# gs0 = plt.GridSpec(4, 4, figure=f, hspace = 0.05, wspace=0.05)
# axf0 = f.add_subplot(gs0[0,0])
# axf1 = f.add_subplot(gs0[0,1])
# axf2 = f.add_subplot(gs0[0,2])
# axf3 = f.add_subplot(gs0[0,3])
# axv00 = f.add_subplot(gs0[1,0])
# axv01 = f.add_subplot(gs0[1,1])
# axv02 = f.add_subplot(gs0[1,2])
# axv03 = f.add_subplot(gs0[1,3])
# axv10 = f.add_subplot(gs0[2,0])
# axv11 = f.add_subplot(gs0[2,1])
# axv12 = f.add_subplot(gs0[2,2])
# axv13 = f.add_subplot(gs0[2,3])
# axv20 = f.add_subplot(gs0[3,0])
# axv21 = f.add_subplot(gs0[3,1])
# axv22 = f.add_subplot(gs0[3,2])
# axv23 = f.add_subplot(gs0[3,3])
# +
# first = [axv00, axv01, axv02, axv03]
# later7 = [axv10, axv11, axv12, axv13]
# later = [axv20, axv21, axv22, axv23]
# profiles = [axf0, axf1, axf2, axf3]
# +
# shapes = ['c-1_', 'c-2_', 'c-3_', 'c-10_']
# visitnum = ['FirstVisit/','Later7thVisit/' , 'LaterVisit/']
# for visit, axes in zip(visitnum, [first, later7, later]):
# data_path = os.path.join(r"../dataFolders/PaperPipelineOutput/RadiusAndAngle_v2/", visit)
# data = glob.glob(data_path +'*.csv')
# videoselection = pd.read_csv(os.path.join(r"../dataFolders/PaperPipelineOutput/FilteredTracks_v2/",visit) +
# "AllVideoNames.csv")
# subset = videoselection.loc[videoselection.AutomatatedTracking == 'TRUE', :]
# df = GetAllUsableData(data, subset)
# # remove anything greater than 1.5 and less than 0.1
# df = df[(df.radial_distance_normalized < 1.5)
# & (df.radial_distance_normalized > 0.06)]
# for i, shape in enumerate(shapes):
# r = df.loc[(df.name.str.contains(shape)) &
# (df.visit == visit), 'radial_distance_normalized']
# angle = df.loc[(df.name.str.contains(shape)) &
# (df.visit == visit), 'angle']
# # ax[i].hexbin(r, angle)
# # ax[i].set_title(shape + 'radial Dist vs angle')
# # perform a kernel density estimation
# X,Y,kernel = FitGaussainKde(r, angle)
# #reset the kernel bandwidth to make it smaller
# kernel.set_bandwidth(bw_method=kernel.factor / 1.5)
# positions = np.vstack([X.ravel(), Y.ravel()])
# Z = np.reshape(kernel(positions).T, X.shape)
# tt = axes[i].pcolormesh(X, Y, Z.reshape(X.shape), cmap=plt.cm.BuGn_r)
# # # get colorbar
# # cbar = fig.colorbar(tt, ax=ax0)
# axes[i].contour(X, Y, Z.reshape(X.shape), levels = 4, linewidth = 0.5)
# axes[i].axvline(x = 1.0, ls = '--', linewidth = 0.5, color = 'fuchsia')
# axes[i].set_xlim(0, 1.5)
# i+=1
# +
# curvatures = [-1, -2, -3, -10]
# x = np.arange(0, 25, 0.1)
# for i, c in enumerate(curvatures):
# y = getFlowerCurvature(c, x)
# profiles[i].plot(x/np.max(x), y, color = 'k', linewidth = 1.0)
# profiles[i].set_ylim(0, 16)
# profiles[i].set_xlim(0,1.5)
# +
# # accesorize
# for axes in first + later7 + later:
# axes.spines['left'].set_visible(True)
# axes.spines['bottom'].set_visible(True)
# axes.spines['right'].set_visible(False)
# axes.spines['top'].set_visible(False)
# axes.spines['left'].set_smart_bounds(True)
# axes.spines['bottom'].set_smart_bounds(True)
# # for axes in [ax30, ax31]:
# for axes in [first[0], later7[0], later[0]]:
# axes.set_yticks([0, 45, 90])
# axes.set_yticklabels([0, 45, 90])
# for axes in later:
# axes.set_xticks([0, 1])
# axes.set_xticklabels([0, 1])
# for axes in first + later7 + profiles:
# axes.set_xticks([])
# for axes in first[1:] + later7[1:] + later[1:] + profiles[1:]:
# axes.set_yticks([])
# +
# f
# +
# f.savefig(r"../dataFolders/PaperPipelineOutput/Figures/v2/Paper/Figure4-angleVsRRO_v0-3.pdf")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import os
print os.getcwd()
dir = "/Users/walterdempsey/Box/MD2K Processed Data/smoking-lvm-cleaned-data/"
os.chdir(dir)
os.getcwd()
# read data
original_data = pd.read_csv(dir+'eventcontingent-ema.csv')
original_data = original_data.drop(['offset'], axis=1)
backup_data = pd.read_csv(dir+'eventcontingent-ema-backup.csv')
# check data types
print(original_data.dtypes)
print("")
print(backup_data.dtypes)
# +
# convert data types to be consistent
backup_data['urge'] = backup_data['urge'].astype(float)
backup_data['cheerful'] = backup_data['cheerful'].astype(float)
backup_data['happy'] = backup_data['happy'].astype(float)
backup_data['angry'] = backup_data['angry'].astype(float)
backup_data['stress'] = backup_data['stress'].astype(float)
backup_data['sad'] = backup_data['sad'].astype(float)
backup_data['access'] = backup_data['access'].astype(float)
backup_data['see_or_smell'] = backup_data['see_or_smell'].astype(float)
backup_data['smoking_location'] = backup_data['smoking_location'].astype(float)
print(original_data.dtypes)
print(backup_data.dtypes)
# -
# check how many entries for each participant
print(original_data.shape)
original_data['participant_id'].value_counts().sort_index()
print(backup_data.shape)
backup_data['participant_id'].value_counts().sort_index()
# +
# Look at the participant_id intersection of the backup and original datasets
unique_backup_ids = set(backup_data.participant_id)
unique_original_ids = set(original_data.participant_id)
ids_intersection = unique_backup_ids.intersection(unique_original_ids)
original_data_subset = original_data[original_data.participant_id.isin(ids_intersection)]
backup_data_subset = backup_data[backup_data.participant_id.isin(ids_intersection)]
print unique_backup_ids.difference(unique_original_ids)
print unique_original_ids.difference(unique_backup_ids)
# -
# In the intersection, the backup is a superset of the observations.
# Therefore, the backup should be used for analysis where possible.
print(original_data_subset.shape)
temp_original = pd.DataFrame(original_data_subset['participant_id'].value_counts().sort_index())
temp_backup = np.array(backup_data_subset['participant_id'].value_counts().sort_index())
temp_original['backup'] = temp_backup
temp_original.columns = ['original', 'backup']
temp_original
# +
s = set()
d = {}
lst2 = []
# store entries in the backup data
for index, row in backup_data.iterrows():
participant_id = row['participant_id']
hour = row['hour']
minute = row['minute']
day_of_week = row['day_of_week']
valid_key = (participant_id, hour, minute, day_of_week)
s.add(valid_key)
d[valid_key] = index
# store entries in the original data
for index, row in original_data.iterrows():
participant_id = row['participant_id']
hour = row['hour']
minute = row['minute']
day_of_week = row['day_of_week']
valid_key = (participant_id, hour, minute, day_of_week)
# remove if also in the backup data
if valid_key in s:
s.remove(valid_key)
else:
lst2.append(index)
lst = []
for i in s:
lst.append(d[i])
lst.sort()
difference = backup_data.loc[lst]
print(difference.shape)
difference # rows that are only in the backup but not in the original
# -
difference = original_data.loc[lst2]
print(difference.shape)
difference # rows in the original data but not in the backup data
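# The set-based bookkeeping above can also be expressed with a single outer merge using pandas' `indicator=True`, which labels each row as present in one or both frames. A sketch with toy frames (the real data would be merged on the same four key columns):

```python
import pandas as pd

keys = ['participant_id', 'hour', 'minute', 'day_of_week']
backup = pd.DataFrame({'participant_id': [1, 1, 2], 'hour': [9, 10, 11],
                       'minute': [0, 30, 15], 'day_of_week': [1, 2, 3]})
original = pd.DataFrame({'participant_id': [1, 3], 'hour': [9, 12],
                         'minute': [0, 45], 'day_of_week': [1, 4]})

merged = backup.merge(original, on=keys, how='outer', indicator=True)
backup_only = merged[merged['_merge'] == 'left_only']     # only in the backup
original_only = merged[merged['_merge'] == 'right_only']  # only in the original
```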
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MAPEM de Pierro algorithm for the Bowsher prior
# One of the more popular methods for guiding a reconstruction based on a high quality image was suggested by Bowsher. This notebook explores this prior.
#
# We highly recommend you look at the [PET/MAPEM](../PET/MAPEM.ipynb) notebook first. This example extends upon the quadratic prior used in that notebook to use an anatomical prior.
# Authors: Kris Thielemans, Sam Ellis, Richard Brown, Casper da Costa-Luis
# First version: 22nd of October 2019
# Second version: 27th of October 2019
# Third version: June 2021
#
# CCP SyneRBI Synergistic Image Reconstruction Framework (SIRF)
# Copyright 2019, 2021 University College London
# Copyright 2019 King's College London
#
# This is software developed for the Collaborative Computational
# Project in Synergistic Reconstruction for Biomedical Imaging. (http://www.synerbi.ac.uk/).
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Brief description of the Bowsher prior
# The "usual" quadratic prior penalises differences between neighbouring voxels (using the square of the difference). This tends to oversmooth parts of the image where you know there should be an edge. To overcome this, it is natural to not penalise the difference between those "edge" voxels. This can be done after segmentation of the anatomical image for instance.
#
# Bowsher suggested a segmentation-free approach to use an anatomical (or any "side" image) as follows:
# - compute edge information on the anatomical image.
# - for each voxel, consider only the $N_B$ neighbours which have the lowest difference in the anatomical image.
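# The neighbour-selection step can be sketched in plain NumPy. This is a toy 1D illustration under assumed parameters (a 4-voxel neighbourhood, binary weights), not the `kcl.Prior` implementation used later in this notebook:

```python
import numpy as np

def bowsher_weights_1d(anat, n_keep):
    """Toy 1D Bowsher weights: for each voxel, keep only the n_keep
    neighbours (within a +/-2 voxel window) whose anatomical value
    differs least; those get weight 1, the rest weight 0."""
    n = anat.size
    offsets = np.array([-2, -1, 1, 2])        # neighbourhood, excluding self
    weights = np.zeros((n, offsets.size))
    for v in range(n):
        idx = np.clip(v + offsets, 0, n - 1)  # clamp at the image edges
        diff = np.abs(anat[idx] - anat[v])
        keep = np.argsort(diff)[:n_keep]      # smallest anatomical differences
        weights[v, keep] = 1.0
    return weights

anat = np.array([0., 0., 0., 10., 10., 10.])  # a single sharp edge
w = bowsher_weights_1d(anat, n_keep=2)
# voxel 2 (value 0) keeps its two same-side neighbours and ignores the edge side
```

# Because voxels near the edge never select neighbours across the intensity jump, the subsequent quadratic penalty does not smooth over the edge.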
#
# The paper is
# Bowsher, J. E., Hong Yuan, L. W. Hedlund, T. G. Turkington, G. Akabani, A. Badea, W. C. Kurylo, et al. ‘Utilizing MRI Information to Estimate F18-FDG Distributions in Rat Flank Tumors’. In IEEE Symposium Conference Record Nuclear Science 2004., 4:2488-2492 Vol. 4, 2004. https://doi.org/10.1109/NSSMIC.2004.1462760.
#
# # All the normal imports and handy functions
# +
# %matplotlib notebook
# Setup the working directory for the notebook
import notebook_setup
from sirf_exercises import cd_to_working_dir
cd_to_working_dir('Synergistic', 'MAPEM_Bowsher')
#%% Initial imports etc
import numpy
import matplotlib.pyplot as plt
import os
import sys
import shutil
from tqdm.auto import tqdm, trange
import time
from scipy.ndimage import gaussian_filter
import sirf.STIR as pet
from numba import jit
from sirf_exercises import exercises_data_path
brainweb_sim_data_path = exercises_data_path('working_folder', 'Synergistic', 'BrainWeb')
# set-up redirection of STIR messages to files
msg_red = pet.MessageRedirector('info.txt', 'warnings.txt', 'errors.txt')
# plotting settings
plt.ion() # interactive 'on' such that plots appear during loops
#%% some handy function definitions
def imshow(image, limits=None, title=''):
"""Usage: imshow(image, [min,max], title)"""
plt.title(title)
bitmap = plt.imshow(image)
if limits is None:
limits = [image.min(), image.max()]
plt.clim(limits[0], limits[1])
plt.colorbar(shrink=.6)
plt.axis('off')
return bitmap
def make_cylindrical_FOV(image):
"""truncate to cylindrical FOV"""
filt = pet.TruncateToCylinderProcessor()
filt.apply(image)
#%% define a function for plotting images and the updates
# This is the same function as in `ML_reconstruction`
def plot_progress(all_images, title, subiterations, cmax):
if len(subiterations)==0:
num_subiters = all_images[0].shape[0]-1
subiterations = range(1, num_subiters+1)
num_rows = len(all_images);
slice_show = 60
for it in subiterations:
plt.figure()
for r in range(num_rows):
plt.subplot(num_rows,2,2*r+1)
imshow(all_images[r][it,slice_show,:,:], [0,cmax], '%s at %d' % (title[r], it))
plt.subplot(num_rows,2,2*r+2)
imshow(all_images[r][it,slice_show,:,:]-all_images[r][it-1,slice_show,:,:],[-cmax*.1,cmax*.1], 'update')
plt.show();
def subplot_(idx,vol,title,clims=None,cmap="viridis"):
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
if not clims is None:
plt.clim(clims)
plt.colorbar()
plt.title(title)
plt.axis("off")
# -
# # Load the data
# To generate the data needed for this notebook, run the [BrainWeb](./BrainWeb.ipynb) notebook first.
# +
full_acquired_data = pet.AcquisitionData(os.path.join(brainweb_sim_data_path, 'FDG_sino_noisy.hs'))
atten = pet.ImageData(os.path.join(brainweb_sim_data_path, 'uMap_small.hv'))
# Anatomical image
anatomical = pet.ImageData(os.path.join(brainweb_sim_data_path, 'T1_small.hv')) # could be T2_small.hv
anatomical_arr = anatomical.as_array()
# create initial image
init_image=atten.get_uniform_copy(atten.as_array().max()*.1)
make_cylindrical_FOV(init_image)
plt.figure()
imshow(anatomical.as_array()[64, :, :])
plt.show()
plt.figure()
imshow(full_acquired_data.as_array()[0, 64, :, :])
plt.show()
# -
# # Code from first MAPEM notebook
#
# The following chunk of code is copied and pasted more-or-less directly from the other notebook as a starting point.
#
# First, run the code chunk to get the objective functions etc
# ### construction of Likelihood objective functions and OSEM
# +
def get_obj_fun(acquired_data, atten):
print('\n------------- Setting up objective function')
# #%% create objective function
#%% create acquisition model
am = pet.AcquisitionModelUsingRayTracingMatrix()
am.set_num_tangential_LORs(5)
# Set up sensitivity due to attenuation
asm_attn = pet.AcquisitionSensitivityModel(atten, am)
asm_attn.set_up(acquired_data)
bin_eff = pet.AcquisitionData(acquired_data)
bin_eff.fill(1.0)
asm_attn.unnormalise(bin_eff)
asm_attn = pet.AcquisitionSensitivityModel(bin_eff)
# Set sensitivity of the model and set up
am.set_acquisition_sensitivity(asm_attn)
am.set_up(acquired_data,atten);
#%% create objective function
obj_fun = pet.make_Poisson_loglikelihood(acquired_data)
obj_fun.set_acquisition_model(am)
print('\n------------- Finished setting up objective function')
return obj_fun
def get_reconstructor(num_subsets, num_subiters, obj_fun, init_image):
print('\n------------- Setting up reconstructor')
#%% create OSEM reconstructor
OSEM_reconstructor = pet.OSMAPOSLReconstructor()
OSEM_reconstructor.set_objective_function(obj_fun)
OSEM_reconstructor.set_num_subsets(num_subsets)
OSEM_reconstructor.set_num_subiterations(num_subiters)
#%% initialise
OSEM_reconstructor.set_up(init_image)
print('\n------------- Finished setting up reconstructor')
return OSEM_reconstructor
# -
# Use rebin to create a smaller sinogram to speed up calculations
acquired_data = full_acquired_data.clone()
acquired_data = acquired_data.rebin(3)
# Get the objective function
obj_fun = get_obj_fun(acquired_data, atten)
# # Implement de Pierro MAP-EM for a quadratic prior with arbitrary weights
# The following code is almost a copy-paste of the implementation by A. Mehranian and S. Ellis [contributed during one of our hackathons](https://github.com/SyneRBI/SIRF-Contribs/tree/master/src/Python/sirf/contrib/kcl). It is copied here for you to have an easier look.
#
# Note that the code avoids the `for` loops in our simplistic version above and hence should be faster (however, the construction of the neighbourhood is still slow, but you should have to do this only once). Also, this is a Python reimplementation of MATLAB code (hence the use of "Fortran order" below).
# +
def dePierroReg(image,weights,nhoodIndVec):
"""Get the de Pierro regularisation image (xreg)"""
imSize = image.shape
# vectorise image for indexing
imageVec = image.reshape(-1,order='F')
# retrieve voxel intensities for neighbourhoods
resultVec = imageVec[nhoodIndVec]
result = resultVec.reshape(weights.shape,order='F')
# compute xreg
imageReg = 0.5*numpy.sum(weights*(result + image.reshape(-1,1,order='F')),axis=1)/numpy.sum(weights,axis=1)
imageReg = imageReg.reshape(imSize,order='F')
return imageReg
def compute_nhoodIndVec(imageSize,weightsSize):
"""Get the neigbourhoods of each voxel"""
w = int(round(weightsSize[1]**(1.0/3))) # side length of neighbourhood
nhoodInd = neighbourExtract(imageSize,w)
return nhoodInd.reshape(-1,order='F')
def neighbourExtract(imageSize,w):
"""Adapted from kcl.Prior class"""
n = imageSize[0]
m = imageSize[1]
h = imageSize[2]
wlen = 2*numpy.floor(w/2)
widx = xidx = yidx = numpy.arange(-wlen/2,wlen/2+1)
if h==1:
zidx = [0]
nN = w*w
else:
zidx = widx
nN = w*w*w
Y,X,Z = numpy.meshgrid(numpy.arange(0,m), numpy.arange(0,n), numpy.arange(0,h))
N = numpy.zeros([n*m*h, nN],dtype='int32')
l = 0
for x in xidx:
Xnew = setBoundary(X + x,n)
for y in yidx:
Ynew = setBoundary(Y + y,m)
for z in zidx:
Znew = setBoundary(Z + z,h)
N[:,l] = ((Xnew + (Ynew)*n + (Znew)*n*m)).reshape(-1,1).flatten('F')
l += 1
return N
def setBoundary(X,n):
"""Boundary conditions for neighbourExtract.
Adapted from kcl.Prior class"""
idx = X<0
X[idx] = X[idx] + n
idx = X>n-1
X[idx] = X[idx] - n
return X.flatten('F')
@jit
def dePierroUpdate(xEM, imageReg, beta):
"""Update the image based on the de Pierro regularisation image"""
return (2*xEM)/(((1 - beta*imageReg)**2 + 4*beta*xEM)**0.5 + (1 - beta*imageReg) + 0.00001)
# -
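# A quick numeric sanity check of the closed-form update (re-implemented on plain floats so `numba` is not needed): with `beta = 0` the penalty vanishes and the update returns the EM estimate, while a positive `beta` pulls the result towards the regularisation image.

```python
def de_pierro_update(x_em, x_reg, beta):
    # same closed-form positive root as dePierroUpdate above
    return (2 * x_em) / (((1 - beta * x_reg) ** 2 + 4 * beta * x_em) ** 0.5
                         + (1 - beta * x_reg) + 0.00001)

x_em, x_reg = 4.0, 1.0
no_penalty = de_pierro_update(x_em, x_reg, beta=0.0)  # ~4.0, i.e. plain EM
penalised = de_pierro_update(x_em, x_reg, beta=1.0)   # ~2.0, pulled towards x_reg
```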
def MAPEM_iteration(OSEM_reconstructor,current_image,weights,nhoodIndVec,beta):
image_reg = dePierroReg(current_image.as_array(),weights,nhoodIndVec) # compute xreg
OSEM_reconstructor.update(current_image); # compute EM update
image_EM=current_image.as_array() # get xEM as a numpy array
updated = dePierroUpdate(image_EM, image_reg, beta) # compute new update
current_image.fill(updated) # store for next iteration
return current_image
# ## Create uniform and Bowsher weights
# We will use the `kcl.Prior` class here to construct the Bowsher weights given an anatomical image. The `kcl.Prior` class (and the above code) assumes that the `weights` are returned as an $N_v \times N_n$ array, with $N_v$ the number of voxels and $N_n$ the number of neighbours (here 27 as the implementation is in 3D).
import sirf.contrib.kcl.Prior as pr
def update_bowsher_weights(prior,side_image,num_bowsher_neighbours):
return prior.BowshserWeights\
(side_image.as_array(),num_bowsher_neighbours)
# For illustration, we will keep only a few neighbours in the Bowsher prior. This makes the contrast with "uniform" weights higher of course.
num_bowsher_neighbours = 3
myPrior = pr.Prior(anatomical_arr.shape)
BowsherWeights = update_bowsher_weights(myPrior,anatomical,num_bowsher_neighbours)
# Ignore the warning about `divide by zero`, it is actually handled in the `kcl.Prior` class.
# compute indices of the neighbourhood for each voxel
nhoodIndVec=compute_nhoodIndVec(anatomical_arr.shape,BowsherWeights.shape)
# illustrate that only a few of the weights in the neighbourhood are kept
# (taking an arbitrary voxel)
print(BowsherWeights[500,:])
# You could try to understand the neighbourhood structure using the following, but it is quite complicated due to the Fortran order and linear indices.
# ```
# toLinearIndices=nhoodIndVec.reshape(BowsherWeights.shape,order='F')
# print(toLinearIndices[500,:])
# ```
# We will also use uniform weights where every neighbour is counted the same (often people will use 1/distance between voxels as weighting, but this isn't implemented here).
uniformWeights=BowsherWeights.copy()
uniformWeights[:,:]=1
# set "self-weight" of the voxel to zero
uniformWeights[:,27//2]=0
print(uniformWeights[500,:])
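# The 1/distance weighting mentioned above can be sketched as follows (an illustration only, not part of `kcl.Prior`): for the 3x3x3 neighbourhood, weight each of the 27 offsets by the inverse Euclidean distance to the centre, with the centre itself kept at zero.

```python
import numpy as np

offsets = np.array([(x, y, z) for x in (-1, 0, 1)
                              for y in (-1, 0, 1)
                              for z in (-1, 0, 1)], dtype=float)
dist = np.sqrt((offsets ** 2).sum(axis=1))
inv_dist = np.zeros(27)
inv_dist[dist > 0] = 1.0 / dist[dist > 0]  # centre voxel (index 13) keeps weight 0
# inv_dist could replace the all-ones rows used for uniformWeights above
```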
# # Run some experiments
num_subsets = 21
num_subiters = 42
# ## Do a normal OSEM (for comparison and initialisation)
# +
# Do initial OSEM recon
OSEM_reconstructor = get_reconstructor(num_subsets, num_subiters, obj_fun, init_image)
osem_image = init_image.clone()
OSEM_reconstructor.reconstruct(osem_image)
plt.figure()
imshow(osem_image.as_array()[60,:,:])
plt.show();
# -
# ## Run MAP-EM with the 2 different sets of weights
# To save some time, we will initialise the algorithms with the OSEM image. This makes sense of course as in the initial iterations, the penalty will just slow everything down (as it smooths an already too smooth image even more!).
# arbitrary value for the weight of the penalty. You might have to tune it
beta=1
# Compute with Bowsher penalty
# +
current_image=osem_image.clone()
for it in trange(1, num_subiters+1):
current_image = MAPEM_iteration(OSEM_reconstructor,current_image,BowsherWeights,nhoodIndVec,beta)
Bowsher=current_image.clone()
# -
# Compute with uniform weights (we'll call the result UQP for "uniform quadratic penalty")
# +
current_image=osem_image.clone()
for it in trange(1, num_subiters+1):
current_image = MAPEM_iteration(OSEM_reconstructor,current_image,uniformWeights,nhoodIndVec,beta)
UQP=current_image.clone()
# -
# Plot the anatomical, OSEM, and two MAPEM images
plt.figure()
cmax=osem_image.max()*.6
clim=[0,cmax]
subplot_([1,2,1],anatomical.as_array()[60,:,:],"anatomical")
subplot_([1,2,2],osem_image.as_array()[60,:,:],"OSEM",clim)
plt.figure()
subplot_([1,2,1],UQP.as_array()[60,:,:],"Uniform Quadratic prior",clim)
subplot_([1,2,2],Bowsher.as_array()[60,:,:],"Bowsher Quadratic prior",clim)
plt.figure()
y_idx=osem_image.dimensions()[1]//2
plt.plot(osem_image.as_array()[60,y_idx,:],label="OSEM")
plt.plot(UQP.as_array()[60,y_idx,:],label="Uniform Quadratic prior")
plt.plot(Bowsher.as_array()[60,y_idx,:],label="Bowsher Quadratic prior")
plt.legend()
# You will probably see that the MAP-EM images are quite smooth, and that there is very little difference between the "uniform" and "Bowsher" weights after this number of updates. The difference will get larger with a higher number of updates (try it!).
#
# Also, with the Bowsher weights you should be able to increase `beta` more than for the uniform weights without oversmoothing the image too much.
# # Misalignment between anatomical and emission images
#
# What happens if you want to use an anatomical prior, but the anatomical image isn't aligned with the image you're trying to reconstruct?
#
# You'll have to register them of course! Have a look at the [registration notebook](../Reg/sirf_registration.ipynb) if you haven't already.
#
# The idea here would be to run an initial reconstruction (say, OSEM), and then register the anatomical image to the resulting reconstruction...
#
# Once we've got the anatomical image in the correct space, we can calculate the Bowsher weights.
# +
import sirf.Reg as Reg
registration = Reg.NiftyAladinSym()
registration.set_reference_image(osem_image)
registration.set_floating_image(anatomical)
registration.set_parameter('SetPerformRigid','1')
registration.set_parameter('SetPerformAffine','0')
registration.process()
anatomical_in_emission_space = registration.get_output()
Bweights = update_bowsher_weights(myPrior,anatomical_in_emission_space,num_bowsher_neighbours)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Table of Contents
#
# ## [1. Libraries](#section_1)
#
# ## [2. Wrangling-Part1](#section_2)
#
# * ### [2.1. Function to retrieve latitude and longitude based on business name](#section_3)
# * ### [2.2. Function to retrieve correct address based on latitude and longitude](#section_4)
#
# ## [3. Wrangling-Part2](#section_5)
#
# ## [4. Saving the csv](#section_6)
#
# ## [5. References](#section_7)
# ### 1. Libraries used<a id='section_1'></a>
# +
import googlemaps
from datetime import datetime
from geopy.geocoders import Nominatim
import os
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
import re
# -
# ### 2. Wrangling-Part1<a id='section_2'></a>
# #### 2.1. Function to retrieve latitude and longitude based on business name<a id='section_3'></a>
# +
def get_lat_lng(apiKey, address):
"""
Returns the latitude and longitude of a location using the Google Maps Geocoding API.
API: https://developers.google.com/maps/documentation/geocoding/start
# INPUT -------------------------------------------------------------------
apiKey [str]
address [str]
# RETURN ------------------------------------------------------------------
lat [float]
lng [float]
"""
    import requests
    url = ('https://maps.googleapis.com/maps/api/geocode/json?address={}&key={}'
           .format(address.replace(' ', '+'), apiKey))
    try:
        response = requests.get(url)
        resp_json_payload = response.json()
        lat = resp_json_payload['results'][0]['geometry']['location']['lat']
        lng = resp_json_payload['results'][0]['geometry']['location']['lng']
    except (requests.RequestException, KeyError, IndexError):
        # request failed or the API returned no result for this address
        print('ERROR: {}'.format(address))
        lat = 0
        lng = 0
    return lat, lng
# -
# #### 2.2. Function to retrieve correct address based on latitude and longitude <a id='section_4'></a>
def find_address(address):
apiKey=""
address=("{},VIC".format(address))
lat, lng = get_lat_lng(apiKey, address)
    geolocator = Nominatim(user_agent="vic_activities_geocoder", timeout=None)  # recent geopy versions require a user_agent
latlng=("{},{}".format(lat,lng))
location = geolocator.reverse(latlng)
return pd.Series([lat,lng,location.address])
# +
# Testing the function--- find_address()
lat,long,postal_address=find_address('The East West Overseas Aid Foundation Melbourne')
print(lat)
print(long)
print(postal_address)
# -
# ### 3. Wrangling-Part2<a id='section_5'></a>
df=pd.read_csv("final_clean_vic_activities.csv")
df_test=df.head()
# #### From the previous joined data, applying the functions to retrieve the correct address using google api
df[['lat','lng','address']]=df.apply(lambda x:find_address(x['Name']),axis=1)
# #### Finding the postcode on newly retrieved address
def find_postcode(text):
regex=r'\d{4}'
pc=re.findall(regex,text)
if pc:
value=int(pc[0])
else:
value=0
return value
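# A quick standalone check of the postcode logic (the function is restated here so the snippet runs on its own). Note it returns the *first* 4-digit run in the string, so a 4-digit street number would be picked up instead of the postcode:

```python
import re

def find_postcode(text):
    # same logic as above: the first 4-digit run in the address, else 0
    pc = re.findall(r'\d{4}', text)
    return int(pc[0]) if pc else 0

print(find_postcode("Flinders Street, Melbourne VIC 3000, Australia"))  # 3000
print(find_postcode("no digits here"))                                  # 0
print(find_postcode("1234 Smith St, Fitzroy VIC 3065, Australia"))      # 1234, not 3065
```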
# +
# compute the postcode on the full frame first; clean_df is only created in the cells below
df['postcode'] = df.apply(lambda x: find_postcode(x['address']), axis=1)
# -
# #### Cleaning data which has False or no information
# +
# Removing data with no latitude and longitude information
clean_df = df[df['lat']!=0]
# +
# Keeping only rows whose address is in Australia
clean_df=clean_df[clean_df['address'].str.contains('Australia')]
# +
#Cleaning the data which has no postcode information
clean_df = clean_df[clean_df['postcode']!=0]
# +
# Selecting final required columns
clean_df=clean_df[['postcode','address','lat', 'lng','Name','activity_1', 'activity_2', 'activity_3','activity_4']]
# +
#Total rows present in the data
clean_df.shape
# -
# ### 4. Saving the file to csv <a id='section_6'></a>
#
clean_df.to_csv("final_vic_activities.csv",index=False)
#
# ### 5. References <a id='section_7'></a>
#
#
# https://github.com/googlemaps/google-maps-services-python
#
# https://matthewkudija.com/blog/2018/11/19/google-maps-api/#targetText=Save%20this%20API%20key%20in,APIs%20and%20select%20%22Enable%22.
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Custom Sequences (Part 2b/c)
# For this example we'll re-use the Polygon class from a previous lecture on extending sequences.
#
# We are going to consider a polygon as nothing more than a collection of points (and we'll stick to a 2-dimensional space).
#
# So, we'll need a `Point` class, but we're going to use our own custom class instead of just using a named tuple.
#
# We do this because we want to enforce a rule that our Point co-ordinates will be real numbers. We would not be able to use a named tuple to do that, and we could end up with points whose `x` and `y` coordinates are of any type.
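# To see concretely why a named tuple won't do here — it performs no validation at all:

```python
from collections import namedtuple

PT = namedtuple('PT', ['x', 'y'])

# nothing stops us from creating a "point" with non-numeric coordinates
bad = PT('one', [2])
print(bad)  # PT(x='one', y=[2])
```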
# First we'll need to see how we can test if a type is a numeric real type.
#
# We can do this by using the numbers module.
import numbers
# This module contains certain base types for numbers that we can use, such as Number, Real, Complex, etc.
isinstance(10, numbers.Number)
isinstance(10.5, numbers.Number)
isinstance(1+1j, numbers.Number)
# We want our points to be real numbers only, so we can test for that this way:
isinstance(1+1j, numbers.Real)
isinstance(10, numbers.Real)
isinstance(10.5, numbers.Real)
# So now let's write our Point class. We want it to have these properties:
#
# 1. The `x` and `y` coordinates should be real numbers only
# 2. Point instances should be a sequence type so that we can unpack it as needed in the same way we were able to unpack the values of a named tuple.
class Point:
def __init__(self, x, y):
if isinstance(x, numbers.Real) and isinstance(y, numbers.Real):
self._pt = (x, y)
else:
raise TypeError('Point co-ordinates must be real numbers.')
def __repr__(self):
return f'Point(x={self._pt[0]}, y={self._pt[1]})'
def __len__(self):
return 2
def __getitem__(self, s):
return self._pt[s]
# Let's use our point class and make sure it works as intended:
p = Point(1, 2)
p
len(p)
p[0], p[1]
x, y = p
x, y
# Now we can start creating our Polygon class, which will essentially be a mutable sequence of points making up the vertices of the polygon.
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
return f'Polygon({self._pts})'
# Let's try it and see if everything is as we expect:
p = Polygon()
p
p = Polygon((0,0), [1,1])
p
p = Polygon(Point(0, 0), [1, 1])
p
# That seems to be working, apart from one minor issue - our representation contains square brackets, which technically should not be there since the Polygon init takes multiple point arguments, not a single iterable.
#
# So we should fix that:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join(self._pts)
return f'Polygon({pts_str})'
# But that still won't work, because the `join` method expects an iterable of **strings** - here we are passing it an iterable of `Point` objects:
p = Polygon((0,0), (1,1))
p
# So, let's fix that:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
p = Polygon((0,0), (1,1))
p
# Ok, so now we can start making our Polygon into a sequence type, by implementing methods such as `__len__` and `__getitem__`:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
# Notice how we are simply delegating those methods to the ones supported by lists since we are storing our sequence of points internally using a list!
p = Polygon((0,0), Point(1,1), [2,2])
p
p[0]
p[::-1]
# Now let's implement concatenation (we'll skip repetition - wouldn't make much sense anyway):
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __add__(self, other):
if isinstance(other, Polygon):
new_pts = self._pts + other._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
p1 = Polygon((0,0), (1,1))
p2 = Polygon((2,2), (3,3))
print(id(p1), p1)
print(id(p2), p2)
result = p1 + p2
print(id(result), result)
# Now, let's handle in-place concatenation. Let's start by only allowing the RHS of the in-place concatenation to be another Polygon:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __add__(self, other):
if isinstance(other, Polygon):
new_pts = self._pts + other._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def __iadd__(self, pt):
if isinstance(pt, Polygon):
self._pts = self._pts + pt._pts
return self
else:
raise TypeError('can only concatenate with another Polygon')
p1 = Polygon((0,0), (1,1))
p2 = Polygon((2,2), (3,3))
print(id(p1), p1)
print(id(p2), p2)
p1 += p2
print(id(p1), p1)
# So that worked, but this would not:
p1 = Polygon((0,0), (1,1))
p1 += [(2,2), (3,3)]
# As you can see we get that type error. But we really should be able to handle appending any iterable of Points - and of course Points could also be specified as just iterables of length 2 containing numbers:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def __iadd__(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
return self
p1 = Polygon((0,0), (1,1))
p1 += [(2,2), (3,3)]
p1
# Now let's implement some methods such as `append`, `extend` and `insert`:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def __iadd__(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
return self
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
# Notice how we used almost the same code for `__iadd__` and `extend`?
# The only difference is that `__iadd__` returns the object, while `extend` does not - so let's clean that up a bit:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def __iadd__(self, pts):
self.extend(pts)
return self
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
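# As an aside, the reason `__iadd__` must return the object (while `extend` need not return anything) is that Python rebinds the target name to whatever `__iadd__` returns. A minimal toy class, unrelated to Polygon, makes this visible:

```python
class Bag:
    def __init__(self):
        self._items = []

    def extend(self, items):
        self._items.extend(items)  # mutates in place; return value is never used

    def __iadd__(self, items):
        self.extend(items)
        return self  # b += [...] rebinds b to this return value


b = Bag()
b += [1, 2, 3]
print(b._items)  # [1, 2, 3]
# had __iadd__ not returned self, b would now be rebound to None
```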
# Now let's give all this a try:
p1 = Polygon((0,0), Point(1,1))
p2 = Polygon([2, 2], [3, 3])
print(id(p1), p1)
print(id(p2), p2)
p1 += p2
print(id(p1), p1)
# That still works; now let's see `append`:
p1
p1.append((4, 4))
p1
p1.append(Point(5,5))
print(id(p1), p1)
# `append` seems to be working, now for `extend`:
p3 = Polygon((6,6), (7,7))
p1.extend(p3)
print(id(p1), p1)
p1.extend([(8,8), Point(9,9)])
print(id(p1), p1)
# Now let's see if `insert` works as expected:
p1 = Polygon((0,0), (1,1), (2,2))
print(id(p1), p1)
p1.insert(1, (100, 100))
print(id(p1), p1)
p1.insert(1, Point(50, 50))
print(id(p1), p1)
# Now that we have that working, let's turn our attention to the `__setitem__` method so we can support index and slice assignments:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __setitem__(self, s, value):
# value could be a single Point (or compatible type) for s an int
# or it could be an iterable of Points if s is a slice
# let's start by handling slices only first
self._pts[s] = [Point(*pt) for pt in value]
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def __iadd__(self, pts):
self.extend(pts)
return self
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
# So, we are only handling slice assignments at this point, not assignments such as `p[0] = Point(0,0)`:
p = Polygon((0,0), (1,1), (2,2))
print(id(p), p)
p[0:2] = [(10, 10), (20, 20), (30, 30)]
print(id(p), p)
# So this seems to work fine. But this won't yet:
p[0] = Point(100, 100)
# If we look at the precise error, we see that our list comprehension is the cause of the error - we fail to correctly handle the case where the value passed in is not an iterable of Points...
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __setitem__(self, s, value):
# value could be a single Point (or compatible type) for s an int
# or it could be an iterable of Points if s is a slice
# we could do this:
if isinstance(s, int):
self._pts[s] = Point(*value)
else:
self._pts[s] = [Point(*pt) for pt in value]
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def __iadd__(self, pts):
self.extend(pts)
return self
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
# This will now work as expected:
p = Polygon((0,0), (1,1), (2,2))
print(id(p), p)
p[0] = Point(10, 10)
print(id(p), p)
# What happens if we try to assign a single Point to a slice:
p[0:2] = Point(10, 10)
# As expected this will not work. What about assigning an iterable of points to an index:
p[0] = [Point(10, 10), Point(20, 20)]
# Both of these fail as they should, but the error messages are a bit misleading - we probably should do something about that:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __setitem__(self, s, value):
# we first should see if we have a single Point
# or an iterable of Points in value
try:
rhs = [Point(*pt) for pt in value]
is_single = False
except TypeError:
# not a valid iterable of Points
# maybe a single Point?
try:
rhs = Point(*value)
is_single = True
except TypeError:
# still no go
raise TypeError('Invalid Point or iterable of Points')
# reached here, so rhs is either an iterable of Points, or a Point
# we want to make sure we are assigning to a slice only if we
# have an iterable of points, and assigning to an index if we
# have a single Point only
if (isinstance(s, int) and is_single) \
or isinstance(s, slice) and not is_single:
self._pts[s] = rhs
else:
raise TypeError('Incompatible index/slice assignment')
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def __iadd__(self, pts):
self.extend(pts)
return self
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
# So now let's see if we get better error messages:
p1 = Polygon((0,0), (1,1), (2,2))
p1[0:2] = (10,10)
p1[0] = [(0,0), (1,1)]
# And the allowed slice/index assignments work as expected:
p[0] = Point(100, 100)
p
p[0:2] = [(0,0), (1,1), (2,2)]
p
# And if we try to replace with bad Point data:
p[0] = (0, 2+2j)
# We also get a better error message.
# Lastly let's see how we would implement the `del` keyword and the `pop` method.
# Recall how the `del` keyword works for a list:
l = [1, 2, 3, 4, 5]
del l[0]
l
del l[0:2]
l
del l[-1]
l
# So, `del` works with indices (positive or negative) and slices too. We'll do the same:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __setitem__(self, s, value):
# we first should see if we have a single Point
# or an iterable of Points in value
try:
rhs = [Point(*pt) for pt in value]
is_single = False
except TypeError:
# not a valid iterable of Points
# maybe a single Point?
try:
rhs = Point(*value)
is_single = True
except TypeError:
# still no go
raise TypeError('Invalid Point or iterable of Points')
# reached here, so rhs is either an iterable of Points, or a Point
# we want to make sure we are assigning to a slice only if we
# have an iterable of points, and assigning to an index if we
# have a single Point only
if (isinstance(s, int) and is_single) \
or isinstance(s, slice) and not is_single:
self._pts[s] = rhs
else:
raise TypeError('Incompatible index/slice assignment')
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def __iadd__(self, pts):
self.extend(pts)
return self
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
def __delitem__(self, s):
del self._pts[s]
p = Polygon(*zip(range(6), range(6)))
p
del p[0]
p
del p[-1]
p
del p[0:2]
p
# Now, we just have to implement `pop`:
class Polygon:
def __init__(self, *pts):
if pts:
self._pts = [Point(*pt) for pt in pts]
else:
self._pts = []
def __repr__(self):
pts_str = ', '.join([str(pt) for pt in self._pts])
return f'Polygon({pts_str})'
def __len__(self):
return len(self._pts)
def __getitem__(self, s):
return self._pts[s]
def __setitem__(self, s, value):
# we first should see if we have a single Point
# or an iterable of Points in value
try:
rhs = [Point(*pt) for pt in value]
is_single = False
except TypeError:
# not a valid iterable of Points
# maybe a single Point?
try:
rhs = Point(*value)
is_single = True
except TypeError:
# still no go
raise TypeError('Invalid Point or iterable of Points')
# reached here, so rhs is either an iterable of Points, or a Point
# we want to make sure we are assigning to a slice only if we
# have an iterable of points, and assigning to an index if we
# have a single Point only
if (isinstance(s, int) and is_single) \
or isinstance(s, slice) and not is_single:
self._pts[s] = rhs
else:
raise TypeError('Incompatible index/slice assignment')
def __add__(self, pt):
if isinstance(pt, Polygon):
new_pts = self._pts + pt._pts
return Polygon(*new_pts)
else:
raise TypeError('can only concatenate with another Polygon')
def append(self, pt):
self._pts.append(Point(*pt))
def extend(self, pts):
if isinstance(pts, Polygon):
self._pts = self._pts + pts._pts
else:
# assume we are being passed an iterable containing Points
# or something compatible with Points
points = [Point(*pt) for pt in pts]
self._pts = self._pts + points
def __iadd__(self, pts):
self.extend(pts)
return self
def insert(self, i, pt):
self._pts.insert(i, Point(*pt))
def __delitem__(self, s):
del self._pts[s]
def pop(self, i):
return self._pts.pop(i)
p = Polygon(*zip(range(6), range(6)))
p
p.pop(1)
p
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.8 64-bit (conda)
# language: python
# name: python3
# ---
# # Implementing the AdaBoost Algorithm From Scratch
#
# ref: https://www.kdnuggets.com/2020/12/implementing-adaboost-algorithm-from-scratch.html
# ## Prepare Data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix
from sklearn import tree
from math import log
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', None)
iris=pd.read_csv("./iris.csv")
iris.head()
# consider only two classes
dataset = iris[(iris['variety'] == 'Versicolor') | (iris['variety'] == 'Virginica')]
dataset.head(2)
# replace the two classes with +1 and -1
dataset['Label'] = dataset['variety'].replace(to_replace=['Versicolor', 'Virginica'], value=[1, -1])
dataset.head(2)
dataset = dataset.drop('variety', axis=1)
# ## Boosting Round 1
# initially assign the same weight to each record in the dataset
dataset['probR1'] = 1/(dataset.shape[0])
dataset.head()
# sample with replacement, weighted by the current probabilities
# (pandas .sample uses NumPy's RNG, so it is seeded via random_state rather than random.seed)
dataset1 = dataset.sample(len(dataset), replace=True, weights=dataset['probR1'], random_state=10)
dataset1
X_train = dataset1.iloc[:, 0:4]
y_train = dataset1.iloc[:, 4]
clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100, max_depth=1)
clf = clf_gini.fit(X_train, y_train)
# plot tree for round 1 boosting
tree.plot_tree(clf)
# predict
y_pred = clf_gini.predict(dataset.iloc[0:len(iris), 0:4])
y_pred
# adding a column pred1 after the first round of boosting
dataset['pred1'] = y_pred
dataset
# misclassified = 1 if the label and prediction differ, 0 if they are the same
dataset.loc[dataset.Label != dataset.pred1, 'misclassified'] = 1
dataset.loc[dataset.Label == dataset.pred1, 'misclassified'] = 0
dataset
# calculate error
e1 = sum(dataset['misclassified'] * dataset['probR1'])
e1
# calculate the alpha
alpha1 = 0.5 * log((1-e1)/e1)
alpha1
# update the weight
new_weight = dataset['probR1'] * np.exp(-1*alpha1*dataset['Label']*dataset['pred1'])
new_weight
# normalize weight
z = sum(new_weight)
normalized_weight = new_weight / z
dataset['probR2'] = round(normalized_weight, 4)
dataset
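# The update just computed is the standard AdaBoost re-weighting rule: with round-$t$ weighted error $e_t = \sum_i w_i\,[h_t(x_i) \ne y_i]$ and $\alpha_t = \frac{1}{2}\ln\frac{1-e_t}{e_t}$, each weight becomes
#
# $$w_i \leftarrow \frac{w_i\, e^{-\alpha_t\, y_i\, h_t(x_i)}}{Z_t}, \qquad Z_t = \sum_j w_j\, e^{-\alpha_t\, y_j\, h_t(x_j)}$$
#
# so misclassified samples ($y_i h_t(x_i) = -1$) are scaled up by $e^{\alpha_t}$ and correctly classified ones scaled down by $e^{-\alpha_t}$ — exactly what the `new_weight` and `z` lines above do.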
# ## Boosting Round 2
# +
# round 2
# pandas .sample is seeded via random_state (random.seed has no effect on it)
dataset2 = dataset.sample(len(dataset), replace=True, weights=dataset['probR2'], random_state=20)
dataset2 = dataset2.iloc[:, 0:5]
X_train = dataset2.iloc[:, 0:4]
y_train = dataset2.iloc[:, 4]
clf_gini = DecisionTreeClassifier(criterion='gini', random_state=100, max_depth=1)
clf = clf_gini.fit(X_train, y_train)
tree.plot_tree(clf)
y_pred = clf_gini.predict(dataset.iloc[0:len(iris),0:4])
#adding a column pred2 after the second round of boosting
dataset['pred2'] = y_pred
dataset
# +
# adding a field misclassified2
dataset.loc[dataset.Label != dataset.pred2, 'misclassified2'] = 1
dataset.loc[dataset.Label == dataset.pred2, 'misclassified2'] = 0
# calculation of error
e2 = sum(dataset['misclassified2'] * dataset['probR2'])
print("e2:", e2)
#calculation of alpha
alpha2 = 0.5*log((1-e2)/e2)
print("alpha2:", alpha2)
#update weight
new_weight = dataset['probR2']*np.exp(-1*alpha2*dataset['Label']*dataset['pred2'])
z = sum(new_weight)
normalized_weight = new_weight/z
dataset['probR3'] = round(normalized_weight,4)
dataset
# -
# ## Boosting Round 3
# +
# round 3
# pandas .sample is seeded via random_state (random.seed has no effect on it)
dataset3 = dataset.sample(len(dataset), replace=True, weights=dataset['probR3'], random_state=30)
dataset3 = dataset3.iloc[:, 0:5]
X_train = dataset3.iloc[:, 0:4]
y_train = dataset3.iloc[:, 4]
clf_gini = DecisionTreeClassifier(criterion='gini', random_state=100, max_depth=1)
clf = clf_gini.fit(X_train, y_train)
tree.plot_tree(clf)
y_pred = clf_gini.predict(dataset.iloc[0:len(iris),0:4])
# adding a column pred3 after the third round of boosting
dataset['pred3'] = y_pred
dataset
# +
# adding a field misclassified3
dataset.loc[dataset.Label != dataset.pred3, 'misclassified3'] = 1
dataset.loc[dataset.Label == dataset.pred3, 'misclassified3'] = 0
# calculation of error
e3 = sum(dataset['misclassified3'] * dataset['probR3'])
print("e3:", e3)
#calculation of alpha
alpha3 = 0.5*log((1-e3)/e3)
print("alpha3:", alpha3)
#update weight
new_weight = dataset['probR3']*np.exp(-1*alpha3*dataset['Label']*dataset['pred3'])
z = sum(new_weight)
normalized_weight = new_weight/z
dataset['probR4'] = round(normalized_weight,4)
dataset
# -
# ## Boosting Round 4
# +
# round 4
# pandas .sample is seeded via random_state (random.seed has no effect on it)
dataset4 = dataset.sample(len(dataset), replace=True, weights=dataset['probR4'], random_state=40)
dataset4 = dataset4.iloc[:, 0:5]
X_train = dataset4.iloc[:, 0:4]
y_train = dataset4.iloc[:, 4]
clf_gini = DecisionTreeClassifier(criterion='gini', random_state=100, max_depth=1)
clf = clf_gini.fit(X_train, y_train)
tree.plot_tree(clf)
y_pred = clf_gini.predict(dataset.iloc[0:len(iris),0:4])
# adding a column pred4 after the fourth round of boosting
dataset['pred4'] = y_pred
dataset
# +
# adding a field misclassified4
dataset.loc[dataset.Label != dataset.pred4, 'misclassified4'] = 1
dataset.loc[dataset.Label == dataset.pred4, 'misclassified4'] = 0
# calculation of error
e4 = sum(dataset['misclassified4'] * dataset['probR4'])
print("e1:", e1)
print("e2:", e2)
print("e3:", e3)
print("e4:", e4)
#calculation of alpha
alpha4 = 0.5*log((1-e4)/e4)
print("alpha1:", alpha1)
print("alpha2:", alpha2)
print("alpha3:", alpha3)
print("alpha4:", alpha4)
# -
# ## Final model
# +
# final prediction
t = alpha1 * dataset['pred1'] + alpha2 * dataset['pred2'] + alpha3 * dataset['pred3'] + alpha4 * dataset['pred4']
# sign the final prediction
dataset['final_pred'] = np.sign(list(t))
dataset
# +
# Confusion matrix
c = confusion_matrix(dataset['Label'], dataset['final_pred'])
print("confusion_matrix:", c)
# Overall Accuracy
acc = (c[0,0]+c[1,1])/np.sum(c)*100
print("acc:", acc)
# -
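# The four hand-rolled rounds above can be folded into a single loop. Below is a minimal self-contained sketch on synthetic data; note it fits each stump with `sample_weight` instead of resampling, a common alternative to the weighted sampling used above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # labels in {+1, -1}

w = np.full(len(X), 1 / len(X))              # uniform initial weights
alphas, stumps = [], []
for _ in range(4):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y))            # weighted training error
    alpha = 0.5 * np.log((1 - err) / err)
    w = w * np.exp(-alpha * y * pred)        # up-weight the misclassified points
    w /= w.sum()                             # normalise
    alphas.append(alpha)
    stumps.append(stump)

# final prediction: sign of the alpha-weighted vote
final = np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
accuracy = np.mean(final == y)
print(accuracy)
```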
# ## AdaBoostClassifier
# Fitting the model using the adaboost classifier library
from sklearn.ensemble import AdaBoostClassifier
iris=pd.read_csv("./iris.csv")
dataset = iris[(iris['variety'] == 'Versicolor') | (iris['variety'] == 'Virginica')]
# X_train and y_train from the two-class subset
X_train = dataset.iloc[:, 0:4]
y_train = dataset.iloc[:, 4]
clf = AdaBoostClassifier(n_estimators=4, random_state=0)
clf.fit(X_train, y_train)
clf.predict([[5.5, 2.5, 4.0, 1.3]])
clf.score(X_train, y_train)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def identity_block(X, f, filters, stage, block):
    """
    Implements the identity block of Figure 3.
    Arguments:
        X - input tensor of shape (m, 1001, 1)
        f - integer, kernel size of the middle CONV layer in the main path
        filters - list of integers, number of filters in each conv layer of the main path
        stage - integer, names each layer by its position; used together with block
        block - string, names each layer by its position; used together with stage
    Returns:
        X - output of the identity block, a tensor of shape (1001, 1)
    """
    # Naming convention
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base = "bn" + str(stage) + block + "_branch"
    # Retrieve the filters
    F1, F2, F3 = filters
    # Save the input; it will be added back to the main path as the shortcut
    X_shortcut = X
    # First component of the main path
    ## Convolution
    X = Conv1D(filters=F1, kernel_size=1, strides=1, padding="valid",
               name=conv_name_base+"2a", kernel_initializer=glorot_uniform(seed=0))(X)
    ## Batch normalization
    X = BatchNormalization(name=bn_name_base+"2a")(X)
    ## ReLU activation
    X = Activation("relu")(X)
    # Second component of the main path
    ## Convolution
    X = Conv1D(filters=F2, kernel_size=f, strides=1, padding="same",
               name=conv_name_base+"2b", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(name=bn_name_base+"2b")(X)
    ## ReLU activation
    X = Activation("relu")(X)
    # Third component of the main path
    ## Convolution
    X = Conv1D(filters=F3, kernel_size=1, strides=1, padding="valid",
               name=conv_name_base+"2c", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(name=bn_name_base+"2c")(X)
    ## No ReLU activation here
    # Final step:
    ## Add the shortcut to the main path
    X = Add()([X, X_shortcut])
    ## ReLU activation
    X = Activation("relu")(X)
    return X
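# Stripped of the Keras layers, the identity block computes relu(F(X) + X): the
# main-path output is added to the untouched shortcut before the final
# activation. A minimal numpy sketch of that idea, where the toy function `f`
# stands in for the three-conv main path:

```python
import numpy as np

def residual_relu(x, f):
    # Core identity-block computation: relu(F(x) + x)
    return np.maximum(f(x) + x, 0.0)

x = np.array([1.0, -2.0, 3.0])
out = residual_relu(x, lambda v: -0.5 * v)  # toy main path
print(out)  # [0.5 0.  1.5]
```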
# +
def convolutional_block(X, f, filters, stage, block, s=2):
    # Naming convention
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base = "bn" + str(stage) + block + "_branch"
    # Retrieve the number of filters
    F1, F2, F3 = filters
    # Save the input
    X_shortcut = X
    # Main path
    ## First component
    X = Conv1D(filters=F1, kernel_size=1, strides=s, padding="valid",
               name=conv_name_base+"2a", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(name=bn_name_base+"2a")(X)
    X = Activation("relu")(X)
    ## Second component
    X = Conv1D(filters=F2, kernel_size=f, strides=1, padding="same",
               name=conv_name_base+"2b", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)
    # ## Third component (disabled in this variant)
    # X = Conv1D(filters=F3, kernel_size=1, strides=1, padding="valid",
    #            name=conv_name_base+"2c", kernel_initializer=glorot_uniform(seed=0))(X)
    # X = BatchNormalization(name=bn_name_base+"2c")(X)
    # NOTE: with the third conv disabled, the Add below requires F2 == F3
    # (the main path and the shortcut must carry matching channel counts)
    # Shortcut path
    X_shortcut = Conv1D(filters=F3, kernel_size=1, strides=s, padding="valid",
                        name=conv_name_base+"1", kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(name=bn_name_base+"1")(X_shortcut)
    # Final step
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)
    return X
# -
# Construct ResNet50
def ResNet50(input_shape=(1001,1), classes=186):
    # Define the input as a tensor
    X_input = Input(input_shape)
    # Zero-padding
    X = keras.layers.convolutional.ZeroPadding1D(3)(X_input)
    # Stage 1
    X = Conv1D(filters=32, kernel_size=7, strides=2, name="conv1",
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(name="bn_conv1")(X)
    X = Activation("relu")(X)
    X = MaxPooling1D(pool_size=3, strides=2)(X)
    # Stage 2
    X = convolutional_block(X, f=3, filters=[32,32,64], stage=2, block="a", s=1)
    X = identity_block(X, f=3, filters=[32,32,64], stage=2, block="b")
    # X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="c")
    # Stage 3
    X = convolutional_block(X, f=3, filters=[64,64,128], stage=3, block="a", s=2)
    X = identity_block(X, f=3, filters=[64,64,128], stage=3, block="b")
    # X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="c")
    # X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="d")
    # Stage 4
    X = convolutional_block(X, f=3, filters=[128,128,256], stage=4, block="a", s=2)
    X = identity_block(X, f=3, filters=[128,128,256], stage=4, block="b")
    # X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="c")
    # X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="d")
    # X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="e")
    # X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="f")
    # Stage 5
    X = convolutional_block(X, f=3, filters=[128,128,256], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[128,128,256], stage=5, block="b")
    # X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="c")
    # Average pooling
    X = AveragePooling1D(pool_size=2, padding="same")(X)
    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation="softmax", name="fc"+str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)
    # Create the model
    cnn_model = Model(inputs=X_input, outputs=X, name="ResNet50")
    return cnn_model
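# A back-of-the-envelope check of how the sequence length 1001 shrinks through
# the network, using the standard Keras output-length formula for "valid"
# convolutions (the identity blocks and stride-1 "same" convs preserve length;
# the final pooling uses "same" padding). The 256-channel count after stage 5
# assumes the block output carries F3 filters.

```python
def conv_out(n, k, s):
    # "valid" padding: floor((n - k) / s) + 1
    return (n - k) // s + 1

n = 1001 + 2 * 3          # ZeroPadding1D(3) -> 1007
n = conv_out(n, 7, 2)     # stage-1 Conv1D   -> 501
n = conv_out(n, 3, 2)     # MaxPooling1D     -> 250
for s in (1, 2, 2, 2):    # strides of the four convolutional blocks
    n = conv_out(n, 1, s)
pooled = -(-n // 2)       # AveragePooling1D(2, padding="same") -> ceil(n / 2)
features = pooled * 256   # assumed channel count after stage 5
print(n, pooled, features)
```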
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Like fig. 7.7 but for emissions (plot is the remit of Sara / Terje / Sophie)
#
# Theme Song: Deus Ex Machina<br>
# Artist: Pure Reason Revolution<br>
# Album: Amor Vincit Omnia<br>
# Released: 2009
#
# Temperature response to emissions 1750-2019
# +
import numpy as np
import scipy.stats as st
import pandas as pd
import matplotlib.pyplot as pl
import os
from matplotlib import gridspec, rc
from matplotlib.lines import Line2D
import matplotlib.patches as mp
from netCDF4 import Dataset
import warnings
from ar6.utils.h5 import *
# -
results = load_dict_from_hdf5('../data_output_large/twolayer_AR6-historical-emissions-based.h5')
results.keys()
results['AR6-anthro_climuncert']['surface_temperature'].shape
results['AR6-anthro_climuncert']['surface_temperature'][0].mean()
forcings = list(results.keys())
forcings.remove('AR6-anthro_climuncert')
forcings
# +
AR6_ecsforc = {}
for forcing in forcings:
    AR6_ecsforc[forcing[7:14]] = np.percentile(
        (results['AR6-anthro_climuncert']['surface_temperature'][-1] - results['AR6-anthro_climuncert']['surface_temperature'][0]) -
        (results[forcing]['surface_temperature'][-1] - results[forcing]['surface_temperature'][0]), (5, 16, 50, 84, 95)
    )
# -
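# The loop above reduces each ensemble of temperature differences to its
# 5th/16th/50th/84th/95th percentiles in one `np.percentile` call. A toy
# stand-in for one such ensemble:

```python
import numpy as np

draws = np.arange(101, dtype=float)  # hypothetical stand-in for the ΔT draws
pcts = np.percentile(draws, (5, 16, 50, 84, 95))
print(pcts)  # [ 5. 16. 50. 84. 95.]
```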
AR6_ecsforc.keys()
print(AR6_ecsforc['ch4_co2'])
# +
emissions = ['co2', 'ch4', 'n2o', 'oth', 'nox', 'voc', 'so2', 'blc', 'orc', 'nh3', 'con', 'luc']
forc = ['co2', 'ch4', 'n2o', 'oth', 'ozo', 'h2o', 'ari', 'aci', 'bcs', 'con', 'luc']
emissions_full = ['CO2', 'CH4', 'N2O', 'Halocarbons', 'NOx', 'VOC', 'SO2', 'BC', 'OC', 'NH3', 'Contrails', 'Land use']
forcings_full = ['CO2', 'CH4', 'N2O', 'Halocarbons', 'O3', 'Stratospheric H2O', 'Aerosol-radiation', 'Aerosol-cloud', 'BC on snow', 'Contrails', 'Land use']
results = np.zeros((len(emissions), len(forc)))
for i, em in enumerate(emissions):
    for j, fo in enumerate(forc):
        combo = '%s_%s' % (em, fo)
        if combo in AR6_ecsforc.keys():
            #print(AR6_ecsforc[combo][2])
            results[i, j] = AR6_ecsforc[combo][2]
df50 = pd.DataFrame(results, index=emissions_full, columns=forcings_full)
df50.to_csv('../data_output/GSAT_by_emissions.csv')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Suicide analysis comparing Japan, Russia, Italy, Chile, and Mexico
# This part analyzes several countries, chosen at random (except Mexico). Unlike the Kaggle report, which compares countries that may be similar culturally, socially, and economically, here, by coincidence and by my own choice, I picked countries with very different cultures. As for Mexico and Chile: despite sharing a language and perhaps being closer to each other, my understanding is that their economic and cultural quality of life differs considerably, so the contrast between these two Latin American countries should be worth examining.
#
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.patches as mpatches
df = pd.read_csv("masterClean.csv")
df.head(1)
df.drop(columns=["Unnamed: 0"], inplace=True)
df["continent"] = df["continent"].fillna("NA")
df.head(1)
df.drop(df[df["country"] == "Macau"].index, inplace=True)
# # Overall suicide trend
japon = df[df["country"]=="Japan"]
italia = df[df["country"]=="Italy"]
mexico = df[df["country"]=="Mexico"]
chile = df[df["country"]=="Chile"]
rusia = df[df["country"]=="Russian Federation"]
df_group = df.groupby(["country","year"])
japon_group = japon.groupby(["country","year"])
italia_group = italia.groupby(["country","year"])
mexico_group = mexico.groupby(["country","year"])
chile_group = chile.groupby(["country","year"])
rusia_group = rusia.groupby(["country","year"])
'''for names,groups in rusia_group:
print(names)
print(groups)'''
japon_pob = japon_group["population"].sum()
italia_pob = italia_group["population"].sum()
mexico_pob = mexico_group["population"].sum()
chile_pob = chile_group["population"].sum()
rusia_pob = rusia_group["population"].sum()
japon_suic = japon_group["suicides_no"].sum()
italia_suic = italia_group["suicides_no"].sum()
mexico_suic = mexico_group["suicides_no"].sum()
chile_suic = chile_group["suicides_no"].sum()
rusia_suic = rusia_group["suicides_no"].sum()
J_suicides_per_100k = (japon_suic/japon_pob)*100000
I_suicides_per_100k = (italia_suic/italia_pob)*100000
M_suicides_per_100k = (mexico_suic/mexico_pob)*100000
C_suicides_per_100k = (chile_suic/chile_pob)*100000
R_suicides_per_100k = (rusia_suic/rusia_pob)*100000
len(R_suicides_per_100k)
japan_year = japon["year"].unique().tolist()
italia_year = italia["year"].unique().tolist()
mexico_year = mexico["year"].unique().tolist()
chile_year = chile["year"].unique().tolist()
rusia_year = rusia["year"].unique().tolist()
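# The per-100k rate used throughout is just the suicide count divided by the
# population, scaled to 100,000 inhabitants. With made-up numbers:

```python
suicides, population = 500, 1_000_000
rate_per_100k = suicides / population * 100_000
print(rate_per_100k)  # 50.0
```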
# +
plt.figure(figsize=(15,10))
sns.lineplot(x = japan_year,y=J_suicides_per_100k,marker="o")
sns.lineplot(x = italia_year,y=I_suicides_per_100k,marker="o")
sns.lineplot(x = mexico_year,y=M_suicides_per_100k,marker="o")
sns.lineplot(x = chile_year,y=C_suicides_per_100k,marker="o")
sns.lineplot(x = rusia_year,y=R_suicides_per_100k,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,50,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# -
# # Insights
# * Russia's rate rose from 1991 onward, but since 2009 it has leveled off to match Japan's; both now trend downward.
# * Chile trends high compared with Mexico and even Italy, having overtaken the latter in 1999.
# * Mexico is the lowest of all, though with a fairly clear upward trend.
# # By gender
# ## Men and women (overall)
japon_group_G = japon.groupby(["sex","year"])
italia_group_G = italia.groupby(["sex","year"])
mexico_group_G = mexico.groupby(["sex","year"])
chile_group_G = chile.groupby(["sex","year"])
rusia_group_G = rusia.groupby(["sex","year"])
japon_pob_G = japon_group_G["population"].sum()
italia_pob_G = italia_group_G["population"].sum()
mexico_pob_G = mexico_group_G["population"].sum()
chile_pob_G = chile_group_G["population"].sum()
rusia_pob_G = rusia_group_G["population"].sum()
#japon_pob_G
japon_suic_G = japon_group_G["suicides_no"].sum()
italia_suic_G = italia_group_G["suicides_no"].sum()
mexico_suic_G = mexico_group_G["suicides_no"].sum()
chile_suic_G = chile_group_G["suicides_no"].sum()
rusia_suic_G = rusia_group_G["suicides_no"].sum()
JG_suicides_per_100k = list((japon_suic_G/japon_pob_G)*100000)
IG_suicides_per_100k = list((italia_suic_G/italia_pob_G)*100000)
MG_suicides_per_100k = list((mexico_suic_G/mexico_pob_G)*100000)
CG_suicides_per_100k = list((chile_suic_G/chile_pob_G)*100000)
RG_suicides_per_100k = list((rusia_suic_G/rusia_pob_G)*100000)
len(RG_suicides_per_100k)/2
japon_f = JG_suicides_per_100k[0:31]
japon_m = JG_suicides_per_100k[31:]
italia_f = IG_suicides_per_100k[0:31]
italia_m = IG_suicides_per_100k[31:]
mexico_f = MG_suicides_per_100k[0:31]
mexico_m = MG_suicides_per_100k[31:]
chile_f = CG_suicides_per_100k[0:31]
chile_m = CG_suicides_per_100k[31:]
rusia_f = RG_suicides_per_100k[0:27]
rusia_m = RG_suicides_per_100k[27:]
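# Splitting the female/male halves by hardcoded slice positions (`[0:31]`,
# `[0:27]`) is fragile if a country's year range changes. A safer pattern,
# sketched on a tiny hypothetical series, selects one gender directly from the
# (sex, year) MultiIndex with `.xs`:

```python
import pandas as pd

idx = pd.MultiIndex.from_product([["female", "male"], [1990, 1991]],
                                 names=["sex", "year"])
s = pd.Series([10, 12, 30, 33], index=idx)
female = s.xs("female", level="sex")  # no positional slicing needed
print(list(female))  # [10, 12]
```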
# +
plt.figure(figsize=(15,10))
plt.title("Male suicides")
sns.lineplot(x = japan_year,y=japon_m,marker="o")
sns.lineplot(x = italia_year,y=italia_m,marker="o")
sns.lineplot(x = mexico_year,y=mexico_m,marker="o")
sns.lineplot(x = chile_year,y=chile_m,marker="o")
sns.lineplot(x = rusia_year,y=rusia_m,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,100,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# +
plt.figure(figsize=(15,10))
plt.title("Female suicides")
sns.lineplot(x = japan_year,y=japon_f,marker="o")
sns.lineplot(x = italia_year,y=italia_f,marker="o")
sns.lineplot(x = mexico_year,y=mexico_f,marker="o")
sns.lineplot(x = chile_year,y=chile_f,marker="o")
sns.lineplot(x = rusia_year,y=rusia_f,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,50,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# -
# ## Insights
# * Men drive the trend of the overall chart, so their individual chart shows little variation.
# * Russian men's rate per 100k inhabitants is almost double the overall figure; Japan shows a smaller gap.
# * Japanese women have a higher suicide rate.
# * The trends for Chile and Italy hold across both genders; Chile's trend even seems to rise among women.
# * Mexico stays roughly the same in both genders relative to the other countries, but Mexican men die by suicide almost four times as often as women.
# # 2010-2015
japan_year = japan_year[-6:]
italia_year = italia_year[-6:]
mexico_year = mexico_year[-6:]
chile_year = chile_year[-6:]
rusia_year = rusia_year[-6:]
japon_f_5 = np.mean(JG_suicides_per_100k[25:31])
japon_m_5 = np.mean(JG_suicides_per_100k[-6:])
italia_f_5 = np.mean(IG_suicides_per_100k[25:31])
italia_m_5 = np.mean(IG_suicides_per_100k[-6:])
mexico_f_5 = np.mean(MG_suicides_per_100k[25:31])
mexico_m_5 = np.mean(MG_suicides_per_100k[-6:])
chile_f_5 = np.mean(CG_suicides_per_100k[25:31])
chile_m_5= np.mean(CG_suicides_per_100k[-6:])
rusia_f_5= np.mean(RG_suicides_per_100k[21:27])
rusia_m_5= np.mean(RG_suicides_per_100k[-6:])
male_suicides_per_100k = []
female_suicides_per_100k = []
male_suicides_per_100k.append([japon_m_5, italia_m_5 , mexico_m_5 ,chile_m_5 , rusia_m_5])
female_suicides_per_100k.append([japon_f_5, italia_f_5 , mexico_f_5 ,chile_f_5 , rusia_f_5])
len(male_suicides_per_100k[0])
male_proportion = []
for i in range(0, len(male_suicides_per_100k[0])):
    male_proportion.append((male_suicides_per_100k[0][i]/(female_suicides_per_100k[0][i]+male_suicides_per_100k[0][i]))*100)
male_proportion1 = [100]*len(male_proportion)
#male_proportion
female_proportion = [(100 - male_proportion[i]) for i in range(len(male_proportion)) ]
female_proportion1 = [100]*len(male_proportion)
paises = ["Japon","Italia","Mexico","Chile","Rusia"]
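# The stacked bars rely on the male share of the combined per-100k rates; the
# arithmetic, with made-up rates:

```python
male_rate, female_rate = 18.0, 6.0
male_share = male_rate / (male_rate + female_rate) * 100
female_share = 100 - male_share
print(male_share, female_share)  # 75.0 25.0
```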
# +
plt.figure(figsize=(10,10))
plt.title("Proportion of suicides between men and women \n 2010-2015")
plt.ylabel("Proportion (%)")
plt.xlabel("Country")
plt.yticks(np.arange(0,105,step=5))
bar2 = sns.barplot(x = paises ,y = female_proportion1,color='#F56750')
#Top
bar1 = sns.barplot(x = paises ,y = male_proportion,color='#803BFA')
#Bottom
#mean_line = ax.plot(x,suicide_prom, label='Mean', linestyle='--')
top_bar = mpatches.Patch(color='#F56750', label='Women')
bottom_bar = mpatches.Patch(color='#803BFA', label='Men')
plt.legend(handles=[top_bar, bottom_bar])
plt.show()
# -
# ## Insights
# * Japanese women account for a considerably higher proportion than in the other countries.
# * Overall, about 75% of suicide deaths are men.
# * Mexico, Italy, and Chile have very similar proportions.
# # By age
japon_group_E = japon.groupby(["age"])
italia_group_E = italia.groupby(["age"])
mexico_group_E = mexico.groupby(["age"])
chile_group_E = chile.groupby(["age"])
rusia_group_E = rusia.groupby(["age"])
japon_pob_E = japon_group_E["population"].sum()
italia_pob_E = italia_group_E["population"].sum()
mexico_pob_E = mexico_group_E["population"].sum()
chile_pob_E = chile_group_E["population"].sum()
rusia_pob_E = rusia_group_E["population"].sum()
japon_suic_E = japon_group_E["suicides_no"].sum()
italia_suic_E = italia_group_E["suicides_no"].sum()
mexico_suic_E = mexico_group_E["suicides_no"].sum()
chile_suic_E = chile_group_E["suicides_no"].sum()
rusia_suic_E = rusia_group_E["suicides_no"].sum()
#rusia_suic_E
JE_suicides_per_100k = list((japon_suic_E/japon_pob_E)*100000)
IE_suicides_per_100k = list((italia_suic_E/italia_pob_E)*100000)
ME_suicides_per_100k = list((mexico_suic_E/mexico_pob_E)*100000)
CE_suicides_per_100k = list((chile_suic_E/chile_pob_E)*100000)
RE_suicides_per_100k = list((rusia_suic_E/rusia_pob_E)*100000)
suicides_per_100k_general = [JE_suicides_per_100k,IE_suicides_per_100k,ME_suicides_per_100k,CE_suicides_per_100k,RE_suicides_per_100k]
for i in range(0, 5):
    print(suicides_per_100k_general[i][0])
years15 = [suicides_per_100k_general[i][0] for i in range(0,5)]
years25 = [suicides_per_100k_general[i][1] for i in range(0,5)]
years35 = [suicides_per_100k_general[i][2] for i in range(0,5)]
years5 = [suicides_per_100k_general[i][3] for i in range(0,5)]
years55 = [suicides_per_100k_general[i][4] for i in range(0,5)]
years75 = [suicides_per_100k_general[i][5] for i in range(0,5)]
# +
# %matplotlib inline
x = np.arange(len(paises))
width=0.1
fig, ax = plt.subplots(figsize=(20,10))
rects1 = ax.bar(x + 0.25, years15, width, label='15-24 years', color="#fde725")
rects2 = ax.bar(x + 0.15, years25, width, label='25-34 years',color="#7ad151")
rects3 = ax.bar(x + 0.05 , years35, width, label='35-54 years', color="#22a884")
rects4 = ax.bar(x - 0.05, years5, width, label='5-14',color="#2a788e")
rects5 = ax.bar(x - 0.15, years55, width, label='55-74', color="#414487")
rects6 = ax.bar(x - 0.25, years75, width, label='75+',color="#440154")
ax.set_ylabel('Suicides per 100k')
ax.set_title('Suicides by age')
ax.set_xticks(x, paises)
ax.legend()
#mean_line = ax.plot(x,suicide_prom, label='Mean', linestyle='--')
#ax.bar_label(rects1)
#ax.bar_label(rects2)
#ax.bar_label(rects3)
#ax.bar_label(rects4)
#ax.bar_label(rects5)
#ax.bar_label(rects6)
#fig.tight_layout()
plt.show()
# -
# ## Insights
# * Almost all countries follow the pattern that suicide risk increases with age.
# * The exception is Russia, where adults aged 25-35 trend almost as high as older adults.
# * Mexico shows a similar pattern to Russia in that sense, though only relative to the 55-75 group.
# * What is alarming about Mexico is that its second-highest group is the combined 15-35 range.
# * Russia's under-15 age group is higher than in the rest of the countries.
# # Differences by age and gender
japon_group_EG = japon.groupby(["sex","age"])
italia_group_EG = italia.groupby(["sex","age"])
mexico_group_EG = mexico.groupby(["sex","age"])
chile_group_EG = chile.groupby(["sex","age"])
rusia_group_EG = rusia.groupby(["sex","age"])
japon_pob_EG = japon_group_EG["population"].sum()
italia_pob_EG = italia_group_EG["population"].sum()
mexico_pob_EG = mexico_group_EG["population"].sum()
chile_pob_EG = chile_group_EG["population"].sum()
rusia_pob_EG = rusia_group_EG["population"].sum()
#japon_pob_EG
japon_suic_EG = japon_group_EG["suicides_no"].sum()
italia_suic_EG = italia_group_EG["suicides_no"].sum()
mexico_suic_EG = mexico_group_EG["suicides_no"].sum()
chile_suic_EG = chile_group_EG["suicides_no"].sum()
rusia_suic_EG = rusia_group_EG["suicides_no"].sum()
#rusia_suic_EG
JEG_suicides_per_100k = list((japon_suic_EG/japon_pob_EG)*100000)
IEG_suicides_per_100k = list((italia_suic_EG/italia_pob_EG)*100000)
MEG_suicides_per_100k = list((mexico_suic_EG/mexico_pob_EG)*100000)
CEG_suicides_per_100k = list((chile_suic_EG/chile_pob_EG)*100000)
REG_suicides_per_100k = list((rusia_suic_EG/rusia_pob_EG)*100000)
#JEG_suicides_per_100k
japon_female_EG = JEG_suicides_per_100k[0:6]
japon_male_EG = JEG_suicides_per_100k[6:]
italia_female_EG = IEG_suicides_per_100k[0:6]
italia_male_EG = IEG_suicides_per_100k[6:]
mexico_female_EG = MEG_suicides_per_100k[0:6]
mexico_male_EG = MEG_suicides_per_100k[6:]
chile_female_EG = CEG_suicides_per_100k[0:6]
chile_male_EG = CEG_suicides_per_100k[6:]
rusia_female_EG = REG_suicides_per_100k[0:6]
rusia_male_EG = REG_suicides_per_100k[6:]
suicides_per_100k_general_F = [japon_female_EG,italia_female_EG,mexico_female_EG,chile_female_EG,rusia_female_EG]
suicides_per_100k_general_M = [japon_male_EG,italia_male_EG,mexico_male_EG,chile_male_EG,rusia_male_EG]
years15F = [suicides_per_100k_general_F[i][0] for i in range(0,5)]
years25F = [suicides_per_100k_general_F[i][1] for i in range(0,5)]
years35F = [suicides_per_100k_general_F[i][2] for i in range(0,5)]
years5F = [suicides_per_100k_general_F[i][3] for i in range(0,5)]
years55F = [suicides_per_100k_general_F[i][4] for i in range(0,5)]
years75F = [suicides_per_100k_general_F[i][5] for i in range(0,5)]
years15M = [suicides_per_100k_general_M[i][0] for i in range(0,5)]
years25M = [suicides_per_100k_general_M[i][1] for i in range(0,5)]
years35M = [suicides_per_100k_general_M[i][2] for i in range(0,5)]
years5M = [suicides_per_100k_general_M[i][3] for i in range(0,5)]
years55M = [suicides_per_100k_general_M[i][4] for i in range(0,5)]
years75M = [suicides_per_100k_general_M[i][5] for i in range(0,5)]
# +
# %matplotlib inline
x = np.arange(len(paises))
width=0.1
fig, ax = plt.subplots(figsize=(20,10))
rects1 = ax.bar(x + 0.25, years15F, width, label='15-24 years', color="#fde725")
rects2 = ax.bar(x + 0.15, years25F, width, label='25-34 years',color="#7ad151")
rects3 = ax.bar(x + 0.05 , years35F, width, label='35-54 years', color="#22a884")
rects4 = ax.bar(x - 0.05, years5F, width, label='5-14',color="#2a788e")
rects5 = ax.bar(x - 0.15, years55F, width, label='55-74', color="#414487")
rects6 = ax.bar(x - 0.25, years75F, width, label='75+',color="#440154")
ax.set_ylabel('Suicides per 100k')
ax.set_title('Suicides by age among women')
ax.set_xticks(x, paises)
ax.legend()
#mean_line = ax.plot(x,suicide_prom, label='Mean', linestyle='--')
#ax.bar_label(rects1)
#ax.bar_label(rects2)
#ax.bar_label(rects3)
#ax.bar_label(rects4)
#ax.bar_label(rects5)
#ax.bar_label(rects6)
#fig.tight_layout()
plt.show()
# -
# ## Insights
# * The general pattern is that the suicide rate increases with age.
# * Chile and Mexico, the Latin American countries, are the only ones where young adult women have a higher suicide rate.
# * In Mexico the rate is very high among women aged 15-25; in Chile, among those aged 35-55.
# +
# %matplotlib inline
x = np.arange(len(paises))
width=0.1
fig, ax = plt.subplots(figsize=(20,10))
rects1 = ax.bar(x + 0.25, years15M, width, label='15-24 years', color="#fde725")
rects2 = ax.bar(x + 0.15, years25M, width, label='25-34 years',color="#7ad151")
rects3 = ax.bar(x + 0.05 , years35M, width, label='35-54 years', color="#22a884")
rects4 = ax.bar(x - 0.05, years5M, width, label='5-14',color="#2a788e")
rects5 = ax.bar(x - 0.15, years55M, width, label='55-74', color="#414487")
rects6 = ax.bar(x - 0.25, years75M, width, label='75+',color="#440154")
ax.set_ylabel('Suicides per 100k')
ax.set_title('Suicides by age among men')
ax.set_xticks(x, paises)
ax.legend()
#mean_line = ax.plot(x,suicide_prom, label='Mean', linestyle='--')
#ax.bar_label(rects1)
#ax.bar_label(rects2)
#ax.bar_label(rects3)
#ax.bar_label(rects4)
#ax.bar_label(rects5)
#ax.bar_label(rects6)
#fig.tight_layout()
plt.show()
# -
# ## Insights
# * Russia's rate is twice that of second place (Japan).
# * The general trend is that suicides increase with age.
# * Mexico is the only country where almost all adult age groups under 75 have roughly the same suicide risk.
# # Analysis of men aged 15-54
# I'm a man, so I want to look into this group.
# ## Combined ages
# +
japon_hombre = japon[japon["age"]!= "75+ years"]
japon_hombre = japon_hombre[japon_hombre["age"] != "5-14 years"]
japon_hombre = japon_hombre[japon_hombre["age"] != "55-74 years"]
japon_hombre = japon_hombre[japon_hombre["sex"] == "male"]
italia_hombre = italia[italia["age"]!= "75+ years"]
italia_hombre = italia_hombre[italia_hombre["age"] != "5-14 years"]
italia_hombre = italia_hombre[italia_hombre["age"] != "55-74 years"]
italia_hombre = italia_hombre[italia_hombre["sex"] == "male"]
mexico_hombre = mexico[mexico["age"]!= "75+ years"]
mexico_hombre = mexico_hombre[mexico_hombre["age"] != "5-14 years"]
mexico_hombre = mexico_hombre[mexico_hombre["age"] != "55-74 years"]
mexico_hombre = mexico_hombre[mexico_hombre["sex"] == "male"]
chile_hombre = chile[chile["age"]!= "75+ years"]
chile_hombre = chile_hombre[chile_hombre["age"] != "5-14 years"]
chile_hombre = chile_hombre[chile_hombre["age"] != "55-74 years"]
chile_hombre = chile_hombre[chile_hombre["sex"] == "male"]
rusia_hombre = rusia[rusia["age"]!= "75+ years"]
rusia_hombre = rusia_hombre[rusia_hombre["age"] != "5-14 years"]
rusia_hombre = rusia_hombre[rusia_hombre["age"] != "55-74 years"]
rusia_hombre = rusia_hombre[rusia_hombre["sex"] == "male"]
# -
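# The chained negative filters above can be collapsed into a single `.isin` on
# the age groups to keep, illustrated on a tiny made-up frame:

```python
import pandas as pd

df_toy = pd.DataFrame({"sex": ["male", "male", "female", "male"],
                       "age": ["15-24 years", "75+ years", "25-34 years", "35-54 years"]})
keep_ages = ["15-24 years", "25-34 years", "35-54 years"]
men = df_toy[(df_toy["sex"] == "male") & (df_toy["age"].isin(keep_ages))]
print(len(men))  # 2
```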
japon_years = japon_hombre["year"].unique().tolist()
italia_years = italia_hombre["year"].unique().tolist()
mexico_years = mexico_hombre["year"].unique().tolist()
chile_years = chile_hombre["year"].unique().tolist()
rusia_years = rusia_hombre["year"].unique().tolist()
japon_group_h = japon_hombre.groupby(["year"])
italia_group_h = italia_hombre.groupby(["year"])
mexico_group_h = mexico_hombre.groupby(["year"])
chile_group_h = chile_hombre.groupby(["year"])
rusia_group_h = rusia_hombre.groupby(["year"])
japon_pob_h = japon_group_h["population"].sum()
italia_pob_h = italia_group_h["population"].sum()
mexico_pob_h = mexico_group_h["population"].sum()
chile_pob_h = chile_group_h["population"].sum()
rusia_pob_h = rusia_group_h["population"].sum()
japon_suic_h = japon_group_h["suicides_no"].sum()
italia_suic_h = italia_group_h["suicides_no"].sum()
mexico_suic_h = mexico_group_h["suicides_no"].sum()
chile_suic_h = chile_group_h["suicides_no"].sum()
rusia_suic_h = rusia_group_h["suicides_no"].sum()
#japon_suic_h
JH_suicides_per_100k = list((japon_suic_h/japon_pob_h)*100000)
IH_suicides_per_100k = list((italia_suic_h/italia_pob_h)*100000)
MH_suicides_per_100k = list((mexico_suic_h/mexico_pob_h)*100000)
CH_suicides_per_100k = list((chile_suic_h/chile_pob_h)*100000)
RH_suicides_per_100k = list((rusia_suic_h/rusia_pob_h)*100000)
# +
plt.figure(figsize=(15,10))
plt.title("Male suicides, ages 15-54")
sns.lineplot(x = japon_years,y=JH_suicides_per_100k,marker="o")
sns.lineplot(x = italia_years,y=IH_suicides_per_100k,marker="o")
sns.lineplot(x = mexico_years,y=MH_suicides_per_100k,marker="o")
sns.lineplot(x = chile_years,y=CH_suicides_per_100k,marker="o")
sns.lineplot(x = rusia_years,y=RH_suicides_per_100k,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,100,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# -
# ## Insights
# * Mexico's trend keeps rising; Italy's is declining, and Mexico has caught up with it.
# * The trends otherwise stay roughly the same.
# # By age group
japon_group_h1 = japon_hombre.groupby(["age","year"])
italia_group_h1 = italia_hombre.groupby(["age","year"])
mexico_group_h1 = mexico_hombre.groupby(["age","year"])
chile_group_h1 = chile_hombre.groupby(["age","year"])
rusia_group_h1 = rusia_hombre.groupby(["age","year"])
japon_pob_h1 = japon_group_h1["population"].sum()
italia_pob_h1 = italia_group_h1["population"].sum()
mexico_pob_h1 = mexico_group_h1["population"].sum()
chile_pob_h1 = chile_group_h1["population"].sum()
rusia_pob_h1 = rusia_group_h1["population"].sum()
japon_suic_h1 = japon_group_h1["suicides_no"].sum()
italia_suic_h1 = italia_group_h1["suicides_no"].sum()
mexico_suic_h1 = mexico_group_h1["suicides_no"].sum()
chile_suic_h1 = chile_group_h1["suicides_no"].sum()
rusia_suic_h1 = rusia_group_h1["suicides_no"].sum()
JH1_suicides_per_100k = list((japon_suic_h1/japon_pob_h1)*100000)
IH1_suicides_per_100k = list((italia_suic_h1/italia_pob_h1)*100000)
MH1_suicides_per_100k = list((mexico_suic_h1/mexico_pob_h1)*100000)
CH1_suicides_per_100k = list((chile_suic_h1/chile_pob_h1)*100000)
RH1_suicides_per_100k = list((rusia_suic_h1/rusia_pob_h1)*100000)
#Japan groups
yearsJ15 = JH1_suicides_per_100k[0:31]
yearsJ25 = JH1_suicides_per_100k[31:62]
yearsJ35 = JH1_suicides_per_100k[62:]
len(yearsJ15),len(yearsJ25),len(yearsJ35)
#Italy
yearsI15 = IH1_suicides_per_100k[0:31]
yearsI25 = IH1_suicides_per_100k[31:62]
yearsI35 = IH1_suicides_per_100k[62:]
len(yearsI15),len(yearsI25),len(yearsI35)
#Mexico
yearsM15 = MH1_suicides_per_100k[0:31]
yearsM25 = MH1_suicides_per_100k[31:62]
yearsM35 = MH1_suicides_per_100k[62:]
len(yearsM15),len(yearsM25),len(yearsM35)
#Chile
yearsC15 = CH1_suicides_per_100k[0:31]
yearsC25 = CH1_suicides_per_100k[31:62]
yearsC35 = CH1_suicides_per_100k[62:]
len(yearsC15),len(yearsC25),len(yearsC35)
#Russia
yearsR15 = RH1_suicides_per_100k[0:27]
yearsR25 = RH1_suicides_per_100k[27:54]
yearsR35 = RH1_suicides_per_100k[54:]
len(yearsR15),len(yearsR25),len(yearsR35)
# +
plt.figure(figsize=(15,10))
plt.title("Male suicides, ages 15-24")
sns.lineplot(x = japon_years,y=yearsJ15,marker="o")
sns.lineplot(x = italia_years,y=yearsI15,marker="o")
sns.lineplot(x = mexico_years,y=yearsM15,marker="o")
sns.lineplot(x = chile_years,y=yearsC15,marker="o")
sns.lineplot(x = rusia_years,y=yearsR15,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,100,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# +
plt.figure(figsize=(15,10))
plt.title("Male suicides, ages 25-34")
sns.lineplot(x = japon_years,y=yearsJ25,marker="o")
sns.lineplot(x = italia_years,y=yearsI25,marker="o")
sns.lineplot(x = mexico_years,y=yearsM25,marker="o")
sns.lineplot(x = chile_years,y=yearsC25,marker="o")
sns.lineplot(x = rusia_years,y=yearsR25,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,110,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# +
plt.figure(figsize=(15,10))
plt.title("Male suicides, ages 35-54")
sns.lineplot(x = japon_years,y=yearsJ35,marker="o")
sns.lineplot(x = italia_years,y=yearsI35,marker="o")
sns.lineplot(x = mexico_years,y=yearsM35,marker="o")
sns.lineplot(x = chile_years,y=yearsC35,marker="o")
sns.lineplot(x = rusia_years,y=yearsR35,marker="o")
plt.xticks(np.arange(1985,2020,step=3))
plt.yticks(np.arange(0,130,step=5))
plt.grid()
plt.legend(labels=["Japon","Italia","Mexico","Chile","Rusia"], loc = 0, bbox_to_anchor = (1,0.5))
plt.show()
# -
# ## Insights
# * In the 15-35 range, Mexico has overtaken Italy and continues to trend upward.
# * In the 15-25 group, Japan and Chile have crossed back and forth, each overtaking the other in turn.
# With this I have finished my first analysis. It took some work, and I realized that everything you learn in a
# course is not enough. Some very interesting findings came out, especially for a topic as complex and important
# as suicide.
# Next steps:
# * Recreate the charts in Tableau or Power BI.
# * Keep improving my pandas practice.
# * Turn this into an interactive notebook (online or similar) so classmates can run it.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from google.cloud import bigquery
# +
# Create a "Client" object
client = bigquery.Client()
# +
# Construct a reference to the "hacker_news" dataset
dataset_ref = client.dataset("hacker_news", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# +
# Construct a reference to the "comments" table
table_ref = dataset_ref.table("comments")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "comments" table
client.list_rows(table, max_results=5).to_dataframe()
# -
# Query to select comments that received more than 10 replies
query_popular = """
SELECT parent, COUNT(id)
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
HAVING COUNT(id) > 10
"""
# +
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 10 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(query_popular, job_config=safe_config)
# API request - run the query, and convert the results to a pandas DataFrame
popular_comments = query_job.to_dataframe()
# Print the first five rows of the DataFrame
popular_comments.head()
# +
# Improved version of earlier query, now with aliasing & improved readability
query_improved = """
SELECT parent, COUNT(1) AS NumPosts
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
HAVING COUNT(1) > 10
"""
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(query_improved, job_config=safe_config)
# API request - run the query, and convert the results to a pandas DataFrame
improved_df = query_job.to_dataframe()
# Print the first five rows of the DataFrame
improved_df.head()
# -
query_good = """
SELECT parent, COUNT(id)
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
"""
query_bad = """
SELECT author, parent, COUNT(id)
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
"""
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('base')
# language: python
# name: python3
# ---
# +
#|default_exp data.transforms
# +
#|export
from __future__ import annotations
import pandas as pd
from pathlib import Path
from fastcore.foundation import mask2idxs
from fastai.data.transforms import IndexSplitter
from fastxtend.imports import *
# -
# # Splitters
#
# > Additional functions for splitting data
#|hide
from nbdev.showdoc import *
from fastxtend.test_utils import *
#|export
def KFoldColSplitter(
fold:listified[int]=0, # Valid set fold(s)
col:int|str='folds' # Column with folds
):
"Split `items` (supposed to be a dataframe) by `fold` in `col`"
def _inner(o):
assert isinstance(o, pd.DataFrame), "KFoldColSplitter only works when your items are a pandas DataFrame"
valid_col = o.iloc[:,col] if isinstance(col, int) else o[col]
valid_idx = valid_col.isin(fold) if is_listy(fold) else valid_col.values == fold
return IndexSplitter(mask2idxs(valid_idx))(o)
return _inner
# +
#|hide
df = pd.DataFrame({'a': [0,1,2,3,4,5,6,7,8,9], 'b': [0,1,2,3,4,0,1,2,3,4]})
splits = KFoldColSplitter(col='b')(df)
test_eq(splits, [[1,2,3,4,6,7,8,9], [0,5]])
# Works with strings or index
splits = KFoldColSplitter(col=1)(df)
test_eq(splits, [[1,2,3,4,6,7,8,9], [0,5]])
# Works with single or multiple folds
df = pd.DataFrame({'a': [0,1,2,3,4,5,6,7,8,9], 'folds': [0,1,2,3,4,0,1,2,3,4]})
splits = KFoldColSplitter(fold=[0,1],col='folds')(df)
test_eq(splits, [[2,3,4,7,8,9], [0,1,5,6]])
# +
#|hide
from fastcore.basics import ifnone, range_of
def _test_splitter(f, items=None):
"A basic set of condition a splitter must pass"
items = ifnone(items, range_of(30))
trn,val = f(items)
assert 0<len(trn)<len(items)
assert all(o not in val for o in trn)
test_eq(len(trn), len(items)-len(val))
# test random seed consistency
test_eq(f(items)[0], trn)
return trn, val
# -
#|exporti
def _parent_idxs(items, name):
def _inner(items, name): return mask2idxs(Path(o).parent.name == name for o in items)
return [i for n in L(name) for i in _inner(items,n)]
#|export
def ParentSplitter(
train_name:str='train', # Train set folder name
valid_name:str='valid' # Valid set folder name
):
"Split `items` from the parent folder names (`train_name` and `valid_name`)."
def _inner(o):
return _parent_idxs(o, train_name),_parent_idxs(o, valid_name)
return _inner
# +
#|hide
fnames = ['dir/train/9932.png', 'dir/valid/7189.png',
'dir/valid/7320.png', 'dir/train/9833.png',
'dir/train/7666.png', 'dir/valid/925.png',
'dir/train/724.png', 'dir/valid/93055.png']
splitter = ParentSplitter()
_test_splitter(splitter, items=fnames)
test_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]])
# -
#|exporti
def _greatgrandparent_idxs(items, name):
def _inner(items, name): return mask2idxs(Path(o).parent.parent.parent.name == name for o in items)
return [i for n in L(name) for i in _inner(items,n)]
#|export
def GreatGrandparentSplitter(
train_name:str='train', # Train set folder name
valid_name:str='valid' # Valid set folder name
):
"Split `items` from the great grand parent folder names (`train_name` and `valid_name`)."
def _inner(o):
return _greatgrandparent_idxs(o, train_name),_greatgrandparent_idxs(o, valid_name)
return _inner
# +
#|hide
fnames = ['dir/train/9/9/9932.png', 'dir/valid/7/1/7189.png',
'dir/valid/7/3/7320.png', 'dir/train/9/8/9833.png',
'dir/train/7/6/7666.png', 'dir/valid/9/2/925.png',
'dir/train/7/2/724.png', 'dir/valid/9/3/93055.png']
splitter = GreatGrandparentSplitter()
_test_splitter(splitter, items=fnames)
test_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]])
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
url = "https://tiktok33.p.rapidapi.com/music/feed/6970139697599237381"
headers = {
'x-rapidapi-key': "4c74301b08msh16e7070111b8497p1a6cb4jsn13f4d5f28255",
'x-rapidapi-host': "tiktok33.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
# +
import requests
url = "https://tiktok-trending-data.p.rapidapi.com/m"
headers = {
'x-rapidapi-key': "4c74301b08msh16e7070111b8497p1a6cb4jsn13f4d5f28255",
'x-rapidapi-host': "tiktok-trending-data.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
# +
import requests
url = "https://tiktok28.p.rapidapi.com/music/6951533800497188102"
headers = {
'x-rapidapi-key': "4c74301b08msh16e7070111b8497p1a6cb4jsn13f4d5f28255",
'x-rapidapi-host': "tiktok28.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
# +
import requests
url = "https://tik-tok-feed.p.rapidapi.com/"
querystring = {"search":"6951533800497188102","type":"music-feed","max":"0"}
headers = {
'x-rapidapi-key': "4c74301b08msh16e7070111b8497p1a6cb4jsn13f4d5f28255",
'x-rapidapi-host': "tik-tok-feed.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
# +
import requests
url = "https://tik-tok-feed.p.rapidapi.com/"
querystring = {"search":"6968387127449012994","type":"music-feed","max":"0"}
headers = {
'x-rapidapi-key': "4c74301b08msh16e7070111b8497p1a6cb4jsn13f4d5f28255",
'x-rapidapi-host': "tik-tok-feed.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
# -
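The responses above are printed raw; when an endpoint returns JSON, the body can be parsed into a dictionary instead. A sketch on a hypothetical payload (the field names here are assumptions, not the API's documented schema):

```python
import json

# hypothetical response body standing in for response.text
sample_text = '{"title": "some track", "stats": {"videoCount": 42}}'

payload = json.loads(sample_text)      # equivalent to response.json()
print(payload["stats"]["videoCount"])  # -> 42
```

Parsing once and indexing into the result is usually easier to work with than scanning the printed text.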
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Reading record.json file
with open("record.json", 'r') as fd:
    txt = fd.read()
# ### Printing the text file from record.json
txt
# ### Checking the type of txt file
type(txt)
# ### Importing json file and converting into a dictionary
import json
record = json.loads(txt)
record
# ### Checking the type of converted record.json file
type(record)
# ### Fetching the data from record on the basis of id
id = input("Enter the id : ")
if id in record :
print(id,"=",record[id])
else:
print("\t\t\t\t\tInvalid input")
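The membership check above can also be written with `dict.get`, which returns `None` (or a supplied default) for a missing id. A sketch with a hypothetical record entry:

```python
sample_record = {"1001": {"Name": "Shirt", "Price": 500}}  # hypothetical entry

item = sample_record.get("1001")
print(item["Name"] if item else "Invalid input")   # -> Shirt
print(sample_record.get("9999", "Invalid input"))  # -> Invalid input
```

This avoids a separate `in` test when a default value is acceptable.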
# ### Producing the bill for the purchase
# +
print("*" * 37 ,"WARDROBE INVENTORY MANAGEMENT SYSTEM USING JSON","*" * 41)
name = input("\n\t\t\t\t\tEnter name of the customer :")
id = input("\t\t\t\t\tEnter the id : ")
qty = int(input("\t\t\t\t\tEnter the quantity : "))
if id in record:
    if qty >= 0:
        print("\n\t\t\t\t\tName of the Customer is :",name)
        print("\t\t\t\t\tId of the product is: ", id)
        print("\t\t\t\t\tName of the product : ", record[id]['Name'])
        print("\t\t\t\t\tQuantity of product to be bought :", qty)
        print("\t\t\t\t\tActual Price of the Product : " , record[id]['Price']*qty)
        print("\t\t\t\t\tDiscount available on product : ", record[id]['Discount'])
        # price payable after applying the percentage discount
        new_price = record[id]['Price'] * (1 - record[id]['Discount']/100) * qty
        print("\t\t\t\t\tDiscounted price of product : ", int(new_price))
    else: # the quantity entered is negative
        print("\t\t\t\t\tInvalid input: the quantity cannot be negative")
else:
    print("Invalid Input")
print("\n")
print("*" * 127)
# -
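The repeated billing lines can be collapsed into a small helper. A sketch (the function name is hypothetical) that returns the total payable after applying a percentage discount:

```python
def discounted_total(price, discount_pct, qty):
    """Total payable after applying a percentage discount to each unit."""
    return price * (1 - discount_pct / 100) * qty

# e.g. 2 units at 500 with a 10% discount
print(discounted_total(500, 10, 2))  # -> 900.0
```

With such a helper, the bill-printing code only needs one branch for any non-negative quantity.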
# ### Updating the dictionary
record[id]['Qty'] = record[id]['Qty'] - qty
record
# ### Fetching data of record on the basis of discount
id = input("Enter the id : ")
disc = int(input("Enter the discount : "))
if id in record:
    if disc == record[id]['Discount']:
        print("Name of the product is : ",record[id]['Name'])
        print("Actual price of the product is : ",record[id]['Price'])
        # price payable after applying the stored percentage discount
        disc_price = record[id]['Price'] * (1 - record[id]['Discount']/100)
        print("Discounted price of the product is : ",int(disc_price))
    elif disc > record[id]['Discount']:
        print("Name of the product is : ", record[id]['Name'])
        print("Actual price of the product is : ", record[id]['Price'])
        disc_price = record[id]['Price'] * (1 - record[id]['Discount']/100) + disc
        print("Discounted price of the product is : ", int(disc_price))
    elif disc < record[id]['Discount'] and disc >= 0:
        print("Name of the product is : ", record[id]['Name'])
        print("Actual price of the product is : ", record[id]['Price'])
        disc_price = record[id]['Price'] * (1 - record[id]['Discount']/100) - disc
        print("Discounted price of the product is : ", int(disc_price))
    else: # discount < 0
        print("Price can't be generated for a negative value of discount")
else:
    print("Invalid Input")
# ### Updating the inventory
js = json.dumps(record)
with open("record.json", "w") as fd:
    fd.write(js)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
from matplotlib import pyplot as plt
import numpy as np
import json
with open('../dataset/metrics.json') as f:
metrics = json.load(f)
# +
attributes = ['price', 'sku', 'availability']  # also available: 'InStock', 'OutOfStock'
names = ['Zyte', 'Diffbot', 'extruct']
colors = ['tan', 'lightblue', 'silver']
fig = plt.figure()
ticks = np.arange(len(attributes)) + 0.5
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
w = 1 / 4
offset = -w
for name, color in zip(names, colors):
values = [metrics[name][a]['f1'] for a in attributes]
yerr = [metrics[name][a]['f1_std'] for a in attributes]
ax.bar(
ticks + offset,
values,
yerr=yerr,
width=w,
color=color,
label=name,
)
ax.errorbar(
ticks + offset,
values,
yerr=yerr,
fmt='o',
capsize=10,
color='black',
)
offset += w
ax.set_xticks(ticks)
ax.set_xticklabels(attributes)
ax.set_ylabel('F1')
ax.legend()
fig.savefig('plots.png', dpi=300)
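The plotted F1 score is the harmonic mean of precision and recall; a quick check of that relationship (the input values are assumptions for illustration):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f1(0.5, 0.5))  # -> 0.5
print(f1(1.0, 0.5))  # ~0.667: pulled toward the weaker of the two
```

Because it is a harmonic mean, F1 is dominated by whichever of precision or recall is lower.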
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="vz3M0jw4B2ud" colab_type="code" colab={}
from google.colab import drive
import pandas as pd
import numpy as np
# + id="C68fp-rOCl12" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="0e8aded8-82d3-48ad-b68c-69e0042193f6" executionInfo={"status": "ok", "timestamp": 1581538566166, "user_tz": -60, "elapsed": 28302, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# !pip install datadotworld
# !pip install datadotworld[pandas]
# + id="-hqdj02oCvEo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="c97957c5-7baa-412e-b67a-c0044105f99d" executionInfo={"status": "ok", "timestamp": 1581538697339, "user_tz": -60, "elapsed": 15847, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# !dw configure
# + id="1vRNT-F7DSIQ" colab_type="code" colab={}
from google.colab import drive
import pandas as pd
import numpy as np
import datadotworld as dw
# + id="FxTB7NNYDgxl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="74e0ccf3-5bc3-496a-dee0-ba78aec5832b" executionInfo={"status": "ok", "timestamp": 1581539472752, "user_tz": -60, "elapsed": 648, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
drive.mount("/content/drive")
# + id="hPmAFe3BDmaO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b06068bb-80e2-43f9-ae10-c6f9ddd48006" executionInfo={"status": "ok", "timestamp": 1581539481327, "user_tz": -60, "elapsed": 1890, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="swbzXJK3EUUE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="76539bb1-0d56-496e-9fe0-1b6036cf5773" executionInfo={"status": "ok", "timestamp": 1581539515698, "user_tz": -60, "elapsed": 658, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# %cd "drive/My Drive/Colab Notebooks/matrix"
# + id="W_DXxAMTEbGk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="897e869e-1ac9-4810-84f9-01cf6bd00553" executionInfo={"status": "ok", "timestamp": 1581538988431, "user_tz": -60, "elapsed": 2286, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="tLEIw-_0EchN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="df5dc557-8e43-4cab-ef59-e6824fd06296" executionInfo={"status": "error", "timestamp": 1581539330003, "user_tz": -60, "elapsed": 799, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# %cd matrix_one
# + id="um7RkFiGEgsy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0caa9463-758a-49fc-cf5d-82008ca73f9f" executionInfo={"status": "ok", "timestamp": 1581539009109, "user_tz": -60, "elapsed": 1768, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="_J0EUcLZEhsf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="6fe34181-71dd-455c-e3c0-92b1b7e0b178" executionInfo={"status": "error", "timestamp": 1581539022670, "user_tz": -60, "elapsed": 608, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# %cd ..
# + id="qCl7Pr8wElSg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e161aff0-0d6a-4002-814d-a974b6fffdf1" executionInfo={"status": "ok", "timestamp": 1581539027929, "user_tz": -60, "elapsed": 1828, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="hRBX3P8OEmOX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="6ca799e4-1182-4e43-8666-0625b749d594" executionInfo={"status": "error", "timestamp": 1581539269286, "user_tz": -60, "elapsed": 615, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# %cd "drive/My Drive/Colab Notebooks/matrix"
# + id="uEmaCroTEu6Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2e0adff4-c7c4-4d9a-afc8-974eaf3124c3" executionInfo={"status": "ok", "timestamp": 1581539102156, "user_tz": -60, "elapsed": 4207, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="anf971j0E3ul" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="d896f3ce-f9ef-44d6-8b0f-c5b2a96fa744" executionInfo={"status": "error", "timestamp": 1581539258492, "user_tz": -60, "elapsed": 404, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# %cd ..
# + id="bW4DBkv4Fe06" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="09e52f57-551d-430a-dec4-2b1e0d376b4c" executionInfo={"status": "ok", "timestamp": 1581539341805, "user_tz": -60, "elapsed": 1581, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="3KN95ad5Fy8S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="70322b5b-9efa-49bb-b600-05dad26e20d4" executionInfo={"status": "ok", "timestamp": 1581539454819, "user_tz": -60, "elapsed": 758, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# %cd 'drive/My Drive/Colab Notebooks/matrix'
# + id="PE6Mbv5jF5mi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4877a63a-0c5a-4430-8deb-8b1879dd1f04" executionInfo={"status": "ok", "timestamp": 1581539460127, "user_tz": -60, "elapsed": 2042, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="eRaXHz-BGPuj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d23db740-aa64-4045-ee23-50a43351d16a" executionInfo={"status": "ok", "timestamp": 1581539537916, "user_tz": -60, "elapsed": 722, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# cd -
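In IPython and Colab, a plain shell `cd` runs in a throwaway subshell and does not change the notebook's working directory; the `%cd` magic (or `os.chdir`) does. A portable sketch of the same round trip with `os`:

```python
import os
import tempfile

start = os.getcwd()
target = tempfile.gettempdir()

os.chdir(target)  # like %cd target
assert os.path.realpath(os.getcwd()) == os.path.realpath(target)

os.chdir(start)   # like %cd -
```

Using `os.chdir` also works in plain scripts, where IPython magics are unavailable.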
# + id="zsIpBAMyGh4M" colab_type="code" colab={}
# !mkdir data
# + id="vyYLRKvyGmc2" colab_type="code" colab={}
# !echo 'data' > .gitignore
# + id="a6L0b8KdGs9T" colab_type="code" colab={}
# !git add .gitignore
# + id="BPPEU-5FGxhT" colab_type="code" colab={}
data = dw.load_dataset('datafiniti/mens-shoe-prices')
# + id="QayL7soSHG_f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121} outputId="dbf4bdc2-23b2-490c-b116-def350fca8f3" executionInfo={"status": "ok", "timestamp": 1581539740706, "user_tz": -60, "elapsed": 1878, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df = data.dataframes['7004_1']
df.shape
# + id="5VIAIpf-HJte" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 736} outputId="4d7d0efa-5ab6-4168-f8c3-cea23fef8f59" executionInfo={"status": "ok", "timestamp": 1581539756431, "user_tz": -60, "elapsed": 819, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df.sample(5)
# + id="MxIwp4LVHYX4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="4d02aa27-141f-47d3-9a2d-9e4cb74f5b5a" executionInfo={"status": "ok", "timestamp": 1581539779017, "user_tz": -60, "elapsed": 969, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df.columns
# + id="excVzhpmHdyr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="ff3b5f6d-c01c-4765-805e-1995b858d389" executionInfo={"status": "ok", "timestamp": 1581539856622, "user_tz": -60, "elapsed": 657, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df.prices_currency.unique()
# + id="fznybE_sHw3Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 252} outputId="bb2516ef-40d5-434b-c4d1-86fadb7bb14b" executionInfo={"status": "ok", "timestamp": 1581539897881, "user_tz": -60, "elapsed": 430, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df.prices_currency.value_counts(normalize=True)
# + id="I5B5ZgAOHkJv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="97264c9d-a29e-4abd-c149-9c12ee4c614a" executionInfo={"status": "ok", "timestamp": 1581540003060, "user_tz": -60, "elapsed": 638, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df_usd = df[df.prices_currency == 'USD'].copy()
df_usd.shape
# + id="svEisKA3IR2y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="1db09426-1cb9-463e-f742-9925da58c842" executionInfo={"status": "ok", "timestamp": 1581540213833, "user_tz": -60, "elapsed": 662, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df_usd['prices_amountmin'] = df_usd.prices_amountmin.astype(float)
df_usd['prices_amountmin'].hist()
# + id="_uWroJQDIhj5" colab_type="code" colab={}
filter_max = np.percentile(df_usd['prices_amountmin'],99)
# + id="6aoK1XQWJU7L" colab_type="code" colab={}
df_usd_filter = df_usd[df_usd['prices_amountmin'] < filter_max]
# + id="zuBqbqdCJkPZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="32e762a5-855b-4850-b34a-40b53e4204f2" executionInfo={"status": "ok", "timestamp": 1581540482413, "user_tz": -60, "elapsed": 949, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
df_usd_filter.prices_amountmin.hist(bins=100)
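The 99th-percentile filter above can be sanity-checked on synthetic prices (the values are assumptions):

```python
import numpy as np

prices = np.array([10.0] * 99 + [10_000.0])  # one extreme outlier
cutoff = np.percentile(prices, 99)           # lies between 10 and the outlier

filtered = prices[prices < cutoff]
print(len(filtered), filtered.max())         # the outlier is gone
```

Keeping only values strictly below the cutoff trims roughly the top 1% of the distribution, which tames a long right tail before plotting a histogram.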
# + id="Bnvy9fODJ-oI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7e42977c-e870-458f-ceac-5ef2768870a2" executionInfo={"status": "ok", "timestamp": 1581540607127, "user_tz": -60, "elapsed": 1973, "user": {"displayName": "Karolina W\u00f3jcik", "photoUrl": "", "userId": "02767242814180107368"}}
# ls
# + id="X7qbc9bLKnvQ" colab_type="code" colab={}
# ls matrix_one/
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import classification_report
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler  # standardization
# whether the data should be standardized
scale = True
# load the data
data_class = np.genfromtxt("G:/机器学习算法/Breast-Cancer/train.csv", delimiter=",")
# +
x_data = data_class[:,:-1]
y_data = data_class[:,-1]
def plot():
x0 = []
x1 = []
y0 = []
y1 = []
    # split the data points by class
for i in range(len(x_data)):
if y_data[i]==0:
x0.append(x_data[i,0])
y0.append(x_data[i,1])
else:
x1.append(x_data[i,0])
y1.append(x_data[i,1])
    # scatter plot
    scatter0 = plt.scatter(x0, y0, c='b', marker='o')
    scatter1 = plt.scatter(x1, y1, c='r', marker='x')
    # draw the legend
    plt.legend(handles=[scatter0,scatter1],labels=['label0','label1'],loc='best')
plot()
plt.show()
# +
# prepare the data and add a bias term
x_data = data_class[:,:-1]
y_data = data_class[:,-1,np.newaxis]  # add a dimension
print(np.mat(x_data).shape)
print(np.mat(y_data).shape)
# prepend a bias column to every sample
X_data = np.concatenate((np.ones((len(x_data),1)),x_data),axis=1)  # concatenate joins several arrays at once
print(X_data.shape)
# +
# logistic (sigmoid) function
def sigmoid(x):
    return 1.0/(1+np.exp(-x))
# logistic regression cost function
def cost(xMat, yMat, ws):
    left = np.multiply(yMat, np.log(sigmoid(xMat*ws)))
    right = np.multiply(1 - yMat, np.log(1 - sigmoid(xMat*ws)))
    return np.sum(left + right) / -(len(xMat))
# gradient descent (each step below uses the full batch)
def gradAscent(xArr, yArr):
    # standardize if required
    if scale == True:
        xArr = preprocessing.scale(xArr)
    xMat = np.mat(xArr)
    yMat = np.mat(yArr)
    # learning rate, number of epochs, and a list to record the cost
    lr = 0.001
    epochs = 10000
    costList = []
    # shape of the data:
    # rows are samples, columns are weights
    m,n = np.shape(xMat)
    # initialize the weights
    ws = np.mat(np.ones((n,1)))
    for i in range(epochs+1):
        # multiply xMat by the weight matrix
        h = sigmoid(xMat*ws)
        # compute the gradient of the error
        ws_grad = xMat.T*(h - yMat)/m
        ws = ws - lr*ws_grad
        if i % 50 == 0:
            costList.append(cost(xMat,yMat,ws))
    return ws,costList
# -
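The sigmoid defined above can be sanity-checked directly; it should be 0.5 at zero and saturate toward 0 and 1 in the tails:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1 + np.exp(-x))

print(sigmoid(0))     # -> 0.5
print(sigmoid(100))   # saturates near 1
print(sigmoid(-100))  # saturates near 0
```

These limits are why the cost function above stays finite only while predictions avoid exact 0 or 1.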
# train the model and obtain the weights and the cost history
ws,costList = gradAscent(X_data, y_data)
print(ws)
if scale == False:
    # plot the decision boundary
plot()
x_test = [[-4],[3]]
y_test = (-ws[0] - x_test*ws[1])/ws[2]
plt.plot(x_test, y_test, 'k')
plt.show()
# plot the change in the loss value
x = np.linspace(0,10000,201)
plt.plot(x, costList, c='r')
plt.title('Train')
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.show()
# +
# prediction
def predict(x_data, ws):
if scale == True:
x_data = preprocessing.scale(x_data)
xMat = np.mat(x_data)
ws = np.mat(ws)
return [1 if x >= 0.5 else 0 for x in sigmoid(xMat*ws)]
predictions = predict(X_data, ws)
print(classification_report(y_data, predictions))
# -
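The 0.5 decision threshold used by `predict` maps probabilities to class labels like so (the scores are assumed sigmoid outputs):

```python
scores = [0.2, 0.5, 0.9]  # assumed sigmoid outputs
labels = [1 if s >= 0.5 else 0 for s in scores]
print(labels)  # -> [0, 1, 1]
```

Note that a score of exactly 0.5 is assigned to class 1 because the comparison is `>=`.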
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 4, "hidden": true, "row": 33, "width": 4}, "report_default": {}}}}
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 4, "hidden": true, "row": 33, "width": 4}, "report_default": {}}}}
from IPython.display import HTML
HTML('''<script>
code_show_err=false;
function code_toggle_err() {
if (code_show_err){
$('div.output_stderr').hide();
} else {
$('div.output_stderr').show();
}
code_show_err = !code_show_err
}
$( document ).ready(code_toggle_err);
</script>
To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''')
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": true, "row": 0, "width": 4}, "report_default": {}}}}
#@hidden_cell
# %pylab inline
# %load_ext autoreload
# %autoreload 2
import seaborn as sns
import quandl
import bt
import monthly_returns_heatmap as mrh
figsize(3,3)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
import sys,os
from backtester.analysis import *
from backtester.swarms.swarm import Swarm
from backtester.exoinfo import EXOInfo
from exobuilder.data.exostorage import EXOStorage
import pandas as pd
import numpy as np
import scipy
import pprint
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
# Loading global setting for MongoDB etc.
from scripts.settings import *
try:
from scripts.settings_local import *
except:
pass
# -
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
storage = EXOStorage(MONGO_CONNSTR, MONGO_EXO_DB)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
swm_info = storage.swarms_info()
pp = pprint.PrettyPrinter(indent=4)
# pp.pprint(swm_info)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
product_name = '*'
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
#instruments_filter = [product_name] # Select ALL
instruments_filter = ['*']
#exo_filter = ['CL_'] # Select ALL
exo_filter = ['*']
direction_filter = [0, -1, 1] # Select ALL
#direction_filter = [1]
# alpha_filter = ['March_30_2018','EXO'] # Select ALL
alpha_filter = ['May_7','*']
#alpha_filter = ['_AlphaV1Exposure_HedgedBy_V2_Index','Aug','Sept']
#alpha_filter = ['CL_ContFut']
swmdf, swm_data = storage.swarms_list(instruments_filter, direction_filter, alpha_filter, exo_filter)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
# [print("'{}',".format(s)) for s in sorted(swmdf.columns)];
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
campaign_exposure = {
'CL_ContFut_Long_EXO' : 0.5,
'CL_ContFut_Short_EXO': 1.9,
'CL_ContFut_Long_Strategy_DSP_LowPass__Bullish_2_Dec_18_custom': 2.0,
'CL_ContFut_Long_Strategy_DSP_LowPass__Bullish_Dec_11_custom': 2.0,
'CL_ContFut_Long_Strategy_DSP_LPBP_Combination__Bullish_1_March_30_2018_custom': 2.0,
'CL_ContFut_Short_Strategy_DSP_LPBP_Combination__Bearish_2_March_30_2018_custom': 2.0,
'CL_ContFut_Short_Strategy_DSP_LPBP_Combination__Bearish_2_Dec_19_custom': 2.0,
'CL_ContFut_Short_Strategy_DSP_LowPass__Bearish_Dec_18_custom': 2.0,
}
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 4, "height": 12, "hidden": true, "row": 0, "width": 4}, "report_default": {}}}}
campaign_dict = {}
campaign_stats = {'NetProfit': 0.0, 'TradesCount': 0, 'CommissionSum': 0.0}
campaign_deltas_dict = {}
for camp_name, exposure in campaign_exposure.items():
if camp_name in swmdf:
swarm_name = camp_name
campaign_dict[swarm_name] = swmdf[swarm_name] * exposure
# TODO: implement swarm statistics
for s in swm_data:
if s['swarm_name'] != swarm_name:
continue
series = s['swarm_series']
_delta_arr = campaign_deltas_dict.setdefault('', pd.Series(0, index=series['delta'].index))
campaign_deltas_dict[''] = pd.concat([_delta_arr, series['delta']*exposure], axis=1).sum(axis=1)
campaign_portfolio = pd.DataFrame(campaign_dict).ffill()
campaign_equity = campaign_portfolio.sum(axis=1)
campaign_deltas = pd.DataFrame(campaign_deltas_dict)
# print(campaign_deltas.tail())
# print(campaign_deltas.abs().max())
print('current Ursus CL futures holdings' + str(campaign_deltas.tail(2)))
total = campaign_portfolio.sum(axis=1)
total = pd.DataFrame(total)
total.columns = [ 'Futures_replication'] #change to the campaign
total.index.name = 'date'
print('Estimated Ursus CL futures PnL' + str(total.diff().tail(3) ))
total.diff().tail()
Ursus_Crude_Oil = total
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
capital = abs(campaign_deltas).max().mul(5).mul(5000)
capital = int(capital)
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 4, "hidden": true, "row": 0, "width": 4}, "report_default": {}}}}
# ### Campaign members equities
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 4, "hidden": true, "row": 0, "width": 4}, "report_default": {}}}}
import bt
import ffn
total = total + capital
total = total.asfreq('D', method='ffill').dropna()
long_sma = total.rolling(window=210,center=False).median()
# # target weights
tw = long_sma.copy()
tw[total > long_sma] = 2.0
tw[total <= long_sma] = 1.00
tw[long_sma.isnull()] = 0.0
## here we specify the children (3rd) argument to make sure the strategy
## has the proper universe. This is necessary in strategies of strategies
Buy_and_Hold_CL_S = bt.Strategy('Ursus Crude Oil Index',
[bt.algos.WeighTarget(tw),
bt.algos.RunWeekly(),
bt.algos.Rebalance()])
# Buy_and_Hold_ES_L = bt.Strategy('Long CL_L',
# [bt.algos.RunMonthly(),
# bt.algos.SelectAll(),
# bt.algos.WeighEqually(),
# bt.algos.Rebalance()])
# cl_LS_alpha_portfolio = bt.Backtest(Buy_and_Hold_cl_LS, cl_LS)
CL_S_alpha_portfolio = bt.Backtest(Buy_and_Hold_CL_S, total,
# initial_capital=150000.0,
commissions=lambda q, p: max(10, abs(q) * 0.0021),
integer_positions=True,
progress_bar=True)
res_CL_S = bt.run(CL_S_alpha_portfolio)
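The regime rule above (double weight when the equity curve is above its rolling median, reduced weight otherwise, zero until the window fills) can be sketched on synthetic data; the window and values here are assumptions, not the strategy's parameters:

```python
import pandas as pd

demo_prices = pd.Series([1, 2, 3, 4, 5, 4, 3, 2, 1, 0], dtype=float)
demo_median = demo_prices.rolling(window=3, center=False).median()

demo_tw = demo_median.copy()
demo_tw[demo_prices > demo_median] = 2.0   # risk-on: price above trend
demo_tw[demo_prices <= demo_median] = 1.0  # risk-off: price at or below trend
demo_tw[demo_median.isnull()] = 0.0        # no signal until the window fills

print(demo_tw.tolist())
```

The masks are applied in the same order as in the cell above, so NaN rows from the warm-up window end up at 0.0 rather than inheriting a stale weight.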
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
try:
    sco_BM = bt.get('sco', start='2012-01-01')
except Exception:
    # the benchmark download may fail offline; continue without it
    pass
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": true, "row": 14, "width": 4}, "report_default": {}}}}
s_BM = bt.Strategy('Short_Crude_Oil_BM (SCO)', [bt.algos.RunMonthly(),
bt.algos.SelectAll(),
bt.algos.WeighEqually(),
bt.algos.Rebalance()])
test_BM = bt.Backtest(s_BM, sco_BM)
r = bt.run(CL_S_alpha_portfolio, test_BM)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
data_EW = res_CL_S.prices
returns = pd.DataFrame(data_EW)
# .asfreq('BM').tail(1400)
returns.index = pd.to_datetime(returns.index)
returns.columns = ['Ursus Crude Oil Index']
# returns = returns.add()
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 2, "hidden": false, "row": 39, "width": 10}, "report_default": {}}}}
# # Ursus Crude Oil Index
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 20, "hidden": false, "row": 19, "width": 10}, "report_default": {}}}}
# res_CL_S.plot()
r.plot(figsize=(25, 15))
# r.plot()
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 6, "height": 19, "hidden": false, "row": 0, "width": 6}, "report_default": {}}}}
# Data to plot
plt.figure(figsize=(10, 10))
labels = ['Active Long Biased','Active Short Biased', 'Passive Long Biased', 'Passive Short Biased']
sizes = [6.0, 6.0, 0.5, 1.9]
colors = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue']
explode = (0, 0, 0, 0) # explode 1st slice
# Plot
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=80)
plt.axis('equal')
plt.title('Ursus Crude Oil Strategic Weightings',)
plt.show()
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 1, "height": 21, "hidden": false, "row": 76, "width": 9}, "report_default": {}}}}
returns.plot_returns_heatmap(is_prices=True,eoy=True,figsize=(15,20),cbar=False, cmap='RdYlGn',square=True)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 35, "hidden": false, "row": 41, "width": 7}, "report_default": {}}}}
r.display()
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 10, "height": 9, "hidden": false, "row": 19, "width": 2}, "report_default": {}}}}
r.display_lookback_returns()
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python 101
# **Prepared By:**
# ```
# Ashish Sharma
#
# Email: accssharma@gmail.com
# ```
# # References
# - [0] [Preliminary Reading](https://docs.google.com/document/d/1jMcvpPM5a2NV-fWrqwHgdjxeB2UUSkDdvpWSUhGhI7s/edit?usp=sharing)
# - [1] [Python Overview](http://python-history.blogspot.com/2009/01/introduction-and-overview.html)
# - [2] [Python for Data Science](https://data36.com/python-for-data-science-python-basics-1/)
# - [3] [Mutable and Immutable Data Structures](https://medium.com/@meghamohan/mutable-and-immutable-side-of-python-c2145cf72747)
# - [4] [Official Python Documentation](https://docs.python.org/3/tutorial/index.html)
# - [5] [CS231N course python tutorial](http://cs231n.github.io/python-numpy-tutorial/#python)
#
#
# # Introduction to Python
#
# **"Today, Python is used for everything from throw-away scripts to large scalable web servers that provide uninterrupted service 24x7. It is used for GUI and database programming, client- and server-side web programming, and application testing. It is used by scientists writing applications for the world's fastest supercomputers and by children first learning to program."** [1]
#
# ~ *Guido van Rossum, Creator of Python*
# `Python is a programming language that lets you work quickly and integrate systems more effectively.`
#
# `Python is a general purpose programming language and it’s not only for Data Science. This means that you don’t have to learn every part of it to be a great data scientist.`
#
# `Python is a high-level language. This means that in terms of CPU-time it’s not the most efficient language on the planet. But on the other hand, it was made to be simple, “user-friendly” and easy to interpret. Thus, what you might lose in CPU-time, you might win back in engineering time.`
# - Python has been around since 1991 and has an active open-source community.
# - Python is fairly easy to read and learn.
# - Python handles different data structures very well.
# - Python has very powerful statistical and data visualization libraries, eg. numpy, scipy, pandas, etc.
# - Python has many packages suitable both for simpler analytics projects (eg. simple statistical analysis, exploratory data analysis, etc.) and for advanced Data Science projects (eg. building machine learning models)
# - Python has a simple but effective approach to object-oriented programming.
# - Python has elegant syntax, dynamic typing, and an interpreted nature.
# - Interactive: Read, Evaluate, Print, Loop (REPL)
# - Ideal language for scripting and rapid application development in many areas on most platforms.
#
# `The Python interpreter and the extensive standard library are freely available in source or binary form for all major platforms and may be freely distributed. `
import sys
help(sys)
help(help)
# ## Python vs C, Java, etc.
# ### Python is elegant to write (has brace-less syntax ;))
#
# ### C
# ```
# if (a < b) {
#     max = b;
# } else {
#     max = a;
# }
# ```
#
# ### Python
# ```
# if a < b:
#     max = b
# else:
#     max = a
# ```
# ### Python is a dynamically typed language
#
# In C, variables must always be explicitly declared (eg. specify type such as int or double) which is then used to perform static compile-time checks of the program as well as for allocating memory locations used for storing the variable’s value.
#
# In Python, variables are simply names that refer to objects. Variables do not need to be declared before they are assigned and they can even change type in the middle of a program. [1]
help(id)
id(2)
# Everything is an object in Python
x = 2
print (x)
print (id (x))
type(x)
x = "AI is cool"
print (x)
print(id (x))
type (x)
y = 2
id(y)
# Functions in Python are also objects
def hey_function():
'''I am a doc string of hey_function(). I am a value of an attribute of a function object'''
print ("Printing from hey_function()!")
# function call - calls a Callable Object in Python
hey_function()
hey_function
type(hey_function)
# attribute of a function object
hey_function.__doc__
# ## Mutable and Immutable Object types [3]
#
# **Mutable objects:**
# list, dict, set
#
# **Immutable objects**
# int, float, complex, string, tuple, frozen set [note: immutable version of set], bytes
# Immutable Example 1
id("ashish")
x = "ashish"
y = "ashish"
type(x)
id(x)
id(y)
x == y
x is y # or id(x) == id(y)
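# A caution worth noting here (an added illustration, not from the original notes): for immutable objects, `is` returning True is an implementation detail such as small-integer caching or string interning — only `==` is guaranteed:

```python
a = 1000
b = 1000
print(a == b)  # True: the values are equal
# `a is b` may be True or False depending on the interpreter's caching

s = "".join(["as", "hish"])  # build the string at runtime
t = "ashish"
print(s == t)  # True, even though s was constructed dynamically
# `s is t` is typically False, because runtime-built strings
# are not automatically interned
```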
# Immutable Example 2
id(2)
# We are creating a single integer object with value 2
# identifier A is tagged to the integer object
# another identifier B is tagged to the same integer object
A = 2
B = 2
type(A)
help(id)
id(A)
id(B)
A == B # Do A and B have same values?
A is B # or id(A) == id(B) # Are the A and B the same object?
id(B) is id(A) # Are the identity of object A and the identity of object B the same object?
isinstance(A, int)
id(2.0)
# Mutable Objects Example 1
# +
A_list = list([1, 2, 3])
B_list = A_list
# We are creating a single list object (list([1,2,3]))
# identifier A_list is tagged to the list object
# another B_list identifier is tagged to the same object via A_list
# -
id(A_list) == id(B_list)
A_list is B_list
A_list.pop()
A_list
id(A_list) == id(B_list)
A_list is B_list
# ## Python Implementation
# - Python is typically implemented using a combination of a bytecode compiler and interpreter.
# - Compilation is implicitly performed as modules are loaded, and several language primitives require the compiler to be available at run-time.
#
#
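# The bytecode compilation step described above can be made visible with the standard library's `dis` module — a small illustrative example (not part of the original notes):

```python
import dis

def add_one(x):
    # A trivial function whose compiled bytecode we can list
    return x + 1

dis.dis(add_one)  # prints the bytecode instructions, e.g. LOAD_FAST ... RETURN_VALUE
```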
# ## Implementations
# - CPython - the de-facto reference implementation, written in C, and the most widely adopted
# - PyPy - JIT compilation (optimizing Python compiler/interpreter written in Python)
# - Jython (seamless Java integration)
# - IronPython - IronPython is an implementation of the Python programming language targeting the .NET Framework and Mono.
# - Stackless Python (variant of the C implementation - reduces reliance on the C stack for function/method calls, to allow co-routines, continuations, and microthreads)
#
# ### Cpython vs Cython
#
# `CPython is the reference Python interpreter, written in C. Cython is really a different programming language: a superset of Python with C data types that compiles down to C extensions for Python`
import platform
def show_platform_detail():
print ("Python Implementation: %s" % platform.python_implementation())
print ("Machine: %s" % platform.machine())
print ("Uname: ", platform.uname())
print ("Architecture:", platform.architecture())
print ("System: %s" % platform.system())
print ("Node: %s" % platform.node())
print ("Release: %s" % platform.release())
print ("Version: %s" % platform.version())
print ("Processor: %s" % platform.processor())
show_platform_detail()
# ### Built-in functions
#
# - The Python interpreter has a number of functions and types built into it that are always available.
#
#
# abs()
#
# dict(), help(), min(), id(), object(), sorted()
#
# enumerate(), bin(), eval(), int(), open(), str()
#
# bool(), isinstance(), sum()
#
# bytearray(), issubclass(), pow(), super()
#
# bytes(), float(), iter(), print(), tuple()
#
# format(), len(), type()
#
# list(), range()
#
# zip()
#
# hasattr(), max(), round()
#
# set()
#
sorted([1,6,3,6,7,2,3,5])
hasattr(str, "count")
for i in range(5):
print (i)
sum([1,2])
# ### Python Installation
# ### Python Interpreter
# $ which python
#
# ` /usr/local/bin/python3.6 `
# ### Argument Passing (Scripts)
import sys
sys.argv[0]
# ### String [5]
'spam eggs' # single quotes
'doesn\'t' # use \' to escape the single quote...
"doesn't" # ...or use double quotes instead
'"Yes," he said.'
"\"Yes,\" he said."
'"Isn\'t," she said.'
print('C:\some\name') # here \n means newline!
# raw string
print(r'C:\some\name') # note the r before the quote
# unicode
print(u'C:\some\name') # note the u before the quote
print("""\
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
""")
# Concatenation with + and *
3 * 'un' + 'ium'
'AI Developers' + ' Boise'
"AI Developers" + " Boise"
# - Strings can be indexed (subscripted), with the first character having index 0. There is no separate character type; a character is simply a string of size one
word = 'Python'
print("word", word)
print("word[0]", word[0])
print("word[4]", word[4])
print("len(word)", len(word))
print("type(word)", type(word))
print("type(word[0])", type(word[0]))
# - Slicing and Indexing
word[0:2]
# indexing from last
word[-1], word[-2]
# - Python strings cannot be changed — they are immutable. Therefore, assigning to an indexed position in the string results in an error.
word[0] = 'A'
# ### List [5]
# dynamic typed
ll = [2018, 5, 12, "AI Developers, Boise", "AI Saturdays"]
ll
type(ll[2])
type(ll[3])
# assigning to an indexed position of list is allowed
# - mutable data type
ll[3] = 12
print(ll)
print(id(ll))
new_sliced_list = ll[-3:] # slicing returns a new list
print(new_sliced_list)
print(id(new_sliced_list))
ll[:]
# +
# list concatenation
# -
ll1 = [3,4,5]
appended_list = ll1 + ll
appended_list
appended_list.append(15)
appended_list
# add the whole list to another list
list_to_be_extended = ["list", "to", "be", "extended"]
appended_list.extend(list_to_be_extended)
appended_list
id(appended_list)
# size of the list
len(appended_list)
# ## Simple Programming with Python
import time
# ### Simple Function
def display_execution_time(start, end, label):
print("Execution time ({}): {}".format(label, end-start))
# +
# Fibonacci series: 1
# +
start = time.time()
a,b = 0, 1 # multiple assignment
while b < 10: # WHILE loop
    # print(b, end=", ") # replaces the default "\n" with ", "
    print(b)
# expressions on the right-hand side are all evaluated
# first before any of the assignments take place.
a, b = b, a+b
end = time.time()
display_execution_time(start, end, "fibonacci while")
# +
# Fibonacci series: 2
# +
def fib(n): # FUNCTION DEFINITION
    "return the nth fibonacci number"
if n <= 2: # IF condition
return 1 # RETURN statement
res = fib(n-1) + fib(n-2) # RECURSIVE FUNCTION CALL
return res # RETURN statement
start = time.time()
res_fib = fib(6) # FUNCTION call
print (res_fib)
end = time.time()
display_execution_time(start, end, "fibonacci recursion")
# -
# ## Control Flow
#
# #### if-elif-else statements
#
# - There can be zero or more elif parts, and the else part is optional.
# - An if … elif … elif … sequence is a substitute for the switch or case statements found in other languages.
#
def if_else_example(n):
if type(n) is not int:
return "not a number"
if n < 0:
return "less than zero"
elif n == 0:
return "equal to zero"
else:
return "greater than zero"
if_else_example("a")
# #### for loops
words = ['cat', 'window', 'defenestrate']
for w in words:
print (w, len(w))
# #### Use Case: If you need to manipulate a list and then add/remove elements in the same list
words = ['cat', 'window', 'defenestrate']
for w in words[:]: # Loop over a slice copy of the entire list.
if len(w) > 6:
words.insert(0, w)
words
# #### range() Function
# +
for i in range(5): # if only one argument, starts from 0, with interval of 1
print (i, end =", ")
print()
for i in range(1,6): # starting is closed, ending is open
print (i, end =", ")
print()
for i in range(1,11, 2): # specifying interval
print (i, end =", ")
print()
# -
# #### iterate over indices of list
a = ['Mary', 'had', 'a', 'little', 'lamb']
for i in range(len(a)):
print(i, a[i])
# #### or use enumerate
for index, value in enumerate(a):
print (index, value)
# #### or use zip and enumerate to loop multiple list
b = [10,20,30,40,50]
for a_i, b_i in zip(a,b):
print(a_i, b_i)
# #### Iterate in reverse
for i in reversed(range(1, 10, 2)):
print(i)
# #### iterate in sorted order
list_unordered = [3,4,37,2,1,8,552,6]
for i in sorted(list_unordered):
print(i)
# #### break, continue and pass statements
# +
for i in range(5):
if i == 3:
break
print (i)
print()
for i in range(5):
if i == 3:
continue
print(i)
# pass statement does nothing
# generally used when a statement is required syntactically but
# the program requires no action
i = 0
while i > 0:
pass
# -
# ### Defining Functions
# - function definition
# - default argument values
# - keyword arguments-
# +
def f(a, L=[]): # the default list is created once and shared across calls
L.append(a)
return L
print(f(1))
print(f(2))
print(f(3))
# -
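# The output above shows every call appending to the same list: default argument values are evaluated only once, at function definition time. The usual fix (an added sketch, not from the original) is a `None` sentinel:

```python
def f_fixed(a, L=None):
    # Create a fresh list on every call instead of reusing one shared default
    if L is None:
        L = []
    L.append(a)
    return L

print(f_fixed(1))  # [1]
print(f_fixed(2))  # [2]
print(f_fixed(3))  # [3]
```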
# #### Types of arguments
def cheeseshop(kind, *arguments, **keywords):
print("-- Do you have any", kind, "?")
print("-- I'm sorry, we're all out of", kind)
for arg in arguments:
print(arg)
print("-" * 40)
for kw in keywords:
print(kw, ":", keywords[kw])
cheeseshop("Limburger", "It's very runny, sir.",
"It's really very, VERY runny, sir.",
shopkeeper="Michael Palin",
client="John Cleese",
sketch="Cheese Shop Sketch")
# ### Comparing sequence/list types
# - Sequence objects may be compared to other objects with the same sequence type.
# - The comparison uses lexicographical ordering
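# A few quick examples of lexicographic comparison (illustrative additions):

```python
print((1, 2, 3) < (1, 2, 4))              # True: decided by the first differing item
print([1, 2, 3] < [1, 2, 3, 4])           # True: a shorter prefix compares as smaller
print('ABC' < 'C' < 'Pascal' < 'Python')  # True: strings compare character by character
print((1, 2) == (1.0, 2.0))               # True: numbers compare by value across types
```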
# # Continue from Python3 documentation
# https://docs.python.org/3/tutorial/index.html
#
# - Data Structures
# - Modules
# - Packages
# - Errors and Exceptions
# - Classes
# - Virtual Environment
#
# ... etc.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text to Speech Model
##Importing Libraries
import speech_recognition as sr
from gtts import gTTS
import os
#Choosing Languages
print("Here Are The Choices You Want The Audio In:")
print("1.Chinese ")
print("2.French")
print("3.Spanish")
print("4.Hindi")
print("5.Gujarati")
#Enter Your Choice
x=int(input("enter your choice:"))
#Input Your Text
mytext=input("Enter Text As Per The Language Chosen :")
#Output of My Text
mytext
#Conditioning
if x==1:
    l="zh-CN"
elif x==2:
l="fr"
elif x==3:
l="es"
elif x==4:
l="hi"
elif x==5:
l="gu"
else:
l="en"
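#The if/elif chain above can also be written as a dictionary lookup, e.g. l=lang_codes.get(x,"en") — an illustrative alternative (language codes assumed to match gTTS's supported set):

```python
#Map menu choice -> gTTS language code; .get() supplies the English default
lang_codes = {1: "zh-CN", 2: "fr", 3: "es", 4: "hi", 5: "gu"}
print(lang_codes.get(2, "en"))   # fr
print(lang_codes.get(99, "en"))  # en (fallback for an unknown choice)
```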
#Voice Model
my_obj=gTTS(mytext,lang=l,slow=False)
my_obj.save('output.mp3')
os.system("output.mp3")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Churn Prediction using AMLWorkbench
#
# This notebook will introduce the use of the churn dataset to create churn prediction models. The ingested dataset comes from the KDD Cup 2009 competition. It consists of heterogeneous noisy data (numerical/categorical variables) from the French telecom company Orange and is anonymized.
#
# We will use the .dprep file created from the datasource wizard.
# +
import dataprep
from dataprep.Package import Package
import pickle
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
import csv
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
# -
# ## Read Data
#
# We first retrieve the data as a data frame using .dprep that we created using the datasource wizard. Print the top few lines using head()
# +
#Local via package
with Package.open_package('CATelcoCustomerChurnTrainingSample.dprep') as pkg:
df = pkg.dataflows[0].get_dataframe()
# Blob via package
#with Package.open_package('CATelcoCustomerChurnTrainingBlobSample.dprep') as pkg:
# df = pkg.dataflows[0].get_dataframe()
# if this one fails check to see that you are NOT exporting file to local/blob
# storage because it will fail to overwrite
# Local via CSV file
# df = pd.read_csv("./data/CATelcoCustomerChurnTrainingSample.csv")
# df = pd.read_csv("./data/CATelcoCustomerChurnTrainingCleaned.csv")
df.head(5)
# -
df.shape
# What is the index for the churn column
df.columns.get_loc("churn")
# ## Encode Columns
#
# Convert categorical variables into dummy/indicator variables using pandas.get_dummies. In addition, we will need to change the column names to ensure that no two columns end up with the same name
# +
# Pick columns of the type category (object represents strings)
columns_to_encode = list(df.select_dtypes(include=['category','object']))
print(columns_to_encode)
# Create new columns for each value in the chosen columns. Use the indicator value 1 for a value set.
for column_to_encode in columns_to_encode:
# Create a new column for each unique value a column has
# The value becomes a column name
# See: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html
dummies = pd.get_dummies(df[column_to_encode])
# Save column names in this variable
one_hot_col_names = []
for col_name in list(dummies.columns):
# Use the original column name followed by the value of the category
one_hot_col_names.append(column_to_encode + '_' + col_name)
# Assign new column names
dummies.columns = one_hot_col_names
# drop the original columns that are now one-hot-encoded (axis=1 means to drop columns)
df = df.drop(column_to_encode, axis=1)
# merge the original data frame with the newly constructed columns
df = df.join(dummies)
print("Encoded columns:")
print(df.columns)
# -
# Lets look at our newly created columns (one-hot-encoded)
df
# ## Modeling
#
# First, we will build a Gaussian Naive Bayes model using GaussianNB for churn classification. Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes theorem with the “naive” assumption of independence between every pair of features.
#
# In addition, we will also build a decision tree classifier for comparison:
#
# - min_samples_split=20 requires 20 samples in a node for it to be split
# - random_state=99 to seed the random number generator
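# The Gaussian Naive Bayes baseline described above is not built elsewhere in this notebook; a minimal sketch on synthetic stand-in data (the real features come from the environment-specific .dprep package) might look like:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the churn feature matrix and labels
rng = np.random.RandomState(0)
X_demo = rng.rand(500, 5)
y_demo = (X_demo[:, 0] > 0.5).astype(int)  # label driven by the first feature

X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.3, random_state=99)
nb = GaussianNB()
nb.fit(X_tr, y_tr)
nb_acc = accuracy_score(y_te, nb.predict(X_te))
print("Naive Bayes accuracy:", nb_acc)
```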
# ## Investigate important features
# Not all features might be important. Try to find the features that influence the model the most by using a tree-based classifier.
# +
from sklearn.ensemble import ExtraTreesClassifier
train, test = train_test_split(df, test_size = 0.3)
Y = train['churn'].values
X = train.drop('churn', axis=1)
X = X.values
model_feature_test = ExtraTreesClassifier()
model_feature_test.fit(X, Y)
features_ranked = model_feature_test.feature_importances_
print("Feature importance (the number indicates the importance of each column in the data frame):\n" + str(features_ranked))
# Pick the index for the top ranked features (20)
ind = np.argpartition(features_ranked, -20)[-20:]
print("Top Feature Index:\n" + str(ind))
print("Top feature values\n")
print(features_ranked[ind])
top_feature_names = train.columns[ind]
print("Top feature names:\n" + str(top_feature_names))
# -
# Index of the churn column
df.columns.get_loc("churn")
# Number of columns we now have
len(df.columns)
# ## Train on only top features
top_feature_names
# #### Decision tree classifier
# +
model = DecisionTreeClassifier(max_depth=20, max_features=20, random_state=98)
# top_feature_names.append(['churn'])
dfChurn = df[['churn']]
dfFeatures = df[top_feature_names]
dfTopFeaturesChurn = dfFeatures.join(dfChurn)
train, test = train_test_split(df, test_size = 0.3, random_state=98)
# Does top features only improve the classifier?
# train, test = train_test_split(dfTopFeaturesChurn, test_size = 0.3, random_state=98)
target = train['churn'].values
train = train.drop('churn', axis=1)
train = train.values
model.fit(train, target)
Y_test_val = test['churn'].values
X_test_val = test.drop('churn', axis=1).values
predicted = model.predict(X_test_val)
print("Decision Tree Classification Accuracy", accuracy_score(Y_test_val, predicted))
# -
print("#Actual churn: " + str(sum(Y_test_val[Y_test_val[:] > 0])))
print("#Predicted churn: " + str(sum(predicted[predicted[:] > 0])))
# Calculate confusion matrix
exp_inp = pd.Categorical(Y_test_val, categories=[0,1])
pred_inp = pd.Categorical(predicted, categories=[0,1])
print(pd.crosstab(exp_inp, pred_inp, colnames=["Predicted"], rownames=['Actual']))
# | Actual \ Predicted | 0    | 1  |
# |--------------------|------|----|
# | 0                  | 2665 | 49 |
# | 1                  | 282  | 4  |
#
# Accuracy example: acc = correctly predicted / all predictions<br>
# (2665 + 4) / (2665 + 49 + 282 + 4) = 0.8897
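# The arithmetic above can be checked directly (values taken from the example matrix):

```python
import numpy as np

# Confusion matrix from the example above: rows = actual, columns = predicted
cm = np.array([[2665, 49],
               [282, 4]])
acc_check = cm.trace() / cm.sum()  # correctly predicted / all predictions
print(round(acc_check, 4))  # 0.8897
```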
# #### ROC curve
# +
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
predicted_prob = model.predict_proba(X_test_val)
# TASK: draw the ROC curve
# confidence_nb = predicted_prob.max(axis=1)
fpr, tpr, thresholds = roc_curve(Y_test_val, predicted_prob[:, 1])
roc_auc_nb = roc_auc_score(Y_test_val, predicted_prob[:, 1])
print("AUC:" + str(roc_auc_nb))
plt.figure(figsize=(10,10))
plt.title('Receiver operating characteristic example')
plt.plot(fpr, tpr, color='darkorange', lw = 2, label='ROC curve (area = %0.2f)' % roc_auc_nb)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.legend(loc="lower right")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
# -
# ### Precision and recall
# High recall means that we manage to pick most of the positive examples (churn).
# Low precision means that we also pick up a lot of false positives for churn, which
# means that high recall with low precision catches most of the true churn at the
# cost of getting many false churns at the same time. Low precision = not so precise.
#
# See: https://medium.com/@klintcho/explaining-precision-and-recall-c770eb9c69e9 for a good explanation.
#
# The chart below calculates the precision and recall for the different thresholds of the probability of our predictions.
# +
precision, recall, thresholds_pr = precision_recall_curve(Y_test_val, predicted_prob[:,1])
plt.figure(figsize=(10,10))
plt.title('Precision Recall curve')
plt.plot(precision, recall, color='darkorange', lw = 2, label='PR curve (ROC AUC = %0.2f)' % roc_auc_nb)
plt.legend(loc="lower right")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('precision')
plt.ylabel('recall')
plt.show()
print("Precision:\n" + str(precision.round(4)))
print("Recall:\n" + str(recall.round(4)))
#print("\nIf we want to catch every churn the precision would be only " + "{:.2%}".format(precision[0]))
#print("I.e. we would catch a huge number of false positive at the same time.")
#print("The other way around. If we want only True positive churn (no false) we would get a recall of " + "{:.2%}".format(recall[6]))
# -
# ### ExtraTrees Classifier
help(ExtraTreesClassifier)
# +
modelxt = ExtraTreesClassifier(max_depth=25, n_estimators=15, max_features=20, random_state=99)
# modelxt = RandomForestClassifier()
# top_feature_names.append(['churn'])
dfChurn = df[['churn']]
dfFeatures = df[top_feature_names]
dfTopFeaturesChurn = dfFeatures.join(dfChurn)
train, test = train_test_split(dfTopFeaturesChurn, test_size = 0.3, random_state=98)
target = train['churn'].values
train = train.drop('churn', axis=1)
train = train.values
modelxt.fit(train, target)
Y_test_val = test['churn'].values
X_test_val = test.drop('churn', axis=1).values
predicted = modelxt.predict(X_test_val)
print("ExtraTreesClassifier acc:", accuracy_score(Y_test_val, predicted))
# -
# #### Confusion matrix
print("#Actual churn: " + str(sum(Y_test_val[Y_test_val[:] > 0])))
print("#Predicted churn: " + str(sum(predicted[predicted[:] > 0])))
exp_inp = pd.Categorical(Y_test_val, categories=[0,1])
pred_inp = pd.Categorical(predicted, categories=[0,1])
print(pd.crosstab(exp_inp, pred_inp, colnames=["Predicted"], rownames=['Actual']))
# #### Confusion matrix results example
# | Actual \ Predicted | 0    | 1   |
# |--------------------|------|-----|
# | 0                  | 2708 | 4   |
# | 1                  | 75   | 213 |
#
# We predicted 2708+75 as non-churn<br>
# We predicted 4 non-churn wrongly as churn<br>
# We predicted 217 as churn (4 of these were actually non-churn)<br>
# We did not catch the 75 churns that were predicted as non-churn<br>
# ### ROC Curve ExtraTreesClassifier
# +
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
predicted_prob = modelxt.predict_proba(X_test_val)
# TASK: draw the ROC curve
# confidence_nb = predicted_prob.max(axis=1)
fpr, tpr, thresholds = roc_curve(Y_test_val, predicted_prob[:, 1])
roc_auc_nb = roc_auc_score(Y_test_val, predicted_prob[:, 1])
print("AUC:" + str(roc_auc_nb))
plt.figure(figsize=(10,10))
plt.title('Receiver operating characteristic example')
plt.plot(fpr, tpr, color='darkorange', lw = 2, label='ROC curve (area = %0.2f)' % roc_auc_nb)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.legend(loc="lower right")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
# -
# ### Precision and recall
# Same interpretation as in the precision and recall section for the decision tree classifier above; the chart below repeats the calculation for the ExtraTrees model.
# +
precision, recall, thresholds_pr = precision_recall_curve(Y_test_val, predicted_prob[:,1])
plt.figure(figsize=(10,10))
plt.title('Precision Recall curve')
plt.plot(precision, recall, color='darkorange', lw = 2, label='PR curve (ROC AUC = %0.2f)' % roc_auc_nb)
plt.legend(loc="lower right")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('precision')
plt.ylabel('recall')
plt.show()
print("Precision:\n" + str(precision))
print("Recall:\n" + str(recall))
print("\nIf we want to catch every churn the precision would be only " + "{:.2%}".format(precision[0]))
print("I.e. we would catch a huge number of false positive at the same time.")
print("The other way around. If we want only True positive churn (no false) we would get a recall of X? %")
# print("The other way around. If we want only True positive churn (no false) we would get a recall of " + "{:.2%}".format(recall[5]))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: NectarCamera
# language: python
# name: nectarcamera
# ---
# ### Code for running experiment
# ### Callin Switzer
# ### 21 Feb 2019
# +
## refref make environment that will run cameras and nectarlearning
## refref process images in real time (simple bee in / out at each timestep)
import nectarUtils
from nectarUtils import *
import nectarUtils as nu
import importlib
# %matplotlib inline
print(sys.version)
print(sys.executable)
# define directories
baseDir = os.getcwd()
# may want to make this directory somewhere else, if dropbox becomes a problem
dataDir = r"D:\Dropbox\AcademiaDropbox\UW\BeeDecisionProject\NectarData"
if not os.path.isdir(dataDir):
os.mkdir(dataDir)
figDir = r"D:\Dropbox\AcademiaDropbox\UW\BeeDecisionProject\NectarFigs"
if not os.path.isdir(figDir):
os.mkdir(figDir)
# -
# list serial ports
nu.serial_ports()
# connect to the arduino serial port
PORT1 = "COM4"
connected1 = False
if "ser1" in globals():
ser1.close()
ser1 = serial.Serial(PORT1,9600, timeout=1.0) # stop if no data comes in 1 second
while not connected1:
serin1 = ser1.read()
connected1 = True
print("connected to arduino on " + PORT1)
str(ser1.readline().decode("UTF-8"))
ser1.write("ff".encode("utf-8"))
# +
#ser1.close()
# +
# reload package if changed
_ = importlib.reload(nectarUtils)
calb = nu.calibrate(ser1)
# +
# reload package if changed
_ = importlib.reload(nectarUtils)
# read and save data
stt = time.time()
newDat = nu.readAndSave(ser1, maxTime=600*2, saveData=True,
dataDir = dataDir, timeout = 600*2,
minRewardThreshold = int(1.10*calb["topBaseline"]),
colNames = calb["colNames"],
baseSensorThreshold = calb['base_dec_bound'],
calibrationInfo = calb)
print(time.time() - stt)
newDat.head()
newDat['timestamp'] = pd.to_datetime(newDat['timestamp'])
newDat['delta'] = (newDat['timestamp']-newDat['timestamp'].shift()).fillna(pd.Timedelta(seconds=0))
newDat.plot(y=['top', 'mid', 'base'], x = "timestamp", style='-', figsize=np.array([15, 5]))
plt.scatter(y=newDat['top'], x = newDat["timestamp"])
plt.vlines(newDat[newDat.notes == "reward Triggered"]["timestamp"], ymin = 0, ymax = 1000, label = "reward")
plt.show()
#newDat.plot(y=['delta'], x = "timestamp", style='-')
# +
# #!/usr/bin/env python
import subprocess
# #
# # Call a bash script
# subprocess.call(['./myBashScript.sh'])
# # Call a javascript script with node
# subprocess.call(['node', './myJSScript.js'])
# -
# +
# reload package if changed
_ = importlib.reload(nectarUtils)
# read and save data
stt = time.time()
newDat = nu.readOnly(ser1, maxTime=5, saveData=True, dataDir = dataDir, timeout = 600*2)
print(time.time() - stt)
newDat.head()
newDat['timestamp'] = pd.to_datetime(newDat['timestamp'])
newDat['delta'] = (newDat['timestamp']-newDat['timestamp'].shift()).fillna(pd.Timedelta(seconds=0))
newDat.plot(y=['top_sensor', 'mid_sensor', 'base_sensor'], x = "timestamp", style='-', figsize=np.array([15, 5]))
plt.scatter(y=newDat['top_sensor'], x = newDat["timestamp"])
plt.vlines(newDat[newDat.notes == "reward Triggered"]["timestamp"], ymin = 0, ymax = 1000, label = "reward")
#newDat.plot(y=['delta'], x = "timestamp", style='-')
# +
newDat.plot(y=['top_sensor', 'mid_sensor', 'base_sensor'], x = "timestamp", style='-', figsize=np.array([15, 5]))
plt.scatter(y=newDat['top_sensor'], x = newDat["timestamp"])
plt.vlines(newDat[newDat.notes == "reward Triggered"]["timestamp"], ymin = 0, ymax = 1000, label = "reward")
plt.hlines(y = 150, xmin = np.min(newDat["timestamp"]), xmax = np.max(newDat["timestamp"]), label = "thresh")
# -
int.from_bytes(b'b', byteorder='big')
numSteps = 4
[ser1.write("b".encode("utf-8")) for ii in range(numSteps)]
#
# # for com7
# df1[["base_sensor", "mid_sensor",
# "top_sensor", "limit_1", "limit_2"]] = \
# df1[["base_sensor", "mid_sensor",
# "top_sensor", "limit_1", "limit_2"]].astype(int)
#
# df1.head()
# +
# for com8, switch base and mid refref, double check
tt = readData(ser1, readlen=10, wait_time=0.0, save=True, returnVals = True)
df1 = pd.DataFrame(tt, columns=["base_sensor", "mid_sensor", "top_sensor", "limit_1", "limit_2", "timestamp"])
df1[["mid_sensor", "base_sensor",
"top_sensor", "limit_1", "limit_2"]] = \
df1[["base_sensor", "mid_sensor",
"top_sensor", "limit_1", "limit_2"]].astype(int)
print(df1.shape)
df1.head()
# -
df1.tail()
# +
#np.array(df1.iloc[:,0].astype(int))
# -
ax1 = df1.iloc[:, 0:3].plot()
lines, labels = ax1.get_legend_handles_labels()
ax1.legend(lines, labels, loc='best')
plt.plot(df1[["base_sensor", "mid_sensor",
"top_sensor"]])
plt.show()
plt.plot(np.array(df1.iloc[:,1].astype(int)))
plt.plot(np.array(df1.iloc[:,2].astype(int)))
(tt[0, 5])
# +
# for com7
# top sensor
tt[:, 2]
# mid sensor
tt[:, 1]
# base sensor
tt[:, 0]
# -
plt.plot(tt[:,2])
plt.plot(tt)
# +
#ser1.close()
# +
def moveToTop(serial_con, cutoff = 650):
    # refref: may want to go one or two more moves forward after the cutoff is passed
    ## the cutoff is the meniscus
    [[topVal, bottomLim, topLim]] = readData(serial_con, 1, 0)[:, [1,3,4]]
    print(topVal)
    while (topVal > cutoff) and not topLim:
        # move forward
        serial_con.write("f".encode("utf-8"))
        # read data again
        [[topVal, bottomLim, topLim]] = readData(serial_con, 1, 0)[:, [1,3,4]]
    # raise an error if the limit switch is hit
    if topLim:
        raise RuntimeError('Hit upper limit switch')
# +
# refref: problem -- liquid stays stuck on the sides -- may need to move back slower
def moveBack(serial_con, cutoff = 650):
    [[topVal, bottomLim, topLim]] = readData(serial_con, 1, 0)[:, [1,3,4]]
    while (topVal < cutoff) and not bottomLim:
        # move backward in a burst of steps, then pause
        for jj in range(7):
            serial_con.write("b".encode("utf-8"))
        time.sleep(0.3)
        # read data again
        [[topVal, bottomLim, topLim]] = readData(serial_con, 1, 0)[:, [1,3,4]]
    # raise an error if the limit switch is hit
    if bottomLim:
        raise RuntimeError('Hit lower limit switch')
# -
readData(ser1, 1, 0)
moveToTop(ser1)
readData(ser1, 1, 0)
moveBack(ser1)
readData(ser1, 1, 0)
ser1.write("f".encode("utf-8"))
[[vals, bottomLim, topLim]] = readData(ser1, 1, 0)[:, 2:]
readData(ser1, 1, 0)[:, 2:]
toplim = 1
not toplim
# for ii in range(20):
# written = ser1.write("f".encode("utf-8"))
#
#
# ser1.write("f".encode("utf-8"))
#
# for ii in range(100):
# written = ser1.write("b".encode("utf-8"))
#
# "c".encode("utf-8")
# int.from_bytes(b'c', byteorder='big') # this is what the arduino will see
#
# int.from_bytes(b'c', byteorder='big')
#
# ser1.write("c".encode("utf-8"))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="sU-GBUnyvj4Q"
# ### Visualizing Outliers in Python - Box Plots
#
# Box plots are a kind of data visualization that tells us different things about our data than the visualizations we've worked with so far (namely, histograms and bar charts).
#
# Here, we'll be able to see the median of our data, as well as any outliers.
#
# *This demo is taken from [this](https://matplotlib.org/stable/gallery/pyplots/boxplot_demo_pyplot.html#sphx-glr-gallery-pyplots-boxplot-demo-pyplot-py) resource, if you'd like to play around a bit more!*
# + colab={"base_uri": "https://localhost:8080/"} id="vFs_lHv-th2Y" outputId="2c598ee1-9ea2-4f6f-b8d8-ad88da6e55b0"
# Let's import our libraries - numpy for mathematics and matplotlib for the visualization
import numpy as np
import matplotlib.pyplot as plt
# We're going to set a random seed - this allows us to reproduce this example
np.random.seed(19680801)
# fake up some data
spread = np.random.rand(50) * 100
center = np.ones(25) * 50
flier_high = np.random.rand(10) * 100 + 100
flier_low = np.random.rand(10) * -100
data = np.concatenate((spread, center, flier_high, flier_low))
data
# + colab={"base_uri": "https://localhost:8080/", "height": 420} id="83JyXe-2wTj0" outputId="81169b23-60c9-4185-82f5-3901c854fd8a"
fig1, ax1 = plt.subplots()
ax1.set_title('Basic Plot')
ax1.boxplot(data)
# + [markdown] id="5eQKaq7YwX7g"
# So what does the above chart represent?
#
# The *box* is the rectangle near the middle, and it shows us our **interquartile range** - our inner two *quartiles* of data (inner here means *closest to the median*).
#
# The orange line represents our *median* value.
#
# The whiskers represent the outlier thresholds: the upper whisker is at Q3 + 1.5 * IQR and the lower whisker is at Q1 - 1.5 * IQR. (Strictly speaking, matplotlib draws each whisker only out to the most extreme data point within that threshold.)
#
# The unfilled circles outside of the whiskers represent any *outliers* in our data!
#
#
# + [markdown] id="QPFLJkyuxeBc"
# This is a great way to visualize outliers, but it's not the only way to find them!
# + [markdown] id="91oZwS40ynwM"
# We've just visualized our outliers using our **Interquartile Range**. Let's apply this concept to also find our outliers using a Python program!
#
# You were briefly just introduced to the concept of the **Interquartile Range**, but let's recap before we code:
# 1. To extract or reject outliers using this method, we first break our data up into *quartiles*.
# 2. We define our interquartile range or **IQR** by subtracting our 1st quartile of data from our third (in other words, we find our *inner two quartiles*).
# - IQR = Q3 - Q1
# 3. Then, we create boundaries using these quartiles. We define our "upper" boundary by taking our *third quartile* (Q3) and adding 1.5 * IQR, and our "lower" boundary by taking our *first quartile* (Q1) and subtracting 1.5 * IQR.
#   - if an element in the data is > Q3 + 1.5*IQR OR < Q1 - 1.5*IQR, we determine that element is an outlier.
#
# Make sense? Take a second to ask questions or recap [this](https://www.khanacademy.org/math/statistics-probability/summarizing-quantitative-data/box-whisker-plots/a/identifying-outliers-iqr-rule) article if you need a refresher!
#
# When you're ready, read [this](https://medium.com/datadriveninvestor/finding-outliers-in-dataset-using-python-efc3fce6ce32) resource, and pause at the **Using IQR** section to code along!
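#
# For reference, the recipe above can be sketched with numpy on a small made-up dataset (the values and variable names below are ours, for illustration only; the article's own dataset comes later):

```python
import numpy as np

# Hypothetical data: one obvious outlier (100) among small values
data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])

q1, q3 = np.percentile(data, [25, 75])  # first and third quartiles
iqr = q3 - q1                           # IQR = Q3 - Q1
upper_bound = q3 + 1.5 * iqr
lower_bound = q1 - 1.5 * iqr

# Anything outside [lower_bound, upper_bound] is flagged as an outlier
outliers = data[(data > upper_bound) | (data < lower_bound)]
print(outliers)  # → [100]
```

# Try to reproduce these steps yourself in the cells below before leaning on this sketch.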
# + id="ZjYItC3tybZk"
# TODO: sort dataset
# IQR 1
# + id="uO6ErczAy0Ak"
# TODO: find first and third quartile using np.percentile
# IQR 2
# + id="TsG5PVIpy4rA"
# Find the difference between the first and third quartile: (hint: just uncomment the next line!)
# iqr = q3 - q1
# IQR 3
# + id="8XYo7I4Iy9mo"
# TODO: find lower and upper bound
# IQR 4
# + id="M8JbH0urzFlk"
# Based on our lower and upper bounds, what values would be our outliers? Discuss with your group, and add your answer below!
# IQR5
# + [markdown] id="v3NMpQBpzN40"
# *Your answer goes here!*
#
# + [markdown] id="0qSqZkaJyjCo"
# Next, go back to the first section of [the article](https://medium.com/datadriveninvestor/finding-outliers-in-dataset-using-python-efc3fce6ce32). This section uses **Z-Score** to find our outliers!
#
# **When would we use Z-Score versus IQR?**
#
# In the Z-Score method, data points are *standardized*. Their distance from the mean is expressed in multiples of the standard deviation σ.
#
# Points lying 3σ (3 standard deviations) away from the mean are identified as outliers. And just as in the IQR method, these outliers can be neatly clipped from the data if we need them to be.
#
# This method is best used when our data has an approximately *Gaussian* or *normal* distribution.
#
# **Why do you think that is?**
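#
# As a minimal sketch of this idea (our own helper, not the article's exact function), we can flag any point whose z-score exceeds a threshold:

```python
import numpy as np

def detect_outliers_zscore(data, threshold=3):
    """Return the points lying more than `threshold` standard deviations from the mean."""
    data = np.asarray(data, dtype=float)
    z = (data - data.mean()) / data.std()  # standardize: distance from the mean in units of sigma
    return data[np.abs(z) > threshold]

# Hypothetical sample: 100 sits far from the cluster around 10-15
sample = [10, 12, 12, 13, 12, 11, 14, 13, 15, 10, 10, 10, 100, 12, 14, 13]
print(detect_outliers_zscore(sample))  # → [100.]
```

# The article's implementation may differ in details (e.g. sample vs. population standard deviation), but the flagging rule is the same.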
# + id="LsgvDz_8wWwg"
# TODO: import your libraries
# + id="L_jlwibbx8S8"
dataset= [10,12,12,13,12,11,14,13,15,10,10,10,100,12,14,13, 12,10, 10,11,12,15,12,13,12,11,14,13,15,10,15,12,10,14,13,15,10]
# + id="57lzZI5Cx-hI"
# TODO: following the article, use the function she wrote to write our detect_outlier() method
outliers=[]
# TODO: Call detect_outlier() on dataset
# Do you see the correct value printed?
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
sns.set(style='white', rc={'figure.figsize':(12,8)})
import requests
import tarfile
import imageio
import cv2
import glob
import os
import umap
import MulticoreTSNE
import fitsne
import LargeVis
import sklearn.manifold
# -
# ### Pull the data from the internet and write it to a file then unpack the file to disk.
#
# Don't bother running this if you've already downloaded the dataset.
#
# We are unpacking the file into the directory that the notebook is running in. Don't worry, the coil-100 dataset should only take a few minutes to download on a good connection.
# %%time
if not os.path.exists('coil-100'):
    results = requests.get('http://www.cs.columbia.edu/CAVE/databases/SLAM_coil-20_coil-100/coil-100/coil-100.tar.gz')
    with open("coil_100.tar.gz", "wb") as code:
        code.write(results.content)
    images_zip = tarfile.open('coil_100.tar.gz', mode='r:gz')
    images_zip.extractall()
# ### Read our images from disk via the wonders of imageio.
#
# They are read in as 128x128x3 ndarrays. We make use of flatten to collapse them down to a list of 7202 vectors, each with 49152 dimensions.
feature_vectors = []
filelist = glob.glob('./coil-100/*.ppm')
for filename in filelist:
    im = cv2.imread(filename)
    feature_vectors.append(im.flatten())
# ### Now we have our data in a list of vectors. Let's extract the object id's from the files and cast to data frame (in case we want to explore things further)
labels = pd.Series(filelist).str.extract("obj([0-9]+)", expand=False).astype(int)  # integer ids so matplotlib can map them to colors
# ### A pandas data frame would be too expensive, in both time and memory, to construct here, so we stack into a plain numpy array
# %%time
data = np.vstack(feature_vectors).astype(np.float64, order='C')
print(data.shape)
# ### Now let's use UMAP to embed these points into a two dimensional space.
#
# A little parameter tweaking is required here in order to find a particularly attractive embedding of our space.
fit = umap.UMAP(n_neighbors=5, random_state=42, min_dist=0.5, n_epochs=1000)
# %time u = fit.fit_transform(data)
plt.scatter(u[:,0], u[:,1], c=labels, cmap="Spectral", s=10, alpha=0.5)
# We see that we are able to preserve a number of the high-dimensional structures within this data set.
# ### Now we need to run t-SNE on our data
fit_tsne = MulticoreTSNE.MulticoreTSNE(n_jobs=1, random_state=42)
# %time u_tsne = fit_tsne.fit_transform(data)
plt.scatter(u_tsne[:,0], u_tsne[:,1], c=labels, cmap="Spectral", s=10, alpha=0.5)
# ## FIt-SNE
# %time u_fitsne = fitsne.FItSNE(data, nthreads=1, rand_seed=42)
plt.scatter(u_fitsne[:,0], u_fitsne[:,1], c=labels, cmap="Spectral", s=10, alpha=0.5)
np.save('fitsne_coil100_embedding1.npy', u_fitsne)
output = pd.DataFrame(u_fitsne, columns=('x','y'))
output['labels']=labels
output.to_csv('embedding_coil100_fitsne1.csv')
# ## LargeVis
largevis_data = data.astype(np.float32, order='C')
LargeVis.loadarray(largevis_data)
largevis_n_samples = int(largevis_data.shape[0] / 100.0)
# %time u_largevis = LargeVis.run(2, 1, largevis_n_samples)
u_largevis = np.array(u_largevis)
plt.scatter(u_largevis[:,0], u_largevis[:,1], c=labels, cmap="Spectral", s=10, alpha=0.5)
np.save('largevis_coil100_embedding1.npy', u_largevis)
output = pd.DataFrame(u_largevis, columns=('x','y'))
output['labels']=labels
output.to_csv('embedding_coil100_largevis1.csv')
# ## Laplacian Eigenmaps
fit_laplacian = sklearn.manifold.SpectralEmbedding(n_neighbors=15)
# %time u_laplacian = fit_laplacian.fit_transform(data)
plt.scatter(u_laplacian[:,0], u_laplacian[:,1], c=labels, cmap="Spectral", s=10, alpha=0.5)
# ## Isomap
fit_isomap = sklearn.manifold.Isomap()
# %time u_isomap = fit_isomap.fit_transform(data)
plt.scatter(u_isomap[:,0], u_isomap[:,1], c=labels, cmap="Spectral", s=10, alpha=0.5)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# #!/usr/bin/python3
# -
#from collections import Counter
#import re
#import os
import time
#from collections import defaultdict
#from collections import deque
date = 0
dev = 0 # extra prints
part = 1 # 1, 2, or 3 (3 runs both parts)
# 0 or 1:
samp = 1
print("https://adventofcode.com/2021/day/{}".format(date))
# + [markdown] tags=[]
# ## Read the input data
# +
#time0 = time.time()
if samp == 1:
    filename = "/sample.txt"
else:
    filename = "/input.txt"
try:
    with open(str(date) + filename,"r") as f:
        t = f.readlines()
except FileNotFoundError:
    with open("." + filename,"r") as f:
        t = f.readlines()
t = [(x.strip().replace(' ',' ')) for x in t]
#t = [int(x) for x in t]
# -
# ## Part one
# + tags=[]
def day(te):
    return 1
day(t)
# -
# ## Part two
# + tags=[]
def day2(te):
    return 2
day2(t)
# -
# ## Run the programs
if 1:
    time0 = time.time()
    if part == 1:
        print("Part 1: ", day(t))
    elif part == 2:
        print("Part 2: ", day2(t))
    elif part == 3:
        # run both
        print("Part 1: ", day(t))
        print("Part 2: ", day2(t))
    tdif = time.time() - time0
    print("Elapsed time: {:.4f} s".format(tdif))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
""" This file will be used as a playground while learning Python
"""
# // is integer division. / always gives float/double
10 / 3
10 // 3
# Boolean values are 'True' and 'False'
True
True or not True
(False == 0) and (True == 1)
True == 2
2 and 3 # `and` returns its last evaluated operand: 2 is truthy, so this evaluates to 3, not a bool
3 != False != True
(3 != False) != True
3 != 4 != 3
3 != 3 != 4
# Chained comparison
1 < 2 < 4
1 < 2 < 2
# is compares objects, == compares values
a = [1, 2, 3]
b = [1, 2, 3]
c = a
b is a
b == a
c is a
'So long,' + "and thanks for " "all the fish"
len('Erkam')
# Format
'{} and {}'.format('Tom', 'Jerry')
'{0}, {0}, and more {0}'.format('bugs')
# Old %-style formatting, the main option in Python <= 2.5 (still supported)
'old style %s' % ('formatting')
# Compare against the None object with 'is'
3 is None
# None, 0, and empty strings, lists, dicts, tuples are falsy
bool([])
print("print using 'print'")
# get input
input1 = input('enter a number')
input1
undefinedVar # raises NameError
# if is similar to ternary operation of C
'smaller' if 3 > 2 else 3
# +
# if 3 > 2 'bigger' else 'smaller'
# syntax error
# -
mList = []
mList.append('append new items')
mList
# look at last element
mList[-1]
# list[start:end:step]
list2 = [1, 3, 5, 7, 9]
list2[::2]
# del deletes elements
del list2[2]
list2
# list.remove(<item>) removes the first occurrence of <item>
list2.remove(9)
list2
# list.insert(index, element)
list2.insert(0, 0)
list2
# get index of first occurrence
list2.index(7)
list2.index(9) # raises ValueError: 9 was removed above
# add lists (create a new list)
list3 = mList + list2
list3
# extend lists
mList.extend(mList)
mList
# check existence
0 in list3
# Tuples are immutable lists
tup1 = (1, 2, 3)
tup1[0]
tup1[0] = 11 # raises TypeError: tuples are immutable
# tuples of length 1 must have a trailing comma
type((1))
type((1,))
a, b, c = 1, 2, 3 # 1, 2, 3 is a tuple here
a + b + c
x, *y, z = 1, 2, 3, 4, 5
y
# Swapping
a, b = b, a
print(a, b)
# Dictionaries stores associative array
mDict = {'similar to object': True}
mDict
# +
# Keys must be immutable: int, float, string, tuple
# -
mDict['1'] = 1
mDict
mDict[1] = -1
# mDict.update({1: -1})
mDict
mDict[1]
# get keys as an iterable
# need to store it in a list
keyList = list(mDict.keys())
keyList
# in checks existence within keys
-1 in mDict
# +
# mDict['myDict'] >> KeyError
# use get() to avoid this error
mDict.get('myDict')
# -
mDict.get('myDict', 'this is a default argument just in case')
mDict.setdefault(1, 2) # setdefault adds a new pair only if the key does not exist
mDict[1]
# Set is like a mathematical set, does not contain multiple copies
mSet = {1, 1, 2, 4}
mSet
set1 = {1, 2, 3}
set1
## Intersection
set1 & mSet
## Union
set1 | mSet
## Set Difference
mSet - set1
## Symmetric Difference (A union B) - (A intersect B)
mSet ^ set1
## Check subset/superset
{1, 2} < {1, 2, 3}
1 in mSet
mVar = input('input a number')
mVar = int(mVar)
if mVar == 42:
    print('Hooray')
elif mVar > 42:
    print('not bad')
else:
    print('come on')
for i in range(0,100,10):
    print(str(i), end=' ')
list3 = ['fuck', 'this', 'shit', 'I\'m', 'out']
i = 0
while i < len(list3):
    print(list3[i], end=" ")
    i += 1
try:
    list4 = [3, 5, 7]
    list4[10]
except (IndexError):
    print('as expected')
else:
    print('runs only if no exception was raised (skipped here)')
finally:
    print('the end')
## A great way to clean up
with open('somefile') as tempList:
    for i in tempList:
        print(i, end=' ')
## Iterable and Iterator
mIterable = mDict.keys() # returns an iterable object
for i in mIterable:
    print(i, end=' ')
# Iterables can be looped but items cannot be addressed directly
# Iterators can be created from Iterable objects, it is iterator as you expect
mIterator = iter(mIterable)
# It keeps state
next(mIterator)
next(mIterator)
next(mIterator)
next(mIterator) # with only three keys in mDict, this fourth call raises StopIteration
# can get a list from an Iterator
list(mIterator) # it returns according to current state
list(iter(mIterable))
## Functions
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1) + fib(n-2)
for i in range(1, 10):
    print(fib(i), end=' ')
# +
# can call functions with keyword arguments
def fun1(x, y, z):
    return (x*y+z)
fun1(y=3, x=5, z=10)
# +
# variable number of arguments
def fun2(*args):
    for i in args:
        print(str(i), len(str(i)))
fun2('ali', 'veli', 'selami')
# -
# variable number of keyword arguments
def fun3(**args):
    nDict = dict()
    for i in args:
        nDict.update({i: len(i), args[i]: len(args[i])})
    return nDict
mDict2 = {'x': 'alpha', 'y': 'beta'}
fun3(**mDict2)
# +
def swap(v1, v2):
    return (v2, v1)
swap([1, 2, 3, 4], [5, 6, 7, 8])
# +
# Function scope is similar
# use global keyword
globalVar = 10
def incrementVar(i):
    global globalVar
    globalVar += i
    print(globalVar)
incrementVar(32)
# +
# Functions can return functions
def multiplier(x):
    def multiplyBy(y):
        return x*y
    return multiplyBy
makeDouble = multiplier(2)
print(makeDouble(21))
# -
# Our beloved lambda functions
(lambda i: i**i)(3)
## Functional language attributes
list(map(makeDouble, [10, 20, 30]))
list(filter((lambda x: x % 2 == 0), range(1,20)))
# List comprehension stores the output as a list
[x for x in [3, 4, 5, 6, 7] if x > 5]
[makeDouble(i) for i in [10, 20, 30]]
# Set comprehension
{x for x in {1,2,3,4,5} if x not in {2,4,6,8}}
# Dict comprehension
{x: x**x for x in range(1,5)}
## Imports
import math as m
m.sqrt(625)
# +
# dir(m) outputs functions and attributes
# +
## Class
class Phone:
    # class attribute, shared among all instances
    brand = "nokia"

    # initializer
    # all methods take 'self' as first parameter
    def __init__(self, model, year):
        self.model = model
        self.year = year
        # let's say owner is mutable, i.e. a property
        self.owner = 'default'

    def getProductionYear(self):
        return self.year

    def getModel(self):
        return self.model

    # class method
    # called with the class as first argument
    @classmethod
    def getBrand(cls):
        return cls.brand

    # called without an instance reference
    @staticmethod
    def staticFunc():
        return 'this is a static function'

    @property # i.e. getter
    def owner(self):
        return self._owner

    @owner.setter
    def owner(self, owner):
        self._owner = owner

    @owner.deleter
    def owner(self):
        del self._owner
# -
phone1 = Phone('3310', 2000)
phone2 = Phone('n95', 2008)
# +
print(phone1.staticFunc())
print(phone2.staticFunc())
# Should I be able to call static functions from instances?
# -
print(phone1.getModel())
print(phone2.getModel())
print(phone1.getBrand())
print(phone2.getBrand())
Phone.brand = 'Nokia'
phone1.getBrand()
phone2.owner
phone2.owner = 'Erkam'
phone2.owner
del phone2.owner
phone2.owner
phone2.model = 'n95 8gb'
phone2.model
# +
## Multiple Inheritance
class Human:
    # A class attribute. It is shared by all instances of this class
    species = "H. sapiens"

    # Basic initializer, this is called when this class is instantiated.
    # Note that the double leading and trailing underscores denote objects
    # or attributes that are used by python but that live in user-controlled
    # namespaces. Methods (or objects or attributes) like __init__, __str__,
    # __repr__ etc. are called magic methods (or sometimes dunder methods).
    # You should not invent such names on your own.
    def __init__(self, name):
        # Assign the argument to the instance's name attribute
        self.name = name
        # Initialize property
        self.age = 0

    # An instance method. All methods take "self" as the first argument
    def say(self, msg):
        print("{name}: {message}".format(name=self.name, message=msg))

    # Another instance method
    def sing(self):
        return 'yo... yo... microphone check... one two... one two...'

    # A class method is shared among all instances
    # They are called with the calling class as the first argument
    @classmethod
    def get_species(cls):
        return cls.species

    # A static method is called without a class or instance reference
    @staticmethod
    def grunt():
        return "*grunt*"

    # A property is just like a getter.
    # It turns the method age() into a read-only attribute
    # of the same name.
    @property
    def age(self):
        return self._age

    # This allows the property to be set
    @age.setter
    def age(self, age):
        self._age = age

    # This allows the property to be deleted
    @age.deleter
    def age(self):
        del self._age


class Bat:
    species = 'Baty'

    def __init__(self, can_fly=True):
        self.fly = can_fly

    # This class also has a say method
    def say(self, msg):
        msg = '... ... ...'
        return msg

    # And its own method as well
    def sonar(self):
        return '))) ... ((('
# -
class Batman(Human, Bat):
    # Batman has its own value for the species class attribute
    species = 'Superhero'

    def __init__(self, *args, **kwargs):
        # Typically to inherit attributes you have to call super:
        #super(Batman, self).__init__(*args, **kwargs)
        # However we are dealing with multiple inheritance here, and super()
        # only works with the next base class in the MRO list.
        # So instead we explicitly call __init__ for all ancestors.
        # The use of *args and **kwargs allows for a clean way to pass arguments,
        # with each parent "peeling a layer of the onion".
        Human.__init__(self, 'anonymous', *args, **kwargs)
        Bat.__init__(self, *args, can_fly=False, **kwargs)
        # override the value for the name attribute
        self.name = 'Sad Affleck'

    def sing(self):
        return 'nan nan nan nan nan batman!'
sup = Batman()
if isinstance(sup, Human):
    print('I am human')
if isinstance(sup, Bat):
    print('I am bat')
if type(sup) is Batman:
    print('I am Batman')
# Get the Method Resolution search Order used by both getattr() and super().
# This attribute is dynamic and can be updated
print(Batman.__mro__)
# MRO order is important. When a function or attribute is looked up, its existence is checked in this order
sup.say('I agree')
# Calls method from Human, because inheritance order matters
# +
## Generators
# Generators are memory-efficient because they only load the data needed to
# process the next value in the iterable. This allows them to perform
# operations on otherwise prohibitively large value ranges.
def double_numbers(iterable):
    for i in iterable:
        yield i + i
# -
for i in double_numbers(range(1, 900000000)):  # `range` is lazy, like a generator
    print(i)
    if i >= 30:
        break
values = (-x for x in [1,2,3,4,5])
# a generator expression is defined with parentheses ()
for x in values:
    print(x)
values = (-x for x in [1,2,3,4,5])
gen_to_list = list(values)
print(values)
print(gen_to_list)
# +
## Decorators
from functools import wraps
def beg(target_function):
    @wraps(target_function)
    def wrapper(*args, **kwargs):
        msg, say_please = target_function(*args, **kwargs)
        if say_please:
            return "{} {}".format(msg, "Please! I am poor :(")
        return msg
    return wrapper
@beg
def say(say_please=False):
    msg = "Can you buy me a beer?"
    return msg, say_please
# -
print(say())
print(say(say_please=True))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import dask.dataframe as dd
df = dd.read_csv('../data/minute/afl/2016-*.csv',parse_dates=['timestamp'])
df.head()
df
df.describe().compute()
# %matplotlib inline
df.groupby(df.timestamp.dt.date).close.mean().compute().plot(figsize=(10,8))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lemma to frame relationship
import stats_utils
from collections import defaultdict, Counter
import pandas
import operator
fn = stats_utils.load_framenet(version='1.7')
# +
with_pos = True
lemma2frames = defaultdict(set)
frame2lemmas = defaultdict(set)
for frame in fn.frames():
    frame_label = frame.name
    for lu in frame.lexUnit.keys():
        lemma, pos = lu.split('.')
        if with_pos:
            lemma2frames[(lemma, pos)].add(frame_label)
            frame2lemmas[frame_label].add((lemma, pos))
        else:
            lemma2frames[lemma].add(frame_label)
            frame2lemmas[frame_label].add(lemma)
# -
fn_polysemy = [len(value) for value in lemma2frames.values()]
for lemma, frames in lemma2frames.items():
    if len(frames) == 11:
        print(lemma, frames)
max(fn_polysemy)
average_polysemy = sum(fn_polysemy) / len(fn_polysemy)
print(len(lemma2frames), average_polysemy)
Counter(fn_polysemy)
distribution = [pos
for lemma, pos in lemma2frames]
counts = Counter(distribution)
# +
lists_of_lists = []
headers = ['Part of speech', 'FrameNet 1.7', 'PropBank 3.1']
total = sum(counts.values())
for pos, freq in sorted(counts.items(),
                        key=operator.itemgetter(1),
                        reverse=True):
    perc = 100 * (freq / total)
    value = f'{round(perc, 2)}% ({freq})'
    if pos == 'v':
        one_row = [pos, value, '100% (7,311)']
    else:
        one_row = [pos, value, '-']
    lists_of_lists.append(one_row)
df = pandas.DataFrame(lists_of_lists, columns=headers)
print(df.to_latex(index=False))
# -
variance = [len(value) for value in frame2lemmas.values()]
counts = Counter(variance)
# +
lists_of_lists = []
headers = ['Variance class', 'FrameNet 1.7', 'PropBank 3.1']
total = sum(counts.values())
for freq_class, freq in sorted(counts.items(),
                               key=operator.itemgetter(1),
                               reverse=True):
    perc = 100 * (freq / total)
    value = f'{round(perc, 2)}% ({freq})'
    if freq_class == 1:
        one_row = [freq_class, value, '100% (10,672)']
    else:
        one_row = [freq_class, value, '-']
    lists_of_lists.append(one_row)
df = pandas.DataFrame(lists_of_lists, columns=headers)
print(df.to_latex(index=False))
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# this data set illustrates 100 customers in a shop and their shopping habits
# -
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2)
x = np.random.normal(3,1,100)
y = np.random.normal(150,40,100) / x
plt.scatter(x,y)
plt.show()
# +
# the x-axis represents the number of minutes before making a purchase
# the y-axis represents the amount of money spent on the purchase
# -
train_x = x[:80]
train_y = y[:80]
test_x = x[80:]
test_y = y[80:]
plt.scatter(train_x,train_y)
plt.show()
plt.scatter(test_x,test_y)
plt.show()
mymodel = np.poly1d(np.polyfit(train_x,train_y,4))
myline = np.linspace(0,6,100)
plt.scatter(train_x,train_y)
plt.plot(myline,mymodel(myline))
plt.show()
from sklearn.metrics import r2_score
r2 = r2_score(train_y,mymodel(train_x))
print(r2)
r2 = r2_score(test_y,mymodel(test_x))
r2
print(mymodel(5.5))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# Lambda School Data Science
#
# *Unit 2, Sprint 3, Module 4*
#
# ---
# # Model Interpretation
#
# You will use your portfolio project dataset for all assignments this sprint.
#
# ## Assignment
#
# Complete these tasks for your project, and document your work.
#
# - [ ] Continue to iterate on your project: data cleaning, exploratory visualization, feature engineering, modeling.
# - [ ] Make at least 1 partial dependence plot to explain your model.
# - [ ] Make at least 1 Shapley force plot to explain an individual prediction.
# - [ ] **Share at least 1 visualization (of any type) on Slack!**
#
# If you aren't ready to make these plots with your own dataset, you can practice these objectives with any dataset you've worked with previously. Example solutions are available for Partial Dependence Plots with the Tanzania Waterpumps dataset, and Shapley force plots with the Titanic dataset. (These datasets are available in the data directory of this repository.)
#
# Please be aware that **multi-class classification** will result in multiple Partial Dependence Plots (one for each class), and multiple sets of Shapley Values (one for each class).
# ## Stretch Goals
#
# #### Partial Dependence Plots
# - [ ] Make multiple PDPs with 1 feature in isolation.
# - [ ] Make multiple PDPs with 2 features in interaction.
# - [ ] Use Plotly to make a 3D PDP.
# - [ ] Make PDPs with categorical feature(s). Use Ordinal Encoder, outside of a pipeline, to encode your data first. If there is a natural ordering, then take the time to encode it that way, instead of random integers. Then use the encoded data with pdpbox. Get readable category names on your plot, instead of integer category codes.
#
# #### Shap Values
# - [ ] Make Shapley force plots to explain at least 4 individual predictions.
# - If your project is Binary Classification, you can do a True Positive, True Negative, False Positive, False Negative.
# - If your project is Regression, you can do a high prediction with low error, a low prediction with low error, a high prediction with high error, and a low prediction with high error.
# - [ ] Use Shapley values to display verbal explanations of individual predictions.
# - [ ] Use the SHAP library for other visualization types.
#
# The [SHAP repo](https://github.com/slundberg/shap) has examples for many visualization types, including:
#
# - Force Plot, individual predictions
# - Force Plot, multiple predictions
# - Dependence Plot
# - Summary Plot
# - Summary Plot, Bar
# - Interaction Values
# - Decision Plots
#
# We just did the first type during the lesson. The [Kaggle microcourse](https://www.kaggle.com/dansbecker/advanced-uses-of-shap-values) shows two more. Experiment and see what you can learn!
# ### Links
#
# #### Partial Dependence Plots
# - [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
# - [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
# - [pdpbox repo](https://github.com/SauceCat/PDPbox) & [docs](https://pdpbox.readthedocs.io/en/latest/)
# - [Plotly: 3D PDP example](https://plot.ly/scikit-learn/plot-partial-dependence/#partial-dependence-of-house-value-on-median-age-and-average-occupancy)
#
# #### Shapley Values
# - [Kaggle / Dan Becker: Machine Learning Explainability — SHAP Values](https://www.kaggle.com/learn/machine-learning-explainability)
# - [Christoph Molnar: Interpretable Machine Learning — Shapley Values](https://christophm.github.io/interpretable-ml-book/shapley.html)
# - [SHAP repo](https://github.com/slundberg/shap) & [docs](https://shap.readthedocs.io/en/latest/)
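# As a companion to the pdpbox links above, this is a hedged sketch of what a
# 1-D partial dependence curve computes under the hood: fix one feature to each
# grid value for every row, then average the model's predictions. The `predict`
# lambda, data, and grid are made-up stand-ins for illustration.

```python
import numpy as np

def partial_dependence(predict, X, feature_idx, grid):
    """Manual 1-D partial dependence: for each grid value, overwrite one
    feature for every row and average the model's predictions."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature_idx] = v
        pd_vals.append(predict(Xv).mean())
    return np.array(pd_vals)

# Toy model whose prediction depends linearly on feature 0 only
predict = lambda X: 2.0 * X[:, 0] + 0.5
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
grid = np.array([-1.0, 0.0, 1.0])
print(partial_dependence(predict, X, 0, grid))  # [-1.5  0.5  2.5]
```

# pdpbox does essentially this, plus the per-row ICE lines and plotting.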
# +
# %%capture
import sys
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/bsmrvl/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
else:
DATA_PATH = '../data/'
# +
import pandas as pd
pd.options.display.max_columns = 100
import numpy as np
np.random.seed(42)
import matplotlib.pyplot as plt
from category_encoders import OrdinalEncoder
from scipy.stats import uniform, truncnorm, randint
from xgboost import XGBClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, plot_confusion_matrix, precision_score, recall_score
from sklearn.model_selection import RandomizedSearchCV, cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
# +
## Changing directions a bit: I'm going to try to predict occupation type from
## a variety of political questions, reading the cleaned CSVs from my last
## build.
AB_demo = pd.read_csv(DATA_PATH + 'AB_demo.csv').drop(columns=['Unnamed: 0','id'])
AB_opinions = pd.read_csv(DATA_PATH + 'AB_opinions.csv').drop(columns=['Unnamed: 0','id'])
# +
## I will remove the "other" (essentially unemployed) categories
## and group the rest into small business and government/big business.
smallbiz = ['Private sector employee',
'Owner of a shop/grocery store',
'Manual laborer',
'Craftsperson',
'Professional such as lawyer, accountant, teacher, doctor, etc.',
'Agricultural worker/Owner of a farm',
'Employer/director of an institution with less than 10 employees'
]
govbigbiz = ['A governmental employee',
'A student',
'Working at the armed forces or the police',
'Director of an institution or a high ranking governmental employee',
'Employer/director of an institution with 10 employees or more'
]
other = ['A housewife',
'Unemployed',
'Retired',
'Other'
]
def maketarget(cell):
if cell in smallbiz:
return 0
elif cell in govbigbiz:
return 1
else:
        return np.nan
# -
AB_demo['occu_cat'] = AB_demo['occupation'].apply(maketarget).astype(float)
AB_opinions = AB_opinions.merge(AB_demo[['occu_cat']], left_index=True, right_index=True)
AB_opinions = AB_opinions.dropna()
# +
X = AB_opinions.drop(columns='occu_cat')
y = AB_opinions['occu_cat']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)
# +
classy = XGBClassifier(
random_state=42,
max_depth=2,
)
params = {
'subsample': truncnorm(a=0,b=1, loc=.5, scale=.1),
'learning_rate': truncnorm(a=0,b=1, loc=.1, scale=.1),
'scale_pos_weight': uniform(.1, .3)
}
prec = .5
recall = .05
while prec < .9 or recall < .06:
rand_state = np.random.randint(10, 90)
# print('RANDOM STATE:',rand_state)
searcher = RandomizedSearchCV(
classy,
params,
n_jobs=-1,
# random_state=rand_state,
random_state=25, #### 16 for smallbiz, 25 for govbigbiz
verbose=1,
scoring='precision'
)
searcher.fit(X_train, y_train)
model = searcher.best_estimator_
prec = precision_score(y_test, model.predict(X_test))
recall = recall_score(y_test, model.predict(X_test))
# print('RANDOM STATE:',rand_state)
print(classification_report(y_test, model.predict(X_test)))
# -
per_imps = permutation_importance(model, X_test, y_test,
scoring='precision', random_state=42, n_repeats=10)
more_important = pd.Series(per_imps['importances_mean'], index=X.columns)
top5 = more_important.sort_values(ascending=False).head()
top5
predictions = pd.Series(model.predict(X_test), index=X_test.index, name='predictions')
AB_opinions = AB_opinions.merge(predictions, left_index=True, right_index=True)
positives = AB_opinions[AB_opinions['predictions'] == 1]
positives[top5.index].head()
# +
from pdpbox.pdp import pdp_isolate, pdp_plot
feat = 'q6105'
isolate = pdp_isolate(
model=model,
dataset=X_test,
model_features=X_test.columns,
feature=feat
)
pdp_plot(isolate, feature_name=feat);
# +
from pdpbox.pdp import pdp_interact, pdp_interact_plot
feats = ['q6105', 'q812a1']
interact = pdp_interact(
model=model,
dataset=X_test,
model_features=X_test.columns,
features=feats
)
fig, ax = pdp_interact_plot(interact,
feature_names=feats,
plot_params={
'title': '',
'subtitle': '',
'cmap': 'inferno',
},
plot_type='contour')
ax['pdp_inter_ax'].set_title('Questions determining government or large\nbusiness \
employee (as opposed to working\nclass/small biz)', ha='left', fontsize=17, x=0, y=1.1)
ax['pdp_inter_ax'].text(s='Brighter colors = more likely to be gov/big biz', fontsize=13, x=-2, y=2.25, color='#333333')
ax['pdp_inter_ax'].set_xlabel('Do you attend Friday\nprayer/Sunday services?', fontsize=13, labelpad=-5)
ax['pdp_inter_ax'].set_ylabel('How important that\nconstitution ensures\n\
equal rights for men\n and women?', ha='right', fontsize=13, rotation=0, labelpad=0, y=0.45)
ax['pdp_inter_ax'].set_xticks([-1.7,1.7])
ax['pdp_inter_ax'].set_xticklabels(['Never','Always'])
ax['pdp_inter_ax'].set_yticks([-1.15,2])
ax['pdp_inter_ax'].set_yticklabels(['Not important at all','Very important'], rotation=90)
ax['pdp_inter_ax'].tick_params(axis='both', length=10, color='white')
fig.set_facecolor('white')
plt.show()
# -
row = X_test.loc[[2455]]
row
# +
import shap
explainer = shap.TreeExplainer(model)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=explainer.shap_values(row),
features=row,
link='logit'
)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/thiru2024/Swidish-leaf-prediction/blob/main/Code-1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="b5wp17hrq1ww"
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Input,Dense,Flatten
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Model,Sequential
#importing other required libraries
import numpy as np
import pandas as pd
from sklearn.utils.multiclass import unique_labels
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
import itertools
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from tensorflow.keras.optimizers import SGD,Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.layers import Flatten,Dense,BatchNormalization,Activation,Dropout
from tensorflow.keras.utils import to_categorical
# + id="CNd_9uqeq-RA" outputId="b3b77a78-055f-4832-94c2-2a8704a74508" colab={"base_uri": "https://localhost:8080/"}
image_size = [256,256]
#model = VGG19(input_shape = image_size+[3],include_top=False,weights='imagenet')
model = VGG19(include_top = False, weights = 'imagenet', input_shape = (256,256,3))
# + id="XtDwc_fWq-cq" outputId="8dfed9e9-0ad4-44fa-a2b5-ad9ff2a7fad7" colab={"base_uri": "https://localhost:8080/"}
for layer in model.layers:
layer.trainable = False
model.summary()
# + id="fA39Mvlhq-i4"
flattened = Flatten()(model.output)
outputs = Dense(15, activation='softmax')(flattened)
final = Model(inputs=model.input, outputs=outputs)
# + colab={"base_uri": "https://localhost:8080/"} id="lsN5KIOeq-nZ" outputId="fbd266eb-12d2-4eac-834f-e87a2ed8bf7e"
final.compile(loss = 'sparse_categorical_crossentropy',optimizer='adam',metrics = ['accuracy'])
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="2Fp5KrAf83C7" outputId="997c1f54-ba89-4e32-f616-4bcdb85e727e"
final.summary()
# + id="GXlXAgLcq-sx"
train = '/content/drive/My Drive/swidish leaf prediction'
# + id="ha-bAvK8q-2B" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="cc9a952c-3c22-4da3-9d03-964d6f82643d"
import matplotlib.pyplot as plt
import random
import os
import cv2
categories = ['Acer', 'Alnus incana', 'Betula pubescens', 'Fagus silvatica', 'Populus', 'Populus tremula', 'Quercus', 'Salix alba', 'Salix aurita', 'Salix sinerea', 'Sorbus aucuparia', 'Sorbus intermedia', 'Tilia', 'Ulmus carpinifolia', 'Ulmus glabra']
data = []
for cat in categories:
folder = os.path.join(train,cat)
label = categories.index(cat)
for img in os.listdir(folder):
img_path = os.path.join(folder,img)
img_arr = cv2.imread(img_path)
img_arr = cv2.resize(img_arr,(256,256))
data.append([img_arr,label])
plt.imshow(img_arr)
# + id="wceoXGVIq-8K" colab={"base_uri": "https://localhost:8080/"} outputId="2b72eec1-7464-488e-fd7b-66da9774728d"
x = []
y = []
for features,labels in data:
x.append(features)
y.append(labels)
import numpy as np
x = np.array(x)
y = np.array(y)
x.shape
# + id="9YmTAwo1q_BT"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
train_generator = ImageDataGenerator(rotation_range=2, horizontal_flip=True, zoom_range=.1)
test_generator = ImageDataGenerator(rotation_range=2, horizontal_flip= True, zoom_range=.1)
#Fitting the augmentation defined above to the data
train_generator.fit(X_train)
test_generator.fit(X_test)
# + id="kskZXHb3q_F-" colab={"base_uri": "https://localhost:8080/"} outputId="965e018c-c5fb-4324-e514-d7aa803797e5"
vgg19 = final.fit(X_train,y_train,epochs=15,batch_size=100,validation_data=(X_test, y_test))
# + id="h219YJlcq_J4"
final.save('/content/drive/My Drive/imagevgg19.h5')
# + id="UpXA2WrBXXBz"
# import tensorflow as tf
# final = tf.keras.models.load_model('/content/drive/My Drive/imagevgg19.h5')
# Show the model architecture
# + id="I3JzP8-Nq_Nx" colab={"base_uri": "https://localhost:8080/"} outputId="96081486-6a52-4a06-8c63-6c9de030bc2f"
import numpy as np
predictions = final.predict(
X_test, steps=None, callbacks=None, max_queue_size=10, workers=1,
use_multiprocessing=False, verbose=0)# Vector of probabilities
pred_labels = np.argmax(predictions, axis = 1) # We take the highest probability
# z = []
# def get_class(prediction):
# if prediction > 0.5:
# z.append(1)
# else:
# z.append(0)
# for prediction in predictions:
# get_class(prediction)
# z = np.array(z)
pred_labels
# + id="2xWJL6-vq_Rz" colab={"base_uri": "https://localhost:8080/"} outputId="bc667b51-f253-499a-b1d1-326f78e826a4"
from sklearn.metrics import accuracy_score
acc = accuracy_score(y_test, pred_labels)
print('Accuracy: %.2f' % acc)
# + id="QKKisjVJq_kt" colab={"base_uri": "https://localhost:8080/", "height": 600} outputId="94a631ef-2903-4aa6-b671-042985d84512"
# %matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import plot_confusion_matrix
acc = vgg19.history['accuracy']
val_acc = vgg19.history['val_accuracy']
loss = vgg19.history['loss']
val_loss = vgg19.history['val_loss']
epochs = range(15)
plt.figure(figsize=(5, 5))
plt.plot(epochs,acc, label='Training Accuracy')
plt.plot(epochs,val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.show()
plt.subplot(1, 1, 1)
plt.plot(epochs, loss, label='Training Loss')
plt.plot(epochs, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + id="a8NYvSeXq_o2" colab={"base_uri": "https://localhost:8080/", "height": 589} outputId="4149e519-9be6-4f19-ac21-ef32536bb8f3"
# %matplotlib inline
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib.pyplot as plt
cm = confusion_matrix(y_true=y_test, y_pred=pred_labels)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm_plot_labels = ['Acer', 'Alnus incana', 'Betula pubescens', 'Fagus silvatica', 'Populus', 'Populus tremula', 'Quercus', 'Salix alba', 'Salix aurita', 'Salix sinerea', 'Sorbus aucuparia', 'Sorbus intermedia', 'Tilia', 'Ulmus carpinifolia', 'Ulmus glabra']
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')
# + id="ToYEm8nwrpUq" colab={"base_uri": "https://localhost:8080/", "height": 877} outputId="ef551f6e-bbfb-494a-db0e-a2bf802ae6ea"
cm_plot_labels = ['Acer', 'Alnus incana', 'Betula pubescens', 'Fagus silvatica', 'Populus', 'Populus tremula', 'Quercus', 'Salix alba', 'Salix aurita', 'Salix sinerea', 'Sorbus aucuparia', 'Sorbus intermedia', 'Tilia', 'Ulmus carpinifolia', 'Ulmus glabra']
plt.figure(figsize=(20, 8))
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')
# + id="kvEiy611rphg" colab={"base_uri": "https://localhost:8080/"} outputId="dab86cab-5607-4f08-ee1d-8657619ac9fe"
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
# predict probabilities for test set
yhat_probs = final.predict(X_test, verbose=0)
# predict crisp classes for test set
yhat_classes = pred_labels
# reduce to 1d array
# accuracy: (tp + tn) / (p + n)
accuracy = accuracy_score(y_test, yhat_classes)
print('Accuracy: %f' % accuracy)
# precision tp / (tp + fp)
print("Precision Score : ", precision_score(y_test, pred_labels, average='micro'))
# recall tp / (tp + fn)
print("Recall Score : ", recall_score(y_test, pred_labels, average='micro'))
# f1: 2 tp / (2 tp + fp + fn)
print("f1_Score : ", f1_score(y_test, pred_labels, average='micro'))
# kappa
kappa = cohen_kappa_score(y_test, yhat_classes)
print('Cohens kappa: %f' % kappa)
# ROC AUC
auc = roc_auc_score(y_test, yhat_probs,multi_class="ovr",average='macro')
print('ROC AUC: %f' % auc)
# confusion matrix
matrix = confusion_matrix(y_test, yhat_classes)
print(matrix)
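# A side note on the micro-averaged scores printed above: with single-label
# multiclass data, pooling TP/FP across all classes makes micro precision (and
# micro recall/F1) collapse to plain accuracy, which is why those numbers
# match. A small hand-rolled sketch, using made-up labels rather than this
# notebook's data:

```python
import numpy as np

def micro_precision(y_true, y_pred, n_classes):
    """Micro-averaged precision: pool true/false positives over all classes.
    With exactly one label per sample, every wrong prediction is both a FP
    (for the predicted class) and a FN (for the true class), so micro
    precision == micro recall == micro F1 == accuracy."""
    tp = fp = 0
    for c in range(n_classes):
        tp += int(np.sum((y_pred == c) & (y_true == c)))
        fp += int(np.sum((y_pred == c) & (y_true != c)))
    return tp / (tp + fp)

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 1])
print(micro_precision(y_true, y_pred, 3))  # 4 of 6 correct -> 0.666...
```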
# + id="EGQoI98Kfwu_" colab={"base_uri": "https://localhost:8080/"} outputId="9e1c8867-baf7-4abc-ad85-c53a8327c94e"
import pandas as pd
import numpy as np
from numpy import interp  # scipy.interp is deprecated/removed; it was an alias of numpy.interp
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import LabelBinarizer
def class_report(y_true, y_pred, y_score=None, average='micro'):
if y_true.shape != y_pred.shape:
print("Error! y_true %s is not the same shape as y_pred %s" % (
y_true.shape,
y_pred.shape)
)
return
lb = LabelBinarizer()
if len(y_true.shape) == 1:
lb.fit(y_true)
#Value counts of predictions
labels, cnt = np.unique(
y_pred,
return_counts=True)
n_classes = len(labels)
pred_cnt = pd.Series(cnt, index=labels)
metrics_summary = precision_recall_fscore_support(
y_true=y_true,
y_pred=y_pred,
labels=labels)
avg = list(precision_recall_fscore_support(
y_true=y_true,
y_pred=y_pred,
average='weighted'))
metrics_sum_index = ['precision', 'recall', 'f1-score', 'support']
class_report_df = pd.DataFrame(
list(metrics_summary),
index=metrics_sum_index,
columns=['Acer', 'Alnus incana', 'Betula pubescens', 'Fagus silvatica', 'Populus', 'Populus tremula', 'Quercus', 'Salix alba', 'Salix aurita', 'Salix sinerea', 'Sorbus aucuparia', 'Sorbus intermedia', 'Tilia', 'Ulmus carpinifolia', 'Ulmus glabra'])
support = class_report_df.loc['support']
total = support.sum()
class_report_df['avg / total'] = avg[:-1] + [total]
class_report_df = class_report_df.T
class_report_df['pred'] = pred_cnt
class_report_df['pred'].iloc[-1] = total
if not (y_score is None):
fpr = dict()
tpr = dict()
roc_auc = dict()
for label_it, label in enumerate(labels):
fpr[label], tpr[label], _ = roc_curve(
(y_true == label).astype(int),
y_score[:, label_it])
roc_auc[label] = auc(fpr[label], tpr[label])
if average == 'micro':
if n_classes <= 2:
fpr["avg / total"], tpr["avg / total"], _ = roc_curve(
lb.transform(y_true).ravel(),
y_score[:, 1].ravel())
else:
fpr["avg / total"], tpr["avg / total"], _ = roc_curve(
lb.transform(y_true).ravel(),
y_score.ravel())
roc_auc["avg / total"] = auc(
fpr["avg / total"],
tpr["avg / total"])
elif average == 'macro':
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([
fpr[i] for i in labels]
))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in labels:
                mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["avg / total"] = auc(fpr["macro"], tpr["macro"])
class_report_df['AUC'] = pd.Series(roc_auc)
return class_report_df
report_with_auc = class_report(
y_true=y_test,
y_pred=pred_labels,
y_score=yhat_probs)
print(report_with_auc)
# + id="iGYjS8QKrpqt" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="a50bb38a-761a-4338-c8c6-abfeca3cc81b"
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_auc_score
print(roc_auc_score(y_test, yhat_probs, average='macro', sample_weight=None, max_fpr=None, multi_class='ovo', labels=None))
fpr = {}
tpr = {}
thresh ={}
n_class = 15
for i in range(n_class):
fpr[i], tpr[i], thresh[i] = roc_curve(y_test, yhat_probs[:,i], pos_label=i)
# plotting
fig = plt.figure(figsize=(15,8))
plt.plot(fpr[0], tpr[0], linestyle='--',color='orange', label='Acer vs Rest',linewidth=3)
plt.plot(fpr[1], tpr[1], linestyle='--',color='green', label='Alnus incana vs Rest',linewidth=3)
plt.plot(fpr[2], tpr[2], linestyle='--',color='blue', label='Betula pubescens vs Rest',linewidth=3)
plt.plot(fpr[3], tpr[3], linestyle='--',color='yellow', label='Fagus silvatica vs Rest',linewidth=3)
plt.plot(fpr[4], tpr[4], linestyle='--',color='pink', label='Populus vs Rest',linewidth=3)
plt.plot(fpr[5], tpr[5], linestyle='--',color='black', label='Populus tremula vs Rest',linewidth=3)
plt.plot(fpr[6], tpr[6], linestyle='--',color='aqua', label='Quercus vs Rest',linewidth=3)
plt.plot(fpr[7], tpr[7], linestyle='--',color='purple', label='Salix alba vs Rest',linewidth=3)
plt.plot(fpr[8], tpr[8], linestyle='--',color='gray', label='Salix aurita vs Rest',linewidth=3)
plt.plot(fpr[9], tpr[9], linestyle='--',color='brown', label='Salix sinerea vs Rest',linewidth=3)
plt.plot(fpr[10], tpr[10], linestyle='--',color='gold', label='Sorbus aucuparia vs Rest',linewidth=3)
plt.plot(fpr[11], tpr[11], linestyle='--',color='silver', label='Sorbus intermedia vs Rest',linewidth=3)
plt.plot(fpr[12], tpr[12], linestyle='--',color='lime', label='Tilia vs Rest',linewidth=3)
plt.plot(fpr[13], tpr[13], linestyle='--',color='red', label='Ulmus carpinifolia vs Rest',linewidth=3)
plt.plot(fpr[14], tpr[14], linestyle='--',color='maroon', label='Ulmus glabra vs Rest',linewidth=3)
plt.plot(np.arange(0,1.01,0.01), np.arange(0,1.01,0.01), linewidth=3)
plt.title('Multiclass ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive rate')
plt.legend(loc='best')
plt.savefig('Multiclass ROC',dpi=300);
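# The one-vs-rest ROC curves above lean on sklearn; as a sanity check, binary
# AUC can also be computed directly from ranks via the Mann-Whitney U
# statistic. This sketch assumes no tied scores and uses made-up labels and
# scores, not this notebook's predictions.

```python
import numpy as np

def auc_rank(y_true, scores):
    """Binary ROC AUC via the Mann-Whitney U statistic: the probability that
    a random positive is scored above a random negative.
    Assumes no tied scores (ties would need midranks)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = np.asarray(y_true) == 1
    n_pos, n_neg = int(pos.sum()), int((~pos).sum())
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Classic textbook example: AUC is 0.75
print(auc_rank(np.array([0, 0, 1, 1]), np.array([0.1, 0.4, 0.35, 0.8])))
```

# With this notebook's variables, the per-class one-vs-rest AUCs would look
# something like: [auc_rank((y_test == c).astype(int), yhat_probs[:, c]) for c in range(15)]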
# + colab={"base_uri": "https://localhost:8080/"} id="QkQmNgTAHTZo" outputId="77dfaf23-5376-443f-d2c4-c856fbed57b9"
from sklearn.metrics import matthews_corrcoef
matthews_corrcoef(y_test, yhat_classes)
# + colab={"base_uri": "https://localhost:8080/"} id="F-wKSBt6Hl3E" outputId="5ac48ee9-3ac7-44fe-fa8b-d58bbc54c0e4"
from sklearn.metrics import jaccard_score
sc = jaccard_score(y_test, yhat_classes, average=None)
sc
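# For reference, the per-class Jaccard scores above amount to
# intersection-over-union on the sets of sample indices assigned to each
# class. A small numpy sketch with made-up labels:

```python
import numpy as np

def jaccard_per_class(y_true, y_pred, n_classes):
    """Per-class Jaccard: |samples labelled c in both| / |samples labelled c
    in either| (intersection over union on index sets)."""
    scores = []
    for c in range(n_classes):
        t, p = y_true == c, y_pred == c
        union = int(np.sum(t | p))
        scores.append(int(np.sum(t & p)) / union if union else 0.0)
    return np.array(scores)

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 1])
print(jaccard_per_class(y_true, y_pred, 3))  # [0.5, 0.333..., 0.666...]
```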
# + colab={"base_uri": "https://localhost:8080/", "height": 468} id="CHfE48CTJ5BR" outputId="11fee6cc-81f3-4ea1-b2f3-a822cf3b5b67"
import numpy as np
import matplotlib.pyplot as plt
data = {}
labelss = ['Acer', 'Alnus incana', 'Betula pubescens', 'Fagus silvatica', 'Populus', 'Populus tremula', 'Quercus', 'Salix alba', 'Salix aurita', 'Salix sinerea', 'Sorbus aucuparia', 'Sorbus intermedia', 'Tilia', 'Ulmus carpinifolia', 'Ulmus glabra']
for i in range(len(sc)):
data[labelss[i]] = sc[i]
# creating the dataset
courses = list(data.keys())
values = list(data.values())
fig = plt.figure(figsize = (10, 5))
# creating the bar plot
plt.bar(courses, values, color ='maroon',
width = 0.5)
plt.xticks(range(len(courses)), courses, rotation='vertical',fontweight ='bold',fontsize = 10)
plt.xlabel("leaf types",fontweight ='bold',fontsize = 15)
plt.ylabel("jaccard score",fontweight ='bold',fontsize = 15)
plt.title("jaccard score for all leaf labels",fontweight ='bold',fontsize = 15)
plt.legend(loc='best')
plt.show()
# + id="gpPig_JoVi1e"
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Util
# language: python
# name: util
# ---
# # Import stuff
# +
# %matplotlib notebook
import cv2
from matplotlib import pyplot as plt
import skimage.io
import numpy as np
import os
from shutil import copyfile
from tqdm.notebook import tqdm as tqdm
import scipy
import xml.etree.ElementTree as ET
# -
# # Define Paths
#Path to train/val images
path_trainval = "/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/images_and_gt_combined/"
#Path to test images
path_test = "/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/images_and_gt_test/"
#Name of the validation output folder, mustn't exist
folder1 = "darknet_val"
#Name of the training output folder, mustn't exist
folder2 = "darknet_train"
#Name of the test output folder, mustn't exist
folder3 = "darknet_test"
#Define train val split, 1/split images of all original images will be used for validation (e.g 1/10 = 10%)
split = 10
# # Load images (Train/Val)
# +
def load_images_from_folder(path, split):
os.mkdir( folder1)
os.mkdir( folder1+"/images")
os.mkdir( folder1+"/groundtruth")
os.mkdir( folder1+"/groundtruth_voc")
os.mkdir( folder2)
os.mkdir( folder2+"/images")
os.mkdir( folder2+"/groundtruth")
os.mkdir( folder2+"/groundtruth_voc")
train_txt = open(folder2+"/train.txt","w")
test_txt = open(folder1+"/val.txt","w")
voc_train_txt = open(folder2+"/trainval_voc.txt","w")
voc_val_txt = open(folder1+"/val_voc.txt","w")
#we want to iterate through darknet groundtruths
paths = os.listdir(path + "/groundtruth/")
valid = 0
non_valid = 0
i = 0
soldier_train = 0
soldier_val = 0
civilian_train = 0
civilian_val = 0
print(len(paths))
for x in tqdm(range(len(paths))):
single_path = paths[x]
#check if file is a text file
if single_path.endswith('.txt'):
#check if file is empty, if not continue
if os.stat(path+ "/groundtruth/" +single_path).st_size > 0:
#############################################################
#Note: Change .png to .jpg according to your image format
#############################################################
image = cv2.imread(path + "images/"+ single_path[:-4] + ".png")
#check if images exists
if image is None:
#print("Could not load image: " + single_path)
non_valid += 1
continue
destination= open( path+ "/groundtruth/" +single_path[:-4] + "_clean" + ".txt", "w" )
source= open( path+ "/groundtruth/" +single_path, "r" )
#############################################################
#clean darknet groundtruth from crowd and military vehicle.
#############################################################
#NOTE: Comment or remove this if you use a different dataset!
#############################################################
                good = 0  # count labels kept; files where nothing survives the filter are skipped below
for line in source:
if line[0] == "1":
destination.write( "0" + line[1:] )
good += 1
elif line[0] == "2":
destination.write( "1" + line[1:] )
good += 1
elif line[0] == "3":
destination.write( "2" + line[1:] )
good += 1
else:
continue
source.close()
destination.close()
if good == 0:
non_valid += 1
continue
#############################################################
#Cleaning over
#############################################################
valid += 1
                if i < split - 1:  # keep split-1 images for training per 1 validation image
                    #Count class occurrences of civilian and soldier
root = ET.parse(path + "/groundtruth_voc/" + single_path[:-4] + ".xml").getroot()
for child in root:
if(child.tag == "object"):
if(child.find('name').text == "soldier"):
soldier_train += 1
if(child.find('name').text == "civilian"):
civilian_train += 1
#write to darknet txt
train_txt.write("x64/Release/data/img_train/" + str(x) + ".jpg\n")
i+=1
#write to voc txt
voc_train_txt.write(str(x) + "\n")
copyfile(path+ "/groundtruth/" +single_path[:-4] + "_clean" + ".txt", folder2+"/groundtruth/" + str(x) + ".txt")
copyfile(path + "/groundtruth_voc/" + single_path[:-4] + ".xml", folder2+"/groundtruth_voc/" + str(x) + ".xml")
scipy.misc.imsave((folder2+"/images/" + str(x) + ".jpg"), image)
else:
                    #Count class occurrences of civilian and soldier
root = ET.parse(path + "/groundtruth_voc/" + single_path[:-4] + ".xml").getroot()
for child in root:
if(child.tag == "object"):
if(child.find('name').text == "soldier"):
soldier_val += 1
if(child.find('name').text == "civilian"):
civilian_val += 1
#write to darknet txt
test_txt.write("x64/Release/data/img_val/" + str(x) + ".jpg\n")
i = 0
#write to voc txt
voc_val_txt.write(str(x) + "\n")
                #copy groundtruths and save the image as JPG
                #Note: For some reason OpenCV doesn't work here (darknet can't load the images), so we use scipy.
                #scipy.misc.imsave was removed from newer SciPy releases; imageio.imwrite is a drop-in replacement there.
copyfile(path+ "/groundtruth/" +single_path[:-4] + "_clean" + ".txt", folder1+"/groundtruth/" + str(x) + ".txt")
copyfile(path + "/groundtruth_voc/" + single_path[:-4] + ".xml", folder1+"/groundtruth_voc/" + str(x) + ".xml")
scipy.misc.imsave((folder1+"/images/" + str(x) + ".jpg"), image)
else:
non_valid += 1
train_txt.close()
test_txt.close()
voc_train_txt.close()
voc_val_txt.close()
print (soldier_train, soldier_val, civilian_train, civilian_val, valid, non_valid)
return valid, non_valid
x = load_images_from_folder(path_trainval, split)
# -
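# The label-cleaning step inside the loop above (1 -> 0, 2 -> 1, 3 -> 2, drop
# everything else) can be isolated into a small testable helper. This is a
# sketch of the same remapping on in-memory lines, not a drop-in replacement
# for the file-handling code:

```python
def remap_darknet_labels(lines, mapping=None):
    """Mirror of the cleaning loop above: keep darknet classes 1/2/3, shift
    their class ids down by one, and drop every other class (crowd,
    military vehicle)."""
    if mapping is None:
        mapping = {'1': '0', '2': '1', '3': '2'}
    kept = []
    for line in lines:
        if line and line[0] in mapping:
            kept.append(mapping[line[0]] + line[1:])
    return kept

labels = ['1 0.5 0.5 0.2 0.2', '0 0.1 0.1 0.1 0.1', '3 0.4 0.4 0.3 0.3']
print(remap_darknet_labels(labels))  # class 0 dropped; 1 -> 0, 3 -> 2
```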
# # Load images (Test)
# +
def load_images_from_folder_test(path):
os.mkdir( folder3)
os.mkdir( folder3+"/images")
os.mkdir( folder3+"/groundtruth")
os.mkdir( folder3+"/groundtruth_voc")
test_txt = open(folder3+"/test.txt","w")
voc_test_txt = open(folder3+"/test_voc.txt","w")
#we want to iterate through darknet groundtruths
paths = os.listdir(path + "/groundtruth/")
valid = 0
non_valid = 0
i = 0
soldier_test = 0
civilian_test = 0
print(len(paths))
for x in tqdm(range(len(paths))):
single_path = paths[x]
#check if file is a text file
if single_path.endswith('.txt'):
#check if file is empty, if not continue
if os.stat(path+ "/groundtruth/" +single_path).st_size > 0:
#############################################################
#Note: Change .png to .jpg according to your image format
#############################################################
image = cv2.imread(path + "images/"+ single_path[:-4] + ".png")
#check if images exists
if image is None:
non_valid += 1
continue
#############################################################
#clean darknet groundtruth from crowd and military vehicle.
#############################################################
#NOTE: Comment or remove this if you use a different dataset!
#############################################################
destination= open( path+ "/groundtruth/" +single_path[:-4] + "_clean" + ".txt", "w" )
source= open( path+ "/groundtruth/" +single_path, "r" )
                good = 0  # count labels kept; files where nothing survives the filter are skipped below
for line in source:
if line[0] == "1":
destination.write( "0" + line[1:] )
good += 1
elif line[0] == "2":
destination.write( "1" + line[1:] )
good += 1
elif line[0] == "3":
destination.write( "2" + line[1:] )
good += 1
else:
continue
source.close()
destination.close()
if good == 0:
non_valid += 1
continue
#############################################################
#Cleaning over
#############################################################
valid += 1
                #Count class occurrences of civilian and soldier
root = ET.parse(path + "/groundtruth_voc/" + single_path[:-4] + ".xml").getroot()
for child in root:
if(child.tag == "object"):
if(child.find('name').text == "soldier"):
soldier_test += 1
if(child.find('name').text == "civilian"):
civilian_test += 1
#write to darknet txt
test_txt.write("x64/Release/data/img_test/" + str(x) + ".jpg\n")
i+=1
#write to voc txt
voc_test_txt.write(str(x) + "\n")
                #copy groundtruths and save the image as JPG
                #Note: OpenCV doesn't work here (darknet can't load the images), so we use scipy.
                #scipy.misc.imsave was removed from newer SciPy releases; imageio.imwrite is a drop-in replacement there.
copyfile(path+ "/groundtruth/" +single_path[:-4] + "_clean" + ".txt", folder3+"/groundtruth/" + str(x) + ".txt")
copyfile(path + "/groundtruth_voc/" + single_path[:-4] + ".xml", folder3+"/groundtruth_voc/" + str(x) + ".xml")
                scipy.misc.imsave((folder3+"/images/" + str(x) + ".jpg"), image)
else:
non_valid += 1
test_txt.close()
voc_test_txt.close()
print (soldier_test, civilian_test, valid, non_valid)
return valid, non_valid
x = load_images_from_folder_test(path_test)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
import seaborn as sns
from bokeh.io import output_notebook, show
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource
from bokeh.models.tools import HoverTool
from bokeh.models import Range1d
from math import atan2, pi, sqrt, pow
from scipy.stats import linregress
output_notebook()
# +
att_f = 'Vissim George CBD - Current Existing_Link Segment Results 02.att'
att_path = 'C:/Users/shafeeq.mollagee/OneDrive - Aurecon Group/GIPTN Traffic Modelling/04 - CBD Modelling/08 - Micro Model/03 - Final Base Model/Rev 08/Scenarios/S000001/S000001.results/%s' % (att_f)
att = pd.read_table(att_path, sep = ";", header=20, dtype = {'$LINKEVALSEGMENTEVALUATION:SIMRUN' : object})
save_path = 'D:/001_Projects/01 - GIPTN/07 - CBD Micro Model/CBD Vissim Model/%s.csv' % ("combinetest")
simrun = '339'
# NOTE: the helper functions (process_link_volumes, combine) are defined in the
# next cell; run that cell first, or this line raises NameError.
df = combine(process_link_volumes(att, simrun), simrun, save_path)
# +
def process_link_volumes (df, simrun):
    # filter to selected simulation run and summarise link evaluation results to maximum value
y = str (simrun)
df['$LINKEVALSEGMENTEVALUATION:SIMRUN'] = df['$LINKEVALSEGMENTEVALUATION:SIMRUN'].apply(str)
df['LINKEVALSEGMENT'] = df['LINKEVALSEGMENT'].str.split('-').str[0]
    df = df.groupby(by = ['$LINKEVALSEGMENTEVALUATION:SIMRUN', 'TIMEINT', 'LINKEVALSEGMENT', 'LINKEVALSEGMENT\LINK\AM_PEAK_HOUR_COUNTS'])[[
        'DENSITY(ALL)',
        'DELAYREL(ALL)',
        'SPEED(ALL)',
        'VOLUME(ALL)']].max().reset_index()
df["LINKEVALSEGMENT"] = pd.to_numeric(df["LINKEVALSEGMENT"])
df = df.loc[df['$LINKEVALSEGMENTEVALUATION:SIMRUN'] == y]
return df
def GEH (x,y):
# calculate GEH statistic
if x + y == 0:
g = 0
else:
g = sqrt(2*(pow(x-y,2))/(x+y))
return g
def combine (df, simrun, path):
# prepare the input dataframe for the Bokeh graph
df = df.rename(index=str, columns={'$LINKEVALSEGMENTEVALUATION:SIMRUN': 'SIMRUN',
'VOLUME(ALL)': 'VISVOL',
                           r'LINKEVALSEGMENT\LINK\AM_PEAK_HOUR_COUNTS': 'BALANCED_COUNT'})
# apply GEH statistic calculation to count and modelled volumes
df['GEH'] = df.apply(lambda x: GEH(x['VISVOL'], x['BALANCED_COUNT']), axis=1)
# calculate glyph colour based on GEH band
df['COLOUR'] = np.where(df['GEH']<5, '#a8c686', np.where(df['GEH']>10,'#e4572e','#f3a712'))
df.to_csv(path)
return df
#combine(count, att, 2, save_path)
def qreg(att):
# plot a quick regression curve in seaborn
sns.lmplot(x='BALANCED_COUNT', y='VISVOL', data = att)
def geh5():
x = df[df["GEH"]>5].count()[0]
y = len(df)
z = (y-x)/y
return z
def geh10():
x = df[df["GEH"]>10].count()[0]
y = len(df)
z = (y-x)/y
return z
def rsq():
return linregress(df['BALANCED_COUNT'], df['VISVOL'])
# -
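# The GEH statistic above compares modelled volumes against observed counts; a value below 5 is conventionally read as a good fit, 5-10 as marginal, and above 10 as poor (the bands used for the glyph colours). A self-contained restatement of the formula with a quick sanity check:

```python
from math import sqrt

def geh(modelled, observed):
    # GEH = sqrt(2 * (m - c)^2 / (m + c)); zero when the volumes agree
    if modelled + observed == 0:
        return 0.0
    return sqrt(2 * (modelled - observed) ** 2 / (modelled + observed))

print(geh(100, 100))             # 0.0 -> perfect match
print(round(geh(400, 300), 2))   # 5.35 -> just outside the "good fit" band
```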
qreg(df)
# +
regression = np.polyfit(df['BALANCED_COUNT'], df['VISVOL'], 1)
r_x, r_y = zip(*((i, i*regression[0] + regression[1]) for i in range(len(df))))
yDiff = r_y[len(df)-1] - r_y[0]
xDiff = r_x[len(df)-1] - r_x[0]
ang = atan2(yDiff, xDiff)
source = ColumnDataSource(df)
p = figure(width=650, height=650)
p.circle(x='BALANCED_COUNT', y='VISVOL',
source=source,
size=10, color='COLOUR', alpha=0.5)
p.title.text = 'Modelled vs Balanced Observed Counts by GEH'
p.xaxis.axis_label = 'Balanced Observed Volume'
p.yaxis.axis_label = 'Modelled Volume'
hover = HoverTool()
hover.tooltips=[
('Turn Link Number', '@LINKEVALSEGMENT'),
('Simulation Run', '@SIMRUN'),
('Time Interval', '@TIMEINT'),
('Modelled Volume', '@VISVOL'),
('Balanced Volume', '@BALANCED_COUNT'),
('GEH Statistic', '@GEH')
]
p.add_tools(hover)
p.line(r_x, r_y, color="#669bbc", line_width=1.25)
p.ray(x=[1, r_x[0]],
y=[1, r_y[0]],
length=0,
angle=[pi/4, ang],
color=["#29335c", "#669bbc"],
line_width=[2, 1.25])
p.y_range = Range1d(0, 1200)
p.x_range = Range1d(0, 1200)
show(p)
# -
geh5()
geh10()
ang
rsq()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_mxnet_p36
# language: python
# name: conda_mxnet_p36
# ---
# ## Exporting ONNX Models with MXNet
#
# The [Open Neural Network Exchange](https://onnx.ai/) (ONNX) is an open format for representing deep learning models with an extensible computation graph model, definitions of built-in operators, and standard data types. Starting with MXNet 1.3, models trained using MXNet can now be saved as ONNX models.
#
# In this example, we show how to train a model on Amazon SageMaker and save it as an ONNX model. This notebook is based on the [MXNet MNIST notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb) and the [MXNet example for exporting to ONNX](https://mxnet.incubator.apache.org/tutorials/onnx/export_mxnet_to_onnx.html).
# ### Setup
#
# First we need to define a few variables that we'll need later in the example.
# +
import boto3
from sagemaker import get_execution_role
from sagemaker.session import Session
# AWS region
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = Session().default_bucket()
# Location to save your custom code in tar.gz format.
custom_code_upload_location = 's3://{}/customcode/mxnet'.format(bucket)
# Location where results of model training are saved.
model_artifacts_location = 's3://{}/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
# -
# ### The training script
#
# The ``mnist.py`` script provides all the code we need for training and hosting a SageMaker model. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
# !pygmentize mnist.py
# ### Exporting to ONNX
#
# The important part of this script can be found in the `save` method. This is where the ONNX model is exported:
#
# ```python
# import os
#
# from mxnet.contrib import onnx as onnx_mxnet
# import numpy as np
#
# def save(model_dir, model):
# symbol_file = os.path.join(model_dir, 'model-symbol.json')
# params_file = os.path.join(model_dir, 'model-0000.params')
#
# model.symbol.save(symbol_file)
# model.save_params(params_file)
#
# data_shapes = [[dim for dim in data_desc.shape] for data_desc in model.data_shapes]
# output_path = os.path.join(model_dir, 'model.onnx')
#
# onnx_mxnet.export_model(symbol_file, params_file, data_shapes, np.float32, output_path)
# ```
#
# The last line in that method, `onnx_mxnet.export_model`, saves the model in the ONNX format. We pass the following arguments:
#
# * `symbol_file`: path to the saved input symbol file
# * `params_file`: path to the saved input params file
# * `data_shapes`: list of the input shapes
# * `np.float32`: input data type
# * `output_path`: path to save the generated ONNX file
#
# For more information, see the [MXNet Documentation](https://mxnet.incubator.apache.org/api/python/contrib/onnx.html#mxnet.contrib.onnx.mx2onnx.export_model.export_model).
# ### Training the model
#
# With the training script written to export an ONNX model, the rest of the training process looks like any other Amazon SageMaker training job using MXNet. For a more in-depth explanation of these steps, see the [MXNet MNIST notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb).
# +
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.3.0',
hyperparameters={'learning-rate': 0.1})
train_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/train'.format(region)
test_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/test'.format(region)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
# -
# ### Next steps
#
# Now that we have an ONNX model, we can deploy it to an endpoint in the same way we do in the [MXNet MNIST notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb).
#
# For examples on how to write a `model_fn` to load the ONNX model, please refer to:
# * the [MXNet ONNX Super Resolution notebook](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk/mxnet_onnx_superresolution)
# * the [MXNet documentation](https://mxnet.incubator.apache.org/api/python/contrib/onnx.html#mxnet.contrib.onnx.onnx2mx.import_model.import_model)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classifying mobile app reviews
# # Summary
#
# Below is a plot of the balanced quality metric for the tested classifiers. As we can see, the best scores are achieved by the classifier using an LSTM from the `pytorch` library. Interestingly, the simple SVM beats the dense perceptron network, which is a good sign for the SVM method -- it may also be a consequence of the vectorization, which kept only words occurring in at most 95% of documents and therefore probably dropped frequent words that say nothing about the review content.
#
# For the final solution, 58% is not a very high score, but the fact that the classifier is well balanced (57% on the balanced metric) combined with a practically identical plain precision suggests that this problem is harder than it might seem. It is also possible that similar reviews carry different scores, which would lead to exactly such results.
#
# To improve the results further, we could use a better vectorization method, in particular one producing larger vectors. More demanding architectures for scoring sentences, e.g. [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)), are also an option. Another idea is to train the neural networks for regression, so that we obtain a floating-point number and use a suitable loss function, e.g. [MSE](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html?highlight=mse%20loss#torch.nn.MSELoss).
from emd_2.barplot import plot_graph
plot_graph(scores)
# ## Running the code
#
# The code to load and run the final model is available in `validate.py`. The Python requirements are listed in `requirements.txt` and can be installed with `pip install -r requirements.txt`.
#
# The code can be used in two ways; both need a `.csv` file analogous to the input one:
#
# * `python validate.py --input_path input.csv --output_path out.txt`
# * `out = get_output(preprocess(load_data('input.csv')))`
#
# In the first, `out.txt` is a file with one prediction per line.
#
# The second version is analogous to the one presented in this report and yields the matrix in code, but we first have to import the code:
#
# `from validate import get_output, load_data, preprocess`
#
#
# # Introduction
# ## The dataset and its features
#
# The dataset consists of ratings and opinions about mobile applications; it was attached to the task as a link. To process it I used the packages listed below, with Python version `3.7.2`.
# Packages:
# * pandas
# * scikit-learn
# * torch
# ### Loading the data
#
# Loading was done with the `pandas` package, which can read data in `csv` format. Note that the data must be on the local disk -- the file link given in the task is not reachable from a public URL.
def load_data(path: str, delimiter=',', quotechar='"') -> pd.DataFrame:
return pd.read_csv(path, delimiter=delimiter, quotechar=quotechar)
# ### Data preprocessing
#
# The data contains a few dozen `NaN` values, which were removed. Additionally, the `helpful` attribute was converted into a list of two numbers so it can be operated on conveniently. The `score` attribute was not converted to a number, as there is no need -- it is effectively 5 distinct classes, since a rating cannot lie between them. No further preprocessing is required, because the data is textual and will be processed differently for each classifier.
def preprocess(data: pd.DataFrame, column_name: str = 'helpful'):
data[column_name] = data[column_name].apply(lambda x: tuple(json.loads(x)))
data['score'] = data['score'].astype(str)
return data.dropna()
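# The `helpful` transform above parses a JSON list literal (e.g. `"[3, 5]"`) into a tuple, and `score` becomes a string class label. A minimal standalone check on a hand-made frame (the values here are made up for illustration):

```python
import json
import pandas as pd

df = pd.DataFrame({'helpful': ['[3, 5]', '[0, 0]'], 'score': [5.0, 1.0]})
df['helpful'] = df['helpful'].apply(lambda x: tuple(json.loads(x)))  # "[3, 5]" -> (3, 5)
df['score'] = df['score'].astype(str)                                # 5.0 -> '5.0'
print(df['helpful'].tolist())  # [(3, 5), (0, 0)]
print(df['score'].tolist())    # ['5.0', '1.0']
```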
# ### Attribute description
#
# The data contains 9 attributes, including the classification target, named `score`. The most promising attributes for learning are `reviewText` -- the review body, `summary` -- the review summary, and `helpful` -- a rating of how helpful the review was. They are promising because they have the best chance of independently influencing the classification. Other attributes, such as `asin` -- the application identifier -- or `reviewerID` (or `reviewerName`; they appear to be the same thing), can be used to define the training and test sets -- so that the same application or the same reviewer does not appear in both, since that could lead to learning a particular application or reviewer rather than the actual scoring function. The remaining attributes, i.e. the review timestamps, were ignored in this classification.
# ### Train/test split
#
# The split was performed with the `scikit-learn` package. The training set is about 90% of the data and the test set 10%. Note that the same application (i.e. the same `asin` attribute) never appears in both sets, so that the model performs comparably well on new data and new applications. Reproducibility of the split is ensured by passing `random_state`.
# ### Quality metrics
#
# The same quality metric was used for all classifications: [`balanced_accuracy_score`](https://scikit-learn.org/stable/modules/model_evaluation.html#balanced-accuracy-score) from the `scikit-learn` library. It treats every class (so the ratings 1.0 through 5.0) the same by averaging the per-class recall, so each class counts equally regardless of how often it occurs.
def score_metric(y_true, y_pred) -> float:
return balanced_accuracy_score(y_true, y_pred)
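# Balanced accuracy is the unweighted mean of per-class recall, which is why majority-class guessing scores poorly on it. A tiny pure-Python sketch of the same computation (no scikit-learn required):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    # mean of per-class recall: every class contributes equally
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

y_true = ['5', '5', '5', '5', '1']        # imbalanced: four 5s, one 1
y_pred = ['5', '5', '5', '5', '5']        # always predict the majority class
print(balanced_accuracy(y_true, y_pred))  # (1.0 + 0.0) / 2 = 0.5
```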
# ## Classification
#
# The actual data loading is shown below, followed by the classification runs with the different classifiers on the loaded data.
# +
from emd_2.data import split_train_test
X_train, y_train, X_test, y_test = split_train_test(
preprocess(load_data(DATA_FILEPATH)))
scores = {}
# -
# ### Majority and random classifiers
#
# Experiments were run to establish a baseline classification quality. Two classifiers were trained -- the majority-class one and a random one (respecting the distribution of the target attribute `score`) -- and their quality was checked with the metric above.
dummy_frequent = DummyClassifier(strategy='most_frequent')
dummy_frequent.fit(X_train, y_train)
predictions = dummy_frequent.predict(X_test)
scores['frequent'] = (
score_metric(y_test, predictions),
precision_score(y_test, predictions, average='macro'))
score_metric(y_test, predictions)
dummy_random = DummyClassifier(strategy='stratified')
dummy_random.fit(X_train, y_train)
predictions = dummy_random.predict(X_test)
scores['random'] = (
score_metric(y_test, predictions),
precision_score(y_test, predictions, average='macro'))
score_metric(y_test, predictions)
# As we can see, the score is not great -- which makes sense, since the majority classifier only ever guesses 1 of the 5 classes (i.e. 0.2), and the random one guesses about 1/5 correctly (for each example it has roughly a 1/5 chance of guessing right, since it uses the class distribution from the set).
# ### sklearn classifiers
#
# Two classifiers were trained (with their parameters optimized using [HalvingGridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.HalvingGridSearchCV.html#sklearn-model-selection-halvinggridsearchcv)): an SVM and a Bayes classifier. For both, the data was vectorized with `TfidfVectorizer`, which is commonly used for text classification. This vectorizer turns text into a matrix (and can also skip words that occur too often or too rarely). Note that only the `reviewText` attribute was used here.
#
# Interestingly, for both classifiers the best vectorizer parameters were n-grams of 1 to 4 words. Additionally, the SVC kept only words appearing in no more than 95% of reviews.
#
# The training code is available in `emd_2/sklearn_models.py`. Below I load the best models and compute predictions on the test set.
# #### SVM
# +
with open('model/sklearn-svc.pkl', 'rb') as out:
svm = pickle.load(out)
predictions = svm.predict(X_test['reviewText'])
scores['svm'] = (
score_metric(y_test, predictions),
precision_score(y_test, predictions, average='macro'))
score_metric(y_test, predictions)
# -
# #### Bayes
# +
with open('model/sklearn-bayes.pkl', 'rb') as out:
bayes = pickle.load(out)
predictions = bayes.predict(X_test['reviewText'])
scores['bayes'] = (
score_metric(y_test, predictions),
precision_score(y_test, predictions, average='macro'))
score_metric(y_test, predictions)
# -
# ### Neural classifier
#
# The [`pytorch`](https://pytorch.org/) library was used to train neural models on a graphics card with 8 GB of memory. All texts were lowercased, and word vectors were computed with the [`spacy`](https://spacy.io/) library, where the vector for a single word token has length 96.
#
# The loss function was tailored to the problem -- it is cross-entropy multiplied by the distance from the correct class, which makes the loss larger when the prediction is far from the review score. Additionally, class weights were supplied to improve the classification of every class -- the weights are the inverse of the normalized class frequencies in the training set.
#
# Two models were trained, a basic one and one based on the LSTM architecture. The training code is available in `emd_2/neural_models.py`. Below I load the best weights and compute predictions.
# #### Basic perceptrons:
# The basic neural network has dimensions (193, 32, 5): the first layer receives 2 vectors, for the review and the summary respectively, plus the helpfulness rating of the review as a normalized value.
# +
from validate import get_output
predictions = get_output(X_test, 'model/basic-net.pt', basic=True)
scores['neural'] = (
score_metric(y_test, predictions),
precision_score(y_test, predictions, average='macro'))
score_metric(y_test, predictions)
# -
# #### LSTM:
#
# The LSTM model is more complex: it contains two LSTM networks, one for the review (with 4 layers) and one for the summary (with 2 layers). Both are bidirectional, with a hidden-layer size of 32. After the tokens (which must be padded to the maximum length) pass through the LSTMs, the hidden-state values of both networks feed a dense perceptron network of dimensions (4 * 32, 32, 5). The factor of 4 comes from using bidirectional networks.
#
# +
from validate import get_output
predictions = get_output(X_test, 'model/lstm-net.pt')
scores['lstm'] = (
score_metric(y_test, predictions),
precision_score(y_test, predictions, average='macro'))
score_metric(y_test, predictions)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import h5py
import time
import progressbar
import sys
from Qutils import *
def calc_dipol(psi, dx, n0, n1, nt, x,nstep):
"""
Calculates the Dipol Moment for a given
wave-function psi for nt timesteps. Since
it is just the mean of the location operator x,
also a numpy array containing the corresponding
discrete spatial values is required.
"""
    rho = get_prob_dens(psi, nt, nstep)
    print(rho.shape)
    print(x.shape)
    res = np.zeros(psi.shape[0])
    print("Calculating dipole moment")
    with progressbar.ProgressBar(max_value=int(psi.shape[0])) as bar:
        for i in range(0, psi.shape[0]):
            res[i] = np.trapz(rho[i,:]*x, dx=dx)
            bar.update(i)
return res
def calc_dist(x, t):
"""
Calculate the disturbance term
"""
a = 6.9314718055994524e-07
b = 0.0069314718056
t0 = 50.0
w = 1.51939
k = w/137
I = 20.0
res = np.zeros([t.size,x.size])
for i in range(0, t.size):
if t[i] < 50:
g = t[i]/t0
else:
g = 1.0
res[i] = I * np.sin(w*t[i]-k*x)*g
return res
def int_dist(vals, h):
    """
    Integrate the disturbance term over space for every timestep.
    """
res = np.zeros(vals.shape[0])
for i in range(0, vals.shape[0]):
res[i] = np.trapz(vals[i],dx=h)
return res
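# The dipole integral in `calc_dipol` above is just the expectation value <x> = ∫ ρ(x)·x dx on a grid. As a sanity check, the dipole of a symmetric density vanishes; this standalone sketch uses a plain Riemann sum instead of `np.trapz` so it runs on any NumPy version:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)      # symmetric spatial grid
dx = x[1] - x[0]
rho = np.exp(-x ** 2)                   # symmetric (Gaussian) density
rho /= np.sum(rho) * dx                 # normalise so the density integrates to 1
dipole = np.sum(rho * x) * dx           # <x>; vanishes for a symmetric rho
print(abs(dipole) < 1e-9)               # True
```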
# +
filepath = "../../build/res.h5"
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot()
nx = np.int32(1e5)
nt = np.int32(1e5)
xmax = 30.0
xmin = -xmax
tmax = 100.0
tmin = 0
nstep = 100
t = np.linspace(tmin, tmax, int(nt/nstep))
h = 0.0006
n0 = 50000
n1 = 66667
dx = 0.0006
psi = load_vals(filepath, nt, nx, nstep)
x = np.linspace(xmin, xmax, psi.shape[1])
p = calc_dipol(psi, dx, n0, n1, nt, x, nstep)
p *= 1/np.max(p)
ax1.plot(t, p, color="r",lw=2,label=r"$\lambda_1=29.98 \, nm , \; I = 0.816 \, keV$")
vals = calc_dist(x,t)
res = int_dist(vals, h)
res *= 1/np.max(res)
ax1.set_xlabel("t $(at.u.)$",size=20)
ax1.set_ylabel("Normed quantities $(arb. u.)$",size=20)
ax1.plot(t,res,"g",label=r"$\int \, V(x) \, dx$ normed")
plt.legend(loc='best',prop={'size':15})
plt.title(r"$\vec{P}=-e<x>\vec{e_x}$",size=20)
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from numpy import expand_dims
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from matplotlib import pyplot
# %matplotlib inline
img = load_img('ziffer2.jpg')
data = img_to_array(img)
samples = expand_dims(data, 0)
# Image augmentation parameters
Shift_Range = 0
Brightness_Range = 0
Rotation_Angle=0
ZoomRange = 0.8
datagen = ImageDataGenerator(width_shift_range=[-Shift_Range,Shift_Range],
height_shift_range=[-Shift_Range,Shift_Range],
brightness_range=[1-Brightness_Range,1+Brightness_Range],
zoom_range=[1-ZoomRange, 1+ZoomRange],
rotation_range=Rotation_Angle)
it = datagen.flow(samples, batch_size=1)
for i in range(9):
pyplot.subplot(330 + 1 + i)
    batch = next(it)
image = batch[0].astype('uint8')
pyplot.imshow(image)
pyplot.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.5 64-bit
# metadata:
# interpreter:
# hash: fd69f43f58546b570e94fd7eba7b65e6bcc7a5bbc4eab0408017d18902915d69
# name: python3
# ---
# +
# Load the TensorBoard notebook extension
# %load_ext tensorboard
import torch
from torch import nn,optim
from utils import Dataset,injectOutliers,ClassificationModel as Model
from metrician import MetricWriter,SimpleClf
from tqdm import tqdm
N_EPOCHS = 10
dataset = Dataset().tabular_classification
dataset = injectOutliers( dataset )
model = Model( dataset.input_size, dataset.output_size )
optimizer = optim.SGD( model.parameters(),lr=.001 )
mw = MetricWriter( SimpleClf() )
loss_fn = nn.CrossEntropyLoss()
# -
# %tensorboard --logdir runs
for e in range(N_EPOCHS):
    for x,y in dataset:
        optimizer.zero_grad()
        yhat = model( x )
        loss = mw( y,yhat, loss_fn )
        loss.backward()
        optimizer.step()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import sys
import os
REPO_PATH = os.getenv('REPO_PATH')
sys.path.insert(0, os.path.join(REPO_PATH, "py_notebooks/"))
figFolder = os.path.join(REPO_PATH, "py_notebooks/figures/")
import pandas as pd
import numpy as np
import json
from subprocess import call
import myunits
u = myunits.units()
import matplotlib.pyplot as plt
import plotting
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode()
from plotly.colors import DEFAULT_PLOTLY_COLORS
import plotly.io as pio
cm_to_in = 0.393701
single_col = 8.3
double_col = 17.1
GRAPH_CONFIG = {'showLink': False, 'displaylogo': False,
'modeBarButtonsToRemove':['sendDataToCloud']}
SAVEFIGS = True
# -
# # Figure 5
# +
folderNm = os.path.join(REPO_PATH, "data/jl_out/demand_1.0/jlInput_x2_100000.0/")
df = pd.read_csv(folderNm + "data.csv", parse_dates=True,
index_col="Date")
df.index -= pd.Timedelta('7h')  # shift from UTC to California local time
fig = plt.figure(figsize=(single_col*cm_to_in,1.2))
ax = fig.add_axes([0, 0, 1, 1])
plotting.plotHeatmap(df, "elecGrid", fillValue=0,
cbar_label="Carbon-aware grid imports (MWh)",
scaling=1e-3, fig=fig, ax=ax, ynTicks=4, xnTicks=4,
transpose=True, cbar_nTicks=5,vmax=45, vmin=20)
xlab = plt.xlabel("day")
if SAVEFIGS:
plt.savefig(figFolder+"/fig5b.eps", format='eps', dpi=200,
bbox_extra_artists=(xlab,), bbox_inches='tight')
folderNm = os.path.join(REPO_PATH, "data/jl_out/demand_1.0/jlInput_x2_0.0/")
df = pd.read_csv(folderNm + "data.csv", parse_dates=True,
index_col="Date")
df.index -= pd.Timedelta('7h')  # shift from UTC to California local time
fig = plt.figure(figsize=(single_col*cm_to_in,1.2))
ax = fig.add_axes([0, 0, 1, 1])
plotting.plotHeatmap(df, "elecGrid", fillValue=0,
cbar_label="BAU grid imports (MWh)", scaling=1e-3,
fig=fig, ax=ax, ynTicks=4, xnTicks=4, transpose=True,
cbar_nTicks=5,vmax=45, vmin=20)
xlab = plt.xlabel("day")
if SAVEFIGS:
plt.savefig(figFolder+"fig5a.eps", format='eps', dpi=200,
bbox_extra_artists=(xlab,), bbox_inches='tight')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
def is_prime(I):
    if I < 2: return False
    if I == 2: return True
    if I % 2 == 0: return False
    for i in range(3, int(I**0.5) + 1, 2):
        if I % i == 0: return False
    return True
n = int(2**64) +1
display (n)
# %time is_prime(100109100129162907)
# -
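# A standalone version of the trial division above, with explicit guards for n < 2 and n == 2 (cases a bare odd-divisor loop gets wrong), plus a quick check against the first few primes:

```python
def is_prime_checked(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    # only odd divisors up to sqrt(n) need to be tested
    for i in range(3, int(n ** 0.5) + 1, 2):
        if n % i == 0:
            return False
    return True

print([n for n in range(1, 20) if is_prime_checked(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```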
import numba
from numba import jit
is_prime_nb = numba.jit(is_prime)
# %time is_prime_nb(100109100129162907)
# +
import random
import numpy as np
from pylab import mpl,plt
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
# %matplotlib inline
rn = [(random.random() * 2 - 1, random.random() * 2 - 1) for _ in range(10000)]
rn = np.array(rn)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(1, 1, 1)
circ = plt.Circle((0, 0), radius=1, edgecolor='g', lw=2.0,
facecolor='None')
box = plt.Rectangle((-1, -1), 2, 2, edgecolor='b', alpha=0.3)
ax.add_patch(circ)
ax.add_patch(box)
plt.plot(rn[:, 0], rn[:, 1], 'r.')
plt.ylim(-1.1, 1.1)
plt.xlim(-1.1, 1.1)
# -
def mcs_pi_py(n):
circle = 0
for _ in range(n):
x, y = random.random(), random.random()
if (x ** 2 + y ** 2) ** 0.5 <= 1:
circle += 1
return (4 * circle) / n
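# The estimator above converges at the usual Monte-Carlo rate O(1/sqrt(n)). A seeded variant makes a standalone run reproducible (the tolerance in the check is deliberately generous):

```python
import math
import random

def mcs_pi(n, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:     # point falls in the quarter circle
            inside += 1
    return 4 * inside / n

print(abs(mcs_pi(100_000) - math.pi) < 0.05)  # True
```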
# +
n = int(1e9)
display(mcs_pi_py(100000))
py_num= numba.jit(mcs_pi_py)
display(py_num(int(1e8)))
display(22/7)
# -
import math
S0 = 36.
T = 1.0
r = 0.06
sigma = 0.2
np.set_printoptions(formatter={'float': lambda x: '%6.2f' % x})
def simulate_tree(M):
dt = T / M
u = math.exp(sigma * math.sqrt(dt))
d = 1 / u
S = np.zeros((M + 1, M + 1))
S[0, 0] = S0
z = 1
for t in range(1, M + 1):
display(z)
display(S)
for i in range(z):
S[i, t] = S[i, t-1] * u
S[i+1, t] = S[i, t-1] * d
z += 1
return S
np.set_printoptions(formatter={'float': lambda x: '%6.2f' % x})
simulate_tree(5)
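# Because u·d = 1, the tree above recombines: an up move followed by a down move returns to the starting level, so the middle node after an even number of steps equals S0. A compact standalone check with the same parameters, without the `display` calls:

```python
import math
import numpy as np

S0, T, sigma = 36., 1.0, 0.2

def tree(M):
    dt = T / M
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    S = np.zeros((M + 1, M + 1))
    S[0, 0] = S0
    for t in range(1, M + 1):
        for i in range(t):
            S[i, t] = S[i, t - 1] * u      # up move
            S[i + 1, t] = S[i, t - 1] * d  # down move
    return S

S = tree(4)
print(abs(S[2, 4] - S0) < 1e-9)  # True: two ups and two downs cancel
```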
# +
import numpy as np
import numpy.random as npr
import math
from pylab import mpl,plt
S0= 100
r= 0.05
sigma = 0.025
T = 2.0
I = 1000
ST1 = S0 * np.exp((r - 0.5 * sigma ** 2) * T +
sigma * math.sqrt(T) * npr.standard_normal(I))
plt.figure(figsize=(10,6))
plt.plot(ST1[:], lw=1.5)
plt.xlabel('time')
plt.ylabel('index level')
# -
# $\int_a^b f(x) = F(b) - F(a)$
# +
sigma=0.01
I = 10000
M = 50
dt = T / M
S = np.zeros((M + 1, I))
S[0] = S0
for t in range(1, M + 1):
S[t] = S[t - 1] * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * npr.standard_normal(I))
plt.figure(figsize=(10,6))
plt.plot(S[:,:10], lw=1.5)
plt.xlabel('time')
plt.ylabel('index level')
# +
x0 = 1.35
kappa = 1.35
theta = 1.35
sigma = 0.6
I = 10000
M = 2000
dt = T / M
def srd_exact():
x = np.zeros((M + 1, I))
x[0] = x0
for t in range(1, M + 1):
df = 4 * theta * kappa / sigma ** 2
c = (sigma ** 2 * (1 - np.exp(-kappa * dt))) / (4 * kappa)
nc = np.exp(-kappa * dt) / c * x[t - 1]
x[t] = c * npr.noncentral_chisquare(df, nc, size=I)
return x
x2 = srd_exact()
plt.figure(figsize=(10,6))
plt.plot(x2[:,:5], lw=1.5)
plt.xlabel('time')
plt.ylabel('index level')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# ## Sampling $\pi$ : the circle in a square
#
# Let us try to throw pebbles into a square (with coordinates x in [-1, 1] and y in [-1, 1]): they will fall inside the unit circle with probability $\pi/4$. This gives us a way to compute $\pi$ numerically with a Monte-Carlo simulation.
#
# The following Python code is doing just that!
# +
import random #Library of random numbers
import matplotlib.pyplot as plt #Ploting Library
# %matplotlib inline
n_trials = 4000  # Number of thrown pebbles
num_inside = 0
data_inside_x = []
data_inside_y = []
data_outside_x = []
data_outside_y = []
for i in range(n_trials):
x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
if x**2 + y**2 < 1.0: #Testing if we are inside the circle
num_inside += 1
data_inside_x.append(x)
data_inside_y.append(y)
else:
data_outside_x.append(x)
data_outside_y.append(y)
print "pi is approximatily", 4.0 * num_inside / float(n_trials)
plt.plot(data_inside_x,data_inside_y,'.')
plt.plot(data_outside_x,data_outside_y,'x')
plt.axes().set_aspect('equal')
plt.show()
# -
# This is cute, but this is just one run. Let's see what happens when we repeat many runs, and how our estimate for $\pi$ fluctuates.
# +
import random
def direct_pi(N):
num_inside = 0
for i in range(N):
x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
if x ** 2 + y ** 2 < 1.0:
num_inside += 1
return num_inside
# -
# Perhaps things will be more interesting if we plot a histogram?
# +
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# Let us do more runs!
n_runs = 1000
n_trials = 4000
list_pi = []
for run in range(n_runs):
list_pi.append(4*direct_pi(n_trials) / float(n_trials))
# prepare the histogram of the data
n, bins, patches = plt.hist(list_pi, 50, density=True, facecolor='green', alpha=0.75)
plt.show()
# -
# The most likely value seems to be close to $\pi$. This is expected since we have an unbiased estimator, but there is some dispersion around this "true" value. If we repeat this plot with 20000 trials instead of 4000, things are much nicer (but it takes longer!)
# +
n_runs = 1000
n_trials = 20000
list_pi = []
for run in range(n_runs):
list_pi.append(4*direct_pi(n_trials) / float(n_trials))
# prepare the histogram of the data
n, bins, patches = plt.hist(list_pi, 50, density=True, facecolor='green', alpha=0.75)
plt.show()
# -
# See how the new distribution is less dispersed? We could go on and bound the probability of errors (for instance using the Chernoff bound) but instead, let us move to Markov-chain Monte Carlo sampling.
#
# We now implement the MCMC strategy where we have one single path, and each step is typically of length 0.1. Of course, if a move would take us outside the square, we have to stay where we are AND count it as a trial anyway.
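# The key rule in the Markov-chain version is that a rejected move leaves the walker in place but still counts as a trial; otherwise the samples would not be uniform over the square. A compact, seeded sketch of that rule:

```python
import random

def markov_pi(n_trials, step=0.1, seed=0):
    rng = random.Random(seed)
    x, y = 1.0, 1.0                  # start in a corner of the square
    inside = 0
    for _ in range(n_trials):
        dx = rng.uniform(-step, step)
        dy = rng.uniform(-step, step)
        if abs(x + dx) < 1.0 and abs(y + dy) < 1.0:
            x, y = x + dx, y + dy    # accept: the move stays in the square
        # a rejected move keeps (x, y) unchanged but is still a trial
        if x * x + y * y < 1.0:
            inside += 1
    return 4.0 * inside / n_trials

est = markov_pi(10_000)
print(0.0 <= est <= 4.0)  # True -- est is a (crude) estimate of pi
```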
# +
import random
import matplotlib.pyplot as plt #Ploting Library
x, y = 1.0, 1.0 # Initial position, in a corner of the square
step_size = 0.1
n_trials = 4000
n_inside = 0
data_inside_x = []
data_inside_y = []
data_outside_x = []
data_outside_y = []
for i in range(n_trials):
#now we move randomly with our step size
del_x, del_y = random.uniform(-step_size, step_size), random.uniform(-step_size, step_size)
if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
x, y = x + del_x, y + del_y
#if we are still inside the square, we move
if x**2 + y**2 < 1.0:
n_inside += 1
#if we are still inside the circle, we count it
data_inside_x.append(x)
data_inside_y.append(y)
else:
data_outside_x.append(x)
data_outside_y.append(y)
print "pi is approximatily", 4.0 * n_inside / float(n_trials)
plt.plot(data_inside_x,data_inside_y,'.')
plt.plot(data_outside_x,data_outside_y,'x')
plt.axes().set_aspect('equal')
plt.show()
# -
# It does not quite look like we are sampling the square uniformly yet; if you are not convinced, try running it longer!
# +
n_trials = 25000
n_inside = 0
data_inside_x = []
data_inside_y = []
data_outside_x = []
data_outside_y = []
for i in range(n_trials):
#now we move randomly with our step size
del_x, del_y = random.uniform(-step_size, step_size), random.uniform(-step_size, step_size)
if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
x, y = x + del_x, y + del_y
#if we are still inside the square, we move
if x**2 + y**2 < 1.0:
n_inside += 1
#if we are still inside the circle, we count it
data_inside_x.append(x)
data_inside_y.append(y)
else:
data_outside_x.append(x)
data_outside_y.append(y)
print "pi is approximatily", 4.0 * n_inside / float(n_trials)
plt.plot(data_inside_x,data_inside_y,'.')
plt.plot(data_outside_x,data_outside_y,'x')
plt.axes().set_aspect('equal')
plt.show()
# -
# Looks better, doesn't it?
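# The loop above can be wrapped in a reusable function to study how the step size affects convergence. This is a sketch under our own naming (`mcmc_pi` is not from the original); it reproduces the same walk with a seeded generator so the result is repeatable.

```python
import random

def mcmc_pi(n_trials, step_size=0.1, seed=0):
    """Estimate pi with the same Markov-chain walk as above."""
    rng = random.Random(seed)
    x, y = 1.0, 1.0              # start at the corner of the square
    n_inside = 0
    for _ in range(n_trials):
        dx = rng.uniform(-step_size, step_size)
        dy = rng.uniform(-step_size, step_size)
        if abs(x + dx) < 1.0 and abs(y + dy) < 1.0:
            x, y = x + dx, y + dy    # accept the move only if we stay in the square
        if x * x + y * y < 1.0:      # every trial is counted, moved or not
            n_inside += 1
    return 4.0 * n_inside / n_trials

# Larger steps decorrelate the chain faster, so the estimate
# typically converges with fewer trials.
for step in (0.1, 0.5, 1.0):
    print(step, mcmc_pi(100000, step_size=step))
```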
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
import sagemaker
sess = sagemaker.session.Session()
bucket = sess.default_bucket()
training_input_s3_uri = sess.upload_data(path='./data/2-1/training/', bucket=bucket, key_prefix='qiita-sagemaker-training/2-1/training')
validation_input_s3_uri = sess.upload_data(path='./data/2-1/validation/', bucket=bucket, key_prefix='qiita-sagemaker-training/2-1/validation')
test_input_s3_uri = sess.upload_data(path='./data/2-1/test/', bucket=bucket, key_prefix='qiita-sagemaker-training/2-1/test')
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(
entry_point='check.py',
source_dir = './src/2-1',
py_version='py38',
framework_version='2.6.0',
instance_count=1,
instance_type='ml.m5.xlarge',
role=sagemaker.get_execution_role(),
hyperparameters={ # the hyperparameters below are dummies
'first-num':5,
'second-num':2,
'operator':'m',
'sagemaker_s3_output':f's3://{sagemaker.session.Session().default_bucket()}/intermediate'
},
volume_size=50, # request a 50 GB volume
)
estimator.fit({
'training':training_input_s3_uri,
'validation':validation_input_s3_uri,
'test': test_input_s3_uri
})
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(
entry_point='check.py',
source_dir = './src/2-1',
py_version='py38',
framework_version='2.6.0',
instance_count=1,
instance_type='ml.g4dn.xlarge',
role=sagemaker.get_execution_role(),
hyperparameters={ # the hyperparameters below are dummies
'first-num':5,
'second-num':2,
'operator':'m'
},
volume_size=50, # request a 50 GB volume
)
estimator.fit()
from sagemaker.tensorflow import TensorFlow
git_config = {'repo': 'https://github.com/kazuhitogo/qiita-sagemaker-training'}
estimator = TensorFlow(
entry_point='check.py',
source_dir = './src/2-1',
git_config=git_config,
py_version='py38',
framework_version='2.6.0',
instance_count=1,
instance_type='ml.m5.xlarge',
role=sagemaker.get_execution_role(),
hyperparameters={ # the hyperparameters below are dummies
'first-num':5,
'second-num':2,
'operator':'m',
'sagemaker_s3_output':f's3://{sagemaker.session.Session().default_bucket()}/intermediate'
},
volume_size=50, # request a 50 GB volume
)
estimator.fit({
'training':training_input_s3_uri,
'validation':validation_input_s3_uri,
'test': test_input_s3_uri
})
# reuse the container image and code archive from the previous training job
image_uri = estimator.latest_training_job.describe()['AlgorithmSpecification']['TrainingImage']
source_tar_gz = estimator.latest_training_job.describe()['HyperParameters']['sagemaker_submit_directory']
estimator = sagemaker.estimator.Estimator(
image_uri=image_uri,
role=sagemaker.get_execution_role(),
hyperparameters={
'first-num':5,
'second-num':2,
'operator':'m',
'sagemaker_s3_output':f's3://{sagemaker.session.Session().default_bucket()}/intermediate',
'sagemaker_program' : 'check.py',
'sagemaker_submit_directory' : source_tar_gz
},
instance_count=1,
instance_type='ml.g4dn.xlarge',
)
estimator.fit({
'training':training_input_s3_uri,
'validation':validation_input_s3_uri,
'test': test_input_s3_uri
})
# +
import boto3
sm_client = boto3.client('sagemaker')
# get the list of training jobs
print(sm_client.list_training_jobs())
# get the details of the most recent training job
print(sm_client.describe_training_job(TrainingJobName=sm_client.list_training_jobs()['TrainingJobSummaries'][0]['TrainingJobName']))
# -
sm_client.list_training_jobs()['TrainingJobSummaries'][0]['TrainingJobName']
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# # Find out what may be parallelized by looking at a single file
# The dataset has files named `votes_{k}.csv` with `k` starting at 0 and going up to 60.
# Let us read a single file; we will use it to extract schema information.
small_data = pd.read_csv("data/votes_0.csv")
small_data.info()
small_data.head()
small_data.describe()
# # Reload data
#
# After we've learned a bit, we can be smart about how we load data.
small_data = pd.read_csv("data/votes_0.csv",
parse_dates=["timestamp"],
dtype={"region": "category",
"vote": "category"})
small_data.info()
# # Let's count the votes
#
# At least for this small file; working out what needs to be done on a small sample will be useful later, when we work at scale.
#
# - Figure out the number of votes per candidate per region
small_data["result"] = 1
count_per_region = (
small_data
.groupby(["region", "vote"])
.result.agg("count")
.reset_index()
)
count_per_region.head()
# - Figure out the candidate who won in each region
# +
results = list()
for region, df in count_per_region.groupby("region"):
results.append(
{"region": region,
"winner": df.set_index("vote").result.idxmax()}
)
winner_per_region = pd.DataFrame(results)
# -
winner_per_region.head()
# - After determining the winner in each region, combine the winners with each region's delegate count
delegates_per_region = pd.read_csv("data/region_delegates.csv")
delegates_per_region.head()
winner_region_delegates = pd.merge(winner_per_region, delegates_per_region, on="region")
winner_region_delegates.head()
# - Aggregate to find the total number of delegates per candidate. The candidate with the most delegates wins
winner_region_delegates.groupby("winner").delegates.sum().sort_values(ascending=False)
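# The per-region tally worked out above can be captured as a single pure function, which is handy when the same logic has to run over each of the 61 files. This is a sketch (the function name and the toy data below are ours, not part of the dataset):

```python
import pandas as pd

def winners(votes: pd.DataFrame) -> pd.DataFrame:
    """Return one row per region with the winning candidate."""
    counts = (
        votes.assign(result=1)
             .groupby(["region", "vote"])
             .result.count()
             .reset_index()
    )
    rows = [
        {"region": region, "winner": df.set_index("vote").result.idxmax()}
        for region, df in counts.groupby("region")
    ]
    return pd.DataFrame(rows)

toy = pd.DataFrame({
    "region": ["north", "north", "north", "south", "south"],
    "vote":   ["alice", "alice", "bob",   "bob",   "bob"],
})
print(winners(toy))  # north -> alice, south -> bob
```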
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Peeking Inside a Neural Network
#
# Welcome to the second part of the practical section of module 5.4 on neural networks. In this part we'll try to take a peek inside a neural network and see how it handles the data. We probably won't be able to understand much of the internals of the network, but we might get an intuition about how it works. We'll train our neural network on the [MNIST handwritten digits dataset](http://yann.lecun.com/exdb/mnist/): one of the most widely used datasets in image processing and machine learning. The dataset consists of a number of 28x28 images and their corresponding labels. Each image is a handwritten digit and the label is the value of that digit. We'll train our network to classify these images, i.e. to recognize the digits, then we'll delve inside the network and try to see what's going on!
#
# As noted before in the first part, the current stable version of scikit-learn doesn't provide any APIs for neural networks, so we'll be using the third-party library [scikit-neuralnetwork](https://scikit-neuralnetwork.readthedocs.io/en/latest/).
# +
import pandas as pd
import numpy as np
from sknn.mlp import Classifier, Layer
from sklearn.datasets import fetch_mldata
from sklearn.utils import shuffle
# %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm
plt.rcParams['figure.figsize'] = (10, 10)
# -
# # Preparing the data
#
# To save us the headache of downloading the MNIST images and loading them into memory to build the dataset, we'll use one of scikit-learn's built-in methods in the **datasets** package to automatically retrieve the dataset and load it into memory: the **fetch_mldata** method. This method retrieves a dataset by name from the [mldata.org](http://mldata.org/) data repository.
# +
# you may have to wait a bit for the data to be downloaded
mnist = fetch_mldata("MNIST original")
print "Data min. value: %d" % (mnist.data.min())
print "Data max. value: %d" % (mnist.data.max())
print "Data instance size: %d" % (mnist.data.shape[1])
# -
# We can see that the pixel values are between 0 and 255. For an efficient training experience, we should scale the data to be between 0 and 1. This is a classic case of **feature scaling**, which makes the training process faster. In this case we do not need a sophisticated scaler like we used before; all we need to do is divide the data by 255.0.
# +
X, y = mnist.data / 255., mnist.target
print "Data min. value: %.2f" % (X.min())
print "Data max. value: %.2f" % (X.max())
# -
# Now we can view some of the images to see what we're dealing with. We can show an image from the raw pixel data using **matplotlib**'s **matshow** method, which takes a matrix and displays it as an image. Note that each image is represented as a flat 1D array of size 784; we'll need to reshape it to a 2D 28x28 array to be able to display it.
# +
instances = X[0:9]
instances = instances.reshape(-1, 28, 28)
# here we create a 3x3 grid of figures (9 plots)
fig, axes = plt.subplots(3, 3)
axes = axes.ravel() # flat the 3x3 array of references to plots into a 1d array of size 9
for i,instance in enumerate(instances):
axes[i].matshow(instance, cmap=cm.gray)
plt.show()
# -
# From this visualization of the first 9 images, it appears that the data is ordered by the value of the digit. This ordering might result in a biased test set, so we need to shuffle the data to ensure statistically unbiased sets.
# +
# random_state is a seed value to ensure we get the same shuffling each run
X, y = shuffle(X, y, random_state=0)
instances = X[0:9]
instances = instances.reshape(-1, 28, 28)
# here we create a 3x3 grid of figures (9 plots)
fig, axes = plt.subplots(3, 3)
axes = axes.ravel() # flat the 3x3 array of references to plots into a 1d array of size 9
for i,instance in enumerate(instances):
axes[i].matshow(instance, cmap=cm.gray)
plt.show()
# -
# Now the data is shuffled, and thanks to the shuffling we also get a more diverse look at the data images. We're one step away from building our neural network; we just need to split our dataset into training and testing parts and we're ready!
# +
dataset_size = X.shape[0]
train_size = np.floor(dataset_size * 0.7).astype(int)
X_train, y_train = X[:train_size], y[:train_size]
X_test, y_test = X[train_size:], y[train_size:]
# -
# # Building and Training the Neural Network
#
# In the same way described in the previous part, we'll build a small neural network with one hidden layer of 64 neurons and one softmax output layer. We're keeping it small so that we can peek inside it later, visualize the weights, and try to get an intuition of what they do.
# +
# the use of batch_size in the following specifies that instead of
# training the network on all the data instances every iteration
# we train on a random sample of size 128 from the data
# this is called stochastic gradient descent (SGD) and it's much faster
# than the regular gradient descent without much loss in accuracy
nn = Classifier(
layers = [
Layer("Rectifier", units=64),
Layer("Softmax")
],
verbose = True,
n_iter=100,
random_state=1,
batch_size=128
)
# This may take some time, be patient!
nn.fit(X_train, y_train)
# -
# Now we evaluate our model's accuracy using the mean accuracy metric:
# $$\frac{1}{n}\sum_{i = 1}^{n} 1(y_i = \widehat{y_i}) \hspace{0.5em} \text{where } \widehat{y_i} \text{ is the predicted value}$$
#
# $$1(P) =
# \left\{
# \begin{matrix}
# 1 \hspace{1.5em} \text{if } P \text{ is true}\\
# 0 \hspace{1.5em} \text{otherwise}
# \end{matrix}
# \right.$$
# We'll see that the model is pretty accurate and we didn't lose much with using SGD.
# +
predictions = nn.predict(X_train)
predictions = predictions.reshape((-1,))
mean_accuracy = np.mean(predictions == y_train)
print "Mean Accuracy: %.2f" % (mean_accuracy)
# -
# # Peeking Inside The Network
#
# Now we attempt to take a look inside the network and see what it's doing to the inputs. Our first step is to visualize the weights of each layer and the transformation they perform on the input data.
#
# The hidden layer has 64 neurons and takes as input our flat image of size 784. So the weights of the first layer form a 784x64 matrix, or a 28x28x64 *rank-3 tensor* (you can think of a **rank-k tensor** as a multidimensional array of k dimensions; a matrix is a rank-2 tensor). We then visualize each neuron's weights as a 28x28 image just as we did with the input images.
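# The reshape between the flat 784-vector view and the 28x28 image view is lossless. A minimal sketch with random weights standing in for the trained ones (the real matrix comes from `nn.get_parameters()`):

```python
import numpy as np

# stand-in for the trained first-layer weights
rng = np.random.RandomState(0)
hidden0 = rng.randn(784, 64)

# column i holds the weights of neuron i; reshape it into a 28x28 "filter" image
filters = hidden0.T.reshape(64, 28, 28)
print(filters.shape)  # (64, 28, 28)

# the reshape is lossless: flattening a filter recovers the original column
assert np.array_equal(filters[3].ravel(), hidden0[:, 3])
```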
# +
hidden0 = nn.get_parameters()[0][0]
vmin, vmax = hidden0.min(), hidden0.max()
fig, axes = plt.subplots(8, 8)
axes = axes.ravel()
for i, axis in enumerate(axes):
weights = hidden0[:,i].reshape(28, 28)
axis.matshow(weights, cmap=cm.gray)
plt.show()
# -
# This looks like a useless bunch of static, but if we look carefully at the visualization of the weights we'll be able to see some **spatial structures (dents and bulges)** in the static. What do these spatial structures represent? I have no clue! But it seems that the weights act as filters on the input image, retrieving spatial characteristics that help define each class.
#
# This gives us a general intuition about how neural networks work: as the input propagates through the layers, the network learns and extracts relevant, very complex and nonlinear features that aid in identifying the data instance in question. The more hidden layers there are, the richer and more complicated the features learned and extracted from the input, and the more accurate the performance of the network.
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cobra
import d3flux as d3f
from d3flux.core.flux_layouts import render_model
from jinja2 import Template
custom_css = \
"""
{% for item in items %}
text#{{ item }} {
font-weight: 900;
}
{% endfor %}
text.cofactor {
fill: #778899;
}
"""
css = Template(custom_css).render(items=['succ_c', 'ac_c', 'etoh_c', 'for_c', 'co2_c', 'lac__D_c'])
# +
model = cobra.io.load_json_model('asuc_v1.json')
# model.add_reaction(ecoli.reactions.ALDD2x)
# d3f.update_cofactors(model, ['nadh_c'])
model.reactions.EX_glc_e.lower_bound = -1
model.reactions.EX_xyl_e.lower_bound = -6
import itertools
for obj in itertools.chain(model.reactions, model.metabolites):
try:
del obj.notes['map_info']['flux']
except KeyError:
pass
model.reactions.PPCK.notes['map_info']['group'] = 2
model.reactions.MDH.notes['map_info']['group'] = 2
model.reactions.FUM.notes['map_info']['group'] = 2
model.reactions.PFL.notes['map_info']['group'] = 'ko'
model.reactions.ACKr.notes['map_info']['group'] = 'ko'
# Just a single example, but many metabolite names could be better aligned
model.metabolites.get_by_id('f6p_c').notes['map_info']['align'] = 'center left'
from cobra.flux_analysis import pfba
pfba(model)
# -
html = d3f.flux_map(model, custom_css=css, figsize=(520, 660), default_flux_width=2.5, fontsize=14)
html
len(model.reactions)
model.reactions[0]
str(model.reactions[0])
model.reactions[0].reactants
model.objective
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import statements
import keras
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils.np_utils import to_categorical
# Loading the training set and test set
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Normalizing the data
X_train = (X_train-X_train.mean())/(X_train.max()-X_train.min())
X_test = (X_test-X_test.mean())/(X_test.max()-X_test.min())
# Reshaping the data
X_train = X_train.reshape(X_train.shape[0],784)
X_test = X_test.reshape(X_test.shape[0],784)
# One-hot encoding the labels
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
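# What `to_categorical` does can be written in a few lines of plain NumPy; this sketch (our own helper, not part of keras) makes the one-hot layout explicit:

```python
import numpy as np

def one_hot(labels, num_classes=None):
    """Plain-NumPy equivalent of keras' to_categorical."""
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0  # one 1.0 per row, at the label's index
    return out

print(one_hot([0, 2, 1]))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```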
# Initializing the model
model = Sequential()
# Adding a hidden layer of 512 size
model.add(Dense(512, input_dim=784, activation='relu'))
# Adding an output layer of 10 size
model.add(Dense(10, activation='softmax'))
# Compiling the model
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
# Fitting the model
model.fit(X_train, y_train, epochs=10, batch_size=32)
# Evaluating the model
loss, acc = model.evaluate(X_test, y_test)
print("Loss = ", loss)
print("Accuracy = ", acc)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + slideshow={"slide_type": "skip"}
from __future__ import print_function
# + [markdown] nbsphinx="hidden" slideshow={"slide_type": "skip"}
# This notebook is part of the $\omega radlib$ documentation: http://wradlib.org/wradlib-docs.
#
# Copyright (c) 2018, $\omega radlib$ developers.
# Distributed under the MIT License. See LICENSE.txt for more info.
# + [markdown] slideshow={"slide_type": "slide"}
# # NumPy: manipulating numerical data
# + [markdown] slideshow={"slide_type": "fragment"}
# *NumPy* is the key Python package for creating and manipulating (multi-dimensional) numerical arrays. *NumPy* arrays are also the most important data objects in $\omega radlib$. It has become a convention to import *NumPy* as follows:
# + slideshow={"slide_type": "fragment"}
import numpy as np
# + [markdown] slideshow={"slide_type": "slide"}
# ## Creating and inspecting NumPy arrays
# + [markdown] slideshow={"slide_type": "fragment"}
# The `ndarray`, a numerical array, is the most important data type in NumPy.
# + slideshow={"slide_type": "fragment"}
a = np.array([0, 1, 2, 3])
print(a)
print(type(a))
# + [markdown] slideshow={"slide_type": "slide"}
# Inspect the `shape` (i.e. the number and size of the dimensions of an array).
# + slideshow={"slide_type": "fragment"}
print(a.shape)
# This creates a 2-dimensional array
a2 = np.array([[0, 1], [2, 3]])
print(a2.shape)
# + [markdown] slideshow={"slide_type": "slide"}
# There are various ways to create arrays: from lists (as above), using convenience functions, or from file.
# + slideshow={"slide_type": "fragment"}
# From lists
a = np.array([0, 1, 2, 3])
print("a looks like:\n%r\n" % a)
# Convenience functions
b = np.ones( shape=(2,3) )
print("b looks like:\n%r\nand has shape %r\n" % (b, b.shape) )
c = np.zeros( shape=(2,1) )
print("c looks like:\n%r\nand has shape %r\n" % (c, c.shape) )
d = np.arange(2,10)
print("d looks like:\n%r\nand has shape %r\n" % (d, d.shape) )
e = np.linspace(0,10,5)
print("e looks like:\n%r\nand has shape %r\n" % (e, e.shape) )
# + [markdown] slideshow={"slide_type": "slide"}
# You can change the shape of an array without changing its size.
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
b = np.reshape(a, (2,5))
print("Array a has shape %r.\nArray b has shape %r" % (a.shape, b.shape))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Indexing and slicing
# + [markdown] slideshow={"slide_type": "fragment"}
# You can index an `ndarray` in the same way as a `list`:
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
print(a)
print(a[0], a[2], a[-1])
# + [markdown] slideshow={"slide_type": "slide"}
# Just follow your intuition for indexing multi-dimensional arrays:
# + slideshow={"slide_type": "fragment"}
a = np.diag(np.arange(3))
print(a, end="\n\n")
print("Second row, second column: %r\n" % a[1, 1])
# Setting an array item
a[2, 1] = 10 # third row, second column
print(a, end="\n\n")
# Accessing a full row
print("Second row:\n%r" % a[1])
# + [markdown] slideshow={"slide_type": "slide"}
# Slicing is just a way to access multiple array items at once:
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
print(a, end="\n\n")
print("1st:", a[2:9])
print("2nd:", a[2:])
print("3rd:", a[:5])
print("4th:", a[2:9:3]) # [start:end:step]
print("5th:", a[a>5]) # using a mask
# + [markdown] slideshow={"slide_type": "slide"}
# Get further info on NumPy arrays [here](http://www.scipy-lectures.org/intro/numpy/array_object.html#indexing-and-slicing)!
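# One pattern not shown above is slicing along several axes at once; the same `[start:end:step]` syntax applies per axis:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
print(a[0:2, 1:3])  # rows 0-1, columns 1-2 -> [[1 2], [5 6]]
print(a[:, -1])     # the last column -> [ 3  7 11]
```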
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.4 64-bit (''.venv'': venv)'
# name: python3
# ---
# +
import pandas as pd
import seaborn as sns
import numpy as np
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# -
df = pd.read_csv('data/drug_consumption_clean.csv', index_col=0)
df.head()
# removing columns for individual drugs
drug_columns = ['Amphet', 'Amyl', 'Benzos', 'Coke', 'Crack', 'Ecstasy', 'Heroin',
'Ketamine', 'Legalh', 'LSD', 'Meth', 'Shrooms', 'Semer', 'VSA']
df = df.drop(drug_columns, axis = 1)
df.head()
df.corr()
sns.heatmap(df.corr(), annot=True);
sns.pairplot(df, kind='reg')
# +
# Define non-categorical variables
non_cat_vars = ['Nscore', 'Escore', 'Oscore', 'Ascore', 'Cscore', 'Impulsive', 'SS']
# Define categorical variables
cat_vars = [col for col in df.columns.to_list() if col not in non_cat_vars]
# Cast variable type for categorical variables
for feat in cat_vars:
df[feat] = df[feat].astype('category')
df.info()
# -
# replacing categorical features with dummy columns
for i in df.columns:
if i != 'User':
if df[i].dtype == 'category':
df = pd.get_dummies(df, columns=[i], prefix=i, prefix_sep="_", drop_first=True)
df.info()
# +
# Starting applying the ML models
from sklearn.model_selection import train_test_split
RSEED = 42
y=df['User']
X=df.drop('User', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=RSEED)
# -
X_train.shape, X_test.shape
X_train.info()
# +
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB, CategoricalNB, ComplementNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import ExtraTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, recall_score, precision_score, f1_score, fbeta_score, classification_report
from sklearn.metrics import plot_roc_curve
import matplotlib.pyplot as plt
# +
lr_model = LogisticRegression(random_state=RSEED)
nb_gaus_model = GaussianNB()
nb_catg_model = CategoricalNB() #failed! does not accept negative values
nb_comp_model = ComplementNB() #failed! does not accept negative values
knn_model = KNeighborsClassifier()
svm_model = LinearSVC(max_iter=2000, random_state=RSEED)
svc_model = SVC(random_state=RSEED)
dt_model = DecisionTreeClassifier(random_state=RSEED)
et_model = ExtraTreeClassifier(random_state=RSEED)
rf_model = RandomForestClassifier(random_state=RSEED)
ada_model = AdaBoostClassifier(random_state=RSEED)
xgb_model = XGBClassifier() #failed!
# -
from warnings import simplefilter
# ignore all future and user warnings
simplefilter(action='ignore', category=FutureWarning)
simplefilter(action='ignore', category=UserWarning)
# +
beta = 2
models = [ lr_model, nb_gaus_model, knn_model, svm_model,
svc_model, dt_model, et_model, rf_model, ada_model]
model_names = []
f_beta_results = []
f_beta_train_results = []
for model in models:
print("="*50)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
cm = confusion_matrix(y_test,y_pred)
Recall = recall_score(y_test,y_pred)
Precision = precision_score(y_test,y_pred)
Accuracy = accuracy_score(y_test, y_pred)
F_score = f1_score(y_test,y_pred)
F_beta = fbeta_score(y_test, y_pred, beta = beta)
f_beta_results.append(F_beta)
y_pred_train = model.predict(X_train)
F_beta_train = fbeta_score(y_train, y_pred_train, beta = beta)
f_beta_train_results.append(F_beta_train)
model_names.append(model.__class__.__name__)
print(cm)
#print(classification_report(y_test,y_pred))
print(f"Accuracy = {round(Accuracy,2)}")
print(f"Recall = {round(Recall,2)}")
print(f"Precision = {round(Precision,2)}")
print(f"f1_score = {round(F_score,2)}")
print(f"f_beta = {round(F_beta,2)}")
for model in models:
print("="*50, "\n", model.__class__.__name__, "\n", model.get_params())
y_pred = model.predict(X_test)
roc = plot_roc_curve(model, X_test, y_test, name = "")
plt.show()
# for model, f_beta, f_beta_train in zip(model_names, f_beta_results, f_beta_train_results):
#     print(model, "F_beta Score: {:.3f}".format(f_beta), "F_beta_train Score: {:.3f}".format(f_beta_train))
for model, f_beta in zip(model_names, f_beta_results):
print(model, "F_beta Score: {:.3f}".format(f_beta))
# -
# # F_beta values
# ### Log Reg : 0.71
# ### GaussianNB : 0.37
# ### KNeighborsClassifier : 0.69
# ### LinearSVC : 0.72
# ### SVC : 0.73
# ### DecisionTree : 0.63
# ### ExtraTree : 0.63
# ### RandomForest : 0.73
# ### Adaboost : 0.72
# Naive Bayes assumes that predictors are conditionally independent => this might not be the case here (SS and Impulsive for example). Could this explain the bad results from NB?
# # Create a scorer
# +
from sklearn.metrics import make_scorer
def min_cost_scorer(y_test, y_pred, test_cost = 100, training_cost = 10000, prob_clean = 0.5, user_cost = 50000):
# get confusion matrix from y_test and y_predict
cm = confusion_matrix(y_true=y_test, y_pred=y_pred)
# cost of students taking the test
test_costs = test_cost * sum(sum(cm))
# cost of a student falsely identified to not consume drugs
FN_cost = user_cost * cm[1][0]
# cost of a student having training due to falsely being identified as a drug user
FP_cost = training_cost * cm[0][1]
# cost of a student having training due to correctly being identified as a potential drug user considering
# that the training is prob_clean % effective
TP_cost = training_cost * cm[1][1] + user_cost * cm[1][1] * (1 - prob_clean)
return test_costs + FN_cost + FP_cost + TP_cost
cost_scorer = make_scorer(min_cost_scorer, greater_is_better=False)
# -
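# As a sanity check of the cost logic, the same arithmetic can be applied by hand to a made-up confusion matrix (the numbers below are invented for illustration only):

```python
import numpy as np

# made-up confusion matrix: rows = true (non-user, user), cols = predicted
cm = np.array([[80, 10],
               [ 5, 25]])
test_cost, training_cost, user_cost, prob_clean = 100, 10000, 50000, 0.5

total = (
    test_cost * cm.sum()                       # everyone takes the test
    + user_cost * cm[1, 0]                     # missed users (FN)
    + training_cost * cm[0, 1]                 # needless training (FP)
    + training_cost * cm[1, 1]                 # training for caught users (TP) ...
    + user_cost * cm[1, 1] * (1 - prob_clean)  # ... which only works half the time
)
print(total)  # 1237000
```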
# # Randomized Search CV for better Hyperparameters
# +
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import fbeta_score, make_scorer
ftwo_scorer = make_scorer(fbeta_score, beta=2)
param_grid = {
'n_estimators': range(100,500,10),
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : range(3, 17, 2),
'criterion' :['gini', 'entropy']
}
rf_model = RandomForestClassifier(random_state=RSEED)
randCV_rfc = RandomizedSearchCV(estimator = rf_model, param_distributions = param_grid, n_iter = 60, scoring= cost_scorer, refit=True, verbose = 1)
randCV_rfc.fit(X_train, y_train)
# +
print(randCV_rfc.best_params_)
print(randCV_rfc.best_score_)
y_pred = randCV_rfc.predict(X_test)
print("Optimized costs: {:,}".format( min_cost_scorer(y_test,y_pred)))
cm = confusion_matrix(y_test,y_pred)
y_pred_all = np.ones(len(y_test))
y_pred_zero = np.zeros(len(y_test))
cost_all_user = min_cost_scorer(y_test, y_pred_all)
cost_no_user = min_cost_scorer(y_test, y_pred_zero, test_cost= 0)
print("Costs of predicting all to be a user: {:,}".format(cost_all_user))
print("Costs of not implementing the detection system: {:,}".format(cost_no_user))
# -
# # Grid-parameter search for RandomForestClassifier
# +
from sklearn.model_selection import GridSearchCV
param_grid = {
'n_estimators': [200, 500],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8,20],
'criterion' :['gini', 'entropy']
}
rf_model = RandomForestClassifier(random_state=RSEED)
CV_rfc = GridSearchCV(estimator=rf_model, param_grid=param_grid, cv= 5, scoring=ftwo_scorer, verbose = 1)
CV_rfc.fit(X_train, y_train)
CV_rfc.best_params_
CV_rfc.best_score_
# -
# # Best score for RandomForest is 0.74
# <br>
# Best parameters for this score are : {'criterion': 'gini','max_depth': 8,'max_features': 'auto','n_estimators': 500}
# # Grid-parameter search for SVCClassifier
# +
param_grid = [{"kernel": ["rbf", "linear"], "gamma": [1e-2, 1e-3, 1e-4], "C": [1, 10, 100]},
{'C': [1.0], 'break_ties': [False], 'cache_size': [200], 'class_weight': [None], 'coef0': [0.0], 'decision_function_shape': ['ovr'], 'degree': [3], 'gamma':[ 'scale'], 'kernel':[ 'rbf'], 'max_iter':[ -1], 'probability':[ False], 'random_state':[ 42], 'shrinking':[ True], 'tol':[ 0.001], 'verbose':[ False]}]
#param_grid = {'C': [1.0], 'break_ties': [False], 'cache_size': [200], 'class_weight': [None], 'coef0': [0.0], 'decision_function_shape': ['ovr'], 'degree': [3], 'gamma':[ 'scale'], 'kernel':[ 'rbf'], 'max_iter':[ -1], 'probability':[ False], 'random_state':[ 42], 'shrinking':[ True], 'tol':[ 0.001], 'verbose':[ False]}
CV_svc = GridSearchCV(estimator=SVC(), param_grid=param_grid, cv= 5, scoring=ftwo_scorer, verbose = 1)
CV_svc.fit(X_train, y_train)
CV_svc.best_params_
CV_svc.scorer_ , CV_svc.best_score_
# +
svc_best = SVC(**CV_svc.best_params_)
svc_best.fit(X_train,y_train)
y_pred_svc = svc_best.predict(X_test)
recall_score(y_test, y_pred_svc)
precision_score(y_test, y_pred_svc)
fbeta_score(y_test, y_pred_svc, beta = 2)
confusion_matrix(y_test, y_pred_svc)
# +
params = {'C': 1.0, 'break_ties': False, 'cache_size': 200, 'class_weight': None, 'coef0': 0.0, 'decision_function_shape': 'ovr', 'degree': 3, 'gamma': 'scale', 'kernel': 'rbf', 'max_iter': -1, 'probability': False, 'random_state': 42, 'shrinking': True, 'tol': 0.001, 'verbose': False}
svc_basic = SVC(**params)
svc_basic.fit(X_train,y_train)
y_pred_svc_basic = svc_basic.predict(X_test)
recall_score(y_test, y_pred_svc_basic)
precision_score(y_test, y_pred_svc_basic)
fbeta_score(y_test, y_pred_svc_basic, beta = 2)
confusion_matrix(y_test, y_pred_svc_basic)
# -
sns.countplot(y_train)
sns.countplot(y_test)
sns.countplot(y_pred_svc)
sns.countplot(y_pred_svc_basic)
# # Best score for SVC is 0.76
# <br>
# ## Best parameters are {'C': 100, 'gamma': 0.001, 'kernel': 'rbf'}
# TODO:
# - optimize hyperparameters further
# - ensemble models
#
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.0 64-bit ('3.7.0')
# metadata:
# interpreter:
# hash: 8985b82909d25526004b9b913e7e441072f5d58beb1c864a00eebe761508239a
# name: python3
# ---
# ## Ch2 The Basics of Python Programming: Data Types
t1 = ()
t2 = (1,)
t3 = (1, 2, 3)
t4 = 1, 2, 3
t5 = ('a', 'b', ('ab', 'cd'))
t1 = (1, 2, 'a', 'b')
t1[0]
t1[3]
t1[1:]
t2 = (3, 4)
t1 + t2
t2 * 3
len(t1)
dic = {'name':'pey', 'phone':'0104471009', 'birth': '1009'}
a = {1: 'hi'}
a = {'a': [1,2,3]}
a = {1: 'a'}
a[2] = 'b'
a
a['name'] = 'pey'
a[3] = [1,2,3]
a
del a[1]
grade = {'pey': 10, 'julliet': 99}
grade['pey']
grade['julliet']
a = {1: 'a', 2: 'b'}
a[1]
a[2]
a = {'a': 1, 'b': 2}
a['a']
a['b']
dic = {'name': 'pey', 'phone': '01084471889', 'birth': '0915'}
dic['name']
dic['phone']
dict['birth']  # TypeError: 'dict' here is the built-in type, not our dictionary
dic['birth']
a = {1: 'a', 1: 'b'}  # with duplicate keys, the last value wins
a[1]
a = {[1,2] : 'hi'}  # TypeError: a list is unhashable, so it cannot be a key
a = {'name': 'pey', 'phone': '01099348471', 'birth': '0934'}
a.keys()
for k in a.keys():
print(k)
list(a.keys())
a.values()
a.items()
a.clear()
a
a = {'name': 'pey', 'phone': '01084491029', 'birth': '0242'}
a.get('name')
a.get('phone')
a.get('nokey')
print(a.get('nokey'))
print(a['nokey'])  # KeyError: unlike get(), [] raises on a missing key
a.get('foo', 'bar')
'name' in a
'email' in a
s1 = set([1,2,3])
s1
s2 = set("Hello")
s2
s1 = set([1, 2, 3])
s1
l1 = list(s1)
l1
l1[0]
t1 = tuple(s1)
t1
l1
test = tuple(l1)
test
test2 = list(t1)
test2
s1 = set([1, 2, 3, 4, 5, 6])
s2 = set([4, 5, 6, 7, 8, 9])
s1 & s2
s1.intersection(s1)
s1.intersection(s2)
s1 | s2
s1.union(s2)
s1 - s2
s1.difference(s2)
s2.difference(s1)
s1 = set([1, 2, 3])
s1.add(4)
s1
s1 = set([1, 2, 3])
s1
s1.update([4, 5, 6])
s1
s1 = set([1, 2, 3])
s1.remove(2)
s1
a = True
b = False
type(a)
type(b)
1 == 1
2 > 1
2 < 1
""
"python"
a = [1, 2, 3, 4]
while a:
a.pop()
a
if []:
print("참")
else:
print("거짓")
if [1, 2, 3]:
print("참")
else:
print("거짓")
bool('python')
bool('')
bool([1, 2, 3])
bool([])
bool(0)
bool(3)
a = 1
b = "python"
c = [1, 2, 3]
id(a)
id(b)
id(c)
a = [1, 2, 3]
b = a
id(b)
id(a) == id(b)
a is b
a[1] = 4
a
b
a = [1, 2, 3]
b = a[:]
id(a) == id(b)
a[1] = 4
a
b
from copy import copy
a = [1, 2, 3]
b = copy(a)
id(a) == id(b)
c = a[:]
id(b) == id(c)
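# Note that copy() and a[:] make shallow copies: with nested lists, the inner
# lists are still shared between the copies. deepcopy copies them as well.
# A sketch:

```python
from copy import deepcopy

a = [1, [2, 3]]
b = a[:]         # shallow copy: b[1] is the same inner list object as a[1]
c = deepcopy(a)  # deep copy: c[1] is an independent inner list
a[1].append(4)
print(b[1])  # [2, 3, 4] -- the shared inner list changed too
print(c[1])  # [2, 3]    -- the deep copy is unaffected
```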
a, b = ('python', 'life')
(a, b) = 'python', 'life'
[a, b] = ['python', 'life']
a = b = 'python'
id(a)
id(b)
a = 3
b = 5
a, b = b, a
a
b
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# +
ourList = [1, 3, 12, -100, 30, -90]
def maximum_number(values):
    # track the largest element seen so far; avoid shadowing the built-ins list/max
    largest = values[0]
    for v in values:
        if v > largest:
            largest = v
    return largest
print(maximum_number(ourList))
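# The hand-rolled loop above reproduces what Python's built-in max() already
# provides; for real code the built-in is the idiomatic choice:

```python
our_list = [1, 3, 12, -100, 30, -90]
print(max(our_list))  # 30
```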
# +
reminders = {}  # avoid shadowing the built-in name 'dict'
while True:
    print("1. Show Reminder")
    print("2. New reminder")
    print("3. Exit")
    print("")
    choice = int(input("Enter a number"))
    if choice == 1:
        if len(reminders) == 0:
            print("There are no event(s) stored!")
            print("")
        else:
            name = input("Enter a name to search for an event...")
            theDate = reminders.get(name, "No data found!")
            print(theDate)
            print("")
    elif choice == 2:
        name = input("Enter an event...")
        date = input("Enter a date...")
        reminders[name] = date
        print("Event added")
        print("")
    elif choice == 3:
        break
    else:
        print("Please enter a valid number...")
        print("")
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# + deletable=true editable=true
from __future__ import division
from __future__ import print_function

# Standard library (math and collections are needed by the ADMM class below)
import collections
import math
import pickle
import random
import sys
import time
from itertools import *
from random import randint
from multiprocessing import Pool
import multiprocessing

# Third-party packages
import matplotlib
# %matplotlib inline
# matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as LA
import scipy as spy
from scipy import sparse as sp
from scipy.sparse import csc_matrix
from scipy.special import expit
import networkx as nx
import cvxpy as cvx
from sklearn import linear_model, datasets

# set hyperparameters Lambda and Rho
Lambda = 0.1
Rho = 1
# -
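# The update_W method of the ADMM class below relies on gradient descent with a
# backtracking line search (halve the step until the loss decreases by at least
# the anticipated amount). As an illustration of that core idea in isolation,
# here is a minimal sketch on a simple 1-D quadratic; the function names are
# ours, not part of the class:

```python
# Backtracking line search for gradient descent on f(w) = (w - 3)^2.
def backtracking_gd(w0, grad, f, step0=1.0, tol=1e-8, max_iter=100):
    w = w0
    for _ in range(max_iter):
        g = grad(w)
        step = step0
        # Shrink the step until the loss drops by at least step * |g|^2 / 2,
        # mirroring the "anticipated decrease" check in update_W.
        while f(w - step * g) > f(w) - step * g * g / 2:
            step /= 2
        w -= step * g
        if abs(g) < tol:
            break
    return w

w_opt = backtracking_gd(0.0, grad=lambda w: 2 * (w - 3), f=lambda w: (w - 3) ** 2)
print(w_opt)  # converges to 3.0, the minimizer
```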
class ADMM:
'''
ADMM for graph regularization Python class
input:
X: feature matrix, N*d matrix
y: N*1 label vector, where y_i = 0, if node i is in test indices
G: graph with N nodes as a nested dictionary
Lambda: hyperparameter to control graph regularization
Rho: hyperparameter to control ADMM stepsize
train_mask: N*1 boolean vector
test_mask: N*1 boolean vector
y_true: N*1 label vector
Threshold: hyperparameter used as the stopping criterion of the ADMM algorithm
output:
W: estimated W, d*1 vector
b: estimated b, N*1 vector
losses: losses per iteration
'''
def __init__(self, X, y, G, nodes, edgeNbr, Lambda, Rho, train_mask, test_mask, y_true, Threshold, initialW, initialb):
self.X = X
self.y = y
self.Threshold = Threshold
self.y_true = y_true
self.train_mask = train_mask
self.test_mask = test_mask
self.dim = X.shape[1]
self.Lambda = Lambda
self.Rho = Rho
self.graph = G
self.nodes = nodes
row=[]
col=[]
for i, js in self.graph.items():
for j in js:
row.append(i)
col.append(j)
initialZ=np.random.rand(len(row))
initialU=np.random.rand(len(row))
self.Z = collections.defaultdict(dict)
self.U = collections.defaultdict(dict)
k=0
for i, js in self.graph.items():
for j in js:
self.Z[i][j]=initialZ[k]
self.U[i][j]=initialU[k]
k+=1
# set the initial value of W and b with the logistics regression result
self.W = initialW.reshape(X.shape[1])
self.b = np.ones(X.shape[0])*initialb
def dumpWb(self, filename):
dict = {"W": self.W, "b": self.b}
with open(filename, "wb") as f:
pickle.dump( dict, f)
def deriv_b(self, b, C1, C2, C3, eC1):
if (eC1 == float('inf')):
return C2 + C3 *b
return 1/(1+ eC1* math.exp(-1.0*b)) + C2 + C3*b
def deriv_b_negy(self, b, C1, C2, C3, eC1):
if (eC1 == float('inf')):
return C2 + C3 *b
return -1.0/(1+ eC1* math.exp(b)) + C2 + C3*b
def update_b(self):
'''
update the value of b, check line 4 of the ADMM algorithm for the math
cvxpy is conducted independently for each node
'''
B = []
num_nodes = len(self.nodes)
kk=0
for i in self.nodes:
#if kk%1000000==0:
# self.logger.info('update_b: {0} %{1:4.2f}'.format(i, kk *1.0 /num_nodes *100 ))
# kk+=1
sumdiffZU = 0
neighborCnt = 0
for Id in self.graph[i]:
sumdiffZU += (self.Z[i][Id]-self.U[i][Id])
neighborCnt += 1
if (neighborCnt == 0):
raise ValueError('{0} has no neighbor'.format(i))
b1 = sumdiffZU /neighborCnt
#in case of missing value, we have analytical solution for b
if (self.y[i]==0):
self.b[i]= b1
continue
tol = 1e-5
#the optimial value is within the interval [b1, b2]
if (self.y[i]==1):
b2 = b1 + 1/self.Rho/neighborCnt
#bisection method to find a better b
C1 = -1.0 * self.X[i].dot(self.W) #C1 = -1.0 * self.X[i].dot(self.g[i,:])
C2 = -1-self.Rho * sumdiffZU
C3 = self.Rho * neighborCnt
eC1 = 0
try:
eC1 = math.exp(C1)
except OverflowError:
eC1 = float('inf')
while(b2-b1 > tol):
Db1 = self.deriv_b(b1, C1, C2, C3, eC1)
Db2 = self.deriv_b(b2, C1, C2, C3, eC1)
if (math.fabs(Db1)<tol):
b2 = b1
break;
if (math.fabs(Db2)<tol):
b1 = b2
break;
if (not(Db1<=tol and Db2>=-1.0*tol)):
raise ValueError('Db1 and Db2 has same sign which is impossible! Db1={0}, Db2={1}, b1={2}, b2={3}'.format(Db1, Db2, b1, b2))
b3 = (b1 + b2)/2
Db3 = self.deriv_b(b3, C1, C2, C3, eC1)
if (Db3 >=0):
b2=b3
else:
b1=b3
self.b[i] = (b1 + b2)/2
continue
if (self.y[i]==-1):
b2 = b1
b1 = b2 - 1/self.Rho/neighborCnt
C1 = self.X[i].dot(self.W) #C1 = self.X[i].dot(self.g[i,:])
C2 = 1-self.Rho * sumdiffZU
C3 = self.Rho * neighborCnt
eC1 = 0
try:
eC1 = math.exp(C1)
except OverflowError:
eC1 = float('inf')
while(b2-b1 > tol):
Db1 = self.deriv_b_negy(b1, C1, C2, C3, eC1)
Db2 = self.deriv_b_negy(b2, C1, C2, C3, eC1)
if (math.fabs(Db1)<tol):
b2 = b1
break;
if (math.fabs(Db2)<tol):
b1 = b2
break;
if (not(Db1<=tol and Db2>=-1.0*tol)):
raise ValueError('Db1 and Db2 has same sign which is impossible! Db1={0}, Db2={1}, b1={2}, b2={3}, C1={4}, C2={5}, C3={6}'.format(
Db1, Db2, b1, b2, C1, C2, C3))
b3 = (b1 + b2)/2
Db3 = self.deriv_b_negy(b3, C1, C2, C3, eC1)
if (Db3 >=0):
b2=b3
else:
b1=b3
self.b[i] = (b1 + b2)/2
continue
raise ValueError('impossible value for y={0}'.format(self.y[i]))
def update_Z(self):
'''
update the value of Z, check line 6 of the ADMM algorithm for the math
rho is lambda times rho2
f is L_{rho}(W_t^{k+1}, b_t^{k+1}, g^{k+1}, (z_{ij}, z_{ji}, z_{(ij)^c}^k, u^k, h^k; t)
see page 5 of https://arxiv.org/pdf/1703.07520.pdf Social discrete choice model
'''
for k in self.graph:
for j in self.graph[k]:
A = self.b[j] + self.U[j][k]
B = self.b[k] + self.U[k][j]
self.Z[k][j] = (2*self.Lambda*A + (2*self.Lambda+self.Rho)*B)/(self.Lambda*4+self.Rho)
def update_U(self):
'''
update the value of U, check line 7 of the ADMM algorithm for the math
'''
for i in self.graph:
for Id in self.graph[i]:
self.U[i][Id] = self.U[i][Id] + self.b[i] - self.Z[i][Id]
'''
using a simple gradient descent algorithm to update W
https://www.cs.cmu.edu/~ggordon/10725-F12/slides/05-gd-revisited.pdf
learning rate is chosen using Backtracking linear search. see page 10 of the slides above
'''
def update_W(self, iteration):
featureCnt = len(self.W)
maxiter = 2
oldloss = self.cal_LL()
newloss = oldloss
for k in range(maxiter):
learningrate = 0.00001
oldloss = newloss
print('update W iteration {0}.{1}'.format(iteration, k))
gradient = np.zeros(featureCnt)
for i in self.graph:
if (self.y[i]==0):
continue
C1 = -1.0 * self.y[i]* (self.X[i].dot(self.W) + self.b[i])
eC1 = 0
multiplier = 0
try:
eC1 = math.exp(C1)
except OverflowError:
eC1 = float('inf')
if (eC1==float('inf')):
multiplier = -1.0*self.y[i]
else:
multiplier = (1 - 1.0/(1.0+eC1)) * (-1.0)*self.y[i]
gradient = np.add(gradient, multiplier * self.X[i])
gradientNorm = np.linalg.norm(gradient)
if (gradientNorm == float('inf')):
raise ValueError('norm of gradient is infinity') #should never happen
gradientNorm2 = gradientNorm * gradientNorm
oldW = np.copy(self.W)
kk=0
newloss = 0
tol = 1e-5
while (True):
np.copyto(self.W, oldW)
self.W -= learningrate * gradient
anticipateddecrease = learningrate * gradientNorm2 /2.0
print('anticipate the loss to decrease from {0} by {1}'.format(oldloss, anticipateddecrease))
targetloss = oldloss - anticipateddecrease
try:
newloss = self.cal_LL()
except OverflowError:
learningrate = learningrate / 2
kk+=1
print('get infinite loss, reduce learning rate to {0}, kk={1}'.format(learningrate, kk))
continue
if (newloss <= targetloss + tol):
break;
learningrate = learningrate / 2
kk+=1
print('loss is not decreasing below anticipated value, reduce learning rate to {0}, kk={1}'.format(learningrate, kk))
if(kk>1000):
raise ValueError('cannot find a good learning rate to get finite loss.learningrate={0}'.format(learningrate))
print('learning rate: {0}'.format(learningrate))
print('max in gradient is :{0}'.format(np.max(np.abs(gradient))))
print('oldloss:' + str(oldloss) + ',newloss:' + str(newloss))
if(math.fabs(newloss-oldloss) < 0.00001 * oldloss):
return newloss
return newloss
def optimize_b(self, iterations, old_loss, verbose=False):
kk = 0
maxiter = 5
while (True):
start2 = time.time()
self.update_b()
end2 = time.time()
if(verbose):
print('finished b {0} seconds at iteration {1}'.format(end2-start2, iterations))
start2 = time.time()
self.update_Z()
end2 = time.time()
if(verbose):
print('finished Z {0} seconds at iteration {1}'.format(end2-start2, iterations))
start2 = time.time()
self.update_U()
end2 = time.time()
if(verbose):
print('finished U {0} seconds at iteration {1}'.format(end2-start2, iterations))
loss = self.cal_LL()
print('loss is {0}, old loss is {1} at iteration {2}.{3}'.format(loss, old_loss, iterations, kk))
kk+=1
if(np.absolute(old_loss-loss)<=self.Threshold):
return loss
if (kk > maxiter):
return loss
def runADMM_Grid(self):
'''
runADMM Grid iterations
The stopping criteria is when the difference of the value of the objective function in current iteration
and the value of the objective function in the previous iteration is smaller than the Threshold
'''
resultdump = 'result.dump'
# self.dumpWb(resultdump + ".initial")
self.losses = []
self.times = []
loss = self.cal_LL()
self.losses.append(loss)
print('iteration = 0')
print('objective = {0}'.format(loss))
old_loss = loss
loss = float('inf')
iterations = 0
import time
start = time.time()
while(True):
loss = self.optimize_b(iterations, old_loss)
start2 = time.time()
loss = self.update_W(iterations)
end2 = time.time()
print('finished w {0} seconds at iteration {1}'.format(end2-start2, iterations))
print('loss is {0}, old loss is {1} at iteration {2}'.format(loss, old_loss, iterations))
loss = self.cal_LL()
self.losses.append(loss)
if(np.absolute(old_loss- loss) <= self.Threshold):
break
old_loss = loss
iterations += 1
# if (iterations % 2 == 0):
# self.dumpWb(resultdump + "." + str(iterations))
print('total iterations = ' + str(iterations))
end = time.time()
print('total time = {0}'.format(end-start))
# self.dumpWb(resultdump + ".final" )
def cal_LL(self):
'''
function to calculate the value of loss function
'''
W = np.array(self.W).flatten()
b = np.array(self.b).flatten()
loss = 0
for i in self.nodes:
r = np.log(1 + np.exp(-self.y[i]*(np.dot(self.X[i], W) + b[i])))
if(r == float('inf')):
raise OverflowError('loss is infinity')
loss += r
for i, js in self.graph.items():
for j in js:
loss += self.Lambda*(self.b[i]-self.b[j])**2
return loss
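# cal_LL above evaluates log(1 + exp(-margin)) directly, which overflows for
# large negative margins (the class deliberately raises OverflowError and
# update_W catches it to shrink the learning rate). As an illustrative
# alternative, np.logaddexp computes the same quantity without overflow;
# swapping it into the class would bypass that OverflowError handling, so this
# is only a sketch:

```python
import numpy as np

def stable_logistic_loss(margin):
    # log(1 + exp(-margin)) == logaddexp(0, -margin) = log(e^0 + e^(-margin)),
    # evaluated in a numerically stable way.
    return np.logaddexp(0.0, -margin)

print(stable_logistic_loss(-1000.0))  # 1000.0 -- the naive np.exp(1000) would overflow
```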
# !ls graph1/
# +
G3 = pickle.load(open("graph1/G.p", "rb"))  # pickle files must be opened in binary mode in Python 3
for u,v in G3.edges():
G3[u][v]['pos_edge_prob'] = 1
for i in range(G3.number_of_nodes()):
    G3.nodes[i]['pos_node_prob'] = 1  # G.node was removed in networkx 2.4; use G.nodes
# get all the nodes of the graph
nodes = G3.nodes()
# get some statistics about the graph
print('number of nodes',G3.number_of_nodes())
print('number of edges',G3.number_of_edges())
y_train = pickle.load( open( "graph1/y_train.p", "rb" ) )
y_true = pickle.load( open( "graph1/Y_true.p", "rb" ) )
y_test = pickle.load( open( "graph1/y_test.p", "rb" ) )
train_mask = pickle.load( open( "graph1/train_mask.p", "rb" ) )
test_mask = pickle.load( open( "graph1/test_mask.p", "rb" ) )
Y_train = np.zeros(G3.number_of_nodes())
for i in range(len(Y_train)):
if y_train[i,0]==1:
Y_train[i] = -1
if y_train[i,1] ==1:
Y_train[i]=1
Y_true = np.zeros(G3.number_of_nodes())
for i in range(len(Y_true)):
if y_true[i,0]==1:
Y_true[i] = -1
if y_true[i,1] ==1:
Y_true[i]=1
# Load feature matrix, select two features for the ADMM training
X = pickle.load( open( "graph1/X.p", "rb" ) )
print(X.shape)
X = X[:,[2,116]]
# +
import collections
import math
import csv
import pandas as pd
import datetime
expcntDict = collections.defaultdict(dict)
expamountDict = collections.defaultdict(dict)
comset = set()
nodes = set()
edgecnt = 0
for edge in G3.edges():
src = edge[0]
target = edge[1]
edgecnt = edgecnt+1
nodes.add(src)
nodes.add(target)
expamountDict[src][target]=1
expamountDict[target][src]=1
expcntDict[src][target]=1
expcntDict[target][src]=1
nodecnt = len(nodes)
# nodes is the set of nodes
# nodecnt is the count of nodes in the graph
# edgecnt is the count of edges in the graph
print('number of nodes', nodecnt)
print('number of edges', edgecnt)
# -
Lambda = 0.1
Rho = 1.0
Threshold = 5.0
logistic = linear_model.LogisticRegression()
logistic.fit(X[train_mask], Y_train[train_mask])
# + deletable=true editable=true
import time
C = ADMM(X, Y_train, expamountDict, nodes, edgecnt,
Lambda, Rho, train_mask,test_mask,Y_true, Threshold, logistic.coef_, logistic.intercept_)
start = time.time()
C.runADMM_Grid()
end = time.time()
print(end- start)
# -
C.W, C.b
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure(figsize=(10,6))
plt.title('Training loss per iteration',fontsize=30)
plt.plot(range(len(C.losses)),C.losses)
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:gis]
# language: python
# name: conda-env-gis-py
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
from pathlib import Path
import numpy as np
import geopandas as gpd
import rasterio as rio
from rasterio.plot import show
print(f'Using rasterio version {rio.__version__} (via conda-forge)')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import DivergingNorm  # renamed TwoSlopeNorm in matplotlib >= 3.2
from matplotlib import cm
from matplotlib.colors import ListedColormap
import copy
sns.set_style('ticks', {'font.family': 'FreeSans'})
master_dir = '../'
# -
# Import GO summary
df_COVID = pd.read_excel(master_dir + 'output/disease_comparisons/gex_high_vs_non-PE/Monocyte/GO summary.xlsx',
sheet_name='COVID', header=12)
df_sepsis = pd.read_excel(master_dir + 'output/disease_comparisons/gex_high_vs_non-PE/Monocyte/GO summary.xlsx',
sheet_name='Sepsis', header=12)
df_HIV = pd.read_excel(master_dir + 'output/disease_comparisons/gex_high_vs_non-PE/Monocyte/GO summary.xlsx',
sheet_name='HIV', header=12)
df_COVID.head()
# +
# Dataframe of interesting GO processes in COVID, vs. other diseases
df_COVID = df_COVID.add_prefix('COVID_')
#df_COVID = df_COVID[df_COVID['COVID_upload_1 (FDR)']<0.0001]
df_COVID = df_COVID[df_COVID['COVID_upload_1 (fold Enrichment)'] != ' < 0.01']
df_COVID = df_COVID[df_COVID['COVID_upload_1 (fold Enrichment)'].astype(float) > 5]
df_compare = df_COVID.loc[:, ['COVID_GO biological process complete', 'COVID_upload_1 (FDR)']].copy()
df_sepsis = df_sepsis.add_prefix('sepsis_')
df_compare = pd.merge(df_compare,
df_sepsis.loc[:, ['sepsis_GO biological process complete', 'sepsis_upload_1 (FDR)']].copy(),
left_on='COVID_GO biological process complete', right_on='sepsis_GO biological process complete',
how='left')
df_HIV = df_HIV.add_prefix('HIV_')
df_compare = pd.merge(df_compare,
df_HIV.loc[:, ['HIV_GO biological process complete', 'HIV_upload_1 (FDR)']].copy(),
left_on='COVID_GO biological process complete', right_on='HIV_GO biological process complete',
how='left')
#sns.heatmap(data=df_compare.loc[:, ['upload_1 (FDR)']], cmap='viridis', xticklabels=['COVID-19'])
#yticklabels=df_compare['GO biological process complete'])
# -
df_compare.head()
df_compare = df_compare.set_index('COVID_GO biological process complete')
df_compare = df_compare.drop(columns=['sepsis_GO biological process complete',
'HIV_GO biological process complete'])
cmap = copy.copy(cm.get_cmap('Reds'))
newcolors = cmap(np.linspace(0, 1, 256))[::1]
gray = np.array([128/256, 128/256, 128/256, 0.2]) # NaNs will be filled to have p value of 1, which will then
# have a gray color on the clustermap
newcolors[:1, :] = gray
newcmap = ListedColormap(newcolors)
fig = sns.clustermap(-np.log10(df_compare.fillna(1)), cmap=newcmap, figsize=(10, 7.5), lw=0.5, linecolor='gray')
fig.ax_heatmap.set_yticklabels([x.get_text().split('(')[0] for x in fig.ax_heatmap.get_yticklabels()])
fig.ax_heatmap.set_xticklabels([x.get_text().split('_')[0] for x in fig.ax_heatmap.get_xticklabels()])
fig.ax_heatmap.set_ylabel('GO biological process complete')
#plt.savefig(master_dir + 'output/disease_comparisons/gex_high_vs_non-PE/Monocyte/GO_COVID_vs_rest.pdf')
# +
# Shorter summary version
temp = df_COVID[df_COVID['COVID_upload_1 (fold Enrichment)'].astype(float) > 5].copy()  # column holds strings, so cast before comparing
temp = temp.sort_values('COVID_upload_1 (fold Enrichment)', ascending=False).iloc[:5].sort_values('COVID_upload_1 (FDR)')
plt.figure(figsize=[3, 3])
simplified_processes = [x.split('(')[0].split(',')[0] for x in temp['COVID_GO biological process complete']]
g = sns.barplot(x=-np.log10(temp['COVID_upload_1 (FDR)']),
y=simplified_processes, color='#ff9999')
plt.ylabel('Biological process')
plt.xlabel(r'$-log_{10}(\it{p})$')
plt.savefig(master_dir + 'output/disease_comparisons/gex_high_vs_non-PE/Monocyte/GO_COVID_unique_summary.pdf')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
from os.path import join
sys.path.insert(0, 'utils')
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import sbm
# -
# ## Load data and visualize adjacency matrix
dataFileName = join('data', 'facebook-wall-filtered.txt')
net = nx.read_edgelist(dataFileName,create_using=nx.DiGraph(),data=(('Timestamp',int),))
adj = nx.to_numpy_array(net)
plt.figure()
plt.spy(adj)
plt.show()
# ## Estimate cluster memberships using spectral clustering
clusterId = sbm.spectralCluster(adj,directed=True)
nClusters = np.max(clusterId)+1
clusterSizes = np.histogram(clusterId, bins=nClusters)[0]
print(clusterSizes)
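# sbm.spectralCluster comes from the local utils module and is not shown here.
# As a rough numpy-only illustration of the underlying idea (not the module's
# actual implementation): the leading eigenvectors of the adjacency matrix
# separate the blocks of an idealized SBM.

```python
import numpy as np

# Two disjoint cliques of different sizes (4 and 5 nodes): an idealized SBM.
A = np.zeros((9, 9))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)

# Eigenvectors for the two largest eigenvalues are supported on one block each.
vals, vecs = np.linalg.eigh(A)            # eigenvalues in ascending order
top2 = vecs[:, -2:]                       # two leading eigenvectors
labels = np.argmax(np.abs(top2), axis=1)  # assign each node to its dominant eigenvector
print(labels)  # the first 4 nodes share one label, the last 5 share the other
```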
# ### Re-order nodes by class memberships and re-examine adjacency matrix
sbm.spyClusters(adj,clusterId)
# ## Estimate edge probabilities at the block level
blockProb,logLik = sbm.estimateBlockProb(adj,clusterId,directed=True)
print(blockProb)
print(logLik)
# ### View estimated edge probabilities as a heat map
plt.figure()
plt.imshow(blockProb)
plt.colorbar()
plt.show()
# ## Compute reciprocity and transitivity of actual network using NetworkX
recip = nx.overall_reciprocity(net)
print(recip)
trans = nx.transitivity(net)
print(trans)
# ## Simulate new networks from SBM fit to check model goodness of fit
nRuns = 10
blockProbSim = np.zeros((nClusters,nClusters,nRuns))
recipSim = np.zeros(nRuns)
transSim = np.zeros(nRuns)
for run in range(nRuns):
# Simulate new adjacency matrix and create NetworkX object for it
adjSim = sbm.generateAdj(clusterId,blockProb,directed=True)
netSim = nx.DiGraph(adjSim)
blockProbSim[:,:,run] = sbm.estimateBlockProb(adjSim,clusterId,
directed=True)[0]
recipSim[run] = nx.overall_reciprocity(netSim)
transSim[run] = nx.transitivity(netSim)
meanBlockProbSim = np.mean(blockProbSim,axis=2)
stdBlockProbSim = np.std(blockProbSim,axis=2)
print('Actual block densities:')
print(blockProb)
print('Mean simulated block densities:')
print(meanBlockProbSim)
print('95% confidence interval lower bound:')
print(meanBlockProbSim-2*stdBlockProbSim)
print('95% confidence interval upper bound:')
print(meanBlockProbSim+2*stdBlockProbSim)
plt.figure()
plt.hist(recipSim)
plt.title('Actual reciprocity: %f' % recip)
plt.show()
plt.figure()
plt.hist(transSim)
plt.title('Actual transitivity: %f' % trans)
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 (tensorflow)
# language: python
# name: tensorflow
# ---
# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_09_5_transfer_feature_eng.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#
# # T81-558: Applications of Deep Neural Networks
# **Module 9: Regularization: L1, L2 and Dropout**
# * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# # Module 9 Material
#
# * Part 9.1: Introduction to Keras Transfer Learning [[Video]](https://www.youtube.com/watch?v=WLlP6S-Z8Xs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_1_keras_transfer.ipynb)
# * Part 9.2: Popular Pretrained Neural Networks for Keras [[Video]](https://www.youtube.com/watch?v=ctVA1_46YEE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_2_popular_transfer.ipynb)
# * Part 9.3: Transfer Learning for Computer Vision and Keras [[Video]](https://www.youtube.com/watch?v=61vMUm_XBMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_3_transfer_cv.ipynb)
# * Part 9.4: Transfer Learning for Languages and Keras [[Video]](https://www.youtube.com/watch?v=ajmAAg9FxXA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_4_transfer_nlp.ipynb)
# * **Part 9.5: Transfer Learning for Keras Feature Engineering** [[Video]](https://www.youtube.com/watch?v=Dttxsm8zpL8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_09_5_transfer_feature_eng.ipynb)
# # Google CoLab Instructions
#
# The following code ensures that Google CoLab is running the correct version of TensorFlow.
try:
# %tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
# # Part 9.5: Transfer Learning for Keras Feature Engineering
#
# http://cs231n.github.io/transfer-learning/
# +
# %matplotlib inline
import pandas as pd
import numpy as np
import os
import tensorflow.keras
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense,GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
from tensorflow.keras.applications.mobilenet import decode_predictions
model = MobileNet(weights='imagenet',include_top=False)
IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_CHANNELS = 3
images = [
"https://cdn.shopify.com/s/files/1/0712/4751/products/SMA-01_2000x.jpg?v=1537468751"
]
def make_square(img):
    # Center-crop to a square whose side is the shorter dimension.
    cols, rows = img.size  # PIL size is (width, height)
    if rows > cols:  # taller than wide: crop top/bottom
        pad = (rows - cols) // 2
        img = img.crop((0, pad, cols, pad + cols))
    else:  # wider than tall: crop left/right
        pad = (cols - rows) // 2
        img = img.crop((pad, 0, pad + rows, rows))
    return img
for url in images:
x = []
ImageFile.LOAD_TRUNCATED_IMAGES = False
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.load()
    img = img.resize((IMAGE_WIDTH, IMAGE_HEIGHT), Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
pred = model.predict(x)
display("___________________________________________________________________________________________")
display(img)
print(pred.shape)
print(pred)
# -
model.summary()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="HUNK9BA-jdxB"
# # Final Project Exploratory Analysis
#
# The objective of this notebook is to perform exploratory analysis on a dataset of cell images, some of which are malaria infected and some of which are healthy, in order to create a model that can predict with high accuracy which is which. The research question to be answered is:
#
# 1. Using only a photo of a blood cell, can I predict, with a level of accuracy equivalent to or higher than lab microscopy (97%), whether a malaria infection is present?
#
# Plan of Action:
#
# - Prepare the data for analysis:
# - Split data into train and validation groups
# - format the data appropriately for a neural network by scaling and normalizing
# - Start by training a basic tensorflow model to classify the images
# - experiment with the model parameters, optimizer, and layers to identify the best fit
# - perform additional data augmentation in order to improve the accuracy of the model and reduce any overfitting
#
# Note: for the initial model I am using https://www.tensorflow.org/tutorials/images/classification as a guide
#
# and https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/
# to guide the tuning process
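# The first step of the plan above, splitting the data into train and
# validation groups, can be sketched as a reproducible shuffled index split.
# This is a numpy-only illustration (the function name train_val_split is
# ours); the notebook could equally use Keras utilities such as a
# validation_split argument.

```python
import numpy as np

def train_val_split(n_samples, val_fraction=0.2, seed=42):
    # Shuffle indices reproducibly, then carve off the validation share.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(n_samples * val_fraction)
    return idx[n_val:], idx[:n_val]  # (train indices, validation indices)

train_idx, val_idx = train_val_split(100)
print(len(train_idx), len(val_idx))  # 80 20
```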
# + id="kcU2CZq0jgFj" outputId="d80378da-90f0-4024-d972-c8a2ca6bd5d9" active=""
# #First few lines are for a run in Google colab
# from google.colab import files
# data_to_load = files.upload()
# + id="KWGXBR0bjgRX" active=""
# !mkdir -p ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# + colab={"base_uri": "https://localhost:8080/"} id="4xT5rlL2jqgL" outputId="80d68f71-99ab-43ce-aa7e-c310ade6f7a5" active=""
# !kaggle datasets download -d syedamirraza/malaria-cell-image
# + id="E53IQP1OjxgN"
import zipfile
zip_ref = zipfile.ZipFile('malaria-cell-image.zip', 'r')
zip_ref.extractall('Cell_Images')
zip_ref.close()
# + id="-cM1EbrJjdxB"
#Import libraries
import kaggle
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
import pathlib
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
# + id="-Qtl-lEOkJZy" active=""
# #To Run in Colab
# train_data_dir = "/content/Cell_Images/cell_images/train"
# train_data_dir = pathlib.Path(train_data_dir)
# + id="ZLTrOOyQjdxB"
#Set my data directory *** Test data will only be used in the final report to evaluate my model
train_data_dir = "C:/Users/15856/Data 602/FinalProject/Cell_Images/cell_images/train"
train_data_dir = pathlib.Path(train_data_dir)
# + id="T3qbo3gYjdxB"
#I know from my data cleaning notebook that the images come in several different sizes, so I will set a standard input size for them
batch_size = 32
img_height = 200
img_width = 200
# + colab={"base_uri": "https://localhost:8080/"} id="ffm2WQh6jdxB" outputId="650cda27-6322-4366-fae8-6bc1cf61b86b"
#creating my datasets, one for training and one for validation. I selected 10% of the images for my validation set
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
train_data_dir,
validation_split=0.1, #about 2000 images seems like a good validation set to me
subset="training",
seed=1212020,
image_size=(img_height, img_width),
batch_size=batch_size)
# + colab={"base_uri": "https://localhost:8080/"} id="xQlY6YnqjdxC" outputId="c76aecf3-a4ec-4112-cafd-13611b84e071"
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
train_data_dir,
validation_split=0.1,
subset="validation",
seed=1212020,
image_size=(img_height, img_width),
batch_size=batch_size)
# + colab={"base_uri": "https://localhost:8080/"} id="19xNnU7njdxC" outputId="65162b06-13c7-4516-dcdb-449cdad454cb"
class_names = train_ds.class_names
print(class_names)
# + [markdown] id="VCfTeMU2jdxC"
# ## Feature Engineering
# + id="blJ1FKZojdxC"
# This stage of the data preparation, using AUTOTUNE, is for optimizing performance; it allows tf.data to tune the buffer value dynamically
#https://www.tensorflow.org/guide/data_performance
# The cache function keeps the data in memory after the first epoch, which prevents bottlenecking during training
#the prefetch function overlaps the data preprocessing with training the model
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# + [markdown] id="e4pH6EIjjdxC"
# ## Model Training and Selection
# + id="40bkaYhyjdxC"
#Finally I will create my model. This is essentially based on the defaults in the tutorial I am using; I will experiment with other activation functions later on
#Furthermore, I will keep many of the inputs, such as the 3x3 kernel size, based on research on optimal settings https://towardsdatascience.com/deciding-optimal-filter-size-for-cnns-d6f7b56f9363
num_classes = 2
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),#2D convolutional layer for spatial images
layers.MaxPooling2D(),#max pooling layer for 2D spatial data
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="qnxbRr9ijdxC"
#again, based on defaults in the tutorial as well as standards for the field. Adam is the basic optimizer to use
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="DL16wxYCjdxC" outputId="e4c4a151-7452-4117-ad9d-868c336f8495"
#Review model before training
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="ZoEZqZhqjdxC" outputId="c10b84d8-0c71-429e-e859-2c16d8853162"
#Train the model
epochs=10 #I will start with 10 epochs and see how the accuracy looks
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="uJl8LLtpjdxC"
# Some observations: the model clearly started overfitting around epoch 5, when the validation accuracy started to decrease, giving a maximum validation accuracy of around 96%, below our target.
# + colab={"base_uri": "https://localhost:8080/", "height": 499} id="wF1VavLzjdxC" outputId="dd9095ae-29c4-414e-d15c-6f11911b8bc7"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
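# A quick way to quantify the overfitting point: locate the epoch with the highest validation accuracy directly from the training history. A minimal sketch on a hypothetical val_accuracy list; in practice you would pass history.history['val_accuracy'] instead.

```python
# Hypothetical validation-accuracy curve shaped like history.history['val_accuracy'].
val_accuracy = [0.90, 0.93, 0.95, 0.96, 0.955, 0.94]

# The list is 0-indexed by epoch; report a 1-based epoch number for readability.
best_epoch = max(range(len(val_accuracy)), key=lambda i: val_accuracy[i]) + 1
best_val_acc = val_accuracy[best_epoch - 1]
print(best_epoch, best_val_acc)  # 4 0.96
```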
# + [markdown] id="2uGXcalxjdxD"
# To help reduce the overfitting, I will add some preprocessing layers to augment the data, specifically rotating, zooming in on, and flipping images.
# + [markdown] id="XiOBt4mIjdxD"
# #https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/PreprocessingLayer
# data_augmentation = keras.Sequential(
# [
# layers.experimental.preprocessing.RandomFlip("horizontal",
# input_shape=(img_height,
# img_width,
# 3)),
# layers.experimental.preprocessing.RandomRotation(0.1),
# layers.experimental.preprocessing.RandomZoom(0.1),
# ]
# )
# + id="JULB7CpXjdxD"
#Made a change to the random flip function so that it flips both vertically and horizontally, creating greater diversity
#This made some improvement on the accuracy
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip(input_shape=(img_height,
img_width,
3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
# + colab={"base_uri": "https://localhost:8080/", "height": 575} id="auc70xspjdxD" outputId="9d493474-9db1-488e-c4ed-63e74fb707f3"
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
# + id="tUEPeAoFjdxD"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),#This layer helps to avoid overfitting by randomly setting input units to zero at a rate of 20%
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="w41hjfzPjdxD"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="CRlnbBt2jdxD" outputId="5e10432b-a60e-4038-fc7d-50b75ac7d91b"
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="dUoOSGCwjdxD" outputId="ceeaf9d6-f654-47b2-cc05-70e2e5bc7db4"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="hmGQGSUSjdxD"
# Here my final accuracy has improved, but is still lower than my 97% target. I will therefore try a few different methods to improve the accuracy
# + id="Ckl-3fwVjdxD"
#first, I will try with valid padding, then I will try adding a leakyrelu activation function
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="QKsNyD6MjdxD"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="xDIE5etAjdxD" outputId="a2b452e1-dcf5-4f19-da9d-e80804269a58"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="zajgkfeKjdxD"
# The final accuracy was moderately improved. I will now try using the swish activation function
# + colab={"base_uri": "https://localhost:8080/"} id="BaPK2gTVjdxD" outputId="bab56b43-caaa-4c5f-d6ed-61df7f8eec35"
# !pip install keras
# + id="szILRSYUjdxD"
#Next I wanted to try the swish activation, implemented below
#https://www.bignerdranch.com/blog/implementing-swish-activation-function-in-keras/
import keras
from keras.backend import sigmoid
def swish(x, beta = 1):
return (x * sigmoid(beta * x))
from keras.utils.generic_utils import get_custom_objects
from keras.layers import Activation
get_custom_objects().update({'swish': Activation(swish)})
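# The registered function implements swish(x) = x * sigmoid(beta * x). As a quick numeric sanity check, the same formula can be evaluated with the plain math library, independent of Keras (swish_ref is a hypothetical helper name used only for this check):

```python
import math

def swish_ref(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x) = x / (1 + exp(-beta * x))
    return x / (1.0 + math.exp(-beta * x))

print(swish_ref(0.0))            # 0.0
print(round(swish_ref(1.0), 6))  # 0.731059
```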
# + id="JitFClZ9jdxD"
# I will try adding the swish activation function; created by Google, it has been shown to perform better than relu. I will then test leaky relu to see which is optimal
#https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right/
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='swish'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='swish'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='swish'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='swish'),
layers.Dense(num_classes)
])
# + id="fmWToFpyjdxD"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="UIbvPITajdxD" outputId="7acc6187-7039-4acb-83ca-a0dca04e9c9c"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="5HctiuOijdxD"
# This did not really change performance. I will next attempt using the leaky relu layer in the model
# + id="4oRlqgkAjdxD"
#https://www.tensorflow.org/api_docs/python/tf/keras/layers/LeakyReLU
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid'),
layers.LeakyReLU(),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid'),
layers.LeakyReLU(),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid'),
layers.LeakyReLU(),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="AilcSl48jdxD"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="wD5UGIq5jdxD" outputId="e747785c-3880-4153-dc67-88f27532430c"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="8PfKLJU6jdxD"
# This actually decreased accuracy, so I will return to using the relu activation function.
#
# For my next step, I want to see if adding a layer improves output.
# + id="hz6-KyuUjdxD"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="ZlGLNAD-jdxD"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="s-9p7BzojdxD" outputId="6e3ea6c5-74bb-45eb-f3fc-1ebbf8d8ab33"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="qfzhewvJjdxD"
# This did lead to better performance. I will therefore keep the extra layer in my final model.
#
# My next step will be to test other optimizers. Based on the recommendations out there (https://towardsdatascience.com/7-tips-to-choose-the-best-optimizer-47bb9c1219e ; https://www.tensorflow.org/api_docs/python/tf/keras/optimizers),
# Adam seems to be the best of the adaptive optimizers, while SGD (stochastic gradient descent), despite generally being less effective, can still in some cases produce strong results. I will also try Nadam, which incorporates Nesterov momentum into the Adam algorithm (http://cs229.stanford.edu/proj2015/054_report.pdf), and Adamax, which is similar to Adam and therefore worth comparing.
#
# In addition to the optimizers, I used tensorboard to evaluate different dropout rates and the final number of units in the dense layer.
#
# Note: I used Tensorboard in Google Colab to compare these models directly, but you can also see them written out below
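# The manual sweep over optimizers, dense-layer units, and dropout rates repeats a lot of cells; the bookkeeping can be expressed as a small grid loop. A minimal sketch of the search space only — the real loop would rebuild the Sequential model, compile with each optimizer, fit for 10 epochs, and store the best val_accuracy in place of the placeholder:

```python
import itertools

# Search space mirroring the manual experiments in this notebook.
optimizers = ['adam', 'SGD', 'Nadam', 'Adamax']
dense_units = [128, 256]
dropout_rates = [0.1, 0.2]

results = {}
for opt, units, rate in itertools.product(optimizers, dense_units, dropout_rates):
    # In the real run: build the model with `units` and `rate`, compile with
    # optimizer `opt`, fit, and record max(history.history['val_accuracy']).
    results[(opt, units, rate)] = None  # placeholder for the recorded accuracy

print(len(results))  # 16 configurations
```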
# + id="ojEuT1XJjdxD"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="Afx650FMjdxD"
#https://www.tensorflow.org/api_docs/python/tf/keras/optimizers
model.compile(optimizer='Nadam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="1necCYpSjdxE" outputId="85b7a881-e325-4001-a0da-86d023e9670f"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="R5NSBOEujdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="RXKjqXTijdxE"
model.compile(optimizer='SGD',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="OrN8aXvAjdxE" outputId="ffa0f488-6d0f-48a2-d5bd-3862ea1b7acc"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="fyk5DnScjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="8XuugmQSjdxE"
model.compile(optimizer='Adamax',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="NB1d8kO-jdxE" outputId="5a44810b-b08a-4913-8f6b-580212a2d14f"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="izEUb5e0jdxE"
# Next I will test across different units (between 128 and 256) for the dense layer. I will then test the outputs with a dropout rate of .1
# + id="Z-bA9W8DjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(256, activation='relu'),
layers.Dense(num_classes)
])
# + id="UKhlLrVGjdxE"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="nwLCGd2hjdxE" outputId="5b873a44-1bc7-4f33-cc34-e7e926489d11"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="LY6u-7FujdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(256, activation='relu'),
layers.Dense(num_classes)
])
# + id="o7egZY8UjdxE"
model.compile(optimizer='SGD',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="2HtID-hpjdxE" outputId="be6e24f8-5837-474a-e2fa-f68a059eb1f0"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="XKKuQcitjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(256, activation='relu'),
layers.Dense(num_classes)
])
# + id="HI1lA1rHjdxE"
model.compile(optimizer='Nadam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="ushA1MgRjdxE" outputId="b23a3c91-51c3-4979-b139-5ea19a93c5ee"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="YfsNnElOjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.2),
layers.Flatten(),
layers.Dense(256, activation='relu'),
layers.Dense(num_classes)
])
# + id="cAhpzEojjdxE"
model.compile(optimizer='Adamax',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="6gV7wOgljdxE" outputId="1afcd3dd-eb3a-4958-eee9-1a6edc0dc1d8"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="slnnrc4ljdxE"
# Result
#
# Finally I will test with a dropout rate of 10%
# + id="w_tjdxp9jdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.1),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="G30FWG1RjdxE"
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="Lfdu3lpUjdxE" outputId="f3b1ffc1-5885-4f28-e3ee-acb5f8b9ee24"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="f0DJp72NjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.1),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="qVtxa2ldjdxE"
model.compile(optimizer='SGD',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="lbzDIcdGjdxE" outputId="c62d902b-eefc-47cf-d85a-0546a159a120"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="bbg9txBgjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.1),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="yH7vH9ckjdxE"
model.compile(optimizer='Nadam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="6u1Xp3d3jdxE" outputId="9f75d23d-a9f8-4759-da95-0daf6c0b0e2e"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + id="IgFt1vbmjdxE"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(.1),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + id="49IyF64SjdxE"
model.compile(optimizer='Adamax',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="AiRzPD0XjdxE" outputId="97f327d7-2463-47ef-c9a0-ba057381236b"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + [markdown] id="yzYD3FzQjdxE"
# Okay! So my best results seem to have come from the model with one additional layer (128 filters), valid padding, the relu activation function, and the Nadam optimizer. Below is my final training model and the result of that model on test data!
# + id="Yepw-bDSjdxF"
model = Sequential([
data_augmentation,#my additional data augmentation layer
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(128, 3, padding='valid', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# + colab={"base_uri": "https://localhost:8080/"} id="PaXX-8mxjdxF" outputId="31c77aa8-3ebb-405f-8429-6012c08071a4"
model.summary()
# + id="nQJub7Y8jdxF"
model.compile(optimizer='Nadam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="PUS4Gl_fjdxF" outputId="f6578019-1a34-4b3c-c963-9dd8b0cb677f"
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
# + colab={"base_uri": "https://localhost:8080/", "height": 499} id="H1KJx2HvjdxF" outputId="0330bd54-6789-4f23-8b70-51ed84b45714"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + id="wVMjnhVhjdxF"
#preparing my test data for the model
test_data_dir = "/content/Cell_Images/cell_images/test"
# + id="QKMh_QZtjdxF"
#Set my data directory *** Test data will only be used in the final report to evaluate my model
test_data_dir = "C:/Users/15856/Data 602/FinalProject/Cell_Images/cell_images/test"
test_data_dir = pathlib.Path(test_data_dir)
# + id="M1CxhOKrjdxF"
batch_size = 32
img_height = 200
img_width = 200
# + colab={"base_uri": "https://localhost:8080/"} id="-NHomXgMjdxF" outputId="788c0379-2f29-4e1e-fff1-85596dd86d71"
#https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory
test_ds = tf.keras.preprocessing.image_dataset_from_directory(
test_data_dir,
seed=1212020,
image_size=(img_height, img_width),
batch_size=batch_size)
# + colab={"base_uri": "https://localhost:8080/"} id="ymFi7F4ijdxF" outputId="41192001-4287-4856-da78-0c283b58bc49"
model.evaluate(test_ds)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Machine Learning
# ### Instacart Market Basket Analysis: Feature Set
# ----
# Ryan Alexander Alberts
#
# 7/10/2017
# #### In this notebook, I want to run LightGBM on a provisional set of features and begin training and cross-validation.
#
# ----
#
# * __Create the Training Feature Set:__
# * __Products__
# * (user_id | unique product_id) tuples
# * product lists, totals/avgs.
# * encapsulating recency
# * No. of orders since last occurrence
# * __Customers__
# * order count, recent reorder rate
# * buying behavior - time of day and week
# * Weekday vs. Weekend
# * __Basket Size__
# * max, min, avg. product count per customer
# * variability of product count across customer orders
# * __'None'__
# * 'None' handling
#
# ----
#
# * __Future Topics__
# * weighted avg. product count (timeseries, frequency)
# * order Frequency / cyclicality
# * Macro-level trends in timeseries data, like spikes in product count in the first xx% or last xx% of all customers' orders, corresponding to Summer or holidays
#
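# The "orders since last occurrence" recency feature above can be computed per (user_id, product_id) as the gap between the user's latest order number and the last order containing that product. A minimal pure-Python sketch on toy rows (the values are hypothetical; column names follow the Instacart schema):

```python
# Toy (user_id, order_number, product_id) rows.
rows = [
    (1, 1, 10), (1, 2, 10), (1, 3, 20),
    (2, 1, 10), (2, 2, 30),
]

last_order = {}  # user_id -> max order_number for that user
last_seen = {}   # (user_id, product_id) -> last order_number containing it
for user, order, product in rows:
    last_order[user] = max(last_order.get(user, 0), order)
    last_seen[(user, product)] = max(last_seen.get((user, product), 0), order)

# Orders elapsed since the product last appeared in the user's history.
orders_since = {k: last_order[k[0]] - v for k, v in last_seen.items()}
print(orders_since[(1, 10)])  # 1: user 1's last order is 3, product 10 last seen in order 2
```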
import pandas as pd
import lightgbm as lgb
import re
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_style("whitegrid")
import calendar
# First, let's import requisite files
orders = pd.read_csv('../Instacart_Input/orders.csv')
prior_set = pd.read_csv('../Instacart_Input/order_products__prior.csv')
train_set = pd.read_csv('../Instacart_Input/order_products__train.csv')
aisles = pd.read_csv('../Instacart_Input/aisles.csv')
departments = pd.read_csv('../Instacart_Input/departments.csv')
products = pd.read_csv('../Instacart_Input/products.csv')
# +
orders.set_index('order_id', inplace=True, drop=False)
prior_set = prior_set.join(orders, on='order_id', rsuffix='_')
prior_set.drop('order_id_', inplace=True, axis=1)
temp = pd.DataFrame()
temp['average_days_between_orders'] = orders.groupby('user_id')['days_since_prior_order'].mean().astype(np.float32)
temp['orders'] = orders[orders['eval_set'] == 'prior'].groupby('user_id').size().astype(np.int16)
user_data = pd.DataFrame()
user_data['total_items'] = prior_set.groupby('user_id').size().astype(np.int16)
user_data['all_products'] = prior_set.groupby('user_id')['product_id'].apply(set)
user_data['total_unique_items'] = (user_data.all_products.map(len)).astype(np.int16)
user_data = user_data.join(temp)
user_data['avg_basket_size'] = (user_data.total_items / user_data.orders).astype(np.float32)
user_data.reset_index(inplace=True)
user_data.head(20)
# -
# +
train = orders[orders['eval_set'] == 'train']
train_user_orders = orders[orders['user_id'].isin(train['user_id'].values)]
train_user_orders = train_user_orders.merge(prior_set, on='order_id')
train_user_orders = train_user_orders.merge(user_data, on='user_id')
train_user_orders = train_user_orders.merge(products, on='product_id')
temp = pd.DataFrame(train_user_orders.groupby(['user_id',
'product_id']
).size()).reset_index()
temp.columns = ['user_id', 'product_id', 'usr_order_instances']
train_df = train_user_orders.groupby(['user_id',
'product_id']
).mean().reset_index()
train_df = train_df.merge(temp,
                          on=['user_id',
                              'product_id']
                          )
train_df = train_df.drop(['order_id',
'order_number',
'reordered',
], axis=1)
train_df['order_dow'] = train_df['order_dow'].astype(np.float32)
train_df['order_hour_of_day'] = train_df['order_hour_of_day'].astype(np.float32)
train_df['days_since_prior_order'] = train_df['days_since_prior_order'].astype(np.float32)
train_df['add_to_cart_order'] = train_df['add_to_cart_order'].astype(np.float32)
train_df['avg_basket_size'] = train_df['avg_basket_size'].astype(np.float32)
train_df['aisle_id'] = train_df['aisle_id'].astype(np.int16)
train_df['department_id'] = train_df['department_id'].astype(np.int16)
train_df.head()
# +
# I've previously created 20 test submissions without machine learning algorithms
# and I benefited from starting with the most recent orders to get F1 score 0.365+ (top 50%)
# So I'm including this feature:
# Reorder rates (% of order that includes reordered products) for recent orders
order_reup = train_user_orders.groupby(['user_id', 'order_number']).mean()
last_order = train_user_orders.groupby(['user_id'])['order_number'].max()
d = {}
for user, order in order_reup['reordered'].index.values:
if user not in d:
count = 0
d[user] = 0
if ( (order > 1) & (order >= last_order[user] - 4) ):
d[user] += order_reup['reordered'][(user, order)]
count+=1
    if order == last_order[user] and count:  # guard: users with a single order have count == 0
        d[user] /= count
d
# Add to train_df [Warning: LONG PROCESSING TIME...]
#train_df['recent_reorder_rate'] = 0
#for i in d.keys():
# train_df.loc[train_df.user_id == i, 'recent_reorder_rate'] = d[i]
# -
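# The dict-based loop above is slow on the full data. A vectorized pandas sketch of the
# same quantity (shown on a toy frame, since the real `train_user_orders` is large; the
# column names mirror those used above) might look like:

```python
import pandas as pd

# Toy stand-in for train_user_orders with the columns the loop relies on.
df = pd.DataFrame({
    "user_id":      [1, 1, 1, 1, 2, 2],
    "order_number": [1, 2, 3, 3, 1, 2],
    "reordered":    [0, 1, 1, 0, 0, 1],
})

# Per-(user, order) reorder rate, then keep each user's last 5 orders
# (excluding order 1) and average -- the same quantity as the dict loop.
rate = df.groupby(["user_id", "order_number"])["reordered"].mean().reset_index()
last = rate.groupby("user_id")["order_number"].transform("max")
recent = rate[(rate["order_number"] > 1) & (rate["order_number"] >= last - 4)]
recent_reorder_rate = recent.groupby("user_id")["reordered"].mean()
print(recent_reorder_rate)  # user 1 -> 0.75, user 2 -> 1.0
```

# The resulting Series could then be merged into `train_df` directly instead of the
# row-by-row `.loc` assignment commented out above.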
# ### ---- Questions -----
# * 2-fold cross-validation? i.e. splitting training set into two groups roughly the size of the actual test set, and running 5-fold CV on each?
#
# * How do I know when I'm overfitting? I only have 5 submissions per day, and would like to be able to estimate it without submitting.
#
# * ensemble methods - LightGBM and XGBoost? Using predictions from first model as input? Ranking predictions, max/min/std of predictions?
#
# * Any resources for parameter tuning for LightGBM and XGBoost?
#
# * How can I stratify training data into sub-sets that reflect the general population?
#
# * Should I train using a separate validation set, or is cross-validation per above sufficient?
#
# * libFM and Factorizing machines?
#
# ### ---- Notes -----
# +
#user1 = orders[orders['user_id'] == 1]['order_id'].values
#prior_set[prior_set['order_id'].isin(user1)]
# +
#users['total_items'] = train_user_orders.groupby(['user_id', 'product_id']).size() #[train_user_orders['user_id'] == 1]
#users = pd.DataFrame()
#users['total_items'] = train_user_orders.groupby('product_id').size().astype(np.int16)
#users['product_set'] = train_user_orders.groupby('user_id')['product_id'].apply(set)
#user_array = train_user_orders.groupby('user_id').size().index.values
#for user in user_array:
# users['total_uniqueItems'] = len(np.unique(train_user_orders[train_user_orders['user_id'] == user]['product_id']))
#orders[orders['order_id'] == 1187899]
#user_1 = orders[orders['user_id'] == 1].groupby('order_id').size()
#user_1.index.values
# 20.6M rows if you have unique rows for each (order_id | product) tuple
# vs.
# 8.5M rows if you have unique rows for each (user_id | product) tuple
# User 1 has 18 unique products spread across 10 prior orders (not including train order 11)
#np.unique(train_user_orders[train_user_orders['user_id'] == 1]['product_id'])
# +
#train_df.to_csv('train_df_LightGBM_vXXXXX.csv', index=False)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: IWST
# language: python
# name: iwst
# ---
# + code_folding=[0]
## Import Packages
from __future__ import print_function
import numpy as np
import pandas as pd
from itertools import product
#Astro Software
import astropy.units as units
from astropy.coordinates import SkyCoord
from astropy.io import fits
#Plotting Packages
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from matplotlib import rcParams
import seaborn as sns
from PIL import Image
from yt.config import ytcfg
import yt
import yt.units as u
#Scattering NN
import torch
import torch.nn.functional as F
from torch import optim
from kymatio.torch import Scattering2D
device = "cpu"
#Machine Learning
from sklearn.model_selection import train_test_split
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.decomposition import PCA, FastICA
import skimage
from skimage import filters
from scipy.optimize import curve_fit
from scipy import linalg
from scipy import stats
from scipy.signal import general_gaussian
#I/O
import h5py
import pickle
import glob
import copy
import time
#Plotting Style
# %matplotlib inline
plt.style.use('dark_background')
rcParams['text.usetex'] = False
rcParams['axes.titlesize'] = 20
rcParams['xtick.labelsize'] = 16
rcParams['ytick.labelsize'] = 16
rcParams['legend.fontsize'] = 12
rcParams['axes.labelsize'] = 20
rcParams['font.family'] = 'sans-serif'
#Threading
torch.set_num_threads(2)
from multiprocessing import Pool
import ntpath
import os  # needed by os.path.splitext below
def path_leaf(path):
    head, tail = ntpath.split(path)
    out = os.path.splitext(tail)[0]
    return out
def hd5_open(file_name,name):
f=h5py.File(file_name,'r', swmr=True)
data = f[name][:]
f.close()
return data
from matplotlib.colors import LinearSegmentedColormap
cdict1 = {'red': ((0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)),
'green': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'blue': ((0.0, 0.0, 1.0),
(0.5, 0.0, 0.0),
(1.0, 0.0, 0.0))
}
blue_red1 = LinearSegmentedColormap('BlueRed1', cdict1,N=5000)
from sklearn.preprocessing import StandardScaler
# + code_folding=[]
def DHC_iso_vec(wst,J,L):
(nk, Nd) = np.shape(wst)
S0 = wst[:,0:2]
S1 = wst[:,2:J*L+2]
S2 = np.reshape(wst[:,J*L+3:],(nk,(J*L+1),(J*L+1)))
S1iso = np.zeros((nk,J))
for j1 in range(J):
for l1 in range(L):
S1iso[:,j1] += S1[:,l1*J+j1]
S2iso = np.zeros((nk,J,J,L))
for j1 in range(J):
for j2 in range(J):
for l1 in range(L):
for l2 in range(L):
deltaL = np.mod(l1-l2,L)
S2iso[:,j1,j2,deltaL] += S2[:,l1*J+j1,l2*J+j2]
Sphi1 = np.zeros((nk,J))
for j1 in range(J):
for l1 in range(L):
Sphi1[:,j1] += S2[:,l1*J+j1,L*J]
Sphi2 = np.zeros((nk,J))
for j1 in range(J):
for l1 in range(L):
Sphi2[:,j1] += S2[:,L*J,l1*J+j1]
return np.hstack((S0,S1iso,wst[:,J*L+2].reshape(nk,1),S2iso.reshape(nk,J*J*L),Sphi1,Sphi2,S2[:,L*J,L*J].reshape(nk,1)))
# -
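# As a sanity check on the reduction (assuming J=5, L=8, the values used below), the
# input and output widths of `DHC_iso_vec` can be computed directly from the coefficient
# counts:

```python
# Coefficient counts for the isotropic reduction in DHC_iso_vec.
J, L = 5, 8
n_in  = 2 + J*L + 1 + (J*L + 1)**2   # S0, S1 (incl. phi), full S2 block
n_out = 2 + J + 1 + J*J*L + 2*J + 1  # S0, S1iso, S1 phi, S2iso, Sphi1/Sphi2, phi-phi
print(n_in, n_out)  # 1724 219
```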
mnist_train_y = hd5_open('../../DHC/scratch_AKS/mnist_train_y.h5','main/data')
mnist_test_y = hd5_open('../../DHC/scratch_AKS/mnist_test_y.h5','main/data')
mnist_DHC_out_sizetrain = hd5_open('../from_cannon/2021_01_21/mnist_DHC_train_ang_1_1.h5','main/data')
mnist_DHC_out_sizetest = hd5_open('../from_cannon/2021_01_21/mnist_DHC_test_ang_1_1.h5','main/data')
mnist_DHC_out_size_iso = DHC_iso_vec(mnist_DHC_out_sizetest,5,8)
mnist_DHC_out_size_iso_train = DHC_iso_vec(mnist_DHC_out_sizetrain,5,8)
M = 100
angle_array = [i for i in np.linspace(2*np.pi/M,2*np.pi,M)]
angle_test = np.tile(angle_array,10000)
mnist_DHC_out_sizetest.shape, angle_test.shape
angle_test[100:200]
mnist_test_y[1]
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = mnist_DHC_out_sizetest[100:200,2+2*6+0]
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# -
# The fact that the peak is rotated relative to the basis suggests it carries information about the absolute rather than the relative angle. Maybe there is some hope of extracting this parameter from such a hyper-sweep?
from scipy import interpolate
x = np.arange(0, 10)
y = np.exp(-x/3.0)
f = interpolate.interp1d(x, y)
mnist_DHC_out_sizetest[100:200,:].shape,(angle_test[100:200]*180/np.pi).shape
f = interpolate.interp1d(angle_test[100:200]*180/np.pi, mnist_DHC_out_sizetest[100:200,:],axis=0)
f([5,10]).shape
# A quick sanity check: it seems the angles are not spaced exactly by pi/8... not a big deal, because it is clear we need more L in order to do the interpolation well.
for i in range(100,200,6):
print(i)
angle_test[range(100,200,6)]
f = interpolate.interp1d(angle_test[range(100,200,6)]*180/np.pi, mnist_DHC_out_sizetest[range(100,200,6),:],axis=0,fill_value='extrapolate')
ext_y = f(angle_test[100:200]*180/np.pi)
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = mnist_DHC_out_sizetest[100:200,2+2*6+0]
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
data1 = ext_y[:,2+2*6+0]
plt.scatter(angle_test[100:200]*180/np.pi,data1,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# -
temp = np.abs(data-data1)/np.abs(data)
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = temp
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# -
temp = np.abs(mnist_DHC_out_sizetest[100:200,:]-ext_y)/np.abs(mnist_DHC_out_sizetest[100:200,:])
np.mean(temp,axis=1).shape
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = np.mean(temp,axis=1)
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = np.mean(temp,axis=1)
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.ylim([0,0.2])
plt.show()
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = np.median(temp,axis=1)
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# -
temp = np.abs(mnist_DHC_out_sizetest[100:200,:]-ext_y)/np.max([np.abs(mnist_DHC_out_sizetest[100:200,:]),np.abs(ext_y)],axis=0)
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = np.mean(temp,axis=1)
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# -
rad = np.array([2**r for r in range(1,8)])
rad
6/(rad*1*np.pi)
# +
fig = plt.figure(figsize=(8,8),dpi=150)
ax = fig.add_subplot(111)
data = mnist_DHC_out_sizetest[100:200,5+2*6+0]
plt.scatter(angle_test[100:200]*180/np.pi,data,s=1)
min_y = data.min()
max_y = data.max()
for i in range(0,8):
plt.vlines(i*180/8,min_y,max_y,'w')
plt.title('Example S1 Coeff Angle Dependence for a "2"')
plt.xlabel('Rotation Angle (deg)')
plt.xlim([0,180])
plt.show()
# -
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# ## 6. Nonlinear ensemble filtering for the Lorenz-63 problem
# In this notebook, we apply the stochastic map filter developed in Spantini et al. [5] to the Lorenz-63 problem.
#
# References:
#
# [1] Evensen, G., 1994. Sequential data assimilation with a nonlinear quasi‐geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5), pp.10143-10162.
#
# [2] Asch, M., Bocquet, M. and Nodet, M., 2016. Data assimilation: methods, algorithms, and applications. Society for Industrial and Applied Mathematics.
#
# [3] Bishop, C.H., Etherton, B.J. and Majumdar, S.J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly weather review, 129(3), pp.420-436.
#
# [4] Lorenz, E.N., 1963. Deterministic nonperiodic flow. Journal of atmospheric sciences, 20(2), pp.130-141.
#
# [5] Spantini, A., Baptista, R. and Marzouk, Y., 2019. Coupling techniques for nonlinear ensemble filtering. arXiv preprint arXiv:1907.00389.
# ### The basic steps
# To carry out sequential inference in `TransportBasedInference`, we need to carry out a few basic steps:
# * **Specify the problem**: Define the state-space model: initial condition, dynamical and observation models (including process and observation noise)
# * **Specify the inflation parameters**: Determine the levels of covariance inflation to properly balance the dynamical system and the observations from the truth system
# * **Specify the filter**: Choose the ensemble filter to assimilate the observations in the state estimate
# * **Perform the sequential inference**
#
# We will go through all of these here.
using Revise
using LinearAlgebra
using TransportBasedInference
using Statistics
using Distributions
using OrdinaryDiffEq
# Load some packages to make nice figures
# +
using Plots
default(tickfont = font("CMU Serif", 18),
titlefont = font("CMU Serif", 18),
guidefont = font("CMU Serif", 18),
legendfont = font("CMU Serif", 18),
colorbar_tickfontsize = 18,
colorbar_titlefontsize = 18,
annotationfontsize = 18,
annotationfontfamily = "CMU Serif",
grid = false)
pyplot()
PyPlot.rc("text", usetex = "true")
PyPlot.rc("font", family = "CMU Serif")
using LaTeXStrings
using ColorSchemes
# -
# The Lorenz-63 model is a three-dimensional system that models atmospheric convection [4]. This system is a classical benchmark problem in data assimilation. The state $\boldsymbol{x} = (x_1, x_2, x_3)$ is governed by the following set of ordinary differential equations:
#
# \begin{equation}
# \begin{aligned}
# &\frac{\mathrm{d} x_1}{\mathrm{d} t}=\sigma(x_2-x_1)\\
# &\frac{\mathrm{d} x_2}{\mathrm{d} t}=x_1(\rho-x_2)-x_2\\
# &\frac{\mathrm{d} x_3}{\mathrm{d} t}=x_1 x_2-\beta x_3,
# \end{aligned}
# \end{equation}
#
# where $\sigma = 10, \beta = 8/3, \rho = 28$. For these values, the system is chaotic and behaves like a strange attractor. We integrate this system of ODEs with time step $\Delta t_{dyn} = 0.05$. The state is fully observed $h(t,\boldsymbol{x}) = \boldsymbol{x}$ with $\Delta t_{obs}=0.1$. The initial distribution $\pi_{\mathsf{X}_0}$ is the standard Gaussian. The process noise is Gaussian with zero mean and covariance $10^{-4}\boldsymbol{I}_3$. The measurement noise has a Gaussian distribution with zero mean and covariance $\theta^2\boldsymbol{I}_3$ where $\theta^2 = 4.0$.
# ### Simple twin-experiment
# Define the dimension of the state and observation vectors
Nx = 3
Ny = 3
# Define the time steps $\Delta t_{dyn}, \Delta t_{obs}$ of the dynamical and observation models. Observations from the truth are assimilated every $\Delta t_{obs}$.
Δtdyn = 0.05
Δtobs = 0.2
# Define the time span of interest
t0 = 0.0
tf = 1000.0
Tf = ceil(Int64, (tf-t0)/Δtobs)
# Define the distribution for the initial condition $\pi_{\mathsf{X}_0}$
π0 = MvNormal(zeros(Nx), Matrix(1.0*I, Nx, Nx))
# We construct the state-space representation `F` of the system composed of the deterministic part of the dynamical and observation models.
#
# The dynamical model is provided by the right hand side of the ODE to solve. For a system of ODEs, we will prefer an in-place syntax `f(du, u, p, t)`, where `p` are parameters of the model.
# We rely on `OrdinaryDiffEq` to integrate the dynamical system with the Tsitouras 5/4 Runge-Kutta method with adaptive time marching.
#
# We assume that the state is fully observable, i.e. $h(x, t) = x$.
h(x, t) = x
F = StateSpace(lorenz63!, h)
# Define the additive inflation for the dynamical and observation models
# +
### Process and observation noise
σx = 1e-6
σy = 2.0
ϵx = AdditiveInflation(Nx, zeros(Nx), σx)
ϵy = AdditiveInflation(Ny, zeros(Ny), σy)
# -
model = Model(Nx, Ny, Δtdyn, Δtobs, ϵx, ϵy, π0, 0, 0, 0, F);
# To perform the nonlinear ensemble filtering, we first need to estimate the transport map $\boldsymbol{S}^{\boldsymbol{\mathcal{X}}}$.
#
# In this notebook, we are going to assume that the basis of features does not change over time, but solely the coefficients $c_{\boldsymbol{\alpha}}$ of the expansion.
#
#
# To estimate the map, we generate joint samples $(\boldsymbol{y}^i, \boldsymbol{x}^i), \; i = 1, \ldots, N_e$ where $\{\boldsymbol{x}^i\}$ are i.i.d. samples from pushforward of the standard Gaussian distribution by the flow of the Lorenz-63 system.
# Time span
tspan = (0.0, tf)
# Set initial condition of the true system
x0 = rand(model.π0);
data = generate_lorenz63(model, x0, Tf);
# Initialize the ensemble matrix `X` $\in \mathbb{R}^{(N_y + N_x) \times N_e}$.
# +
# Ensemble size
Ne = 160
X0 = zeros(model.Ny + model.Nx, Ne)
# Generate the initial conditions for the state.
viewstate(X0, model.Ny, model.Nx) .= rand(model.π0, Ne)
# -
# Use the stochastic ensemble Kalman filter for the spin-up phase. There is no reason to use the stochastic map filter over the first cycles, as the performance of the inference is determined by the quality of the ensemble, not the quality of the filter.
enkf = StochEnKF(x->x, ϵy, Δtdyn, Δtobs)
Xenkf = seqassim(F, data, Tf, model.ϵx, enkf, deepcopy(X0), model.Ny, model.Nx, t0);
tspin = 500.0
Tspin = ceil(Int64, (tspin-t0)/Δtobs)
# Time average root-mean-squared error
rmse_enkf = mean(map(i->norm(data.xt[:,i]-mean(Xenkf[i+1]; dims = 2))/sqrt(Nx), Tspin:Tf))
# Initialize the ensemble matrix for the radial stochastic map filter
Xspin = vcat(zeros(Ny, Ne), deepcopy(Xenkf[Tspin+1]))
tsmf = 1000.0
Tsmf = ceil(Int64, (tsmf-tspin)/Δtobs)
# Initialize the structure of the map
# +
p = 2
order = [[-1], [p; p], [-1; p; 0], [-1; p; p; 0]]
# order = [[-1], [-1; -1], [-1; -1; -1], [p; -1; -1 ;p], [-1; p; -1; p; p], [-1; -1; p; p; p; p]]
# parameters of the radial map
γ = 2.0
λ = 0.0
δ = 1e-8
κ = 10.0
β = 1.0
dist = Float64.(metric_lorenz(3))
idx = vcat(collect(1:Ny)',collect(1:Ny)')
smf = SparseRadialSMF(x->x, F.h, β, ϵy, order, γ, λ, δ, κ,
Ny, Nx, Ne,
Δtdyn, Δtobs,
dist, idx; islocalized = true)
# -
Xsmf = seqassim(F, data, Tsmf, model.ϵx, smf, deepcopy(Xspin), model.Ny, model.Nx, tspin);
rmse_smf = mean(map(i->norm(data.xt[:,Tspin+i]-mean(Xsmf[i+1]; dims = 2))/sqrt(Nx), 1:Tsmf))
(rmse_enkf-rmse_smf)/rmse_enkf
# +
# Plot the trajectories
nb = 1
ne = Tspin+Tsmf
Δ = 50
plt = plot(layout = grid(3,1), xlim = (-Inf, Inf), ylim = (-Inf, Inf), xlabel = L"t",
size = (600, 800))
for i =1:3
plot!(plt[i,1], data.tt[nb:Δ:ne], data.xt[i,nb:Δ:ne], linewidth = 3, color = :teal,
ylabel = latexstring("x_"*string(i)), legend = :bottomleft, label = "True")
plot!(plt[i,1], data.tt[nb:Δ:ne], mean_hist(vcat(Xenkf[1:Tspin+1], Xsmf[2:end]))[i,1+nb:Δ:1+ne], linewidth = 3, grid = false,
color = :orangered2, linestyle = :dash, label = "sEnKF")
scatter!(plt[i,1], data.tt[nb:Δ:ne], data.yt[i,nb:Δ:ne], linewidth = 3, color = :grey,
markersize = 5, alpha = 0.5, label = "Observation")
vline!(plt[i,1], [tspin], color = :grey2, linestyle = :dash, label = "")
end
plt
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
############Training
import tensorflow as tf
from keras.backend import tensorflow_backend
import os
from train import train
os.environ["CUDA_VISIBLE_DEVICES"]="0"
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
session = tf.Session(config=config)
tensorflow_backend.set_session(session)
#epochs>100 will be enough, but slower
#Training may be unstable due to random initialization,
#Try it again if out2_auc doesn't increase.
train(iteration=3, DATASET='STARE', # DRIVE, CHASEDB1 or STARE
batch_size=32, epochs=200)
# +
############Test
import tensorflow as tf
from keras.backend import tensorflow_backend
import os
from predict import predict
os.environ["CUDA_VISIBLE_DEVICES"]="0"
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
session = tf.Session(config=config)
tensorflow_backend.set_session(session)
#stride_size = 3 will be better, but slower
predict(batch_size=32, epochs=200, iteration=3, stride_size=3, DATASET='STARE')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import seaborn as sns  # needed by sns.heatmap below
from matplotlib import pyplot as plt
from matplotlib import animation
nx = 50
ny = 50
# +
fig = plt.figure()
data = np.random.rand(nx, ny)
sns.heatmap(data, vmax=.8, square=True)
def init():
sns.heatmap(np.zeros((nx, ny)), vmax=.8, square=True)
def animate(i):
data = np.random.rand(nx, ny)
sns.heatmap(data, vmax=.8, square=True)
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=20, repeat = False)
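# Note that calling `sns.heatmap` inside `animate` stacks a fresh colorbar on every
# frame. A common workaround (a self-contained sketch, not the only way) is to create a
# single `imshow` artist once and update its data in place; `to_jshtml` then renders an
# inline player without needing an external encoder:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; safe outside a notebook too
import matplotlib.pyplot as plt
from matplotlib import animation

nx = ny = 10
fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(nx, ny), vmin=0, vmax=.8)
fig.colorbar(im)  # one colorbar, created exactly once

def animate(i):
    im.set_data(np.random.rand(nx, ny))  # update the existing artist in place
    return [im]

anim = animation.FuncAnimation(fig, animate, frames=5, blit=True)
html = anim.to_jshtml()  # embeddable JavaScript player
```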
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:qcodes]
# language: python
# name: conda-env-qcodes-py
# ---
# # Implementing doND using the dataset
# +
from functools import partial
import numpy as np
from qcodes.dataset.database import initialise_database
from qcodes.dataset.experiment_container import new_experiment
from qcodes.tests.instrument_mocks import DummyInstrument
from qcodes.dataset.measurements import Measurement
from qcodes.dataset.plotting import plot_by_id
# -
initialise_database() # just in case no database file exists
new_experiment("doNd-tutorial", sample_name="no sample")
# First we borrow the dummy instruments from the contextmanager notebook to have something to measure.
# +
# preparatory mocking of physical setup
dac = DummyInstrument('dac', gates=['ch1', 'ch2'])
dmm = DummyInstrument('dmm', gates=['v1', 'v2'])
# -
# and we'll make a 2D gaussian to sample from/measure
def gauss_model(x0: float, y0: float, sigma: float, noise: float=0.0005):
"""
Returns a generator sampling a gaussian. The gaussian is
normalised such that its maximal value is simply 1
"""
while True:
(x, y) = yield
        model = np.exp(-((x0-x)**2+(y0-y)**2)/2/sigma**2)  # maximal value is exactly 1, matching the docstring
noise = np.random.randn()*noise
yield model + noise
# +
# and finally wire up the dmm v1 to "measure" the gaussian
gauss = gauss_model(0.1, 0.2, 0.25)
next(gauss)
def measure_gauss(dac):
val = gauss.send((dac.ch1.get(), dac.ch2.get()))
next(gauss)
return val
dmm.v1.get = partial(measure_gauss, dac)
# -
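# `gauss_model` relies on a two-step yield/send handshake that is easy to trip over:
# the generator alternates between a receiving `yield` and a producing `yield`. A
# minimal standalone sketch of the same protocol (with a hypothetical `echo_model`,
# not part of the notebook's setup):

```python
def echo_model():
    while True:
        (x, y) = yield   # receive the set point via .send()
        yield x + y      # hand back the "measured" value

gen = echo_model()
next(gen)                # prime the generator up to the first (receiving) yield
val = gen.send((2, 3))   # resumes, computes, stops at the producing yield
next(gen)                # advance back to the receiving yield, ready for the next point
print(val)  # 5
```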
# Now let's reimplement the qdev-wrapper do1d function, which can measure one or more parameters as a function of another parameter. This is more or less as simple as you would expect.
#
def do1d(param_set, start, stop, num_points, delay, *param_meas):
meas = Measurement()
meas.register_parameter(param_set) # register the first independent parameter
output = []
param_set.post_delay = delay
# do1D enforces a simple relationship between measured parameters
# and set parameters. For anything more complicated this should be reimplemented from scratch
for parameter in param_meas:
meas.register_parameter(parameter, setpoints=(param_set,))
output.append([parameter, None])
with meas.run() as datasaver:
for set_point in np.linspace(start, stop, num_points):
param_set.set(set_point)
for i, parameter in enumerate(param_meas):
output[i][1] = parameter.get()
datasaver.add_result((param_set, set_point),
*output)
dataid = datasaver.run_id # convenient to have for plotting
return dataid
dataid = do1d(dac.ch1, 0, 1, 10, 0.01, dmm.v1, dmm.v2)
axes, cbaxes = plot_by_id(dataid)
def do2d(param_set1, start1, stop1, num_points1, delay1,
param_set2, start2, stop2, num_points2, delay2,
*param_meas):
# And then run an experiment
meas = Measurement()
meas.register_parameter(param_set1)
param_set1.post_delay = delay1
meas.register_parameter(param_set2)
    param_set2.post_delay = delay2
output = []
for parameter in param_meas:
meas.register_parameter(parameter, setpoints=(param_set1,param_set2))
output.append([parameter, None])
with meas.run() as datasaver:
for set_point1 in np.linspace(start1, stop1, num_points1):
param_set1.set(set_point1)
for set_point2 in np.linspace(start2, stop2, num_points2):
param_set2.set(set_point2)
for i, parameter in enumerate(param_meas):
output[i][1] = parameter.get()
datasaver.add_result((param_set1, set_point1),
(param_set2, set_point2),
*output)
dataid = datasaver.run_id # convenient to have for plotting
return dataid
dataid = do2d(dac.ch1, -1, 1, 100, 0.01,
dac.ch2, -1, 1, 100, 0.01,
dmm.v1, dmm.v2)
axes, cbaxes = plot_by_id(dataid)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="rn3plA2Ykjtm" executionInfo={"status": "ok", "timestamp": 1604028959335, "user_tz": -60, "elapsed": 2166, "user": {"displayName": "Paolo Maione", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8WzXBalf8l5zWT1m3zDN4K8CMQVnssgsQR14FaI8=s64", "userId": "02326155032486404430"}} outputId="bbae46d4-abab-43a3-bfc2-bc927ccedee4" colab={"base_uri": "https://localhost:8080/"}
import os
os.chdir(r"/content/drive/My Drive/TesiUNINA/Colab Notebooks/")
# !git clone https://github.com/naoto0804/chainer-cyclegan.git
# + id="r8DwIPh3k_za" executionInfo={"status": "ok", "timestamp": 1604058919942, "user_tz": -60, "elapsed": 37717, "user": {"displayName": "Paolo Maione", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8WzXBalf8l5zWT1m3zDN4K8CMQVnssgsQR14FaI8=s64", "userId": "02326155032486404430"}} outputId="3e8a5807-947a-4cb6-c904-2cef609d1d23" colab={"base_uri": "https://localhost:8080/"}
import os
os.chdir(r"/content/drive/My Drive/TesiUNINA/Colab Notebooks/chainer-cyclegan")
# !bash ./datasets/download_cyclegan_dataset.sh horse2zebra
# + id="fiDDWozA75MZ"
# + id="Y9Zx1kHImgCo" executionInfo={"status": "ok", "timestamp": 1604062596015, "user_tz": -60, "elapsed": 7844, "user": {"displayName": "Paolo Maione", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8WzXBalf8l5zWT1m3zDN4K8CMQVnssgsQR14FaI8=s64", "userId": "02326155032486404430"}} outputId="2b21397c-e5c1-445a-aa9a-4115ccf1f36a" colab={"base_uri": "https://localhost:8080/"}
# !pip install chainercv
# + id="ybYcga6Fk6lC" executionInfo={"status": "ok", "timestamp": 1604100409476, "user_tz": -60, "elapsed": 30133928, "user": {"displayName": "Paolo Maione", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8WzXBalf8l5zWT1m3zDN4K8CMQVnssgsQR14FaI8=s64", "userId": "02326155032486404430"}} outputId="3e7ae214-3c09-44e7-c0e6-983afb0f4cdb" colab={"base_uri": "https://localhost:8080/"}
import os
os.chdir(r"/content/drive/My Drive/TesiUNINA/Colab Notebooks/chainer-cyclegan")
# !python train.py --root "/content/drive/My Drive/TesiUNINA/Colab Notebooks/tentativo5/pytorch-CycleGAN-and-pix2pix/datasets/DCE_MRI_breastCancer_noBM_normV2_resized"
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from astropy.table import Table
import sys
sys.path.insert(0, '..')
from a_download_decals import download_decals_settings as settings
import shutil
# -
joint_catalog = Table.read(settings.upload_catalog_loc)
print(len(joint_catalog))
print(joint_catalog.colnames)
large = joint_catalog['petrotheta'] > 30.
close = joint_catalog['z'] < 0.01
ready = joint_catalog['fits_ready'] & joint_catalog['fits_filled'] & joint_catalog['png_ready']
pretty = joint_catalog[large & close & ready]
print(len(pretty))
print(pretty['iauname'])
print(pretty['png_loc'])
# +
target_dir = '/data/temp/outreach/pretty_galaxies'
for galaxy in pretty:
shutil.copy(galaxy['png_loc'], target_dir + '/' + galaxy['iauname'] + '.png')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Image classification using CNN
# ## Load the data
# +
import pickle
import matplotlib.pyplot as plt
import tensorflow as tf
from os.path import join
from sklearn.preprocessing import OneHotEncoder
import numpy as np
def loadCifarData(basePath):
trainX = []
testX = []
trainY = []
testY = []
"""Load training data"""
for i in range(1, 6):
with open(join(basePath, "data_batch_%d" %i), "rb") as f:
dictionary = pickle.load(f, encoding = 'bytes')
trainX.extend(dictionary[b'data'])
trainY.extend(dictionary[b'labels'])
with open(join(basePath, "test_batch"), "rb") as f:
dictionary = pickle.load(f, encoding = 'bytes')
testX.extend(dictionary[b'data'])
testY.extend(dictionary[b'labels'])
return trainX, trainY, testX, testY
def toImage(array, rows = 32, columns = 32):
return array.reshape(3, rows, columns).transpose([1, 2, 0])
def toData(img, rows = 32, columns = 32):
    # invert toImage: the inverse of the permutation [1, 2, 0] is [2, 0, 1]
    return img.transpose([2, 0, 1]).flatten()
def plotImages(rows, columns, data, convert = True):
fig, ax = plt.subplots(nrows=rows, ncols=columns)
if rows == 1:
ax = [ax]
if columns == 1:
ax = [ax]
index = 0
for row in ax:
for col in row:
if convert:
col.imshow(toImage(data[index]))
else:
col.imshow(data[index])
index = index + 1
# -
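# Because the axis permutations are easy to get wrong, a quick self-contained
# round-trip check (helpers restated here, assuming 32x32 RGB vectors) confirms that
# `toData` inverts `toImage`:

```python
import numpy as np

def toImage(array, rows=32, columns=32):
    return array.reshape(3, rows, columns).transpose([1, 2, 0])

def toData(img, rows=32, columns=32):
    # the inverse permutation of [1, 2, 0] is [2, 0, 1]
    return img.transpose([2, 0, 1]).flatten()

vec = np.arange(3 * 32 * 32, dtype=np.int64)
roundtrip_ok = np.array_equal(toData(toImage(vec)), vec)
print(roundtrip_ok)  # True
```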
trainRawX, trainRawY, testX, testY = loadCifarData("Data")
encoder = OneHotEncoder()
trainRawY = encoder.fit_transform(np.array(trainRawY).reshape(-1,1)).todense()
testY = encoder.transform(np.array(testY).reshape(-1,1)).todense()
plotImages(3, 3, trainRawX)
# ## Data Augmentation
# ### Flip images
# +
import numpy as np
def flipImage(srcImage):
flippedImages = []
flippedImages.append(np.fliplr(srcImage))
#flippedImages.append(np.flipud(srcImage))
#flippedImages.append(np.flipud(np.fliplr(srcImage)))
return flippedImages
# -
flipped = flipImage(toImage(trainRawX[1]))
flipped.append(toImage(trainRawX[1]))
plotImages(1, 2, flipped, False)
# ### Change Brightness
# +
import cv2
def changeBrightness(image):
image = cv2.cvtColor(image,cv2.COLOR_RGB2HSV)
image = np.array(image, dtype = np.float64)
randomBrightness = .5+np.random.uniform()
image[:,:,2] = image[:,:,2]*randomBrightness
image[:,:,2][image[:,:,2]>255] = 255
image = np.array(image, dtype = np.uint8)
image = cv2.cvtColor(image,cv2.COLOR_HSV2RGB)
return image
# -
noisyImage = changeBrightness(toImage(trainRawX[1]))
plotImages(1, 2, [toImage(trainRawX[1]), noisyImage], False)
# ### Augment Image
def augmentImage(imageVector):
augmentedImages = []
rawImages = []
image = toImage(imageVector)
flippedImages = flipImage(image)
flippedImages.append(image)
coinTossOutcome = np.random.binomial(1, 0.5, len(flippedImages))
for img, toss in zip(flippedImages, coinTossOutcome):
if toss == 1:
img = changeBrightness(img)
augmentedImages.append(img)
rawImages.append(toData(img))
return augmentedImages, rawImages
img, imgRaw = augmentImage(trainRawX[211])
plotImages(1, 2, img, False)
# ## Batch Data Iterator
# +
from random import shuffle
def batchIterator(x, y, batchSize, batchCount):
size = len(x)
if batchSize * batchCount > size:
raise ValueError("Change batch size or change batch count")
indices = list(range(0, size))
shuffle(indices)
indices = indices[0:batchSize * batchCount]
batches = np.array_split(indices, batchCount)
for batch in batches:
yield (x[batch], y[batch])
# -
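# A quick check of the iterator contract on toy data (using a standalone copy of the function above): it should yield exactly `batchCount` disjoint random batches of `batchSize` samples each.

```python
import numpy as np
from random import shuffle

def batch_iterator(x, y, batch_size, batch_count):
    # Yields batch_count disjoint random batches of batch_size samples each.
    size = len(x)
    if batch_size * batch_count > size:
        raise ValueError("Change batch size or change batch count")
    indices = list(range(size))
    shuffle(indices)
    indices = indices[:batch_size * batch_count]
    for batch in np.array_split(indices, batch_count):
        yield x[batch], y[batch]

x = np.arange(20).reshape(10, 2)
y = np.arange(10)
batches = list(batch_iterator(x, y, batch_size=3, batch_count=3))
assert len(batches) == 3
assert all(bx.shape == (3, 2) for bx, _ in batches)
```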
# ## Prepare data for training
# +
trainX = []
trainY = []
for x, y in zip(trainRawX, trainRawY):
rawAugmentedImages = augmentImage(x)[0]
trainX.extend(rawAugmentedImages)
target = [y for i in range(0, len(rawAugmentedImages))]
trainY.extend(target)
# -
print(len(trainRawX))
print(len(trainX))
print(trainRawY.shape)
print(len(trainY))
trainX = np.stack(trainX, axis=0)
trainY = np.stack(trainY, axis=0)
# +
processedTestX = []
processedTestY = []
for x, y in zip(testX, testY):
processedTestY.append(y)
processedTestX.append(toImage(x))
processedTestX = np.stack(processedTestX, axis=0)
processedTestY = np.stack(processedTestY, axis=0)
# -
# ### Helper methods
def createConvolutionLayer(inputLayer, kernelHeight, kernelWidth, kernelCount, strideX, strideY, name):
"""This will create a four dimensional tensor
In this tensor the first and second dimension define the kernel height and width
The third dimension define the channel size. If the input layer is
first layer in neural network then the channel size will be 3 in case of RGB images
else 1 if images are grey scale. Furthermore if the input layer is Convolution layer
then the channel size should be no of kernels in previous layer"""
channelSize = int(inputLayer.get_shape()[-1])
weights = tf.Variable(tf.truncated_normal([kernelHeight, kernelWidth, channelSize, kernelCount], stddev=0.03))
bias = tf.Variable(tf.constant(0.05, shape=[kernelCount]))
"""Stride is also 4 dimensional tensor
The first and last values should be 1 as they represent the image index and
chanel size padding. Second and Third index represent the X and Y strides"""
layer = tf.nn.conv2d(input = inputLayer, filter = weights, padding='SAME',
strides = [1, strideX, strideY, 1], name = name) + bias
return layer
def flattenLayer(inputLayer, name):
"""Flatten layer. The first component is image count which is useless"""
flattenedLayer = tf.reshape(inputLayer, [-1, inputLayer.get_shape()[1:].num_elements()], name=name)
return flattenedLayer
def fullyConnectedLayer(inputLayer, outputLayerCount):
weights = tf.Variable(tf.truncated_normal(
[int(inputLayer.get_shape()[1]), outputLayerCount], stddev=0.03))
bias = tf.Variable(tf.constant(0.05, shape=[outputLayerCount]))
layer = tf.matmul(inputLayer, weights) + bias
return layer
def batchNormalization(inputLayer, isTraining, name):
beta = tf.Variable(tf.constant(0.0, shape=[inputLayer.get_shape()[-1]]), trainable=True)
gamma = tf.Variable(tf.constant(1.0, shape=[inputLayer.get_shape()[-1]]), name='gamma', trainable=True)
batchMean, batchVariance = tf.nn.moments(inputLayer, [0,1,2], name='moments')
ema = tf.train.ExponentialMovingAverage(decay=0.9)
def meanVarianceUpdate():
emaOp = ema.apply([batchMean, batchVariance])
with tf.control_dependencies([emaOp]):
return tf.identity(batchMean), tf.identity(batchVariance)
mean, var = tf.cond(isTraining, meanVarianceUpdate, lambda: (ema.average(batchMean), ema.average(batchVariance)))
normed = tf.nn.batch_normalization(inputLayer, mean, var, beta, gamma, 1e-3, name=name)
return normed
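# At its core, `tf.nn.batch_normalization` computes `gamma * (x - mean) / sqrt(var + eps) + beta`. A numpy sketch of the same computation over the axes used above (batch, height, width — one mean/variance per channel):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-3):
    # Moments over batch, height and width, matching
    # tf.nn.moments(inputLayer, [0, 1, 2]) in the function above.
    mean = x.mean(axis=(0, 1, 2))
    var = x.var(axis=(0, 1, 2))
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.randn(4, 8, 8, 2) * 5 + 3  # NHWC batch, far from zero mean
out = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
# After normalization each channel is approximately zero-mean, unit-variance.
assert np.allclose(out.mean(axis=(0, 1, 2)), 0, atol=1e-6)
assert np.allclose(out.std(axis=(0, 1, 2)), 1, atol=1e-1)
```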
def log_histogram(writer, tag, values, step, bins=50):
# Convert to a numpy array
values = np.array(values)
# Create histogram using numpy
counts, bin_edges = np.histogram(values, bins=bins)
# Fill fields of histogram proto
hist = tf.HistogramProto()
hist.min = float(np.min(values))
hist.max = float(np.max(values))
hist.num = int(np.prod(values.shape))
hist.sum = float(np.sum(values))
hist.sum_squares = float(np.sum(values**2))
# Requires equal number as bins, where the first goes from -DBL_MAX to bin_edges[1]
# See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/summary.proto#L30
# Thus, we drop the start of the first bin
bin_edges = bin_edges[1:]
# Add bin edges and counts
for edge in bin_edges:
hist.bucket_limit.append(edge)
for c in counts:
hist.bucket.append(c)
# Create and write Summary
summary = tf.Summary(value=[tf.Summary.Value(tag=tag, histo=hist)])
writer.add_summary(summary, step)
writer.flush()
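# The key subtlety above is the bucket-edge bookkeeping: numpy returns `bins + 1` edges, while `HistogramProto` wants exactly one limit per count (the first bucket implicitly starts at `-DBL_MAX`), so the leftmost edge is dropped. In numbers:

```python
import numpy as np

values = np.random.randn(1000)
counts, bin_edges = np.histogram(values, bins=50)
# numpy returns bins + 1 edges; the proto wants one limit per count.
assert len(bin_edges) == len(counts) + 1
bucket_limits = bin_edges[1:]          # drop the leftmost edge
assert len(bucket_limits) == len(counts) == 50
assert counts.sum() == len(values)     # every value lands in some bucket
```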
# ### Define model
# +
"""Input is 4 dimensional tensor -1 so that the no of images can be infered on itself"""
inputLayer = tf.placeholder(tf.float32, [None, 32, 32, 3], name="inputLayer")
yTrue = tf.placeholder(tf.float32, shape=[None, 10], name="yTrue")
isTraining = tf.placeholder(tf.bool, [])
probability = tf.placeholder(tf.float32)
convolutionLayer1 = createConvolutionLayer(inputLayer, 2, 2, 32, 1, 1, "convolutionLayer1")
reluActivated1 = tf.nn.relu(convolutionLayer1, name = "relu1")
poolingLayer1 = tf.layers.max_pooling2d(inputs=reluActivated1, pool_size=[2, 2],
strides = [1, 1], padding='SAME', name="poolingLayer1")
bn1 = batchNormalization(poolingLayer1, isTraining, "batchNormalization1")
dropout1 = tf.nn.dropout(bn1, keep_prob = probability)
convolutionLayer2 = createConvolutionLayer(dropout1, 2, 2, 20, 1, 1, "convolutionLayer2")
reluActivated2 = tf.nn.relu(convolutionLayer2, name = "relu2")
poolingLayer2 = tf.layers.max_pooling2d(inputs=reluActivated2, pool_size=[2, 2],
strides = [2, 2], padding='SAME', name="poolingLayer2")
bn2 = batchNormalization(poolingLayer2, isTraining, "batchNormalization2")
dropout2 = tf.nn.dropout(bn2, keep_prob = probability)
flattened = flattenLayer(dropout2, name = "flattenedLayer")
fc1 = fullyConnectedLayer(flattened, 1000)
reluActivated3 = tf.nn.relu(fc1, name = "relu3")
fc2 = fullyConnectedLayer(reluActivated3, 500)
reluActivated4 = tf.nn.relu(fc2, name = "relu4")
output = fullyConnectedLayer(reluActivated4, 10)
# -
# ### Define Predictions
predictions = tf.argmax(tf.nn.softmax(output), axis = 1)
actual = tf.argmax(yTrue, axis = 1)
# ### Define loss function and specify the optimizer
loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels = yTrue)
costFunction = tf.reduce_mean(loss)
optimizer = tf.train.GradientDescentOptimizer(1e-2).minimize(costFunction)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predictions, actual), tf.float32))
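# The accuracy graph above is just "fraction of rows where the argmax of the logits matches the argmax of the one-hot target". A numpy sketch (softmax is monotone, so the argmax of the logits equals the argmax of the softmax):

```python
import numpy as np

logits = np.array([[2.0, 0.1, 0.3],   # predicts class 0
                   [0.2, 0.5, 1.7],   # predicts class 2
                   [0.9, 1.1, 0.2]])  # predicts class 1
y_true = np.array([[1, 0, 0],
                   [0, 0, 1],
                   [1, 0, 0]])        # true classes: 0, 2, 0
predictions = logits.argmax(axis=1)
actual = y_true.argmax(axis=1)
accuracy = (predictions == actual).mean()
assert accuracy == 2.0 / 3.0          # two of three rows agree
```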
# ### Create session and initialize global variables
session = tf.Session()
"""Initialize the global variables"""
session.run(tf.global_variables_initializer())
summaryWriter = tf.summary.FileWriter("tensorboard/structure5/logs", graph=tf.get_default_graph())
trainAccList = []
testAccList = []
for i in range(0, 20):
print("Epoch "+str(i))
summary = tf.Summary()
for x, y in batchIterator(trainX, trainY, 500, 100):
session.run(optimizer, feed_dict={inputLayer:x, yTrue:y, isTraining:True, probability:0.6})
loss = session.run(costFunction, feed_dict={inputLayer:x, yTrue:y, isTraining:False, probability:1})
acc = session.run(accuracy, feed_dict={inputLayer:x, yTrue:y, isTraining:False, probability:1})
summary.value.add(tag = "TrainingLoss", simple_value = loss)
summary.value.add(tag = "TrainingAcc", simple_value = acc)
trainAccList.append(acc)
lossTestList = []
accTestList = []
for x, y in batchIterator(processedTestX, processedTestY, 1000, 5):
lossTest = session.run(costFunction, feed_dict={inputLayer:x, yTrue:y, isTraining:False, probability:1})
accTest = session.run(accuracy, feed_dict={inputLayer:x, yTrue:y, isTraining:False, probability:1})
lossTestList.append(lossTest)
accTestList.append(accTest)
print(np.mean(accTestList))
summary.value.add(tag = "TestLoss", simple_value = np.mean(lossTestList))
summary.value.add(tag = "TestAcc", simple_value = np.mean(accTestList))
testAccList.append(np.mean(accTestList))
summaryWriter.add_summary(summary, i)
log_histogram(summaryWriter, "TrainAccHist", trainAccList, 50)
log_histogram(summaryWriter, "TestAccHist", testAccList, 50)
session.close()
trainAccList
testAccList
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
from matplotlib import pyplot
import numpy
# In AMUSE we mostly work with one or multiple collections of particles. These collections can be thought of as tables where each particle is represented by a row in the table:
#
# <table>
# <tr>
# <th>Particle</th>
# <th>mass</th>
# <th>radius</th>
# </tr>
# <tr>
# <td>1</td>
# <td>10.0</td>
# <td>3.5</td>
# </tr>
# <tr>
# <td>2</td>
# <td>4.0</td>
# <td>1</td>
# </tr>
# </table>
#
#
# <p style="background-color: lightyellow">
# <em>Background:</em> AMUSE is optimized to work with columns in the particle collections; each column represents an attribute of the particles in the collection (in the above table the particle collection stores the masses and radii of the particles). Instead of looping through the particle set, we run a function on one or more columns of the set. These functions are often numpy functions, optimized in C, and thus much faster than looping in Python. This will take some time to get used to, but often results in more compact Python code that is easier to understand.
# </p>
#
#
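# AMUSE itself is not needed to see the column-wise idea: in plain numpy, one vectorized expression over whole columns replaces a per-particle loop (the values below are illustrative, not a real dataset).

```python
import numpy

masses = numpy.array([641.85, 4868.5, 5973.6])  # illustrative masses
radii = numpy.array([0.532, 0.950, 1.0])        # illustrative radii

# One expression over whole columns instead of a per-particle loop:
volumes = 4.0 / 3.0 * numpy.pi * radii ** 3
densities = masses / volumes

# The explicit loop computes the same values, only more slowly.
looped = numpy.array([m / (4.0 / 3.0 * numpy.pi * r ** 3)
                      for m, r in zip(masses, radii)])
assert numpy.allclose(densities, looped)
```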
from amuse.lab import *
# If you know how many particles you want, you can create a collection of particles by specifying the size of the collection. AMUSE will create a set of particles where each particle has a unique 128-bit key. Except for the key, the particles will not have any attributes.
planets = Particles(7)
print planets
# The `planets` collection is not very useful yet, it only contains a set of empty particles. We can make it more interesting by specifying a mass and radius.
planets.mass = [641.85, 4868.5, 5973.6, 102430, 86832, 568460, 1898600] | (1e21 * units.kg)
planets.radius = [0.532, 0.950, 1, 3.86, 3.98, 9.14, 10.97] | (6384 * units.km)
print planets
# The above example shows one of the dynamic properties of a particle collection: you can define a new attribute for all particles by assigning a value to an attribute name. AMUSE does not limit the names, except that they have to be valid Python attribute names.
#
# It is easy to specify the same value for all attributes:
planets.density = 1000.0 | units.kg / units.m**3
print planets
# Or request the value of an attribute for all particles:
print planets.mass
# We can calculate the density instead of just setting to the same value for all particles.
planets.volume = 4.0 / 3.0 * numpy.pi * planets.radius**3
planets.density = planets.mass / planets.volume
print planets
# If you request an attribute of a particle collection, AMUSE will return a vector quantity. You can do several operations on these vectors:
print "Total mass of the planets:", planets.mass.sum()
print "Mean density of the planets:", planets.density.mean()
# Of course, you can also work with one particle in the set. This works the same as it does for Python lists, but instead of an object stored in the list you will get a Particle object that points to the correct row in the particle collection. All changes made on the particle will be reflected in the collection.
# +
earth = planets[2]
print earth
earth.density = 5.52 | units.g/units.cm**3
print planets
# -
# As the particle is just a pointer into the particle collection, adding a new attribute to a particle will also add that attribute to the collection; AMUSE will set its value to zero (0.0) for all other particles in the set.
# +
earth.population = 6973738433
print planets
# -
# Finally, you can also create single particles and add these to a particle collection. (A single particle created like this points to a particle collection with only one particle in it).
pluto = Particle(mass = 1.305e22 | units.kg, radius = 1153 | units.km)
print pluto
planets.add_particle(pluto)
print planets
#
#
# A particle collection can represent sets of many different kinds of astrophysical bodies (planets, stars, dark matter, smoothed hydrodynamics particles, etc.). The type of particles in a collection is determined by the attributes (stars may have different attributes than planets) and how you use the set.
# Putting different kinds of particles in a set is possible, but in those cases some attributes will have valid values and some will be zero (for example, if you add the Sun with its luminosity to this table). In practice, we recommend you put one kind of particle in a set (for example, have a different set for stars, gas clouds and dark matter particles).
sun = Particle(mass = 1 | units.MSun, radius = 1 | units.RSun, luminosity = 1 | units.LSun)
planets.add_particle(sun)
print planets
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] slideshow={"slide_type": "-"}
# # Code Search on Kubeflow
#
# This notebook implements an end-to-end Semantic Code Search on top of [Kubeflow](https://www.kubeflow.org/) - given an input query string, get a list of code snippets semantically similar to the query string.
#
# **NOTE**: If you haven't already, see [kubeflow/examples/code_search](https://github.com/kubeflow/examples/tree/master/code_search) for instructions on how to get this notebook.
# -
# ## Install dependencies
#
# Let us install all the Python dependencies. Note that everything must be done with `Python 2`. This will take a while the first time.
# ### Verify Version Information
# + language="bash"
#
# echo "Pip Version Info: " && python2 --version && python2 -m pip --version && echo
# echo "Google Cloud SDK Info: " && gcloud --version && echo
# echo "Ksonnet Version Info: " && ks version && echo
# echo "Kubectl Version Info: " && kubectl version
# -
# ### Install Pip Packages
# ! python2 -m pip install -U pip
# Code Search dependencies
# ! python2 -m pip install --user https://github.com/kubeflow/batch-predict/tarball/master
# ! python2 -m pip install --user -r src/requirements.txt
# BigQuery Cell Dependencies
# ! python2 -m pip install --user pandas-gbq
# NOTE: The RuntimeWarnings (if any) are harmless. See ContinuumIO/anaconda-issues#6678.
from pandas.io import gbq
# ### Configure Variables
#
# This involves setting up the Ksonnet application as well as utility environment variables for various CLI steps.
# +
# Configuration Variables. Modify as desired.
PROJECT = 'kubeflow-dev'
# Dataflow Related Variables.
TARGET_DATASET = 'code_search'
WORKING_DIR = 'gs://kubeflow-examples/t2t-code-search/notebook-demo'
WORKER_MACHINE_TYPE = 'n1-highcpu-32'
NUM_WORKERS = 16
# DO NOT MODIFY. These are environment variables to be used in a bash shell.
# %env PROJECT $PROJECT
# %env TARGET_DATASET $TARGET_DATASET
# %env WORKING_DIR $WORKING_DIR
# %env WORKER_MACHINE_TYPE $WORKER_MACHINE_TYPE
# %env NUM_WORKERS $NUM_WORKERS
# -
# ### Setup Authorization
#
# In a Kubeflow cluster on GKE, we already have the Google Application Credentials mounted onto each Pod. We can simply point `gcloud` to activate that service account.
# + language="bash"
#
# # Activate Service Account provided by Kubeflow.
# gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
# -
# Additionally, to interact with the underlying cluster, we configure `kubectl`.
# + language="bash"
#
# kubectl config set-cluster kubeflow --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# kubectl config set-credentials jupyter --token "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
# kubectl config set-context kubeflow --cluster kubeflow --user jupyter
# kubectl config use-context kubeflow
# -
# Collectively, these allow us to interact with Google Cloud Services as well as the Kubernetes Cluster directly to submit `TFJob`s and execute `Dataflow` pipelines.
# ### Setup Ksonnet Application
#
# We now point the Ksonnet application to the underlying Kubernetes cluster.
# + language="bash"
#
# cd kubeflow
#
# # Update Ksonnet to point to the Kubernetes Cluster
# ks env add code-search --context $(kubectl config current-context)
#
# # Update the Working Directory of the application
# sed -i'' "s,gs://example/prefix,${WORKING_DIR}," components/params.libsonnet
#
# # FIXME(sanyamkapoor): This command completely replaces previous configurations.
# # Hence, using string replacement in file.
# # ks param set t2t-code-search workingDir ${WORKING_DIR}
# -
# ## View Github Files
#
# This is the query that is run as the first step of the pre-processing pipeline; its result rows are then sent through a set of transformations. It is illustrative of the rows being processed in the pipeline we trigger next.
#
# **WARNING**: The table is large and the query can take a few minutes to complete.
# +
query = """
SELECT
MAX(CONCAT(f.repo_name, ' ', f.path)) AS repo_path,
c.content
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id
JOIN (
--this part of the query makes sure repo is watched at least twice since 2017
SELECT
repo
FROM (
SELECT
repo.name AS repo
FROM
`githubarchive.year.2017`
WHERE
type="WatchEvent"
UNION ALL
SELECT
repo.name AS repo
FROM
`githubarchive.month.2018*`
WHERE
type="WatchEvent" )
GROUP BY
1
HAVING
COUNT(*) >= 2 ) AS r
ON
f.repo_name = r.repo
WHERE
f.path LIKE '%.py' AND --with python extension
c.size < 15000 AND --get rid of ridiculously long files
REGEXP_CONTAINS(c.content, r'def ') --contains function definition
GROUP BY
c.content
LIMIT
10
"""
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
# -
# ## Pre-Processing Github Files
#
# In this step, we will run a [Google Cloud Dataflow](https://cloud.google.com/dataflow/) pipeline (based on Apache Beam). A `Python 2` module `code_search.dataflow.cli.preprocess_github_dataset` has been provided which builds an Apache Beam pipeline. A list of all possible arguments can be seen via the following command.
# + language="bash"
#
# cd src
#
# python2 -m code_search.dataflow.cli.preprocess_github_dataset -h
# -
# ### Run the Dataflow Job for Pre-Processing
#
# See help above for a short description of each argument. The values are being taken from environment variables defined earlier.
# + language="bash"
#
# cd src
#
# JOB_NAME="preprocess-github-dataset-$(date +'%Y%m%d-%H%M%S')"
#
# python2 -m code_search.dataflow.cli.preprocess_github_dataset \
# --runner DataflowRunner \
# --project "${PROJECT}" \
# --target_dataset "${TARGET_DATASET}" \
# --data_dir "${WORKING_DIR}/data" \
# --job_name "${JOB_NAME}" \
# --temp_location "${WORKING_DIR}/dataflow/temp" \
# --staging_location "${WORKING_DIR}/dataflow/staging" \
# --worker_machine_type "${WORKER_MACHINE_TYPE}" \
# --num_workers "${NUM_WORKERS}"
# -
# When completed successfully, this should create a dataset in `BigQuery` named `target_dataset`. Additionally, it also dumps CSV files into `data_dir` which contain training samples (pairs of functions and docstrings) for our TensorFlow model. A representative set of results can be viewed using the following query.
# +
query = """
SELECT *
FROM
{}.token_pairs
LIMIT
10
""".format(TARGET_DATASET)
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
# -
# This pipeline also writes a set of CSV files which contain function and docstring pairs delimited by a comma. Here, we list a subset of them.
# + language="bash"
#
# LIMIT=10
#
# gsutil ls ${WORKING_DIR}/data/*.csv | head -n ${LIMIT}
# -
# ## Prepare Dataset for Training
#
# We will use `t2t-datagen` to convert the transformed data above into the `TFRecord` format.
#
# **TIP**: Use `ks show` to view the Resource Spec submitted.
# + language="bash"
#
# cd kubeflow
#
# ks show code-search -c t2t-code-search-datagen
# + language="bash"
#
# cd kubeflow
#
# ks apply code-search -c t2t-code-search-datagen
# -
# Once this job finishes, the data directory should have a vocabulary file and a list of `TFRecords` prefixed by the problem name which in our case is `github_function_docstring_extended`. Here, we list a subset of them.
# + language="bash"
#
# LIMIT=10
#
# gsutil ls ${WORKING_DIR}/data/vocab*
# gsutil ls ${WORKING_DIR}/data/*train* | head -n ${LIMIT}
# -
# ## Execute Tensorflow Training
#
# Once the `TFRecords` are generated, we will use `t2t-trainer` to execute the training.
# + language="bash"
#
# cd kubeflow
#
# ks show code-search -c t2t-code-search-trainer
# + language="bash"
#
# cd kubeflow
#
# ks apply code-search -c t2t-code-search-trainer
# -
# This will generate TensorFlow model checkpoints, which are listed below.
# + language="bash"
#
# gsutil ls ${WORKING_DIR}/output/*ckpt*
# -
# ## Export Tensorflow Model
#
# We now use `t2t-exporter` to export the `TFModel`.
# + language="bash"
#
# cd kubeflow
#
# ks show code-search -c t2t-code-search-exporter
# + language="bash"
#
# cd kubeflow
#
# ks apply code-search -c t2t-code-search-exporter
# -
# Once completed, this will generate a TensorFlow `SavedModel` which we will further use for both online (via `TF Serving`) and offline inference (via `Kubeflow Batch Prediction`).
# + language="bash"
#
# gsutil ls ${WORKING_DIR}/output/export/Servo
# -
# ## Compute Function Embeddings
#
# In this step, we will use the exported model above to compute function embeddings via another `Dataflow` pipeline. A `Python 2` module `code_search.dataflow.cli.create_function_embeddings` has been provided for this purpose. A list of all possible arguments can be seen below.
# + language="bash"
#
# cd src
#
# python2 -m code_search.dataflow.cli.create_function_embeddings -h
# -
# ### Configuration
#
# First, select an exported model version from `${WORKING_DIR}/output/export/Servo` as seen above. This should be the name of a folder with a UNIX seconds timestamp, like `1533685294`. Below, we automatically do that by selecting the folder with the latest timestamp.
# + magic_args="--out EXPORT_DIR_LS" language="bash"
#
# gsutil ls ${WORKING_DIR}/output/export/Servo | grep -oE "([0-9]+)/$"
# +
# WARNING: This routine will fail if no export has been completed successfully.
MODEL_VERSION = max([int(ts[:-1]) for ts in EXPORT_DIR_LS.split('\n') if ts])
# DO NOT MODIFY. These are environment variables to be used in a bash shell.
# %env MODEL_VERSION $MODEL_VERSION
# -
# ### Run the Dataflow Job for Function Embeddings
# + language="bash"
#
# cd src
#
# JOB_NAME="compute-function-embeddings-$(date +'%Y%m%d-%H%M%S')"
# PROBLEM=github_function_docstring_extended
#
# python2 -m code_search.dataflow.cli.create_function_embeddings \
# --runner DataflowRunner \
# --project "${PROJECT}" \
# --target_dataset "${TARGET_DATASET}" \
# --problem "${PROBLEM}" \
# --data_dir "${WORKING_DIR}/data" \
# --saved_model_dir "${WORKING_DIR}/output/export/Servo/${MODEL_VERSION}" \
# --job_name "${JOB_NAME}" \
# --temp_location "${WORKING_DIR}/dataflow/temp" \
# --staging_location "${WORKING_DIR}/dataflow/staging" \
# --worker_machine_type "${WORKER_MACHINE_TYPE}" \
# --num_workers "${NUM_WORKERS}"
# -
# When completed successfully, this should create another table in the same `BigQuery` dataset which contains the function embeddings for each existing data sample available from the previous Dataflow Job. Additionally, it also dumps a CSV file containing metadata for each of the function and its embeddings. A representative query result is shown below.
# +
query = """
SELECT *
FROM
{}.function_embeddings
LIMIT
10
""".format(TARGET_DATASET)
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
# -
# The pipeline also generates a set of CSV files which will be useful to generate the search index.
# + language="bash"
#
# LIMIT=10
#
# gsutil ls ${WORKING_DIR}/data/*index*.csv | head -n ${LIMIT}
# -
# ## Create Search Index
#
# We now create the Search Index from the computed embeddings. This facilitates k-Nearest Neighbor search for semantically similar results.
# + language="bash"
#
# cd kubeflow
#
# ks show code-search -c search-index-creator
# + language="bash"
#
# cd kubeflow
#
# ks apply code-search -c search-index-creator
# -
# Using the CSV files generated in the previous step, this creates an index using [NMSLib](https://github.com/nmslib/nmslib). A unified CSV file containing all the code examples, used for a human-readable reverse lookup at query time, is also created.
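# Conceptually, the index answers the same question as this exact brute-force numpy search over the embedding matrix; NMSLib just answers it sublinearly with an approximate index. (The data below is synthetic, purely to illustrate the nearest-neighbor lookup.)

```python
import numpy as np

np.random.seed(0)
embeddings = np.random.randn(100, 8)                 # one row per code snippet
query = embeddings[42] + 0.01 * np.random.randn(8)   # a query near snippet 42

# Cosine distance against every stored embedding, then take the k smallest.
norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
distances = 1 - embeddings.dot(query) / norms
top_k = np.argsort(distances)[:5]
assert top_k[0] == 42  # the perturbed query's nearest neighbour is snippet 42
```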
# + language="bash"
#
# gsutil ls ${WORKING_DIR}/code_search_index*
# -
# ## Deploy an Inference Server
#
# We've seen offline inference during the computation of embeddings. For online inference, we deploy the exported Tensorflow model above using [Tensorflow Serving](https://www.tensorflow.org/serving/).
# + language="bash"
#
# cd kubeflow
#
# ks show code-search -c t2t-code-search-serving
# + language="bash"
#
# cd kubeflow
#
# ks apply code-search -c t2t-code-search-serving
# -
# ## Deploy Search UI
#
# We finally deploy the Search UI which allows the user to input arbitrary strings and see a list of results corresponding to semantically similar Python functions. This internally uses the inference server we just deployed.
# + language="bash"
#
# cd kubeflow
#
# ks show code-search -c search-index-server
# + language="bash"
#
# cd kubeflow
#
# ks apply code-search -c search-index-server
# -
# The service should now be available at FQDN of the Kubeflow cluster at path `/code-search/`.