# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Introduction to markdown**
# # intro
# ## intro
# ### intro
# ### Introduction to python programming
# - used as a general purpose programming language
# - supports functional programming
# - supports scripting and **object oriented** programming
# - introduced in *1991* by <NAME>
# - maintained by the ***Python Software Foundation***
# - used in data science
# - used in machine learning
# - used in AI
#
# <img src="1.jpg" height="300px" width="300px" />
pwd
print("Welcome to python")
# comment line
print("Hello")
print("World")
print("Hello", end =' ')
print("World")
print("A","I")
x = 10000454558645648978456465865488 #int
print(x,type(x))
f = 0.12345658 # float
print(f,type(f))
s1 = 'a' #string
s2 = 'application'
print(s1,type(s1), end= " ")
print(s2,type(s2))
# ## Type Conversions
# - int()
# - Input will be converted into int format
# - float()
# - input will be converted into float format
# - str()
# - input will be converted into str format
s1 = "1234"
print(s1,type(s1))
a1 = int(s1)
print(a1,type(a1))
s2 = "123.455"
print(s2,type(s2))
a2 = float(s2)
print(a2,type(a2))
j2 = int(a2)
print(j2,type(j2))
# # INPUT METHOD OF PYTHON PROGRAMMING
# - input() is used to read input from the console
# - it always reads the input as a string
# - use a conversion method to obtain numeric input
#   - int()
#   - float()
name = input('Enter Your Name ')
print('Hello', name)
num = float(input(''))
print(num,type(num))
# ### OPERATORS
# - ++ and -- do not work in python; use += 1 and -= 1 instead
# - the symbols && || ! do not work in python
# - use and, or, not instead
# - the boolean literals True and False (capitalized) are used in python
# - both / (true division) and // (floor division) can be used in python
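# A quick sketch of the points above:

```python
x = 5
x += 1   # Python has no ++ operator; use += 1
x -= 1   # likewise -= 1 instead of --

a, b = True, False
print(a and b)   # && is spelled "and"
print(a or b)    # || is spelled "or"
print(not a)     # !  is spelled "not"

print(7 / 2)     # true division  -> 3.5
print(7 // 2)    # floor division -> 3
```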
a = 100
b = 3
print(a/b)
print(a//b)
print(a*b)
print(a**b)
a = 10
b = (a>100) and (a<=10)
print(b)
# ### Control Flow Statements
# - Conditional Statements
#   - if-else
# - Looping Statements
#   - while
#   - for
a = 10
if a > 100:
    print('Enemy Spotted')
else:
    print("I'm Under attack")

if True:
    print("I'm Guarding this path")
else:
    print("Gather now")
# Read input as num and print yes if it is perfectly divisible by 3 and 5 or print no
a = int(input('Enter Number :'))
if a % 3 == 0 and a % 5 == 0:
    print('Yes')
else:
    print('No')
# Read the input as num and test whether it is positive or negative or zero
num = int(input('Enter Number '))
if num > 0:
    print('Positive')
elif num < 0:
    print('Negative')
else:
    print('Zero')
n = 100
if n > 10:           # an elif chain stops at the first true condition,
    print('Good')    # so only 'Good' is printed here
elif n > 20:
    print('Better')
elif n > 30:
    print('Best')
else:
    print('Nothing')
# - ord() gives the ASCII number of a character
# - ASCII for characters
# - **A-Z--65-90**
# - **a-z--97-122**
# - **0-9--48-57**
# - **Space 32**
print(ord('A'))
print(ord('a'))
print(ord('0'))
# READ THE INPUT AS CHAR AND PRINT WHETHER IT IS UPPER OR LOWER CASE OR DIGIT
i = input('Enter input :')
print('You Entered : ', i)
if ord(i) >= 97 and ord(i) <= 122:
    print(i, 'is Lower Case')
elif ord(i) >= 48 and ord(i) <= 57:
    print(i, 'is Numeric')
elif ord(i) >= 65 and ord(i) <= 90:   # 90 is 'Z', so use <= here
    print(i, 'is Upper Case')
# chr() -- takes the input as ASCII and output represents the character
print(chr(65))
print(chr(122))
print(chr(48))
# Read the input as character print output as case change
i = input('Enter input :')
if 65 <= ord(i) <= 90:        # upper case -> lower case
    print(chr(ord(i) + 32))
elif 97 <= ord(i) <= 122:     # lower case -> upper case
    print(chr(ord(i) - 32))
else:
    print('Numeric')
# +
# Looping Statements: while and for
# while -- used when the number of iterations is not known in advance
# for -- used when the number of iterations is fixed
# Read input as N and print the natural numbers from 1 to N
n = int(input(''))
i=1
while i <= n:
    print(i, end=' ')
    i += 1
# +
# Read the input as n & print the output as sum of even numbers
# from 1-n
# 10--30
# 15--56
maximum = int(input(" Please Enter the Maximum Value : "))
total = 0
number = 1
while number <= maximum:
    if number % 2 == 0:
        total += number   # accumulate the even numbers instead of printing them
    number += 1
print(total)
# -
# Read the two numbers input and print the output
# 2 5 - 1234512345
# 3 2 - 121212
I = int(input(''))
n = int(input(''))
x = 0
while x < I:
    i = 1
    while i <= n:
        print(i, end='')   # no separator, to match 1234512345
        i += 1
    x += 1
# +
# Read number as input and print output as following
# 145-541
#1889-9881
num = int(input(''))
while num != 0:
    r = num % 10
    print(r, end='')   # no separator, to match 145 -> 541
    num = num // 10
# +
# read the input as number and print the output as follows
#145- five four one
#1889- nine eight eight one
num = int(input(''))
while num != 0:
    r = num % 10
    if r == 0:
        print('Zero', end=' ')
    elif r == 1:
        print('One', end=' ')
    elif r == 2:
        print('Two', end=' ')
    elif r == 3:
        print('Three', end=' ')
    elif r == 4:
        print('Four', end=' ')
    elif r == 5:
        print('Five', end=' ')
    elif r == 6:
        print('Six', end=' ')
    elif r == 7:
        print('Seven', end=' ')
    elif r == 8:
        print('Eight', end=' ')
    elif r == 9:
        print('Nine', end=' ')
    num = num // 10
# -
# ### Functional Programming of python
# - A block of statements that can be called repeatedly is known as a function
# - A block of statements that performs a particular task is called a Function
# - A function makes programming more effective in terms of **Reusability** and **Readability**
# - A function in python is defined with the keyword **"def"**
# - Write function names in **Camel Case**
# - Example : **fact(), isPalindrome(), isUserTest()**
# - Parameters of the function are **Optional**
# - User Defined Functions (Syntax):-
#
#       def funName(<parameters>):
#           statements
#           return
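# A minimal example of the syntax above (the name `isEven` is just an illustration):

```python
def isEven(n):
    """Return True if n is divisible by 2."""
    return n % 2 == 0

print(isEven(10))  # True
print(isEven(7))   # False
```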
# +
# Read the number as input and print the output as follows
# 145 - yes
# 123 - no
# hint: if the sum of the factorials of the individual digits equals the
# original number (e.g. 145 = 1! + 4! + 5!), print "Yes", otherwise "No"
def fact(n):
    f = 1
    while n != 0:
        f = f * n
        n -= 1
    return f

def digitExtract(x):
    total = 0            # avoid shadowing the built-in sum()
    while x != 0:
        r = x % 10
        total += fact(r)
        x = x // 10
    return total

n = int(input(' '))
s = digitExtract(n)
if n == s:
    print('Yes')
else:
    print('No')
# -
# # FOR LOOP
# - range(5) produces 0 1 2 3 4; the range function excludes the upper limit
# - range(1,11) - 1 2 3 4 5 6 7 8 9 10
# - range(1,11,2) - Start value : 1, End value : 10, Step value : 2 -> 1 3 5 7 9
for i in range(0, 5):
    print(i, end=' ')

for i in range(1, 11):
    print(i, end=' ')

for i in range(1, 11, 2):
    print(i, end=' ')

for i in range(100, 150, 5):
    print(i, end=' ')
# Factorial of a number using FOR LOOP
def fact(n):
    f = 1
    for i in range(1, n + 1):
        f = f * i
    return f
num = int(input(' '))
fact(num)
for i in range(10, -1, -1):
    print(i, end=' ')
# 10 - start value
# -1 - end value
# -1 - step value
# Even Numbers from the range of two limits:-
#ll ul
def fun1(ll, ul):
    count = 0
    start = ll if ll % 2 == 0 else ll + 1  # step over even numbers even if ll is odd
    for i in range(start, ul + 1, 2):
        count += 1
    return count
ll = 10
ul = 150
fun1(ll,ul)
| Basics/file 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h2 style="color:green" align="center"> Machine Learning With Python: Linear Regression Multiple Variables</h2>
# <img src="machine learning pipline.png" style='height:300px;width:550px'>
# <h3 style="color:purple">Sample problem of predicting home price in Monroe, New Jersey (USA)</h3>
# Below is the table containing home prices in Monroe Township, NJ. Here price depends on **area (square feet), bedrooms and age of the home (in years)**. Given these prices we have to predict prices of new homes based on area, bedrooms and age.
# <img src="homeprices.jpg" style='height:200px;width:350px'>
# Given these home prices find out price of a home that has,
#
# **3000 sqr ft area, 3 bedrooms, 40 year old**
#
# **2500 sqr ft area, 4 bedrooms, 5 year old**
import pandas as pd
import numpy as np
from sklearn import linear_model
df = pd.read_csv('homeprices.csv')
df.head()
df.shape
df.bedrooms.median()
# **Data Preprocessing: Fill NA values with median value of a column**
df.bedrooms = df.bedrooms.fillna(df.bedrooms.median())
df
x = df[['area','bedrooms','age']]
x
y = df.price
y
# **Now Apply Linear Regression model**
reg = linear_model.LinearRegression()
reg.fit(x,y)
reg.coef_
reg.intercept_
# **Find price of home with 3000 sqr ft area, 3 bedrooms, 40 year old**
reg.predict([[3000,3,40]])
112.06244194*3000+23388.88007794*3+-3231.71790863*40+221323.00186540408
# **Find price of home with 2500 sqr ft area, 4 bedrooms, 5 year old**
reg.predict([[2500,4,5]])
# <h3>Exercise</h3>
# In the exercise folder (same level as this notebook on github) there is **hiring.csv**. This file contains hiring statistics for a firm, such as experience of the candidate, their written test score and personal interview score. Based on these 3 factors, HR will decide the salary. Given this data, you need to build a machine learning model for the HR department that can help them decide salaries for future candidates. Using this, predict salaries for the following candidates,
#
#
# **2 yr experience, 9 test score, 6 interview score**
#
# **12 yr experience, 10 test score, 10 interview score**
#
# <h3>Answer</h3>
# What will the answer be?
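# One possible approach, sketched with a tiny synthetic stand-in for hiring.csv
# (every column name and value below is invented for illustration; the real
# file may differ and may need extra cleaning):

```python
import pandas as pd
from sklearn import linear_model

# Tiny synthetic stand-in for hiring.csv; all values are invented
df = pd.DataFrame({
    'experience':      [0, 2, 5, 7, 10, 11],
    'test_score':      [8, 8, 6, 10, 7, 7],
    'interview_score': [9, 6, 7, 10, 8, 8],
    'salary':          [50000, 45000, 60000, 72000, 80000, 83000],
})

X = df[['experience', 'test_score', 'interview_score']]
y = df['salary']

reg = linear_model.LinearRegression()
reg.fit(X, y)

# Salaries for the two candidates from the exercise
preds = reg.predict(pd.DataFrame([[2, 9, 6], [12, 10, 10]], columns=X.columns))
print(preds)
```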
| Week02/Day02/Linear Regression Multivariate/2_linear_regression_multivariate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
data = pd.read_csv('/home/atrides/Desktop/R/statistics_with_Python/03_Python_Enviroment/Data_Files/Honeymoon Period.dat', sep=r'\s+')
print(data.head())
print(data.columns)
# ## wide data to long data
melted_df = pd.melt(data,id_vars=['Person','Gender'], value_vars=['Satisfaction_Base','Satisfaction_6_Months','Satisfaction_12_Months','Satisfaction_18_Months'],
var_name='satisfaction_after_time')
print(melted_df.head(10))
# ## long data to wide data
unmelted_df = pd.pivot_table(melted_df, values='value', index=['Person'],
                             columns=['satisfaction_after_time'], dropna=False)
# pivot_table sorts the columns alphabetically, so reorder them explicitly
# instead of renaming them blindly (renaming in place would mislabel them)
unmelted_df = unmelted_df[['Satisfaction_Base', 'Satisfaction_6_Months',
                           'Satisfaction_12_Months', 'Satisfaction_18_Months']]
gender = melted_df[melted_df['satisfaction_after_time'] == 'Satisfaction_Base']['Gender']
unmelted_df['Person'] = unmelted_df.index
unmelted_df.reset_index(drop=True, inplace=True)
unmelted_df['Gender'] = gender.values  # align by position, not by index
# data converted from long to wide
print(unmelted_df)
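# The same round trip on a tiny synthetic table (values invented for illustration):

```python
import pandas as pd

# Toy wide-format table: one row per person, one column per time point
wide = pd.DataFrame({'Person': [1, 2],
                     'Gender': ['M', 'F'],
                     'T0': [6, 8],
                     'T1': [5, 7]})

# wide -> long: one row per (person, time point)
long_df = pd.melt(wide, id_vars=['Person', 'Gender'],
                  value_vars=['T0', 'T1'], var_name='time')
print(long_df)

# long -> wide: back to one row per person
wide_again = long_df.pivot_table(values='value',
                                 index=['Person', 'Gender'],
                                 columns='time').reset_index()
print(wide_again)
```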
| Python/statistics_with_Python/03_Python_Enviroment/Markdown_notebook/05_reshapingData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sqlalchemy import create_engine
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from env_vars import *
import pandas as pd
import sqlite3
import pickle
import numpy as np
con = sqlite3.connect("song_list_v3_11.db")
# Load the data into a DataFrame
df1 = pd.read_sql_query("SELECT * from songs", con)
con.close()
con = sqlite3.connect("song_list_v3_12.db")
# Load the data into a DataFrame
df2 = pd.read_sql_query("SELECT * from songs", con)
con.close()
con = sqlite3.connect("song_list_v3_13.db")
# Load the data into a DataFrame
df3 = pd.read_sql_query("SELECT * from songs", con)
con.close()
con = sqlite3.connect("song_list_v3_14.db")
# Load the data into a DataFrame
df4 = pd.read_sql_query("SELECT * from songs", con)
con.close()
concated_df = pd.concat([df1, df2, df3, df4], ignore_index=True)
concated_df
engine = create_engine('sqlite:///song_list_v3_concated.db', echo=False)
concated_df.to_sql('songs', con=engine)
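# The four nearly identical read blocks above could also be collapsed into a
# loop. A sketch using small demo databases (the `demo_songs_*.db` names and
# contents are placeholders, not the original files):

```python
import sqlite3
import pandas as pd

# Build two tiny demo databases (stand-ins for song_list_v3_11.db etc.)
for i, name in [(11, "a"), (12, "b")]:
    con = sqlite3.connect(f"demo_songs_{i}.db")
    con.execute("DROP TABLE IF EXISTS songs")
    con.execute("CREATE TABLE songs (name TEXT)")
    con.execute("INSERT INTO songs VALUES (?)", (name,))
    con.commit()
    con.close()

# Read every database in a loop instead of copy-pasting the same block
frames = []
for i in range(11, 13):
    con = sqlite3.connect(f"demo_songs_{i}.db")
    frames.append(pd.read_sql_query("SELECT * from songs", con))
    con.close()

concated_df = pd.concat(frames, ignore_index=True)
print(concated_df)
```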
| data_collection/concatenate_SQL_files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import sys
sys.path.append('../')
import base
import path_analysis
import matplotlib.pyplot as plt
import pandas as pd
root_paths = ['../../Data/Raw/FS10/BPositions_FS10_20211006-154014/', '../../Data/Raw/FS10/BPositions_FS10_20211007-150456/',\
'../../Data/Raw/FS10/BPositions_FS10_20211011-094820/', '../../Data/Raw/FS10/BPositions_FS10_20211014-160224/']
tags = ['20211006-154014','20211007-150456', '20211011-094820', '20211014-160224']
data = base.MultiDaysBeaconPosition(root_paths, tags, has_beacon = True, has_metadata = True)
straightness_moment = path_analysis.straightness_moment_time(data.trial_list[0][1][:,:3], before_time=2)
straightness_time = path_analysis.straightness_over_time(data.trial_list[0][1][:,:3], before_time=2)
plt.plot(straightness_moment[2][:,0], straightness_moment[2][:,1])
plt.plot(straightness_moment[1][:,0], straightness_moment[1][:,1])
print(f'Ratio: {straightness_moment[0]}')
for i in range(len(data.trial_list[0])):
straightness_moment = path_analysis.straightness_moment_time(data.trial_list[0][i][:,:3], before_time=2)
straightness_time = path_analysis.straightness_over_time(data.trial_list[0][i][:,:3], before_time=2)
if data.trial_visible[0][i]:
plt.plot(straightness_time[0])
else:
plt.plot(straightness_time[0],c='cyan')
bootstrap_sliding_window=path_analysis.bootstrap(data.trial_list[0][0], num_sampling=10, time_window=2, straightness_type = 'sliding')
| refactoring/demo_notebook/StraightnessAnalysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D4_ReinforcementLearning/student/W3D4_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Tutorial 3: Learning to Act: Q-Learning
# **Week 3, Day 4: Reinforcement Learning**
#
# **By Neuromatch Academy**
#
# __Content creators:__ <NAME> and <NAME> with help from <NAME>
#
# __Content reviewers:__ <NAME> and <NAME>
# + [markdown] colab_type="text"
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# -
# ---
#
# # Tutorial Objectives
#
# In this tutorial you will learn how to act in the more realistic setting of sequential decisions, formalized by Markov Decision Processes (MDPs). In a sequential decision problem, the actions executed in one state not only may lead to immediate rewards (as in a bandit problem), but may also affect the states experienced next (unlike a bandit problem). Each individual action may therefore affect all future rewards. Thus, making decisions in this setting requires considering each action in terms of their expected **cumulative** future reward.
#
# We will consider here the example of spatial navigation, where actions (movements) in one state (location) affect the states experienced next, and an agent might need to execute a whole sequence of actions before a reward is obtained.
#
# By the end of this tutorial, you will learn:
# * what grid worlds are and how they help in evaluating simple reinforcement learning agents
# * the basics of the Q-learning algorithm for estimating action values
# * how the concept of exploration and exploitation, reviewed in the bandit case, also applies to the sequential decision setting
# ---
# # Setup
# + cellView="both"
# Imports
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve as conv
# + cellView="form"
#@title Figure settings
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + cellView="form"
#@title Helper functions
def epsilon_greedy(q, epsilon):
  """Epsilon-greedy policy: selects the maximum value action with probability
  (1-epsilon) and selects randomly with epsilon probability.

  Args:
    q (ndarray): an array of action values
    epsilon (float): probability of selecting an action randomly

  Returns:
    int: the chosen action
  """
  if np.random.random() > epsilon:
    action = np.argmax(q)
  else:
    action = np.random.choice(len(q))

  return action
class CliffWorld:
  """
  World: Cliff world.
  40 states (4-by-10 grid world).
  The mapping from state to the grid is as follows:
  30 31 32 ... 39
  20 21 22 ... 29
  10 11 12 ... 19
  0  1  2  ...  9
  0 is the starting state (S) and 9 is the goal state (G).
  Actions 0, 1, 2, 3 correspond to right, up, left, down.
  Moving anywhere from state 9 (goal state) will end the session.
  Taking action down at states 11-18 will go back to state 0 and incur a
  reward of -100.
  Landing in any state other than the goal state will incur a reward of -1.
  Going towards the border when already at the border will stay in the same
  place.
  """
  def __init__(self):
    self.name = "cliff_world"
    self.n_states = 40
    self.n_actions = 4
    self.dim_x = 10
    self.dim_y = 4
    self.init_state = 0

  def get_outcome(self, state, action):
    if state == 9:  # goal state
      reward = 0
      next_state = None
      return next_state, reward
    reward = -1  # default reward value
    if action == 0:  # move right
      next_state = state + 1
      if state % 10 == 9:  # right border
        next_state = state
      elif state == 0:  # start state (next state is cliff)
        next_state = None
        reward = -100
    elif action == 1:  # move up
      next_state = state + 10
      if state >= 30:  # top border
        next_state = state
    elif action == 2:  # move left
      next_state = state - 1
      if state % 10 == 0:  # left border
        next_state = state
    elif action == 3:  # move down
      next_state = state - 10
      if state >= 11 and state <= 18:  # next is cliff
        next_state = None
        reward = -100
      elif state <= 9:  # bottom border
        next_state = state
    else:
      print("Action must be between 0 and 3.")
      next_state = None
      reward = None
    return int(next_state) if next_state is not None else None, reward

  def get_all_outcomes(self):
    outcomes = {}
    for state in range(self.n_states):
      for action in range(self.n_actions):
        next_state, reward = self.get_outcome(state, action)
        outcomes[state, action] = [(1, next_state, reward)]
    return outcomes
def learn_environment(env, learning_rule, params, max_steps, n_episodes):
  # Start with a uniform value function
  value = np.ones((env.n_states, env.n_actions))

  # Run learning
  reward_sums = np.zeros(n_episodes)

  # Loop over episodes
  for episode in range(n_episodes):
    state = env.init_state  # initialize state
    reward_sum = 0

    for t in range(max_steps):
      # choose next action
      action = epsilon_greedy(value[state], params['epsilon'])

      # observe outcome of action on environment
      next_state, reward = env.get_outcome(state, action)

      # update value function
      value = learning_rule(state, action, reward, next_state, value, params)

      # sum rewards obtained
      reward_sum += reward

      if next_state is None:
        break  # episode ends
      state = next_state

    reward_sums[episode] = reward_sum

  return value, reward_sums
def plot_state_action_values(env, value, ax=None):
  """
  Generate plot showing value of each action at each state.
  """
  if ax is None:
    fig, ax = plt.subplots()

  for a in range(env.n_actions):
    ax.plot(range(env.n_states), value[:, a], marker='o', linestyle='--')
  ax.set(xlabel='States', ylabel='Values')
  ax.legend(['R', 'U', 'L', 'D'], loc='lower right')

def plot_quiver_max_action(env, value, ax=None):
  """
  Generate plot showing action of maximum value or maximum probability at
  each state (not for n-armed bandit or cheese_world).
  """
  if ax is None:
    fig, ax = plt.subplots()

  X = np.tile(np.arange(env.dim_x), [env.dim_y, 1]) + 0.5
  Y = np.tile(np.arange(env.dim_y)[::-1][:, np.newaxis], [1, env.dim_x]) + 0.5
  which_max = np.reshape(value.argmax(axis=1), (env.dim_y, env.dim_x))
  which_max = which_max[::-1, :]
  U = np.zeros(X.shape)
  V = np.zeros(X.shape)
  U[which_max == 0] = 1
  V[which_max == 1] = 1
  U[which_max == 2] = -1
  V[which_max == 3] = -1

  ax.quiver(X, Y, U, V)
  ax.set(
      title='Maximum value/probability actions',
      xlim=[-0.5, env.dim_x + 0.5],
      ylim=[-0.5, env.dim_y + 0.5],
  )
  ax.set_xticks(np.linspace(0.5, env.dim_x - 0.5, num=env.dim_x))
  ax.set_xticklabels(["%d" % x for x in np.arange(env.dim_x)])
  ax.set_xticks(np.arange(env.dim_x + 1), minor=True)
  ax.set_yticks(np.linspace(0.5, env.dim_y - 0.5, num=env.dim_y))
  ax.set_yticklabels(["%d" % y for y in np.arange(0, env.dim_y * env.dim_x,
                                                  env.dim_x)])
  ax.set_yticks(np.arange(env.dim_y + 1), minor=True)
  ax.grid(which='minor', linestyle='-')

def plot_heatmap_max_val(env, value, ax=None):
  """
  Generate heatmap showing maximum value at each state
  """
  if ax is None:
    fig, ax = plt.subplots()

  if value.ndim == 1:
    value_max = np.reshape(value, (env.dim_y, env.dim_x))
  else:
    value_max = np.reshape(value.max(axis=1), (env.dim_y, env.dim_x))
  value_max = value_max[::-1, :]

  im = ax.imshow(value_max, aspect='auto', interpolation='none', cmap='afmhot')
  ax.set(title='Maximum value per state')
  ax.set_xticks(np.linspace(0, env.dim_x - 1, num=env.dim_x))
  ax.set_xticklabels(["%d" % x for x in np.arange(env.dim_x)])
  ax.set_yticks(np.linspace(0, env.dim_y - 1, num=env.dim_y))
  if env.name != 'windy_cliff_grid':
    ax.set_yticklabels(
        ["%d" % y for y in np.arange(
            0, env.dim_y * env.dim_x, env.dim_x)][::-1])
  return im

def plot_rewards(n_episodes, rewards, average_range=10, ax=None):
  """
  Generate plot showing total reward accumulated in each episode.
  """
  if ax is None:
    fig, ax = plt.subplots()

  smoothed_rewards = (conv(rewards, np.ones(average_range), mode='same')
                      / average_range)
  ax.plot(range(0, n_episodes, average_range),
          smoothed_rewards[0:n_episodes:average_range],
          marker='o', linestyle='--')
  ax.set(xlabel='Episodes', ylabel='Total reward')

def plot_performance(env, value, reward_sums):
  fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(16, 12))
  plot_state_action_values(env, value, ax=axes[0, 0])
  plot_quiver_max_action(env, value, ax=axes[0, 1])
  plot_rewards(n_episodes, reward_sums, ax=axes[1, 0])
  im = plot_heatmap_max_val(env, value, ax=axes[1, 1])
  fig.colorbar(im)
# -
# ---
# # Section 1: Markov Decision Processes
# + cellView="form"
# @title Video 1: MDPs and Q-learning
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="8yvwMrUQJOU", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')

display(out)
# -
# ## Section 1.1: Grid Worlds
#
# As pointed out, bandits only have a single state and immediate rewards for our actions. Many problems we are interested in have multiple states and delayed rewards, i.e. we won't know if the choices we made will pay off over time, or which actions we took contributed to the outcomes we observed.
#
# In order to explore these ideas, we turn to a common problem setting: the grid world. Grid worlds are simple environments where each state corresponds to a tile on a 2D grid, and the only actions the agent can take are to move up, down, left, or right across the grid tiles. The agent's job is almost always to find a way to a goal tile in the most direct way possible while overcoming some maze or other obstacles, either static or dynamic.
#
# For our discussion we will be looking at the classic Cliff World, or Cliff Walker, environment. This is a 4x10 grid with a starting position in the lower-left and the goal position in the lower-right. Every tile between these two is the "cliff", and should the agent enter the cliff, they will receive a -100 reward and be sent back to the starting position. Every tile other than the cliff produces a -1 reward when entered. The goal tile ends the episode after taking any action from it.
#
# <img alt="CliffWorld" width="577" height="308" src="https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/W2D5_ReinforcementLearning/static/W2D5_Tutorial3_CliffWorld.png?raw=true">
#
# Given these conditions, the maximum achievable reward is -11 (1 up, 9 right, 1 down). Using negative rewards is a common technique to encourage the agent to move and seek out the goal state as fast as possible.
# ---
# # Section 2: Q-Learning
#
# Now that we have our environment, how can we solve it?
#
# One of the most famous algorithms for estimating action values (aka Q-values) is the Temporal Differences (TD) **control** algorithm known as *Q-learning* (Watkins, 1989).
#
# \begin{align}
# Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \big(r_t + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t)\big)
# \end{align}
#
# where $Q(s,a)$ is the value function for action $a$ at state $s$, $\alpha$ is the learning rate, $r$ is the reward, and $\gamma$ is the temporal discount rate.
#
# The expression $r_t + \gamma \max\limits_{a} Q(s_{t+1},a)$ is referred to as the TD target, while the full expression
# \begin{align}
# r_t + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t),
# \end{align}
# i.e. the difference between the TD target and the current Q-value, is referred to as the TD error, or reward prediction error.
#
# Because of the max operator used to select the optimal Q-value in the TD target, Q-learning directly estimates the optimal action value, i.e. the cumulative future reward that would be obtained if the agent behaved optimally, regardless of the policy currently followed by the agent. For this reason, Q-learning is referred to as an **off-policy** method.
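# To make the update rule concrete, here is one hand-computed step (every number below is invented for illustration):

```python
alpha, gamma = 0.1, 1.0

q = 0.0            # current estimate Q(s_t, a_t)
reward = -1.0      # r_t
max_next_q = 2.0   # max_a Q(s_{t+1}, a)

td_target = reward + gamma * max_next_q   # -1.0 + 1.0 * 2.0 = 1.0
td_error = td_target - q                  #  1.0 - 0.0      = 1.0
q = q + alpha * td_error                  #  0.0 + 0.1 * 1.0 = 0.1
print(q)  # 0.1
```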
# ## Exercise 1: Implement the Q-learning algorithm
#
# In this exercise you will implement the Q-learning update rule described above. It takes in as arguments the previous state $s_t$, the action $a_t$ taken, the reward received $r_t$, the current state $s_{t+1}$, the Q-value table, and a dictionary of parameters that contain the learning rate $\alpha$ and discount factor $\gamma$. The method returns the updated Q-value table. For the parameter dictionary, $\alpha$: `params['alpha']` and $\gamma$: `params['gamma']`.
#
def q_learning(state, action, reward, next_state, value, params):
  """Q-learning: updates the value function and returns it.

  Args:
    state (int): the current state identifier
    action (int): the action taken
    reward (float): the reward received
    next_state (int): the transitioned to state identifier
    value (ndarray): current value function of shape (n_states, n_actions)
    params (dict): a dictionary containing the default parameters

  Returns:
    ndarray: the updated value function of shape (n_states, n_actions)
  """
  # Q-value of current state-action pair
  q = value[state, action]

  ##########################################################
  ## TODO for students: implement the Q-learning update rule
  # Fill out function and remove
  raise NotImplementedError("Student exercise: implement the Q-learning update rule")
  ##########################################################

  # write an expression for finding the maximum Q-value at the current state
  if next_state is None:
    max_next_q = 0
  else:
    max_next_q = ...

  # write the expression to compute the TD error
  td_error = ...
  # write the expression that updates the Q-value for the state-action pair
  value[state, action] = ...

  return value
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D4_ReinforcementLearning/solutions/W3D4_Tutorial3_Solution_725e14ff.py)
#
#
# -
# Now that we have our Q-learning algorithm, let's see how it handles learning to solve the Cliff World environment.
#
# You will recall from the previous tutorial that a major part of reinforcement learning algorithms are their ability to balance exploitation and exploration. For our Q-learning agent, we again turn to the epsilon-greedy strategy. At each step, the agent will decide with probability $1 - \epsilon$ to use the best action for the state it is currently in by looking at the value function, otherwise just make a random choice.
#
# The process by which the agent will interact with and learn about the environment is handled for you in the helper function `learn_environment`. This implements the entire learning episode lifecycle of stepping through the state observation, action selection (epsilon-greedy) and execution, reward, and state transition. Feel free to review that code later to see how it all fits together, but for now let's test out our agent.
# +
# set for reproducibility, comment out / change seed value for different results
np.random.seed(1)
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# solve Cliff World using Q-learning
results = learn_environment(env, q_learning, params, max_steps, n_episodes)
value_qlearning, reward_sums_qlearning = results
# Plot results
plot_performance(env, value_qlearning, reward_sums_qlearning)
# -
# If all went well, we should see four plots that show different aspects on our agent's learning and progress.
#
# * The top left is a representation of the Q-table itself, showing the values for different actions in different states. Notably, going right from the starting state or down when above the cliff is clearly very bad.
# * The top right figure shows the greedy policy based on the Q-table, i.e. what action would the agent take if it only took its best guess in that state.
# * The bottom right is the same as the top, only instead of showing the action, it's showing a representation of the maximum Q-value at a particular state.
# * The bottom left is the actual proof of learning, as we see the total reward steadily increasing after each episode until asymptoting at the maximum possible reward of -11.
#
# Feel free to try changing the parameters or random seed and see how the agent's behavior changes.
# ---
# # Summary
#
# In this tutorial you implemented a reinforcement learning agent based on Q-learning to solve the Cliff World environment. Q-learning combined the epsilon-greedy approach to exploration-exploitation with a table-based value function to learn the expected future rewards for each state.
# ---
# # Bonus
# ## SARSA
#
# An alternative to Q-learning, the SARSA algorithm also estimates action values. However, rather than estimating the optimal (off-policy) values, SARSA estimates the **on-policy** action value, i.e. the cumulative future reward that would be obtained if the agent behaved according to its current beliefs.
#
# \begin{align}
# Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \big(r_t + \gamma Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)\big)
# \end{align}
#
# where, once again, $Q(s,a)$ is the value function for action $a$ at state $s$, $\alpha$ is the learning rate, $r$ is the reward, and $\gamma$ is the temporal discount rate.
#
# In fact, you will notice that the *only* difference between Q-learning and SARSA is that the TD target is computed using the action selected by the policy (in our case epsilon-greedy) rather than the action that maximizes the Q-value.
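# A toy numeric illustration of that difference in the TD targets (all values invented):

```python
import numpy as np

gamma = 1.0
reward = -1.0
q_next = np.array([2.0, -1.0, 0.5, 0.0])  # invented Q(s_{t+1}, .) values

# Q-learning bootstraps from the greedy (maximum) next action ...
q_target = reward + gamma * q_next.max()      # -1.0 + 2.0 = 1.0

# ... while SARSA bootstraps from whatever action the policy actually takes,
# e.g. an exploratory pick of action 1 under epsilon-greedy
sarsa_target = reward + gamma * q_next[1]     # -1.0 + (-1.0) = -2.0

print(q_target, sarsa_target)
```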
# ### Exercise 2: Implement the SARSA algorithm
#
# In this exercise you will implement the SARSA update rule described above. Just like Q-learning, it takes in as arguments the previous state $s_t$, the action $a_t$ taken, the reward received $r_t$, the current state $s_{t+1}$, the Q-value table, and a dictionary of parameters that contain the learning rate $\alpha$ and discount factor $\gamma$. The method returns the updated Q-value table. You may use the `epsilon_greedy` function to acquire the next action. For the parameter dictionary, $\alpha$: `params['alpha']`, $\gamma$: `params['gamma']`, and $\epsilon$: `params['epsilon']`.
#
def sarsa(state, action, reward, next_state, value, params):
"""SARSA: updates the value function and returns it.
Args:
state (int): the current state identifier
action (int): the action taken
reward (float): the reward received
next_state (int): the transitioned to state identifier
value (ndarray): current value function of shape (n_states, n_actions)
params (dict): a dictionary containing the default parameters
Returns:
ndarray: the updated value function of shape (n_states, n_actions)
"""
# value of previous state-action pair
q = value[state, action]
##########################################################
## TODO for students: implement the SARSA update rule
# Fill out function and remove
raise NotImplementedError("Student exercise: implement the SARSA update rule")
##########################################################
# select the expected value at current state based on our policy by sampling
# from it
if next_state is None:
policy_next_q = 0
else:
# write an expression for selecting an action using epsilon-greedy
policy_action = ...
# write an expression for obtaining the value of the policy action at the
# current state
policy_next_q = ...
# write the expression to compute the TD error
td_error = ...
# write the expression that updates the Q-value for the state-action pair
value[state, action] = ...
return value
def sarsa(state, action, reward, next_state, value, params):
"""SARSA: updates the value function and returns it.
Args:
state (int): the current state identifier
action (int): the action taken
reward (float): the reward received
next_state (int): the transitioned to state identifier
value (ndarray): current value function of shape (n_states, n_actions)
params (dict): a dictionary containing the default parameters
Returns:
ndarray: the updated value function of shape (n_states, n_actions)
"""
# value of previous state-action pair
q = value[state, action]
# select the expected value at current state based on our policy by sampling
# from it
if next_state is None:
policy_next_q = 0
else:
# write an expression for selecting an action using epsilon-greedy
policy_action = epsilon_greedy(value[next_state], params['epsilon'])
# write an expression for obtaining the value of the policy action at the
# current state
policy_next_q = value[next_state, policy_action]
# write the expression to compute the TD error
td_error = reward + params['gamma'] * policy_next_q - q
# write the expression that updates the Q-value for the state-action pair
value[state, action] = q + params['alpha'] * td_error
return value
# Now that we have an implementation for SARSA, let's see how it tackles Cliff World. We will again use the same setup we tried with Q-learning.
# +
# set for reproducibility, comment out / change seed value for different results
np.random.seed(1)
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# learn Cliff World using Sarsa
results = learn_environment(env, sarsa, params, max_steps, n_episodes)
value_sarsa, reward_sums_sarsa = results
# Plot results
plot_performance(env, value_sarsa, reward_sums_sarsa)
# -
# We should see that SARSA also solves the task with similar-looking outcomes to Q-learning. One notable difference is that SARSA seems to be skittish around the cliff edge and often goes further away from it before coming back down to the goal.
#
# Again, feel free to try changing the parameters or random seed and see how the agent's behavior changes.
# ## On-Policy vs Off-Policy
#
# We have now seen an example of both on- and off-policy learning algorithms. Let's compare both Q-learning and SARSA reward results again, side-by-side, to see how they stack up.
# +
# parameters needed by our policy and learning rule
params = {
'epsilon': 0.1, # epsilon-greedy policy
'alpha': 0.1, # learning rate
'gamma': 1.0, # discount factor
}
# episodes/trials
n_episodes = 500
max_steps = 1000
# environment initialization
env = CliffWorld()
# learn Cliff World using Sarsa
np.random.seed(1)
results = learn_environment(env, q_learning, params, max_steps, n_episodes)
value_qlearning, reward_sums_qlearning = results
np.random.seed(1)
results = learn_environment(env, sarsa, params, max_steps, n_episodes)
value_sarsa, reward_sums_sarsa = results
# -
fig, ax = plt.subplots()
ax.plot(reward_sums_qlearning, label='Q-learning')
ax.plot(reward_sums_sarsa, label='SARSA')
ax.set(xlabel='Episodes', ylabel='Total reward')
plt.legend(loc='lower right');
# On this simple Cliff World task, Q-learning and SARSA are almost indistinguishable from a performance standpoint, but we can see that Q-learning has a slight edge within the 500-episode time horizon. Let's look at the illustrated "greedy policy" plots again.
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 6))
plot_quiver_max_action(env, value_qlearning, ax=ax1)
ax1.set(title='Q-learning maximum value/probability actions')
plot_quiver_max_action(env, value_sarsa, ax=ax2)
ax2.set(title='SARSA maximum value/probability actions');
# What should immediately jump out is that Q-learning learned to go up, then immediately go to the right, skirting the cliff edge, until it hits the wall and goes down to the goal. The policy further away from the cliff is less certain.
#
# SARSA, on the other hand, appears to avoid the cliff edge, going up one more tile before starting over to the goal side. This also clearly solves the challenge of getting to the goal, but does so at an additional -2 cost over the truly optimal route.
#
# Why do you think these behaviors emerged the way they did?
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="pUq0PvsJ16LM"
# # Getting Started with cuDF
#
# + [markdown] colab_type="text" id="v1iGy7TOyvXK"
# ## Using cuDF with the California Housing Dataset
#
# The final goal of this exercise is to examine the average number of total rooms and bedrooms of a set of properties within a coordinate bounding box, using only the _cuDF_ library.
# ### Importing as a cuDF DataFrame
#
# + colab={} colab_type="code" id="-ZTSR5vG8yqa"
## Import the cuDF Library:
import cudf
# + [markdown] colab_type="text" id="TE9Cs3fK0K3E"
# The data we're going to use is the `data/housing.csv` file. This file contains data on housing blocks in the state of California. Let's examine this data further.
# + colab={} colab_type="code" id="fr2L7QmK0Fvo"
## We can read this as a cuDF dataframe by using:
californiaDF = cudf.read_csv('data/housing.csv')
## Visualize DataFrame
print(californiaDF)
## Examine Shape of DataFrame
print("\nDataframe is of dimensions: " + str(californiaDF.shape))
# + [markdown] colab_type="text" id="eP0EHYsR1AKD"
# From the output of the above code block we can see the actual dimensions of the dataframe created. We can visualize the names of the columns by examining `californiaDF.columns.values`.
# + colab={} colab_type="code" id="srv5aCem0sAh"
## Visualizing Column Names
print(californiaDF.columns.values)
# + [markdown] colab_type="text" id="hhiV_UC_2AL4"
# We can also import a dataframe via pandas, using the `cudf.DataFrame.from_pandas()` function.
# + colab={} colab_type="code" id="JsYnjWFb1k8s"
## Import Pandas
import pandas
## Read data as pandas dataframe
californiaDF = pandas.read_csv('data/housing.csv')
## Convert to cuDF dataframe
californiaDF = cudf.DataFrame.from_pandas(californiaDF)
## Visualize DataFrame
print(californiaDF)
## Examine Shape of DataFrame
print("\nDataframe is of dimensions: " + str(californiaDF.shape))
# + [markdown] colab_type="text" id="nlhQD1Up3k8t"
# This dataframe should be identical to the one created earlier!
# + [markdown] colab_type="text" id="RZ3h2TU24Fh9"
# ### Selection
#
# Our first task in manipulation is to extract all rows but only the `longitude`, `latitude`, `total_rooms`, `total_bedrooms` and `households` columns.
# + colab={} colab_type="code" id="iOHguPCI3htK"
## Selecting the columns (longitude , latitude, total_rooms, total_bedrooms and households) and all rows alone
householdDF = californiaDF.loc[:,['longitude', 'latitude', 'total_rooms','total_bedrooms','households']]
print(householdDF)
# + [markdown] colab_type="text" id="M5s87Ki65D8F"
# ### Filter queries
#
# Our next task is to visualize only the housing blocks within a certain longitude and latitude bounding box. This box can be defined by two `longitude`,`latitude` pairs; one pair representing the lower left of the box and one pair representing the top right.
#
# Let's focus on the Mountain View area, where we define:
# * Lower-left coordinates of the bounding box: `latitude` = 37.36472345, `longitude` = -122.12830693
# * Top-right coordinates of the bounding box: `latitude` = 37.40657584, `longitude` = -122.06162184
#
# + colab={} colab_type="code" id="R8-z0KXv4tU5"
## Running Queries on cuDF
filteredDF = householdDF.query("(longitude >= -122.12830693) and (longitude <= -122.06162184) and (latitude >= 37.36472345) and (latitude <= 37.40657584)")
print(filteredDF)
## Count the number of rows that matched the filter
print("Number of matching rows: " + str(len(filteredDF)))
# + [markdown] colab_type="text" id="V9LLd1K672xx"
# We should do some preliminary cleaning as a good practice against any errors. Let's replace any 'None' values with '0' in the dataframe. This is done through `filteredDF.fillna()`.
#
# _This is not necessary on this data-set as it is already cleaned, but it is a good practice regardless_
#
# _Note that if the data actually had 'None' values, this process will change the final results slightly_
# + colab={} colab_type="code" id="fyoH7FqJ7_ST"
filteredDF = filteredDF.fillna(0)
print(filteredDF)
# + [markdown] colab_type="text" id="tigaWGjo6qTT"
# We should now have a dataframe with housing data over the Mountain View area. Our next step is to average the number of total_rooms and total_bedrooms over the total households.
# + [markdown] colab_type="text" id="GXGPi56c8QHF"
# ### Operations
#
# We need to find the average bedrooms and rooms for each of the households within a geographic area.
# + colab={} colab_type="code" id="_tSZKwpl7Zgu"
## Average values of certain columns
print("Avg. Households per block in given bounding box: " + str(filteredDF['households'].mean()))
print("Avg. Total Bedrooms per block in given bounding box: " + str(filteredDF['total_bedrooms'].mean()))
print("Avg. Total Rooms per block in given bounding box: " + str(filteredDF['total_rooms'].mean()))
print("\n----------------\n")
avgBedroomsHousehold = filteredDF['total_bedrooms'].sum()/filteredDF['households'].sum()
avgRoomsHousehold = filteredDF['total_rooms'].sum()/filteredDF['households'].sum()
print("Avg. Bedrooms per household in given bounding box: " + str(avgBedroomsHousehold))
print("Avg. Rooms per household in given bounding box: " + str(avgRoomsHousehold))
# + [markdown] colab_type="text" id="nRz_Rah49jKR"
# Congrats! You successfully generated the average number of bedrooms and rooms per household in a given area in California!
#
# If you want to make this easier, I suggest we create a function that automates this process. Let's do this quickly!
# + colab={} colab_type="code" id="szC0-xnk8zha"
## Function combining all the processes above
# Function input variables:
# csvPath - Path to CSV housing data file
# long1 - Lower left longitude coordinate
# lat1 - Lower left latitude coordinate
# long2 - Upper right longitude coordinate
# lat2 - Upper right latitude coordinate
def HouseHoldAnalysis(csvPath, long1, lat1, long2, lat2):
## Data Input
californiaDF = cudf.read_csv(csvPath)
print("\n Initial Dataframe is of dimensions: " + str(californiaDF.shape) +"\n")
## Selection
householdDF = californiaDF.loc[:,['longitude', 'latitude', 'total_rooms','total_bedrooms','households']]
## Query
filteredDF = householdDF.query("(longitude >= "+str(long1)+") and (longitude <= "+str(long2)+") and (latitude >= "+str(lat1)+") and (latitude <= "+str(lat2)+")")
## Average values of certain columns
print("Filtered Dataframe is of dimensions: " + str(filteredDF.shape) +"\n")
print("Avg. Households per block in given bounding box: " + str(filteredDF['households'].mean()))
print("Avg. Total Bedrooms per block in given bounding box: " + str(filteredDF['total_bedrooms'].mean()))
print("Avg. Total Rooms per block in given bounding box: " + str(filteredDF['total_rooms'].mean()))
print("\n----------------\n")
avgBedroomsHousehold = filteredDF['total_bedrooms'].sum()/filteredDF['households'].sum()
avgRoomsHousehold = filteredDF['total_rooms'].sum()/filteredDF['households'].sum()
print("Avg. Bedrooms per household in given bounding box: " + str(avgBedroomsHousehold))
print("Avg. Rooms per household in given bounding box: " + str(avgRoomsHousehold))
return(avgBedroomsHousehold, avgRoomsHousehold)
# + [markdown] colab_type="text" id="YZlRJqPvKex_"
# # Here we go!
#
# You can now simply call this function with whatever geographic bounding-box values you like, and it will return the average bedrooms per household and average rooms per household in that area!
#
# This function we created can also work on datasets for different areas that follow the same structure as the `data/housing.csv` file. For example, this means that if you have a similar data-set for New York, then you can calculate the average bedrooms and rooms in geographic boxes that you define!
# + colab={} colab_type="code" id="QasHcCIOK5Ov"
## Mountain View Area (Same example as above)
HouseHoldAnalysis('data/housing.csv', -122.12830693, 37.36472345, -122.06162184, 37.40657584)
# + [markdown] colab_type="text" id="6quBOAl4Lf8Z"
# Go on, try some more areas in California! If you need help finding `latitude`, `longitude` values, check this link [here](https://www.mapcoordinates.net/en).
# + colab={} colab_type="code" id="e6iX2FPdLvB_"
## Santa Clara Area (New Example!)
HouseHoldAnalysis('data/housing.csv', -121.97046831, 37.3316244, -121.92549303, 37.36519458)
# + [markdown] colab_type="text" id="xtCTb1tx-m1C"
# # Next Steps
#
# ## cuDF Guide
#
# I recommend using the [cuDF documentation guide](https://rapidsai.github.io/projects/cudf/en/latest/index.html) for a deeper understanding of GPU dataframe usage.
#
# ## GPU Data Science
#
# If you wish to learn more about running Data Science Projects on the GPU, I recommend you check out the [full documentation for RAPIDS.](https://docs.rapids.ai/api)
#
# ### Check out the RAPIDS notebooks repos for more examples:
#
# * https://github.com/rapidsai/notebooks
# * https://github.com/rapidsai/notebooks-extended
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Parth-Rawri/CIFAR10/blob/main/CIFAR10_Image_Classification_using_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="ccOHhPkjnYXM"
import os
import torch
import torchvision
import tarfile
import matplotlib.pyplot as plt
from torchvision.datasets.utils import download_url
from torch.utils.data import random_split
# + [markdown] id="OWU2BvMhgzCI"
# ### Step 1: Download and Explore the Dataset
# + colab={"base_uri": "https://localhost:8080/"} id="s4N0KoSzoZxv" outputId="44846af9-c4c7-410c-ef2a-9d91b476ac91"
dataset_url = 'https://s3.amazonaws.com/fast-ai-imageclas/cifar10.tgz'
download_url(dataset_url, '/content/')
# + id="wb4dQPAQhUn1"
# Extract from archive
with tarfile.open('/content/cifar10.tgz', 'r:gz') as tar:
tar.extractall(path = '/content/data/')
# + colab={"base_uri": "https://localhost:8080/"} id="ZVQv0HKFicKa" outputId="cc84fbf2-346d-42f5-92d4-bab872e419f8"
data_dir = '/content/data/cifar10'
print(os.listdir(data_dir))
classes = os.listdir(data_dir + '/train')
print(classes)
# + [markdown] id="inWF1V7Yk7yC"
# Looking inside a couple of folders.
# + colab={"base_uri": "https://localhost:8080/"} id="rvEN-zDzjM1g" outputId="c1c5fdc8-a17c-49ff-f5a4-f7b96ee40e76"
airplane_files = os.listdir(data_dir + '/train/airplane')
print("No. of training examples for airplanes:", len(airplane_files))
print(airplane_files[:5])
# + colab={"base_uri": "https://localhost:8080/"} id="mZXmhhZtk3Oi" outputId="df49807a-5d91-4c6d-dd46-b1a10df7bc7f"
ship_test_files = os.listdir(data_dir + '/test/ship')
print("No. of test examples for ship:", len(ship_test_files))
print(ship_test_files[:5])
# + [markdown] id="jB0d8eZjl3AL"
# Using the `ImageFolder` class from `torchvision` to load the data as PyTorch tensors.
# + id="kJnehSKDlXxZ"
from torchvision.datasets import ImageFolder
from torchvision.transforms import ToTensor
dataset = ImageFolder(data_dir + '/train', transform = ToTensor())
test_dataset = ImageFolder(data_dir + '/test', transform = ToTensor())
# + colab={"base_uri": "https://localhost:8080/"} id="3BL9pDWum_bM" outputId="d609d6a5-da02-4e76-9ef4-b251fa26d071"
len(dataset), len(test_dataset)
# + [markdown] id="QpoUrlbRnFd5"
# Looking at sample elements from the training dataset.
# + colab={"base_uri": "https://localhost:8080/"} id="cFUTeD5ynCtS" outputId="15d06ad7-cd8c-4eea-f97d-a7954f505faf"
img, label = dataset[0]
print(img.shape, label)
img
# + colab={"base_uri": "https://localhost:8080/"} id="jgDacFUcnbXK" outputId="ad029b1e-e165-4b72-a214-a8b20181aeef"
print(dataset.classes)
# + id="lax9AgAPnovI"
def show_example(img, label):
print('Label: ', dataset.classes[label], "(" + str(label) + ")")
plt.imshow(img.permute(1, 2, 0))
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="ohuUzitQorrM" outputId="8707cf5c-1ee5-4fcd-97cc-a2a592fa46b6"
img, label = dataset[0]
show_example(img, label)
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="1aun4eKRpBSa" outputId="ff482500-7fef-436d-eac3-332208dfc201"
show_example(*dataset[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="GQRznpYXpll3" outputId="12e21b49-a059-447f-d825-4bd924557d80"
show_example(*dataset[450])
# + [markdown] id="SOQXVmQ-qAG2"
# ### Step 2: Training and Validation Datasets
# + [markdown] id="n0bv4FufhvxX"
# We set a manual seed so that the same 5,000 elements are selected for the validation set every time the notebook is restarted or re-run. <br>
# Standardizing the validation set this way means we evaluate our models on the same images each run, rather than on a different subset every time.
# + id="sjGkl-jVpr-8"
random_seed = 34
torch.manual_seed(random_seed);
# + colab={"base_uri": "https://localhost:8080/"} id="2FNR2d44iFIl" outputId="9b5ed39c-a3f5-4e3a-c981-44f7dd993784"
val_percent = 0.1
val_size = int(len(dataset) * val_percent)
train_size = len(dataset) - val_size
train_ds, val_ds = random_split(dataset, [train_size, val_size])
len(train_ds), len(val_ds)
# + [markdown] id="Q00_XpolmBLs"
# Creating DataLoaders
# + colab={"base_uri": "https://localhost:8080/"} id="Tc_GVihJj0V4" outputId="8b4ed0a1-13c5-4c2b-b835-2e025a429383"
from torch.utils.data.dataloader import DataLoader
batch_size = 128
train_dl = DataLoader(train_ds, batch_size, shuffle = True, num_workers = 4, pin_memory = True)
val_dl = DataLoader(val_ds, batch_size*2, num_workers = 4, pin_memory = True)
test_dl = DataLoader(test_dataset, batch_size*2, num_workers = 4, pin_memory = True)
# + id="ZucDxJpwnBu8"
from torchvision.utils import make_grid
def show_batch(dl):
for images, labels in dl:
fig, ax = plt.subplots(figsize = (12, 6))
ax.set_xticks([])
ax.set_yticks([])
ax.imshow(make_grid(images, nrow=16).permute(1, 2, 0))
break
# + colab={"base_uri": "https://localhost:8080/", "height": 415} id="4D4dWu4Mo-Rw" outputId="cfad4e06-f65d-48c3-9cf5-62068af9ee9d"
show_batch(train_dl)
# + [markdown] id="hcq_ZP-ex3Jj"
# ### Step 3: Developing the Model
# + [markdown] id="MW_-aSKqytbo"
# Implementing a convolution operation on a 1 channel image with a 3x3 kernel.
# + id="w2uwO3N1pQiW"
def apply_kernel(image, kernel):
ri, ci = image.shape # Image dimensions
rk, ck = kernel.shape # Kernel dimensions
ro, co = ri-rk+1, ci-ck+1 # Output dimensions
output = torch.zeros([ro, co])
for i in range(ro):
for j in range(co):
output[i, j] = torch.sum(image[i:i+rk, j:j+ck] * kernel)
return output
# + colab={"base_uri": "https://localhost:8080/"} id="O88DIHFp0ZiB" outputId="f89fcc75-e728-4323-8a67-dc1d51b5ff09"
sample_image = torch.tensor([
[3, 3, 2, 1, 0],
[0, 0, 1, 3, 1],
[3, 1, 2, 2, 3],
[2, 0, 0, 2, 2],
[2, 0, 0, 0, 1]], dtype = torch.float32)
sample_kernel = torch.tensor([
[0, 1, 2],
[2, 2, 0],
[0, 1, 2]], dtype = torch.float32)
apply_kernel(sample_image, sample_kernel)
# + [markdown] id="pLZLeEJn4e1X"
# Developing a simple model!
# + id="hi-rq6-m1Xj2"
import torch.nn as nn
import torch.nn.functional as F
# + id="pIvUWUlQ4pu-"
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=1)
# + id="8ge_3Uv47Bjn"
pool = nn.MaxPool2d(kernel_size=2, stride=None, padding=0)
# + colab={"base_uri": "https://localhost:8080/"} id="ismCA9wX6Q8r" outputId="6a2ae22a-e1c9-462a-84b3-248838f910d1"
for images, labels in train_dl:
print('images.shape:', images.shape)
out = conv(images)
print('out.shape', out.shape)
out = pool(out)
print('out.shape', out.shape)
break
# + colab={"base_uri": "https://localhost:8080/"} id="xrTvnnDY62Mf" outputId="1b989300-43b7-4238-c626-5698d5076554"
conv.weight.shape
# + [markdown] id="hwzxKDMp9Bqp"
# We have 8 kernels (since we want 8 O/P channels); Each of these 8 kernels contains 3 matrices (since there are 3 I/P channels) of size 3x3
# + colab={"base_uri": "https://localhost:8080/"} id="DGb_MthL8UV5" outputId="7ff76748-ea55-420b-cf7d-dfd56c6f7757"
# Displays 1st Kernel (1/8 Kernels)
conv.weight[0]
# + colab={"base_uri": "https://localhost:8080/"} id="C-OXL4Ka9uPv" outputId="1858a75f-a969-4471-d0cc-38c90cb5c020"
print(f"Matrix for Red:\n{conv.weight[0,0]}\nMatrix for Green:\n{conv.weight[0,1]}\nMatrix for Blue:\n{conv.weight[0,2]}")
# + colab={"base_uri": "https://localhost:8080/"} id="XxeATgpo-HbQ" outputId="b7c08e83-51dd-4d77-b61c-826a429905c4"
# Displays all Kernels
conv.weight
# + [markdown] id="ATBRpg-s-r7p"
# A max pooling layer doesn't have any weights or biases. It has no learnable parameters; it just applies a fixed rule.
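# To see this concretely, here is a minimal NumPy sketch of the 2x2 max pooling rule, written independently of PyTorch. There are no weights anywhere, only a fixed reduction over each window:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D array (no learned parameters)."""
    h, w = x.shape
    # group the array into non-overlapping 2x2 blocks, then take each block's max
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 1, 2, 2],
              [3, 0, 1, 4]], dtype=float)
max_pool_2x2(x)  # -> [[4., 5.], [3., 4.]]
```

# Every output cell is simply the maximum of one 2x2 input window, which is exactly what `nn.MaxPool2d(kernel_size=2)` computes per channel.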
# + id="optvnZfa-lSS"
simple_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(kernel_size=2, stride=None, padding=0)
)
# + colab={"base_uri": "https://localhost:8080/"} id="Cl-pPkHcASkI" outputId="df6fa481-4838-4656-b6b3-3b390e7705a0"
for images, labels in train_dl:
print('images.shape:', images.shape)
out = simple_model(images)
print('out.shape', out.shape)
break
# + [markdown] id="fEFBN1w3RRQW"
# Image Classification Base Class - developed by extending the nn.Module class
# + id="lH7jUYgtOckn"
class ImageClassificationBase(nn.Module):
def training_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
return loss
def validation_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
acc = accuracy(out, labels) # Calculate accuracy
return {'val_loss': loss.detach(), 'val_acc': acc}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print(f"Epoch [{epoch}], train_loss: {result['train_loss']:.4f}, val_loss: {result['val_loss']:.4f}, val_acc: {result['val_acc']:.4f}")
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
# + [markdown] id="LQHGYUM8RWJq"
# Extending the Image Classification base class to develop our problem-specific model (the CIFAR10 CNN model)
# + id="y3U1TxksRNpj"
class Cifar10CnnModel(ImageClassificationBase):
def __init__(self):
super().__init__()
self.network = nn.Sequential(
# input: 3 x 32 x 32
nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=1, padding=1),
# output: 32 x 32 x 32
nn.ReLU(),
# output: 32 x 32 x 32
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1),
# output: 64 x 32 x 32
nn.ReLU(),
# output: 64 x 32 x 32
nn.MaxPool2d(kernel_size=2, stride=None, padding=0),
# output: 64 x 16 x 16
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=None, padding=0),
# output: 128 x 8 x 8
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=None, padding=0),
# output: 256 x 4 x 4
nn.Flatten(),
nn.Linear(in_features= 256*4*4, out_features= 1024),
nn.ReLU(),
nn.Linear(in_features= 1024, out_features= 512),
nn.ReLU(),
nn.Linear(in_features= 512, out_features= 10)
)
def forward(self, xb):
return self.network(xb)
# + colab={"base_uri": "https://localhost:8080/"} id="9rbNTbtkYwLN" outputId="cd99ba23-5fbc-4057-8fbf-7128f9190643"
model = Cifar10CnnModel()
model
# + colab={"base_uri": "https://localhost:8080/"} id="WTEoP_Knc3Ce" outputId="b8641d4b-ff01-44c1-fab8-76da541a71f5"
for images, labels in train_dl:
print('images.shape:', images.shape)
out = model(images)
print('out.shape', out.shape)
print('out[0]:', out[0])
break
# + id="G2hdr38RhMwD"
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
class DeviceDataLoader():
"""Wrap a dataloader to move data to a device"""
def __init__(self, dl, device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data after moving it to device"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
# + colab={"base_uri": "https://localhost:8080/"} id="GcrMZWOPi52f" outputId="2253e0ff-ee21-4011-811b-364bcc7facad"
device = get_default_device()
device
# + id="sIOoTO-ejKys" colab={"base_uri": "https://localhost:8080/"} outputId="2112bded-502f-49af-9d6f-98c44d574446"
train_dl = DeviceDataLoader(train_dl, device)
val_dl =DeviceDataLoader(val_dl, device)
to_device(model, device)
# + [markdown] id="qo1MzwAHjuXv"
# ### Step 4: Training the Model
# + id="F6Q5BCPtjfek"
@torch.no_grad()
def evaluate(model, val_loader):
model.eval()
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
model.train()
train_losses = []
for batch in train_loader:
loss = model.training_step(batch)
train_losses.append(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
result['train_loss'] = torch.stack(train_losses).mean().item()
model.epoch_end(epoch, result)
history.append(result)
return history
# + id="7N7ci9T2l_wd"
model = to_device(Cifar10CnnModel(), device)
# + colab={"base_uri": "https://localhost:8080/"} id="mgE5GpZIQFGM" outputId="1ae7d2da-3b4c-4338-ebed-43175a837adc"
total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
total_params
# + colab={"base_uri": "https://localhost:8080/"} id="GOLKgG0UnaSW" outputId="18b0ac4f-c995-4cb6-886b-cebab54d3761"
evaluate(model, val_dl)
# + id="isBLwR_WTL1P"
num_epochs = 10
opt_func = torch.optim.Adam
lr = 0.001
# + colab={"base_uri": "https://localhost:8080/"} id="qLVHulg7TlXF" outputId="2beea5e6-feb9-42ac-c279-986a10c34025"
# %%time
history = fit(num_epochs, lr, model, train_dl, val_dl, opt_func)
# + colab={"base_uri": "https://localhost:8080/"} id="49Y42WGuUGZI" outputId="cee728fa-884f-4923-aafd-84f8767723b1"
history
# + id="ltabSTQQUqDi"
def plot_accuracies(history):
accuracies = [x['val_acc'] for x in history]
plt.plot(accuracies, '-x')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('Accuracy vs. No. of epochs');
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="X-QGwpnOU91Y" outputId="3fd6e189-7bbe-4116-a976-f689fc699fa9"
plot_accuracies(history)
# + id="gbMLGUyFVCqb"
def plot_losses(history):
train_losses = [x['train_loss'] for x in history]
val_losses = [x['val_loss'] for x in history]
plt.plot(train_losses, '-bx')
plt.plot(val_losses, '-rx')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['Training', 'Validation'])
plt.title('Loss vs. No. of epochs');
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="m8a7DFNnViiD" outputId="dc9a5cc6-e2cf-4c2a-eeeb-89ac4133e798"
plot_losses(history)
# + [markdown] id="b7sSGfXGbFzM"
# ### Step 5: Testing on individual images
# + id="--a71a8ZYdeH"
def predict_image(img, model):
# Convert to a batch of 1
xb = to_device(img.unsqueeze(0), device)
# Get predictions from model
yb = model(xb)
# Pick index with highest probability
_, preds = torch.max(yb, dim=1)
# Retrieve the class label
return dataset.classes[preds[0].item()]
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="SNUuljzqcYx8" outputId="78aa7916-d91b-4eea-82fa-73a2e1231c16"
img, label = test_dataset[450]
plt.imshow(img.permute(1, 2, 0))
print('Label:', dataset.classes[label], ', Predicted:', predict_image(img, model))
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="FHOeP-0WcZsO" outputId="73a8de27-b77b-497c-8f67-067648f7858d"
img, label = test_dataset[6430]
plt.imshow(img.permute(1, 2, 0))
print('Label:', dataset.classes[label], ', Predicted:', predict_image(img, model))
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="NnsXT0bPcdo7" outputId="31c5092a-639b-48e5-aa63-aaa63b098337"
img, label = test_dataset[644]
plt.imshow(img.permute(1, 2, 0))
print('Label:', dataset.classes[label], ', Predicted:', predict_image(img, model))
# + colab={"base_uri": "https://localhost:8080/"} id="-LzuGRlrdr0u" outputId="e1dbdaf8-1139-4baf-b2a2-ae1cc9bd0625"
test_loader = DeviceDataLoader(DataLoader(test_dataset, batch_size*2), device)
result = evaluate(model, test_loader)
result
# + [markdown] id="eMd6xtFHer3k"
# ### Step 6: Saving the trained Model
# + id="Go-0Yijxdy1n"
torch.save(model.state_dict(), 'cifar10-cnn.pth')
# + [markdown] id="S2g9KocwfPpY"
# The `.state_dict` method returns an `OrderedDict` containing all the weights and bias matrices mapped to the right attributes of the model. To load the model weights, we can redefine the model with the same structure, and use the `.load_state_dict` method.
# + id="0Ycnro1cfPK2"
model2 = to_device(Cifar10CnnModel(), device)
# + colab={"base_uri": "https://localhost:8080/"} id="Pk74-ziifpnU" outputId="16a81b36-c98d-4378-b095-20c9097f1fa1"
model2.load_state_dict(torch.load('cifar10-cnn.pth'))
# + colab={"base_uri": "https://localhost:8080/"} id="SExfzA4pe5xa" outputId="5edee372-b9df-4e46-bc09-dc3ae053e08c"
evaluate(model2, test_loader)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import turicreate as tc
import mxnet as mx
import numpy as np
# setup: use all available GPUs (-1 = all)
tc.config.set_num_gpus(-1)
# Load the style and content images
styles = tc.load_images('/input/style_transfer_data/style/')
content = tc.load_images('/input/style_transfer_data/content/')
# show styles
styles
# show content
content
# +
# Create a StyleTransfer model
model = tc.style_transfer.create(styles, content)
# Load some test images
test_images = tc.load_images('/input/style_transfer_data/test/')
# Stylize the test images
stylized_images = model.stylize(test_images)
# Save the model for later use in Turi Create
model.save('mymodel.model')
# Export for use in Core ML
model.export_coreml('MyStyleTransfer.mlmodel')
# data/output/style_transfer_demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# <p align="center">
# <img src="https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true" width="220" height="240" />
#
# </p>
#
# ## Interactive Sequential Gaussian Simulation Demonstration
#
#
# ### <NAME>, Associate Professor, University of Texas at Austin
#
# ##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#
#
# ### The Interactive Workflow
#
# Here's a simple workflow for sequential Gaussian simulation: the simple kriging estimate and estimation variance are used sequentially to build a local uncertainty model, which is then sampled by Monte Carlo simulation.
#
# * we use a 'toy problem' with only 3 data for speed and interpretability of the results
#
# #### Spatial Estimation
#
# Consider the case of making an estimate at some unsampled location, $z(\bf{u}_0)$, where $z$ is the property of interest (e.g. porosity) and $\bf{u}_0$ is a location vector describing the unsampled location.
#
# How would you do this given data $z(\bf{u}_1)$, $z(\bf{u}_2)$, and $z(\bf{u}_3)$?
#
# It would be natural to use a set of linear weights to formulate the estimator given the available data.
#
# \begin{equation}
# z^{*}(\bf{u}) = \sum^{n}_{\alpha = 1} \lambda_{\alpha} z(\bf{u}_{\alpha})
# \end{equation}
#
# We could add an unbiasedness constraint to impose the sum of the weights equal to one. What we will do is assign the remainder of the weight (one minus the sum of weights) to the global average; therefore, if we have no informative data we will estimate with the global average of the property of interest.
#
# \begin{equation}
# z^{*}(\bf{u}) = \sum^{n}_{\alpha = 1} \lambda_{\alpha} z(\bf{u}_{\alpha}) + \left(1-\sum^{n}_{\alpha = 1} \lambda_{\alpha} \right) \overline{z}
# \end{equation}
#
# We will make a stationarity assumption, so let's assume that we are working with residuals, $y$.
#
# \begin{equation}
# y^{*}(\bf{u}) = z^{*}(\bf{u}) - \overline{z}(\bf{u})
# \end{equation}
#
# If we substitute this form into our estimator the estimator simplifies, since the mean of the residual is zero.
#
# \begin{equation}
# y^{*}(\bf{u}) = \sum^{n}_{\alpha = 1} \lambda_{\alpha} y(\bf{u}_{\alpha})
# \end{equation}
#
# while satisfying the unbiasedness constraint.
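#
# The estimator with the global-mean remainder is easy to check numerically. The sketch below uses assumed (not kriged) weights and made-up data values, just to show the remainder of the weight going to the global mean.

```python
import numpy as np

# hypothetical illustration: data values and assumed linear weights (not solved by kriging)
z = np.array([0.12, 0.18, 0.10])     # data values z(u_alpha), e.g. porosity
lam = np.array([0.4, 0.3, 0.1])      # assumed weights, summing to 0.8
z_bar = 0.15                         # global (stationary) mean

# the remainder of the weight, 1 - sum(lam), is assigned to the global mean
z_star = np.sum(lam * z) + (1.0 - np.sum(lam)) * z_bar
print(round(z_star, 4))              # 0.142
```

# With no informative data (all weights zero) the estimate collapses to the global mean, as stated above.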
#
# #### Kriging
#
# Now the next question is what weights should we use?
#
# We could use equal weighting, $\lambda = \frac{1}{n}$, and the estimator would be the average of the local data applied for the spatial estimate. This would not be very informative.
#
# We could assign weights considering the spatial context of the data and the estimate:
#
# * **spatial continuity**: as quantified by the variogram (and covariance function)
# * **redundancy**: the degree of spatial continuity between all of the available data with themselves
# * **closeness**: the degree of spatial continuity between the available data and the estimation location
#
# The kriging approach accomplishes this, calculating the best linear unbiased weights for the local data to estimate at the unknown location. The derivation of the kriging system and the resulting linear set of equations is available in the lecture notes. Furthermore kriging provides a measure of the accuracy of the estimate! This is the kriging estimation variance (sometimes just called the kriging variance).
#
# \begin{equation}
# \sigma^{2}_{E}(\bf{u}) = C(0) - \sum^{n}_{\alpha = 1} \lambda_{\alpha} C(\bf{u}_0 - \bf{u}_{\alpha})
# \end{equation}
#
# What is 'best' about this estimate? Kriging estimates are best in that they minimize the above estimation variance.
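#
# As a concrete sketch (separate from the GeostatsPy routines used later), the simple kriging weights, estimate, and estimation variance can be computed directly for the zero-mean residual case. The 1D data locations, values, and exponential covariance parameters below are all assumed for illustration.

```python
import numpy as np

def cov(h, sill=1.0, a=300.0):
    """Assumed isotropic exponential covariance model with practical range a."""
    return sill * np.exp(-3.0 * np.abs(h) / a)

x = np.array([100.0, 500.0, 900.0])   # assumed 1D data locations
y = np.array([-1.0, 0.0, 1.0])        # residuals (mean already removed)
x0 = 400.0                            # unsampled (estimation) location

C = cov(x[:, None] - x[None, :])      # data-to-data covariance (redundancy)
c0 = cov(x - x0)                      # data-to-estimate covariance (closeness)
lam = np.linalg.solve(C, c0)          # simple kriging weights

est = lam @ y                         # simple kriging estimate (zero mean)
var = cov(0.0) - lam @ c0             # kriging estimation variance
print(np.round(lam, 3), round(est, 3), round(var, 3))
```

# Note the closest datum (x = 500) receives by far the largest weight, and the estimation variance sits below the sill because informative data are nearby.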
#
# #### Properties of Kriging
#
# Here are some important properties of kriging:
#
# * **Exact interpolator** - kriging estimates with the data values at the data locations
# * **Kriging variance** - can be calculated before collecting the samples, since the kriging estimation variance depends on the data configuration but not on the data values or the kriging estimate, i.e. the kriging estimator is homoscedastic.
# * **Spatial context** - in addition to spatial continuity, closeness, and redundancy, kriging accounts for the configuration of the data and the structural continuity of the variable being estimated.
# * **Scale** - kriging may be generalized to account for the support volume of the data and estimate. We will cover this later.
# * **Multivariate** - kriging may be generalized to account for multiple secondary data in the spatial estimate with the cokriging system. We will cover this later.
# * **Smoothing effect** of kriging can be forecast. We will use this to build stochastic simulations later.
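#
# The exact interpolator property can be verified with the same kind of sketch: kriging exactly at a data location returns that datum with zero estimation variance. The covariance model and data below are assumptions for illustration.

```python
import numpy as np

def cov(h, sill=1.0, a=300.0):
    """Assumed isotropic exponential covariance model."""
    return sill * np.exp(-3.0 * np.abs(h) / a)

x = np.array([100.0, 500.0, 900.0])   # assumed data locations
y = np.array([-1.0, 0.0, 1.0])        # residual data values

C = cov(x[:, None] - x[None, :])
c0 = cov(x - x[1])                    # estimate exactly at the second datum
lam = np.linalg.solve(C, c0)          # weights collapse to ~[0, 1, 0]

est = lam @ y
var = cov(0.0) - lam @ c0
print(round(est, 6), round(var, 6))   # recovers y[1] with zero variance
```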
#
# #### Spatial Continuity
#
# **Spatial Continuity** is the correlation between values over distance.
#
# * No spatial continuity – no correlation between values over distance, random values at each location in space regardless of separation distance.
#
# * Homogeneous phenomena have perfect spatial continuity; since all values are the same (or very similar), they are correlated.
#
# We need a statistic to quantify spatial continuity! A convenient method is the Semivariogram.
#
# #### The Semivariogram
#
# Function of difference over distance.
#
# * The expected (average) squared difference between values separated by a lag distance vector (distance and direction), $h$:
#
# \begin{equation}
# \gamma(\bf{h}) = \frac{1}{2 N(\bf{h})} \sum^{N(\bf{h})}_{\alpha=1} (z(\bf{u}_\alpha) - z(\bf{u}_\alpha + \bf{h}))^2
# \end{equation}
#
# where $z(\bf{u}_\alpha)$ and $z(\bf{u}_\alpha + \bf{h})$ are the spatial sample values at tail and head locations of the lag vector respectively.
#
# * Calculated over a suite of lag distances to obtain a continuous function.
#
# * the $\frac{1}{2}$ term converts a variogram into a semivariogram, but in practice the term variogram is used instead of semivariogram.
# * We prefer the semivariogram because it relates directly to the covariance function, $C_x(\bf{h})$ and univariate variance, $\sigma^2_x$:
#
# \begin{equation}
# C_x(\bf{h}) = \sigma^2_x - \gamma(\bf{h})
# \end{equation}
#
# Note the correlogram is related to the covariance function as:
#
# \begin{equation}
# \rho_x(\bf{h}) = \frac{C_x(\bf{h})}{\sigma^2_x}
# \end{equation}
#
# The correlogram gives the correlation of the $\bf{h}-\bf{h}$ scatter plot as a function of lag offset $\bf{h}$.
#
# \begin{equation}
# -1.0 \le \rho_x(\bf{h}) \le 1.0
# \end{equation}
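#
# These relations are easy to verify numerically. The sketch below computes an experimental semivariogram for a synthetic, equally spaced 1D series (assumed for illustration; lags are in units of the sample spacing) and recovers the covariance and correlogram from it.

```python
import numpy as np

rng = np.random.default_rng(73073)
z = np.cumsum(rng.normal(size=500))   # synthetic correlated 1D series
z = (z - z.mean()) / z.std()          # standardize so sigma^2 = 1

def semivariogram(z, h):
    """Experimental semivariogram at integer lag h (sample-spacing units)."""
    d = z[h:] - z[:-h]
    return 0.5 * np.mean(d ** 2)      # the 1/2 makes it a SEMIvariogram

gam1 = semivariogram(z, 1)
C1 = np.var(z) - gam1                 # C(h) = sigma^2 - gamma(h)
rho1 = C1 / np.var(z)                 # correlogram, bounded by [-1, 1]
print(round(gam1, 4), round(rho1, 4))
```

# For this highly continuous series the lag-1 semivariogram is small and the correlogram is close to 1.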
#
# #### Sequential Gaussian Simulation
#
# With sequential Gaussian simulation we build on kriging by:
#
# * adding a random residual with the missing variance
#
# * sequentially adding the simulated values as data to correct the covariance between the simulated values
#
# I have more on this topic at [Simulation YouTube Lecture](https://www.youtube.com/watch?v=3cLqK3lR56Y&list=PLG19vXLQHvSB-D4XKYieEku9GQMQyAzjJ&index=45&t=813s).
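#
# A single step of this procedure is just a Monte Carlo draw from the local kriging distribution; the kriging estimate and variance below are assumed values, not outputs of the workflow.

```python
import numpy as np

rng = np.random.default_rng(73073)
sk_est, sk_var = 0.4, 0.75   # assumed simple kriging estimate and variance
# the Gaussian draw adds back the 'missing' variance; in sequential Gaussian
# simulation this draw is then appended to the data, so later nodes honor
# their covariance with the simulated value
sim = rng.normal(loc=sk_est, scale=np.sqrt(sk_var))
print(round(sim, 3))
```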
#
# #### Objective
#
# In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows.
#
# The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods.
#
# #### Getting Started
#
# Here are the steps to get set up in Python with the GeostatsPy package:
#
# 1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
# 2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
# 3. In the terminal type: pip install geostatspy.
# 4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
#
# You will need to copy the data file to your working directory. It is available here:
#
# * Tabular data - sample_data.csv at https://git.io/fh4gm.
#
# There are examples below with these functions. You can go to https://git.io/fh4eX to see a list of the available functions, other example workflows, and the source code.
#
# #### Load the required libraries
#
# The following code loads the required libraries.
import geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods convert to Python
# We will also need some standard packages. These should have been installed with Anaconda 3.
# %matplotlib inline
import os # to set current working directory
import sys # supress output to screen for interactive variogram modeling
import io
import numpy as np # arrays and matrix math
import pandas as pd # DataFrames
import matplotlib.pyplot as plt # plotting
from matplotlib.pyplot import cm # color maps
from matplotlib.patches import Ellipse # plot an ellipse
import math # sqrt operator
import random # random simulation locations
from copy import copy # copy a colormap
from scipy.stats import norm
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets
from ipywidgets import Layout
from ipywidgets import Label
from ipywidgets import VBox, HBox
from scipy.stats import norm # Gaussian distribution
import scipy.stats as stats # trimmed statistics
# If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs.
# #### Simple, Simple Kriging Function
#
# Let's write a fast Python function to take data points and unknown location and provide the:
#
# * **simple kriging estimate**
#
# * **simple kriging variance / estimation variance**
#
# * **simple kriging weights**
#
# This provides a fast method for small datasets, with fewer parameters (no search parameters) and the ability to see the simple kriging weights.
#
# * we use it here for fast, flexible application of sequential simulation
#
# * the method will not work with only one simulation location, so we send 2 and only use the first result (the 2nd is always a dummy location in the workflow below).
# +
def simple_simple_krige(df,xcol,ycol,vcol,dfl,xlcol,ylcol,vario,skmean):
# load the variogram
nst = vario['nst']; pmx = 9999.9
cc = np.zeros(nst); aa = np.zeros(nst); it = np.zeros(nst)
ang = np.zeros(nst); anis = np.zeros(nst)
nug = vario['nug']; sill = nug
cc[0] = vario['cc1']; sill = sill + cc[0]
it[0] = vario['it1']; ang[0] = vario['azi1'];
aa[0] = vario['hmaj1']; anis[0] = vario['hmin1']/vario['hmaj1'];
if nst == 2:
cc[1] = vario['cc2']; sill = sill + cc[1]
it[1] = vario['it2']; ang[1] = vario['azi2'];
aa[1] = vario['hmaj2']; anis[1] = vario['hmin2']/vario['hmaj2'];
# set up the required matrices
rotmat, maxcov = geostats.setup_rotmat(nug,nst,it,cc,ang,pmx)
ndata = len(df); a = np.zeros([ndata,ndata]); r = np.zeros(ndata); s = np.zeros(ndata); rr = np.zeros(ndata)
nest = len(dfl)
est = np.zeros(nest); var = np.full(nest,sill); weights = np.zeros([nest,ndata])
# Make and solve the kriging matrix, calculate the kriging estimate and variance
for iest in range(0,nest):
for idata in range(0,ndata):
for jdata in range(0,ndata):
a[idata,jdata] = geostats.cova2(df[xcol].values[idata],df[ycol].values[idata],df[xcol].values[jdata],df[ycol].values[jdata],
nst,nug,pmx,cc,aa,it,ang,anis,rotmat,maxcov)
r[idata] = geostats.cova2(df[xcol].values[idata],df[ycol].values[idata],dfl[xlcol].values[iest],dfl[ylcol].values[iest],
nst,nug,pmx,cc,aa,it,ang,anis,rotmat,maxcov)
rr[idata] = r[idata]
s = geostats.ksol_numpy(ndata,a,r)
sumw = 0.0
for idata in range(0,ndata):
sumw = sumw + s[idata]
weights[iest,idata] = s[idata]
est[iest] = est[iest] + s[idata]*df[vcol].values[idata]
var[iest] = var[iest] - s[idata]*rr[idata]
est[iest] = est[iest] + (1.0-sumw)*skmean
return est,var,weights
# -
# #### Interactive Sequential Simulation to Random Points Method
#
# For this first interactive method we will perform sequential simulation:
#
# * at **nsim** random point locations in the area of interest.
#
# The following code includes:
#
# * dashboard with number of simulation locations, variogram model and data locations
#
# * plots of variogram model, data locations with point scaled by weights and uncertainty distribution at the unknown location
#
# Let's first set up the model area of interest.
csiz = 100; xmn = csiz * 0.5; nx = 10; ymn = csiz * 0.5; ny = 10
xmin = xmn - csiz * 0.5; xmax = xmin + nx * csiz
ymin = ymn - csiz * 0.5; ymax = ymin + ny * csiz
print('X extents [' + str(xmin) + ',' + str(xmax) + '] and Y extents [' + str(ymin) + ',' + str(ymax) + ']')
# Now let's set up our dash board.
# +
import warnings; warnings.simplefilter('ignore')
# dashboard: number of simulation locations and variogram parameters
style = {'description_width': 'initial'}
l = widgets.Text(value=' Sequential Simulation, <NAME>, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
nsim = widgets.IntSlider(min = 0, max = 99, value = 5, step = 1, description = 'nsim',orientation='vertical',
layout=Layout(width='25px', height='200px'),continuous_update = False)
nsim.style.handle_color = 'gray'
nug = widgets.FloatSlider(min = 0, max = 1.0, value = 0.0, step = 0.1, description = 'nug',orientation='vertical',
layout=Layout(width='25px', height='200px'),continuous_update = False)
nug.style.handle_color = 'gray'
it1 = widgets.Dropdown(options=['Spherical', 'Exponential', 'Gaussian'],value='Spherical',
description='Type1:',disabled=False,layout=Layout(width='180px', height='30px'), style=style,continuous_update = False)
azi = widgets.FloatSlider(min=0, max = 360, value = 0, step = 22.5, description = 'azi',
orientation='vertical',layout=Layout(width='40px', height='200px'),continuous_update = False)
azi.style.handle_color = 'gray'
hmaj1 = widgets.FloatSlider(min=0.01, max = 10000.0, value = 100.0, step = 25.0, description = 'hmaj1',
orientation='vertical',layout=Layout(width='40px', height='200px'),continuous_update = False)
hmaj1.style.handle_color = 'gray'
hmin1 = widgets.FloatSlider(min = 0.01, max = 10000.0, value = 100.0, step = 25.0, description = 'hmin1',
orientation='vertical',layout=Layout(width='40px', height='200px'),continuous_update = False)
hmin1.style.handle_color = 'gray'
uikvar = widgets.HBox([nsim,nug,it1,azi,hmaj1,hmin1],)
# dashboard: data locations
x1 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 100.0, step = 1.0, description = 'x1',orientation='horizontal',
layout=Layout(width='180px', height='30px'),readout_format = '.0f',style=style,continuous_update = False)
x1.style.handle_color = 'blue'
y1 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 100.0, step = 1.0, description = 'y1',orientation='vertical',
layout=Layout(width='90px', height='180px'),readout_format = '.0f',style=style,continuous_update = False)
y1.style.handle_color = 'blue'
uik1 = widgets.VBox([x1,y1],)
x2 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 500.0, step = 1.0, description = 'x2',orientation='horizontal',
layout=Layout(width='180px', height='30px'),readout_format = '.0f',style=style,continuous_update = False)
x2.style.handle_color = 'red'
y2 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 800.0, step = 1.0, description = 'y2',orientation='vertical',
layout=Layout(width='90px', height='180px'),readout_format = '.0f',style=style,continuous_update = False)
y2.style.handle_color = 'red'
uik2 = widgets.VBox([x2,y2],)
x3 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 900.0, step = 1.0, description = 'x3',orientation='horizontal',
layout=Layout(width='180px', height='30px'),readout_format = '.0f',style=style,continuous_update = False)
x3.style.handle_color = 'green'
y3 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 200.0, step = 1.0, description = 'y3',orientation='vertical',
layout=Layout(width='90px', height='180px'),readout_format = '.0f',style=style,continuous_update = False)
y3.style.handle_color = 'green'
uik3 = widgets.VBox([x3,y3],)
uipars = widgets.HBox([uikvar,uik1,uik2,uik3],)
uik = widgets.VBox([l,uipars],)
def convert_type(it):
if it == 'Spherical':
return 1
elif it == 'Exponential':
return 2
else:
return 3
def f_make_krige(nsim,nug,it1,azi,hmaj1,hmin1,x1,y1,x2,y2,x3,y3): # function to take parameters, make sample and plot
text_trap = io.StringIO() # suppress all text function output to dashboard to avoid clutter
sys.stdout = text_trap
cmap = cm.inferno
np.random.seed(seed = 73073) # ensure same results for all runs
it1 = convert_type(it1)
nst = 1; xlag = 10; nlag = int(hmaj1/xlag); c1 = 1.0-nug
vario = GSLIB.make_variogram(nug,nst,it1,c1,azi,hmaj1,hmin1) # make model object
    index_maj,h_maj,gam_maj,cov_maj,ro_maj = geostats.vmodel(nlag,xlag,azi,vario) # project the model in the major azimuth
index_min,h_min,gam_min,cov_min,ro_min = geostats.vmodel(nlag,xlag,azi+90.0,vario) # project the model in the minor azimuth
seed = 73073
# make hard data dataframe and hard code the data values
x = [x1,x2,x3]; y = [y1,y2,y3]; value = [-2.0,0.0,2.0]
df = pd.DataFrame({'X':x,'Y':y,'Value':value})
ndata = len(df); skmean = np.average(df['Value'].values)
# make simulation locations dataframe
random.seed(a = seed)
xl = random.sample(range(0, 1000), nsim);
random.seed(a = seed+1)
yl = random.sample(range(0, 1000), nsim); valuel = np.full(nsim,-9999)
dfl = pd.DataFrame({'X':xl,'Y':yl, 'Value':valuel},dtype=np.single)
dfl_temp = pd.DataFrame({'X':[-9999,9999],'Y':[-9999,9999], 'Value':[-9999,-9999]},dtype=np.single)
sim = np.zeros(len(dfl)); sk_est = np.zeros(len(dfl)); sk_var = np.zeros(len(dfl)); sk_std = np.zeros(len(dfl))
sk_weights = np.zeros([ndata,len(dfl)])
# perform sequential simulation
for isim in range(0,len(dfl)):
        dfl_temp.loc[0,'X'] = dfl.loc[isim,'X']; dfl_temp.loc[0,'Y'] = dfl.loc[isim,'Y']; # copy current location to the first row / the method needs at least 2 data
sk_est_temp, sk_var_temp, sk_weights_temp = simple_simple_krige(df,'X','Y','Value',dfl_temp,'X','Y',vario,skmean=skmean)
sk_est[isim] = sk_est_temp[0];
sk_var[isim] = sk_var_temp[0];
sk_weights[:,isim] = sk_weights_temp[0,:ndata]
if sk_var[isim] == 0:
sk_std[isim] = 0.0
else:
sk_std[isim] = math.sqrt(sk_var[isim])
        sim[isim] = norm.rvs(loc=sk_est[isim], scale=sk_std[isim], size=1)[0] # random seed set at the start
        df = pd.concat([df, pd.DataFrame([{'X': dfl.loc[isim,'X'],'Y': dfl.loc[isim,'Y'],'Value': sim[isim]}])], ignore_index=True) # append the simulated value as data (DataFrame.append was removed in pandas 2.0)
dfl.at[isim,'Value'] = float(sim[isim])
# plot the variogram model
xlag = 10.0; nlag = int(hmaj1/xlag)
plt.subplot(1,3,1)
plt.plot([0,hmaj1*1.5],[1.0,1.0],color = 'black')
plt.plot(h_maj,gam_maj,color = 'black',label = 'Major ' + str(azi))
plt.plot(h_min,gam_min,color = 'black',label = 'Minor ' + str(azi+90.0))
deltas = [22.5, 45, 67.5];
ndelta = len(deltas); hd = np.zeros(ndelta); gamd = np.zeros(ndelta);
color=iter(cm.plasma(np.linspace(0,1,ndelta)))
for delta in deltas:
index,hd,gamd,cov,ro = geostats.vmodel(nlag,xlag,azi+delta,vario);
c=next(color)
plt.plot(hd,gamd,color = c,label = 'Azimuth ' + str(azi+delta))
plt.xlabel(r'Lag Distance $\bf(h)$, (m)')
plt.ylabel(r'$\gamma \bf(h)$')
plt.title('Interpolated NSCORE Porosity Variogram Models')
plt.xlim([0,hmaj1*1.5])
plt.ylim([0,1.4])
plt.legend(loc='upper left')
# plot the data and simulated values on a scatter plot
sk_weights_avg = np.mean(sk_weights,axis = 1)
plt.subplot(1,3,2)
for idata in range(0,len(df)):
if idata < ndata:
plt.scatter([df.loc[idata,'X']],[df.loc[idata,'Y']],marker='^',
c = [df.loc[idata,'Value']], cmap = cmap, vmin = -2.0, vmax = 2.0, edgecolors = 'black',
s = 100,label = 'Original Data')
else:
plt.scatter([df.loc[idata,'X']],[df.loc[idata,'Y']],
c = [df.loc[idata,'Value']], cmap = cmap, vmin = -2.0, vmax = 2.0, edgecolors = 'black',
label = 'Simulated Values')
ax = plt.gca()
plt.xlabel('X(m)'); plt.ylabel('Y(m)')
plt.title('Sequential Simulation - Data and Unknown Locations')
plt.xlim([0,1000])
plt.ylim([0,1000])
plt.colorbar()
if nsim < 10:
for i, txt in enumerate(np.round(dfl['Value'].values,2)):
plt.annotate(txt, (dfl.loc[i,'X']-40, dfl.loc[i,'Y']-40))
ellipse = Ellipse((500, 500),width=hmin1*2.0,height=hmaj1*2.0,angle = 360-azi,facecolor='gray',alpha = 0.1)
ax = plt.gca()
ax.add_patch(ellipse)
# plot the distribution of the simulated values
plt.subplot(1,3,3)
plt.hist(sim,bins = np.linspace(-3.0,3.0,20),alpha=0.2,color="red",edgecolor="black")
plt.xlim([-3.0,3.0]); plt.ylim([0,nsim/2])
plt.title('Uncertainty Model at Unknown Location')
plt.xlabel('Value'); plt.ylabel('Frequency')
ax = plt.gca()
ax.annotate('Simulations: Mean = ' + str(np.round(np.average(sim),2)), (-2.8, nsim*0.05))
ax.annotate('Simulations: Standard Deviation = ' + str(np.round(np.std(sim),2)), (-2.8, nsim *0.02))
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.2, top=0.9, wspace=0.3, hspace=0.3)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(f_make_krige, {'nsim':nsim,'nug':nug, 'it1':it1, 'azi':azi, 'hmaj1':hmaj1, 'hmin1':hmin1,
'x1':x1, 'y1':y1, 'x2':x2, 'y2':y2, 'x3':x3, 'y3':y3,})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
# -
# ### Interactive Sequential Simulation to Random Points Demonstration
#
# * select the variogram model and the data locations and observe the outputs from sequential simulation
#
# #### <NAME>, Associate Professor, University of Texas at Austin
#
# ##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
#
# ### The Inputs
#
# Select the variogram model and the data locations:
#
# * **nug**: nugget effect
#
# * **c1**: contribution of the sill
#
# * **hmaj1 / hmin1**: range in the major and minor direction
#
# * **(x1, y1), ..., (x3, y3)**: spatial data locations
display(uik, interactive_plot) # display the interactive plot
# #### Interactive Sequential Simulation to a Regular Grid Method
#
# Let's repeat the previous interactive demonstration, but this time we will simulate on a random set of nodes on a regular grid with our point data.
#
# * this is more similar to current practice with most spatial modeling software
#
# The following code includes:
#
# * dashboard with number of simulated nodes, variogram model and data locations
#
# * plot of the point data and the simulated model on a regular grid
# +
import warnings; warnings.simplefilter('ignore')
# dashboard: number of simulation grid nodes and the variogram model
style = {'description_width': 'initial'}
l = widgets.Text(value=' Sequential Simulation, <NAME>, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
nsim = widgets.IntSlider(min = 0, max = 100, value = 5, step = 1, description = 'nsim',orientation='vertical',
layout=Layout(width='40px', height='200px'),continuous_update=False)
nsim.style.handle_color = 'gray'
nug = widgets.FloatSlider(min = 0, max = 1.0, value = 0.0, step = 0.1, description = 'nug',orientation='vertical',
layout=Layout(width='25px', height='200px'),continuous_update=False)
nug.style.handle_color = 'gray'
it1 = widgets.Dropdown(options=['Spherical', 'Exponential', 'Gaussian'],value='Spherical',
description='Type1:',disabled=False,layout=Layout(width='180px', height='30px'), style=style)
seed = widgets.IntText(value=73074,description='Seed:',disabled=False,layout=Layout(width='180px', height='30px'),continuous_update=False)
azi = widgets.FloatSlider(min=0, max = 360, value = 0, step = 22.5, description = 'azi',
orientation='vertical',layout=Layout(width='40px', height='200px'),continuous_update=False)
azi.style.handle_color = 'gray'
hmaj1 = widgets.FloatSlider(min=0.01, max = 10000.0, value = 100.0, step = 25.0, description = 'hmaj1',
orientation='vertical',layout=Layout(width='40px', height='200px'),continuous_update=False)
hmaj1.style.handle_color = 'gray'
hmin1 = widgets.FloatSlider(min = 0.01, max = 10000.0, value = 100.0, step = 25.0, description = 'hmin1',
orientation='vertical',layout=Layout(width='40px', height='200px'),continuous_update=False)
hmin1.style.handle_color = 'gray'
uikvarb = widgets.VBox([it1,seed],)
uikvar = widgets.HBox([nsim,nug,uikvarb,azi,hmaj1,hmin1],) # basic widget formatting
# dashboard: data locations
x1 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 100.0, step = 1.0, description = 'x1',orientation='horizontal',
layout=Layout(width='180px', height='30px'),readout_format = '.0f',style=style,continuous_update=False)
x1.style.handle_color = 'blue'
y1 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 100.0, step = 1.0, description = 'y1',orientation='vertical',
layout=Layout(width='90px', height='180px'),readout_format = '.0f',style=style,continuous_update=False)
y1.style.handle_color = 'blue'
uik1 = widgets.VBox([x1,y1],)
x2 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 500.0, step = 1.0, description = 'x2',orientation='horizontal',
layout=Layout(width='180px', height='30px'),readout_format = '.0f',style=style,continuous_update=False)
x2.style.handle_color = 'red'
y2 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 800.0, step = 1.0, description = 'y2',orientation='vertical',
layout=Layout(width='90px', height='180px'),readout_format = '.0f',style=style,continuous_update=False)
y2.style.handle_color = 'red'
uik2 = widgets.VBox([x2,y2],)
x3 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 900.0, step = 1.0, description = 'x3',orientation='horizontal',
layout=Layout(width='180px', height='30px'),readout_format = '.0f',style=style,continuous_update=False)
x3.style.handle_color = 'yellow'
y3 = widgets.FloatSlider(min=0.0, max = 1000.0, value = 200.0, step = 1.0, description = 'y3',orientation='vertical',
layout=Layout(width='90px', height='180px'),readout_format = '.0f',style=style,continuous_update=False)
y3.style.handle_color = 'yellow'
uik3 = widgets.VBox([x3,y3],)
uipars = widgets.HBox([uikvar,uik1,uik2,uik3],)
uik = widgets.VBox([l,uipars],)
def convert_type(it):
if it == 'Spherical':
return 1
elif it == 'Exponential':
return 2
else:
return 3
def f_make_krige2(nsim,nug,it1,seed,azi,hmaj1,hmin1,x1,y1,x2,y2,x3,y3): # function to take parameters, make sample and plot
text_trap = io.StringIO() # suppress all text function output to dashboard to avoid clutter
sys.stdout = text_trap
cmap = cm.inferno
it1 = convert_type(it1)
nst = 1; xlag = 10; nlag = int(hmaj1/xlag); c1 = 1.0-nug
vario = GSLIB.make_variogram(nug,nst,it1,c1,azi,hmaj1,hmin1) # make model object
    index_maj,h_maj,gam_maj,cov_maj,ro_maj = geostats.vmodel(nlag,xlag,azi,vario) # project the model in the major azimuth
index_min,h_min,gam_min,cov_min,ro_min = geostats.vmodel(nlag,xlag,azi+90.0,vario) # project the model in the minor azimuth
# make data dataframe
x = [x1,x2,x3]; y = [y1,y2,y3]; value = [-1.5,0.0,1.5]
df = pd.DataFrame({'X':x,'Y':y,'Value':value})
ndata = len(df); skmean = np.average(df['Value'].values)
# make simulation nodes dataframe
random.seed(a = seed) # ensure same results for all runs, you can sequentially add / remove nodes
if nsim == 100:
icelll = np.linspace(0, nx*ny-1, 100)
random.shuffle(icelll)
else:
random.seed(seed)
icelll = np.asarray(random.sample(range(0, nx*ny-1), nsim),dtype = np.int32)
iyl = np.around(icelll / nx-0.49,0); yl = iyl * csiz + ymn
ixl = np.around(icelll - iyl * nx , 0); xl = ixl * csiz + xmn
valuel = np.full(nsim,-9999)
dfl = pd.DataFrame({'X':xl,'Y':yl, 'Value':valuel},dtype=np.single)
dfl_temp = pd.DataFrame({'X':[-9999,9999],'Y':[-9999,9999], 'Value':[-9999,-9999]},dtype=np.single)
np.random.seed(seed = seed)
sim = np.zeros(len(dfl)); sk_est = np.zeros(len(dfl)); sk_var = np.zeros(len(dfl)); sk_std = np.zeros(len(dfl))
sk_weights = np.zeros([ndata,len(dfl)])
# perform sequential simulation
for isim in range(0,len(dfl)):
        dfl_temp.loc[0,'X'] = dfl.loc[isim,'X']; dfl_temp.loc[0,'Y'] = dfl.loc[isim,'Y']; # copy current location to the first row / the method needs at least 2 data
sk_est_temp, sk_var_temp, sk_weights_temp = simple_simple_krige(df,'X','Y','Value',dfl_temp,'X','Y',vario,skmean=skmean)
sk_est[isim] = sk_est_temp[0];
sk_var[isim] = sk_var_temp[0];
sk_weights[:,isim] = sk_weights_temp[0,:ndata]
if sk_var[isim] == 0:
sk_std[isim] = 0.0
else:
sk_std[isim] = math.sqrt(sk_var[isim])
        sim[isim] = norm.rvs(loc=sk_est[isim], scale=sk_std[isim], size=1)[0] # random seed set at the start
        df = pd.concat([df, pd.DataFrame([{'X': dfl.loc[isim,'X'],'Y': dfl.loc[isim,'Y'],'Value': sim[isim]}])], ignore_index=True) # append the simulated value as data (DataFrame.append was removed in pandas 2.0)
dfl.at[isim,'Value'] = float(sim[isim])
# make the 2D simulated model on a regular grid
plt.subplot(121)
model = np.full([ny,nx],-999.9)
for idata in range(len(df)-1,-1,-1):
ix = int(df.loc[idata,'X']/csiz); iy = int(df.loc[idata,'Y']/csiz);
model[ny - iy - 1, ix] = df.loc[idata,'Value']
ax = plt.gca()
plt.xlabel('X(m)'); plt.ylabel('Y(m)')
plt.title('Sequential Simulation - Data, Simulated Values and Random Path')
palette = copy(plt.cm.inferno)
palette.set_under('r', 0.0)
palette.set_over('r', 0.0)
    im = plt.imshow(model,interpolation = 'none',extent = [0,1000,0,1000], vmin = -3.0, vmax = 3.0,cmap = palette)
plt.scatter(df['X'].values[:ndata],df['Y'].values[:ndata],marker='^',c=df['Value'].values[:ndata], vmin = -2.0, vmax = 2.0, cmap = cmap, edgecolors = 'black',s = 500,label = 'Original Data')
plt.xlim([0,1000]); plt.ylim([0,1000])
for idata in range(len(df)-1,-1,-1):
x = df.loc[idata,'X'];y = df.loc[idata,'Y']
ix = int(x/csiz); iy = int(y/csiz)
xc = csiz*ix + csiz*0.45; yc = csiz*iy + csiz*0.5;
if idata > 2:
plt.annotate(idata-2,[xc-10,yc],color='white')
cbar = plt.colorbar(im,ax = plt.gca()) # Similar to fig.colorbar(im, cax = cax)
plt.gca().set_aspect('auto')
cbar.set_label('Simulated Values')
    # plot the variogram model for visualization
ellipse1 = Ellipse((x1, y1),width=hmin1*2.0,height=hmaj1*2.0,angle = 360-azi,facecolor='blue',alpha = 0.1)
ellipse2 = Ellipse((x2, y2),width=hmin1*2.0,height=hmaj1*2.0,angle = 360-azi,facecolor='red',alpha = 0.1)
ellipse3 = Ellipse((x3, y3),width=hmin1*2.0,height=hmaj1*2.0,angle = 360-azi,facecolor='yellow',alpha = 0.1)
ax = plt.gca()
ax.add_patch(ellipse1); ax.add_patch(ellipse2); ax.add_patch(ellipse3);
x_values = np.linspace(-3.0,3.0,100) # get an array of x values
p_values = norm.pdf(x_values, loc = 0.0, scale = 1.0)
plt.subplot(122)
plt.hist(model.flatten(),color='red',alpha=0.2,edgecolor='black',bins=np.linspace(-3,3,10),density =True)
plt.plot(x_values,p_values,color='red')
plt.xlim(-3,3); plt.ylim(0,0.6)
plt.title('Distribution of Sequential Gaussian Simulated Values')
plt.xlabel('Simulated Gaussian Values'); plt.ylabel('Normalized Frequency')
plt.gca().annotate('Simulation Mean = ' + str(np.round(stats.tmean(model.flatten(),limits=(-5,5)),2)), (0.9, 0.55))
plt.gca().annotate('Simulation StDev. = ' + str(np.round(stats.tstd(model.flatten(),limits=(-3,3)),2)), (0.9, 0.52))
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.3, hspace=0.3)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(f_make_krige2, {'nsim':nsim,'nug':nug, 'it1':it1,'seed':seed,'azi':azi, 'hmaj1':hmaj1, 'hmin1':hmin1,
'x1':x1, 'y1':y1, 'x2':x2, 'y2':y2, 'x3':x3, 'y3':y3,})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
# -
# ### Interactive Sequential Simulation to Model Regular Grid Demonstration
#
# * select the variogram model and the data locations and observe the outputs from sequential simulation
#
# #### <NAME>, Associate Professor, University of Texas at Austin
#
# ##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
#
# ### The Inputs
#
# Select the simulation nodes, variogram model and the data locations:
#
# * **nsim**: number of simulated nodes; use fewer nodes for faster computation
#
# * **nug**: nugget effect
#
# * **c1**: contribution of the sill
#
# * **hmaj1 / hmin1**: range in the major and minor direction
#
# * **(x1, y1), ..., (x3, y3)**: spatial data locations
display(uik, interactive_plot) # display the interactive plot
# #### Comments
#
# This was an interactive demonstration of sequential Gaussian simulation for spatial data analytics. Much more could be done, I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
#
# #### The Author:
#
# ### <NAME>, Associate Professor, University of Texas at Austin
# *Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
#
# With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
#
# For more about Michael check out these links:
#
# #### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#
# #### Want to Work Together?
#
# I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
#
# * Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
#
# * Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
#
# * I can be reached at <EMAIL>.
#
# I'm always happy to discuss,
#
# *Michael*
#
# <NAME>, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#
# #### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#
| Interactive_Simulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print("Hello World!")
print("1,2,3,4,5")
print("These are numbers")
print("1,2,3,4,5")
print("These are numbers")
print("1")
import numpy
# # Heading
# ## Heading
# ### Heading
# #### Heading
# ##### Heading
# **Bold**
# *Italics?*
# # Keyboard Shortcuts
# To change a cell from code to markdown, tap `m`.
# To run a cell and move to the cell beneath it, tap `shift + enter`.
# To run a cell and insert another cell beneath it, tap `alt + enter`.
# To run a cell and stay on it, tap `ctrl + enter`.
# To check the docstring of a function, tap `shift + tab`.
# For code completion, tap the `tab` key.
| files/Intro to jupyter notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import os
import pandas as pd
import numpy as np
# !type Pollution.csv
data = pd.read_table("Pollution.csv", sep=',', header=None)
data.index.names = ['location']
data.columns.names = ['pollution']
data.head()
import numpy as np
np_data = data.values
np_data[0]
from sklearn.decomposition import PCA
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
a =le.fit_transform(data[0])
data[0]=a
data[0]
# +
data.loc[[0],0:1] = 1,2
data.loc[[0],2:3] = 3,4
data.loc[[0],4:5] = 5,6
data.loc[[0],6:7] = 7,8
data.loc[[0],8:9] = 9,10
data.loc[[0],10:11] = 11,12
data.head()
# -
from sklearn.decomposition import PCA
from sklearn import preprocessing
pca = PCA(n_components=2)
pca.fit(data)
PCA(copy=True, n_components=2, whiten=False)
data_2d = pca.transform(data)
data_2d = pd.DataFrame(data_2d)
data_2d.index = data.index
data_2d.columns = ['PC1','PC2']
data_2d.head()
# # K-means
# k-means is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The procedure classifies a given data set into a certain number of clusters (assume k clusters) fixed a priori. The main idea is to define k centers, one for each cluster. The optimal choice of k strikes a balance between maximum compression of the data using a single cluster and maximum accuracy from assigning each data point to its own cluster. If an appropriate value of k is not apparent from prior knowledge of the properties of the data set, it must be chosen somehow. We will use sklearn (in this case, its k-means clustering implementation) to perform our clustering on the pollution data. Since we already decided on 5 clusters, we will use that value here straightaway.
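# If an appropriate k is not obvious, a common heuristic is the "elbow" method: run k-means for several values of k and look for the point where the within-cluster sum of squares (the `inertia_` attribute in sklearn) stops dropping sharply. A minimal sketch on synthetic data (the blob centers and variable names are illustrative, not from this notebook):

```python
import numpy as np
from sklearn.cluster import KMeans

# synthetic 2-D data: three well-separated blobs of 50 points each
rng = np.random.RandomState(0)
points = np.vstack([rng.randn(50, 2) + center
                    for center in [(0, 0), (8, 8), (0, 8)]])

# inertia = sum of squared distances of points to their nearest cluster center
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
               .fit(points).inertia_
            for k in range(1, 7)}

for k in sorted(inertias):
    print(k, round(inertias[k], 1))
# the big drop ends at k=3, so the "elbow" suggests 3 clusters here
```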
# +
#All the following code is from the below citation[1]
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5)
clusters = kmeans.fit(data)
# -
# Now we need to store the cluster assignments together with each city in our data frame. The cluster labels are returned in `clusters.labels_`.
data_2d['cluster'] = pd.Series(clusters.labels_, index=data_2d.index)
# And now we are ready to plot, using the cluster column as color.
#
# +
# %matplotlib inline
import numpy as np
data_2d.plot(
kind='scatter',
x='PC2',y='PC1',
    c=data_2d.cluster.astype(float),
figsize=(16,8))
# -
# Let's increase the number of clusters to 10 and see what happens.
# +
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=10)
clusters = kmeans.fit(data)
# -
data_2d['cluster'] = pd.Series(clusters.labels_, index=data_2d.index)
# +
# %matplotlib inline
import numpy as np
data_2d.plot(
kind='scatter',
x='PC2',y='PC1',
    c=data_2d.cluster.astype(float),
figsize=(16,8))
# -
| Homeworks/Extra credit Algorithms/K_Means.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1>Training on Cloud AI Platform</h1>
#
# This notebook illustrates distributed training and hyperparameter tuning on Cloud AI Platform (formerly known as Cloud ML Engine).
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.13'
# + language="bash"
# gcloud config set project $PROJECT
# gcloud config set compute/region $REGION
# + language="bash"
# if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/preproc; then
# gsutil mb -l ${REGION} gs://${BUCKET}
# # copy canonical set of preprocessed files if you didn't do previous notebook
# gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
# fi
# + language="bash"
# gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
# -
# Now that we have the TensorFlow code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud ML Engine.
# <p>
# <h2> Train on Cloud ML Engine </h2>
# <p>
# Training on Cloud ML Engine requires:
# <ol>
# <li> Making the code a Python package
# <li> Using gcloud to submit the training code to Cloud ML Engine
# </ol>
#
# Ensure that the AI Platform API is enabled by going to this [link](https://console.developers.google.com/apis/library/ml.googleapis.com).
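# As a reminder, "making the code a Python package" here just means laying the trainer code out so that `python -m trainer.task` can import it. A sketch of the expected layout (the names follow this lab; the role descriptions are illustrative):

```python
import os

# expected layout, with babyweight/ on the PYTHONPATH:
layout = {
    "babyweight/trainer/__init__.py": "marks trainer/ as an importable package (may be empty)",
    "babyweight/trainer/task.py":     "entry point: argument parsing, calls model.train_and_evaluate",
    "babyweight/trainer/model.py":    "input functions, feature columns, estimator definition",
}
for path, role in layout.items():
    print(path, "->", role)

# the module name passed to gcloud (--module-name) mirrors the file path
module_name = "trainer." + os.path.splitext(os.path.basename("babyweight/trainer/task.py"))[0]
print(module_name)  # trainer.task
```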
# ## Lab Task 1
#
# The following code edits babyweight/trainer/task.py. You should add the hyperparameters your model needs as command-line arguments using the `parser` object. Look at how `batch_size` is passed to the model in the code below. Do the same for the following hyperparameters (defaults in parentheses): `train_examples` (5000), `eval_steps` (None), `pattern` ('of').
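# One possible shape for the new arguments, sketched with the defaults stated above (this illustrates the `argparse` pattern, not the full lab solution; the help strings are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# new hyperparameters, with the defaults stated in the task
parser.add_argument('--train_examples', type=int, default=5000,
                    help='Number of examples (in thousands) to train on.')
parser.add_argument('--eval_steps', type=int, default=None,
                    help='Number of steps to run evaluation for; None means evaluate to end-of-input.')
parser.add_argument('--pattern', default='of',
                    help='Substring that input file names must contain.')

# simulate a command line to show how the values come through
args = parser.parse_args(['--train_examples', '20000', '--pattern', '00000-of-'])
print(args.train_examples, args.eval_steps, args.pattern)
```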
# +
# %%writefile babyweight/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow as tf
from tensorflow.contrib.learn.python.learn import learn_runner
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket',
help = 'GCS path to data. We assume that data is in gs://BUCKET/babyweight/preproc/',
required = True
)
parser.add_argument(
'--output_dir',
help = 'GCS location to write checkpoints and export models',
required = True
)
parser.add_argument(
'--batch_size',
help = 'Number of examples to compute gradient over.',
type = int,
default = 512
)
parser.add_argument(
'--job-dir',
help = 'this model ignores this field, but it is required by gcloud',
default = 'junk'
)
parser.add_argument(
'--nnsize',
help = 'Hidden layer sizes to use for DNN feature columns -- provide space-separated layers',
nargs = '+',
type = int,
default=[128, 32, 4]
)
parser.add_argument(
'--nembeds',
help = 'Embedding size of a cross of n key real-valued parameters',
type = int,
default = 3
)
## TODOs after this line
################################################################################
## TODO 1: add the new arguments here
## parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# unused args provided by service
arguments.pop('job_dir', None)
arguments.pop('job-dir', None)
## assign the arguments to the model variables
output_dir = arguments.pop('output_dir')
model.BUCKET = arguments.pop('bucket')
model.BATCH_SIZE = arguments.pop('batch_size')
model.TRAIN_STEPS = (arguments.pop('train_examples') * 1000) / model.BATCH_SIZE
model.EVAL_STEPS = arguments.pop('eval_steps')
print ("Will train for {} steps using batch_size={}".format(model.TRAIN_STEPS, model.BATCH_SIZE))
model.PATTERN = arguments.pop('pattern')
model.NEMBEDS= arguments.pop('nembeds')
model.NNSIZE = arguments.pop('nnsize')
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
# Run the training job
model.train_and_evaluate(output_dir)
# -
# ## Lab Task 2
#
# Address all the TODOs in the following code in `babyweight/trainer/model.py` with the cell below. This code is similar to the model training code we wrote in Lab 3.
#
# After addressing all TODOs, run the cell to write the code to the model.py file.
# +
# %%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
BUCKET = None # set from task.py
PATTERN = 'of' # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
# Define some hyperparameters
TRAIN_STEPS = 10000
EVAL_STEPS = None
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(prefix, mode, batch_size):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Use prefix to create file path
file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, prefix, PATTERN)
# Create list of files that match pattern
file_list = tf.gfile.Glob(file_path)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(file_list) # Read text file
.map(decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
# Define feature columns
def get_wide_deep():
# Define column types
is_male,mother_age,plurality,gestation_weeks = \
[\
tf.feature_column.categorical_column_with_vocabulary_list('is_male',
['True', 'False', 'Unknown']),
tf.feature_column.numeric_column('mother_age'),
tf.feature_column.categorical_column_with_vocabulary_list('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
tf.feature_column.numeric_column('gestation_weeks')
]
# Discretize
age_buckets = tf.feature_column.bucketized_column(mother_age,
boundaries=np.arange(15,45,1).tolist())
gestation_buckets = tf.feature_column.bucketized_column(gestation_weeks,
boundaries=np.arange(17,47,1).tolist())
# Sparse columns are wide, have a linear relationship with the output
wide = [is_male,
plurality,
age_buckets,
gestation_buckets]
# Feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(wide, hash_bucket_size=20000)
embed = tf.feature_column.embedding_column(crossed, NEMBEDS)
# Continuous columns are deep, have a complex relationship with the output
deep = [mother_age,
gestation_weeks,
embed]
return wide, deep
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.placeholder(tf.string, [None]),
'mother_age': tf.placeholder(tf.float32, [None]),
'plurality': tf.placeholder(tf.string, [None]),
'gestation_weeks': tf.placeholder(tf.float32, [None]),
KEY_COLUMN: tf.placeholder_with_default(tf.constant(['nokey']), [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# create metric for hyperparameter tuning
def my_rmse(labels, predictions):
pred_values = predictions['predictions']
return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
## TODOs after this line
################################################################################
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
wide, deep = get_wide_deep()
EVAL_INTERVAL = 300 # seconds
## TODO 2a: set the save_checkpoints_secs to the EVAL_INTERVAL
run_config = tf.estimator.RunConfig(save_checkpoints_secs = None,
keep_checkpoint_max = 3)
## TODO 2b: change the dnn_hidden_units to NNSIZE
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir = output_dir,
linear_feature_columns = wide,
dnn_feature_columns = deep,
dnn_hidden_units = None,
config = run_config)
# illustrates how to add an extra metric
estimator = tf.contrib.estimator.add_metrics(estimator, my_rmse)
# for batch prediction, you need a key associated with each instance
estimator = tf.contrib.estimator.forward_features(estimator, KEY_COLUMN)
## TODO 2c: Set the third argument of read_dataset to BATCH_SIZE
## TODO 2d: and set max_steps to TRAIN_STEPS
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset('train', tf.estimator.ModeKeys.TRAIN, None),
max_steps = None)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn, exports_to_keep=None)
## TODO 2e: Lastly, set steps equal to EVAL_STEPS
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('eval', tf.estimator.ModeKeys.EVAL, 2**15), # no need to batch in eval
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# -
# ## Lab Task 3
#
# After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on the small notebook VM. Change as appropriate for your model).
# <p>
# Even with smaller data, this might take <b>3-5 minutes</b> in which you won't see any output ...
# + language="bash"
# echo "bucket=${BUCKET}"
# rm -rf babyweight_trained
# export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
# python -m trainer.task \
# --bucket=${BUCKET} \
# --output_dir=babyweight_trained \
# --job-dir=./tmp \
# --pattern="00000-of-" --train_examples=1 --eval_steps=1
# -
# ## Lab Task 4
#
# The JSON below represents an input to your prediction model. Write the inputs.json file with the next cell, then run the prediction locally to check that it produces predictions correctly.
# %%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
# + language="bash"
# MODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)
# echo $MODEL_LOCATION
# gcloud ml-engine local predict --model-dir=$MODEL_LOCATION --json-instances=inputs.json
# -
# ## Lab Task 5
#
# Once the code works in standalone mode, you can run it on Cloud ML Engine.
# Change the parameters to the model appropriately (`--train_examples`, for example, may not be part of your model).
#
# Because this is on the entire dataset, it will take a while. The training run took about <b> 2 hours </b> for me. You can monitor the job from the GCP console in the Cloud Machine Learning Engine section.
# + language="bash"
# OUTDIR=gs://${BUCKET}/babyweight/trained_model
# JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
# echo $OUTDIR $REGION $JOBNAME
# gsutil -m rm -rf $OUTDIR
# gcloud ml-engine jobs submit training $JOBNAME \
# --region=$REGION \
# --module-name=trainer.task \
# --package-path=$(pwd)/babyweight/trainer \
# --job-dir=$OUTDIR \
# --staging-bucket=gs://$BUCKET \
# --scale-tier=STANDARD_1 \
# --runtime-version=$TFVERSION \
# -- \
# --bucket=${BUCKET} \
# --output_dir=${OUTDIR} \
# --train_examples=200000
# -
# When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was:
# <pre>
# Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
# </pre>
# The final RMSE was 1.03 pounds.
# <h2> Optional: Hyperparameter tuning </h2>
# <p>
# All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml and pass it via --config.
# This step will take <b>up to 2 hours</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
#
# %%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nnsize
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
# + language="bash"
# OUTDIR=gs://${BUCKET}/babyweight/hyperparam
# JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
# echo $OUTDIR $REGION $JOBNAME
# gsutil -m rm -rf $OUTDIR
# gcloud ml-engine jobs submit training $JOBNAME \
# --region=$REGION \
# --module-name=trainer.task \
# --package-path=$(pwd)/babyweight/trainer \
# --job-dir=$OUTDIR \
# --staging-bucket=gs://$BUCKET \
# --scale-tier=STANDARD_1 \
# --config=hyperparam.yaml \
# --runtime-version=$TFVERSION \
# -- \
# --bucket=${BUCKET} \
# --output_dir=${OUTDIR} \
# --eval_steps=10 \
# --train_examples=20000
# -
# <h2> Repeat training </h2>
# <p>
# This time with tuned parameters (note last line)
# + language="bash"
# OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
# JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
# echo $OUTDIR $REGION $JOBNAME
# gsutil -m rm -rf $OUTDIR
# gcloud ml-engine jobs submit training $JOBNAME \
# --region=$REGION \
# --module-name=trainer.task \
# --package-path=$(pwd)/babyweight/trainer \
# --job-dir=$OUTDIR \
# --staging-bucket=gs://$BUCKET \
# --scale-tier=STANDARD_1 \
# --runtime-version=$TFVERSION \
# -- \
# --bucket=${BUCKET} \
# --output_dir=${OUTDIR} \
# --train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
# -
# Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| courses/machine_learning/deepdive/06_structured/labs/5_train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dlsBDOweUJyP" colab_type="code" colab={}
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import clear_output
from six.moves import urllib
import tensorflow.compat.v2.feature_column as fc
import tensorflow as tf
from sklearn.metrics import roc_curve
# + id="PHyHEPDOU0o4" colab_type="code" colab={}
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
# + id="JEkVf8a9VL9Q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 200} outputId="292db14f-2f54-4209-aadc-7ce88aa89663"
dftrain.head()
# + id="BqNAHkxwVfpE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 291} outputId="172b0d45-2b9c-4a35-9014-731343adc19a"
dftrain.describe()
# + id="zorz10UnVjJs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="398c7f9e-d413-4fc8-c40e-3440ca11b99e"
dftrain.shape[0], dfeval.shape[0]
# + id="eFowsH6bVtTd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="763fe12f-141e-4f78-8bda-bd7726554ed9"
dftrain.age.hist(bins=20)
# + id="6zDDJduOV0W_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="c6b2d965-a2e6-4d47-af2b-476ce454aa84"
dftrain.sex.value_counts().plot(kind='barh')
# + id="ZMqxSwG0V8Xy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="0c763a55-0d66-44a1-c812-22eb16c626df"
dftrain['class'].value_counts().plot(kind='barh')
# + id="vN7YoC6UWCDK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="ccd76c7f-4663-4866-f84a-2cfd60de9ffe"
pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh')
# + id="VJRq03MdWP37" colab_type="code" colab={}
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
vocabulary = dftrain[feature_name].unique()
feature_columns.append(tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32))
# + id="jJoOm734XFVB" colab_type="code" colab={}
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
def input_function():
ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))
if shuffle:
ds = ds.shuffle(1000)
ds = ds.batch(batch_size).repeat(num_epochs)
return ds
return input_function
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, num_epochs=1, shuffle=False)
# + id="-155Vkr4X072" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="a03cd21f-c684-4f2a-e136-42c7054adaeb"
ds = make_input_fn(dftrain, y_train, batch_size=10)()
for feature_batch, label_batch in ds.take(1):
print("Some feature keys:", list(feature_batch.keys()))
print()
print("A batch of class:", feature_batch['class'].numpy())
print()
print("A batch of labels:", label_batch.numpy())
# + id="BgG_JIUmYZin" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 310} outputId="2c0f1a8c-13f2-44ee-dc09-9af3c30d2755"
age_column = feature_columns[7]
tf.keras.layers.DenseFeatures([age_column])(feature_batch).numpy()
# + id="KZfHd61rZPMR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 310} outputId="2b3a14c3-6b15-4f85-9256-863f82db9a9e"
gender_column = feature_columns[0]
tf.keras.layers.DenseFeatures([tf.feature_column.indicator_column(gender_column)])(feature_batch).numpy()
# + id="z3xAvz0_ZxLA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="64c66168-5610-4b1d-8265-79b5684ec21c"
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result)
# + id="vFKIEzw4aEDm" colab_type="code" colab={}
age_x_gender = tf.feature_column.crossed_column(['age', 'sex'], hash_bucket_size=100)
# + id="bJ7hZjAgaiIj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="dae1fec9-5d5c-4fd9-d56b-c90910e085bb"
derived_feature_columns = [age_x_gender]
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns+derived_feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result)
# + id="j0r8KSc0ay4K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 523} outputId="e6aab8eb-1b8e-46e7-c462-a07dadea72c6"
pred_dicts = list(linear_est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
# + id="QyxonrjjbLS8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="35e3649c-309a-4617-e1da-d10df46bc20b"
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
| tensorflow/estimator/Linear.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Activity 01: Use NumPy to compute the Mean, Median, and Variance
# In this activity, you will consolidate the skills you've acquired in the last exercise and use NumPy to do some very basic mathematical calculations on our `normal_distribution` dataset.
# NumPy has a consistent API, so it should be rather easy to transfer your knowledge of the mean method to median and variance.
# #### Loading the dataset
# importing the necessary dependencies
import numpy as np
# loading the Dataset
dataset = np.genfromtxt('./data/normal_distribution.csv', delimiter=',')
# looking at the first two rows of the dataset
dataset[0:2]
# ---
# #### Mean
# calculate the mean of the third row
np.mean(dataset[2])
# calculate the mean of the last column
np.mean(dataset[:,-1])
# calculate the mean of the intersection of the first 3 rows and first 3 columns
np.mean(dataset[0:3, 0:3])
# ---
# #### Median
# calculate the median of the last row
np.median(dataset[-1])
# calculate the median of the last 3 columns
np.median(dataset[:, -3:])
# calculate the median of each row
np.median(dataset, axis=1)
# ---
# #### Variance
# calculate the variance of each column
np.var(dataset, axis=0)
# calculate the variance of the intersection of the last 2 rows and first 2 columns
np.var(dataset[-2:, :2])
# The values of the variance might seem a little bit strange at first.
# You can always go back to the topic that gives you a quick statistical overview to recap what you've learned so far.
#
# > **Note:**
# Just remember, the variance is not the standard deviation.
#
# Try calculating the standard deviation with NumPy to get a more descriptive value to compare against our dataset
# calculate the standard deviation for the dataset
np.std(dataset)
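# As a quick sanity check of the relationship between the two, here is a standalone NumPy sketch on synthetic data (it does not depend on the CSV file used above):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=100, scale=5, size=1000)  # synthetic "measurements"

# The standard deviation is the square root of the variance,
# so it lives on the same scale as the data itself.
assert np.isclose(np.std(sample), np.sqrt(np.var(sample)))
print(np.var(sample))  # ~25 (units squared)
print(np.std(sample))  # ~5  (same units as the data)
```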
| Lesson01/Activity01/activity01_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 8-2. Grover's algorithm
#
# Grover's algorithm is a quantum algorithm for finding a specific item in an unsorted database [1].
# For $N$ unsorted items, Grover's algorithm finds a solution with $O( \sqrt{N})$ queries (calls to the oracle). Since a classical computer needs $O(N)$ queries, the quantum algorithm achieves a quadratic speedup.
#
# As long as the oracle can be constructed, Grover's algorithm can accelerate the exhaustive-search part of any classical algorithm. Possible applications include
#
# - the satisfiability problem (SAT)
# - finding the preimage of a given hash value
#
# and for the latter there is in fact a paper proposing it as a way to speed up Bitcoin mining [2].
#
# This section first explains the theory behind Grover's algorithm and then presents an implementation using Qulacs.
# ### Outline of the algorithm
# The flow of Grover's algorithm is simple, as follows.
# As in the previous section, consider the problem of finding $M$ solutions in a database of $N$ items, whose labels are $n$-bit strings $x = x_1 \ldots x_n$.
#
# 1. Prepare the uniform superposition of all states, $|s\rangle = \frac{1}{\sqrt{N}}\sum_x |x\rangle$
# 2. Apply the oracle $U_w$ (a reflection about the solutions)
# 3. Apply the reflection $U_s$ about $|s\rangle$
# 4. Repeat steps 2 and 3 $k$ times
# 5. Measure
#
# Let us look at each step in detail.
# #### 1. Prepare the uniform superposition of all states, $|s\rangle = \frac{1}{\sqrt{N}}\sum_x |x\rangle$
# This is easy: just apply a Hadamard gate $H$ to every qubit of the initial state $|0\cdots0\rangle$.
#
# $$
# (H\otimes \cdots \otimes H) |0\cdots0\rangle = \frac{1}{(\sqrt{2})^n} (|0\rangle+|1\rangle) \otimes \cdots \otimes (|0\rangle+|1\rangle)
# = |s\rangle
# $$
#
# #### 2. Apply the oracle $U_w$ (reflection about the solutions)
# Next, apply the oracle to the state $|s\rangle$.
# As the oracle, we take the operation described at the end of the [previous section](8.1_oracle.ipynb): "given an input $|x\rangle$, multiply by $(-1)$ to flip the phase if $x$ is a solution, and do nothing otherwise", omitting the ancilla qubit. That is, we define the oracle $U_w$ by
#
# $$
# U_w = I - 2\sum_{w\in \text{sol}}|w\rangle \langle w|,
# $$
#
# $$
# U_w|x\rangle =
# \begin{cases}
# |x\rangle \:\: \text{($x$ is not a solution)} \\
# -|x\rangle \:\: \text{($x$ is a solution)}
# \end{cases}
# $$
#
# Since it flips the phase only when the input is a solution, the oracle $U_w$ is called a "reflection about the solutions".
# #### 3. Apply the reflection $U_s$ about $|s\rangle$
# In step 2 we applied a reflection about the solutions; in step 3 we apply the reflection $U_s$ about the uniform superposition of all states $|s\rangle$.
#
# $$
# U_s = 2 |s\rangle \langle s| - I
# $$
#
# Acting on an input state $|\psi\rangle = a|s\rangle + b|s_\perp\rangle$ (where $|s_\perp\rangle$ is a vector orthogonal to $|s\rangle$), this operator gives
#
# $$
# U_s|\psi\rangle = a |s\rangle - b|s_\perp\rangle
# $$
#
# that is, it flips only the phase of the component proportional to $|s_\perp\rangle$.
# #### 4. Repeat steps 2 and 3 $k$ times
# Repeat the two reflections $U_w$ and $U_s$ above. As shown later, roughly $O(\sqrt{N/M})$ repetitions make the measurement in step 5 return a solution with sufficiently high probability. In other words, $O(\sqrt{N})$ oracle calls suffice.
# #### 5. Measure
# After the steps so far, the state is $(U_s U_w)^k | s \rangle$, where $k$ is the number of repetitions of steps 2 and 3.
# As described below, in this state only the amplitudes (in absolute value) of the solution states $|w\rangle$ have become very large, so a measurement in the computational basis yields a solution $w$ (a bit string) with high probability.
#
#
# Leaving the theory aside, this is all Grover's algorithm does; it is very simple.
# ### Geometric explanation
# Next, let us explain geometrically why Grover's algorithm works. (There is also an explanation that focuses on averaging the amplitudes; see e.g. [3].)
#
# #### Defining a two-dimensional plane
# First, consider the two-dimensional plane spanned by the following two states $|\alpha\rangle, |\beta\rangle$.
#
# $$
# |\alpha\rangle = \frac{1}{\sqrt{N-M}} \sum_{x \,\in\, \text{non-solutions}} |x\rangle
# $$
#
# $$
# |\beta\rangle = \frac{1}{\sqrt{M}}\sum_{x \,\in\, \text{solutions}} |x\rangle
# $$
#
# The uniform superposition of all states $|s\rangle$ can be written as follows, so it lies within this two-dimensional plane.
#
# $$
# |s\rangle = \sqrt{\frac{N-M}{N}} |\alpha\rangle + \sqrt{\frac{M}{N}} |\beta\rangle
# $$
#
# In particular, using the angle $\theta$ satisfying $\cos{\frac{\theta}{2}} = \sqrt{\frac{N-M}{N}}, \sin{\frac{\theta}{2}} = \sqrt{\frac{M}{N}}$, we can write
#
# $$
# |s\rangle = \cos{\frac{\theta}{2}} |\alpha\rangle + \sin{\frac{\theta}{2}} |\beta\rangle
# $$
#
# This is illustrated in the figure below.
# (Note that in search problems we generally have $N \gg{} M$, so $\sqrt{M/N}$ is close to 0 and $\theta$ is usually a small positive angle.)
#
# 
# #### The two reflections $U_s U_w$ = a rotation in the two-dimensional plane
# Within this plane, the oracle $U_w$ is a reflection about the $|\alpha\rangle$ axis ($U_w|\alpha\rangle =|\alpha\rangle, U_w|\beta\rangle = -|\beta\rangle$).
# Therefore, applying $U_w$ and then the reflection $U_s$ about $|s\rangle$ amounts to a rotation by the angle $\theta$ within the $|\alpha\rangle, |\beta\rangle$ plane. (This becomes clear by looking at the figure.)
#
# Since Grover's algorithm repeats $U_s U_w$ $k$ times, the state is rotated $k$ times, and just before the measurement it is
#
# $$
# (U_s U_w)^k |s\rangle = \cos{\frac{(2k+1)\theta}{2}} |\alpha\rangle + \sin{\frac{(2k+1)\theta}{2}} |\beta\rangle
# $$
#
# When $N \gg M$, $\theta$ is a small positive angle, so every application of $U_s U_w$ to $|s\rangle$ decreases the amplitude of $|\alpha\rangle$ and increases that of $|\beta\rangle$.
# Because $|\beta\rangle$ is the superposition of all solution states, this means that the probability of obtaining a solution when measuring $(U_s U_w)^k |s\rangle$ grows.
#
# This is why Grover's algorithm can search for a solution so effectively.
#
# #### Estimating the optimal $k$
# Finally, let us estimate the number $k$ of applications of $U_s U_w$, i.e. the number of oracle calls.
# This is what determines the complexity.
#
# $(U_s U_w)^k |s\rangle$ is closest to $|\beta\rangle$ when $\frac{(2k+1)\theta}{2}$ is closest to $\frac{\pi}{2}$, i.e. when $k$ equals
#
# $$
# R = \text{ClosestInteger}\left( \frac{\pi}{2\theta} -\frac{1}{2} \right)
# $$
#
# Here $\text{ClosestInteger}(\ldots)$ denotes the integer closest to $\ldots$.
# Let us bound $R$ from above. The following inequality holds for $\theta > 0$:
#
# $$
# \frac{\theta}{2} \geq \sin \frac{\theta}{2} = \sqrt{\frac{M}{N}}
# $$
#
# Using it, we obtain
#
# $$
# R \leq \left( \frac{\pi}{2\theta} -\frac{1}{2} \right) + 1 = \frac{\pi}{2\theta} + \frac{1}{2} \leq \frac{\pi}{4}\sqrt{\frac{N}{M}} + \frac{1}{2}
# $$
#
# Hence $R$ is at most $O(\sqrt{N/M})$, showing that Grover's algorithm runs with $O(\sqrt{N})$ oracle calls.
#
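# Before moving to the implementation, these formulas can be checked numerically. The sketch below (plain NumPy; the function names are ours, not from any library) evaluates the success probability $\sin^2\frac{(2k+1)\theta}{2}$ and the optimal iteration count $R$.

```python
import numpy as np

def grover_success_probability(N, M, k):
    """Probability of measuring a solution after k Grover iterations."""
    theta = 2 * np.arcsin(np.sqrt(M / N))
    return float(np.sin((2 * k + 1) * theta / 2) ** 2)

def optimal_iterations(N, M):
    """R = ClosestInteger(pi / (2 * theta) - 1/2)."""
    theta = 2 * np.arcsin(np.sqrt(M / N))
    return int(round(float(np.pi / (2 * theta) - 0.5)))

N, M = 2 ** 10, 1                              # 10 qubits, a single solution
R = optimal_iterations(N, M)
print(R)                                       # → 25
print(grover_success_probability(N, M, R))     # close to 1
assert R <= np.pi / 4 * np.sqrt(N / M) + 0.5   # the bound derived above
```

# For 10 qubits this gives $R = 25$, consistent with the experiment later in this section.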
# ### Implementation
# Let us implement Grover's algorithm using Qulacs. (The implementation code largely follows [4].)
# +
## import libraries
import matplotlib.pyplot as plt
import numpy as np
import time
import random
from qulacs import QuantumState
from qulacs.state import inner_product
from qulacs import QuantumCircuit
from qulacs.gate import to_matrix_gate
from qulacs import QuantumState
from qulacs.gate import Identity, X,Y,Z # Pauli operators
from qulacs.gate import H
from qulacs.gate import RX,RY,RZ # rotation gates about the Pauli axes
## Run the following only on Google Colaboratory or in a Jupyter notebook on Linux/Mac.
## It makes Qulacs errors print correctly.
# !pip3 install wurlitzer
# %load_ext wurlitzer
# -
## function to plot the distribution of the absolute values of the amplitudes
def show_distribution(state,nqubits):
plt.bar([i for i in range(pow(2,nqubits))], abs(state.get_vector()))
    plt.show()
# #### Checking the behavior
# First, let us implement Grover's algorithm with 5 qubits and check how it behaves.
# The uniform superposition of all states $|s\rangle$ is created by applying a Hadamard gate to every qubit of the state $|0\cdots0\rangle$.
# +
nqubits = 5
state = QuantumState(nqubits)
state.set_zero_state()
def make_Hadamard(nqubits):
    Hadamard = QuantumCircuit(nqubits)
    for i in range(nqubits):
        Hadamard.add_gate(H(i))
    return Hadamard
Hadamard = make_Hadamard(nqubits)
Hadamard.update_quantum_state(state)
show_distribution(state,nqubits)
# -
# Next, we build the oracle $U_w$. Here we take $|1\ldots1\rangle$ as the solution and construct an operator that attaches the phase $(-1)$ only to $|1\ldots1\rangle$.
# Such an operator can be implemented as "apply a $Z$ gate to qubit `nqubits-1` when qubits `0` through `nqubits-2` are all `1`".
# The implementation uses Qulacs's `to_matrix_gate` together with `control_index` and `control_with_value`.
def make_U_w(nqubits):
    U_w = QuantumCircuit(nqubits)
    CnZ = to_matrix_gate(Z(nqubits-1))
    # apply the gate only when qubits 0 .. nqubits-2 are all 1
    for i in range(nqubits-1):
        control_index = i
        control_with_value = 1
        CnZ.add_control_qubit(control_index, control_with_value)
    U_w.add_gate(CnZ)
    return U_w
# Checking the action of the oracle, we see that indeed only the last component ($|1\cdots1\rangle$) has its phase flipped.
hoge = state.copy()
U_w = make_U_w(nqubits)
U_w.update_quantum_state(hoge)
print(hoge.get_vector())
# Similarly, we build the reflection $U_s$ about $|s\rangle$, using the following identity.
#
# $$
# U_s = 2 |s\rangle \langle s| - I = H^{\otimes n} (2 |0\cdots0\rangle \langle0\cdots0| - I) H^{\otimes n}
# $$
def make_U_s(nqubits):
    U_s = QuantumCircuit(nqubits)
    for i in range(nqubits):
        U_s.add_gate(H(i))
    ## implementation of 2|0><0| - I
    U_s.add_gate(to_matrix_gate(RZ(nqubits-1, 2*np.pi))) ## first attach a global phase (-1) to all states; the gate matrix is array([[-1,0],[0,-1]])
    U_s.add_gate( X(nqubits-1) )
    ## apply a Z gate only when qubits 0 .. nqubits-2 are all 0
    CnZ = to_matrix_gate(Z(nqubits-1))
    for i in range(nqubits-1):
        control_index = i
        control_with_value = 0
        CnZ.add_control_qubit(control_index, control_with_value)
    U_s.add_gate( CnZ )
    U_s.add_gate( X(nqubits-1) )
    for i in range(nqubits):
        U_s.add_gate(H(i))
    return U_s
# Now, let us apply $U_s U_w$ just once and look at the change in the probability distribution. The probability of the all-ones state (rightmost bar) has become slightly larger.
# +
## prepare the initial state
state = QuantumState(nqubits)
state.set_zero_state()
Hadamard.update_quantum_state(state)
## apply U_s U_w
U_s = make_U_s(nqubits)
U_w.update_quantum_state(state)
U_s.update_quantum_state(state)
show_distribution(state,nqubits)
# -
# Repeating this several times:
# +
## prepare the solution state |1...1> in order to evaluate the inner product
target_state = QuantumState(nqubits)
target_state.set_computational_basis(2**nqubits-1) ## 2**nqubits-1 is 1...1 in binary
## run Grover's algorithm
state = QuantumState(nqubits)
state.set_zero_state()
Hadamard.update_quantum_state(state)
for i in range(4):
    U_w.update_quantum_state(state)
    U_s.update_quantum_state(state)
show_distribution(state,nqubits)
print(np.linalg.norm(inner_product(state, target_state)))
# -
# After about $k=4$ repetitions, the solution state is obtained with probability close to 1.
# Let us make `nqubits` a bit larger and check how the probability of obtaining the solution behaves as a function of $k$.
# +
nqubits = 10
state = QuantumState(nqubits)
state.set_zero_state()
## prepare the solution state |1...1> in order to evaluate the inner product
target_state = QuantumState(nqubits)
target_state.set_computational_basis(2**nqubits-1) ## 2**nqubits-1 is 1...1 in binary
## run Grover's algorithm
Hadamard = make_Hadamard(nqubits)
U_w= make_U_w(nqubits)
U_s = make_U_s(nqubits)
result = []
state = QuantumState(nqubits)
state.set_zero_state()
Hadamard.update_quantum_state(state)
for k in range(30):
    U_w.update_quantum_state(state)
    U_s.update_quantum_state(state)
    #show_distribution(state,nqubits)
    result.append(np.linalg.norm(inner_product(state, target_state)))
max_k = np.argmax(result)
print( f"maximal probability {result[max_k]:5e} is obtained at k = {max_k+1}")
plt.plot(np.arange(1, 30+1), result, "o-")
# -
# At about $k=25$ repetitions, the target state is obtained with probability close to 1. Moreover, the $k$-dependence of the probability is sinusoidal, as seen in the "Geometric explanation" above.
#
# Finally, let us look at how the $k$ needed to find the solution behaves as a function of the number of qubits.
result = []
min_nqubits = 6
max_nqubits = 16
for nqubits in range(min_nqubits, max_nqubits+1, 2):
    ## prepare the circuits
    Hadamard = make_Hadamard(nqubits)
    U_w = make_U_w(nqubits)
    U_s = make_U_s(nqubits)
    ## prepare the solution state |1...1> in order to evaluate the inner product
    target_state = QuantumState(nqubits)
    target_state.set_computational_basis(2**nqubits-1) ## 2**nqubits-1 is 1...1 in binary
    state = QuantumState(nqubits)
    state.set_zero_state()
    Hadamard.update_quantum_state(state)
    ## apply U_s U_w until the success probability starts to decrease
    tmp = 0
    flag = 0
    num_iter = 0
    while flag == 0 and num_iter <= 1000:
        num_iter += 1
        U_w.update_quantum_state(state)
        U_s.update_quantum_state(state)
        suc_prob = np.linalg.norm(inner_product(state, target_state))
        if tmp < suc_prob:
            tmp = suc_prob
        else:
            flag = 1
    result.append( [nqubits, num_iter, suc_prob] )
    print(f"nqubits={nqubits}, num_iter={num_iter}, suc_prob={suc_prob:5e}")
# +
result_array = np.array(result)
plt.xlim(min_nqubits-1, max_nqubits+1)
plt.xlabel("n, # of qubits", fontsize=15)
plt.ylabel("k, # of iteration", fontsize=15)
plt.semilogy(result_array[:,0], result_array[:,1], "o-", label="experiment")
plt.semilogy(result_array[:,0], 0.05*2**result_array[:,0], "-", label=r"$\propto N=2^n$")
plt.semilogy(result_array[:,0], 2**(0.5*result_array[:,0]), "-", label=r"$\propto \sqrt{N}=2^{n/2}$")
plt.legend(fontsize=10)
# -
# We can see that the number of repetitions, i.e. the number of oracle calls $k$, is proportional to $O(\sqrt{N})$ rather than $O(N)$.
# ### Going further
# Interested readers are encouraged to try the [IBM Quantum Challenge 2019 final problem](https://github.com/quantum-challenge/2019/blob/master/problems/final/Final.ipynb), which solves a convenience-store placement problem with Grover's algorithm. A [sample solution](https://github.com/quantum-challenge/2019/blob/master/problems/final/answer_and_comment_by_judges.ipynb) has also been uploaded.
# ### References
#
# [1] <NAME> and <NAME>, "Quantum Computation and Quantum Information 10th Anniversary Edition", University Printing House, Section `6.1 The quantum search algorithm`
# [2] <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, "Quantum Attacks on Bitcoin, and How to Protect Against Them", Ledger, [S.l.], v. 3, Oct. 2018, https://ledgerjournal.org/ojs/index.php/ledger/article/view/127
# [3] IBM Quantum Challenge 2019, Week 2 notebook, https://github.com/quantum-challenge/2019/blob/master/problems/week2/week2.ipynb
# [4] 藤井啓祐, "Implementing the Grover search quantum algorithm all the way" (in Japanese), https://github.com/keisukefujii/QulacsExamples/blob/master/GroverSearch.ipynb
| notebooks/8.2_Grovers_algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torchvision
import torch.nn as nn
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
# Gradients represent the rate of change of a function. They are used in neural networks to optimize the network so that it predicts the correct output. During neural network optimization, the gradient is calculated by taking the derivative of the function being optimized. PyTorch uses the autograd package to provide automatic differentiation for all operations on Tensors. Let's see autograd in action
x = torch.tensor([2,2],dtype=torch.float32,requires_grad = True)
# Now let's perform some operations on this tensor
y = x * x
# Since y was created as a result of an operation on x, y has a grad_fn attribute, so we can calculate the derivative of y with respect to x
y.grad_fn
# Let's perform some more operations on y
y = 3*y +2
y = y.mean()
y
# Now suppose we want to optimize y, and for that we want to calculate the gradient of y with respect to x. PyTorch calculates the gradient automatically as we define and execute the operations, thanks to its dynamic computation graphs
y.backward()
# After we have called the backward method, which computes the gradient, the gradient for each input is stored in its grad attribute
x.grad
# Here we had y = mean(3$x^2$ + 2) over the two elements of x, so the gradient for each element is dy/dx_i = (1/2) * 6x_i = 3x_i = 6 at x_i = 2
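# The same number can be recovered without autograd via a finite-difference check. This standalone sketch (plain Python, not part of the PyTorch API) approximates the gradient of f(x) = mean(3x^2 + 2) numerically:

```python
def f(x):
    # y = mean(3 * x_i**2 + 2), the same function as built above
    return sum(3 * xi ** 2 + 2 for xi in x) / len(x)

def numerical_grad(func, x, eps=1e-6):
    # Forward-difference approximation of each partial derivative
    grads = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grads.append((func(bumped) - func(x)) / eps)
    return grads

print(numerical_grad(f, [2.0, 2.0]))  # ≈ [6.0, 6.0], matching x.grad
```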
| PyTorch_autograd.ipynb |
# # Traces
# +
import model.utils as utils
from model.HodgkinHuxley import HodgkinHuxley
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# + inputHidden=false outputHidden=false
# !mkdir svg/
# +
I, t_on, t_off, dt = utils.syn_current()
I = I*0.75
true_params, _ = utils.obs_params()
obs = utils.syn_obs_data(I, dt, true_params, seed=1, cython=True)
# model
m = HodgkinHuxley(I, dt, V0=obs['data'][0], seed=1, cython=True)
# consistent sample 1
params_consistent1 = true_params*1.01
params_consistent1[2] = true_params[2]
params_consistent1[7] = true_params[7]
# consistent sample 2
params_consistent2 = true_params*.99
params_consistent2[2] = true_params[2]
params_consistent2[7] = true_params[7]
# inconsistent sample
params_inconsistent = true_params*1.
params_inconsistent[3] = params_inconsistent[3]*3
x_consistent1 = m.gen_single(params_consistent1)
x_consistent2 = m.gen_single(params_consistent2)
x_inconsistent = m.gen_single(params_inconsistent)
# parameter sets for the second figure of sample traces
params_data1 = true_params*.99
params_data1[2] = true_params[2]
params_data1[7] = true_params[7]
params_data2 = true_params*.92
params_data2[2] = true_params[2]
params_data2[7] = true_params[7]
params_data3 = true_params*.95
params_data3[2] = true_params[2]
params_data3[7] = true_params[7]
params_data4 = true_params*.90
params_data4[2] = true_params[2]
params_data4[7] = true_params[7]
x_sample1 = m.gen_single(params_data1)
x_sample2 = m.gen_single(params_data2)
x_sample3 = m.gen_single(params_data3)
x_sample4 = m.gen_single(params_data4)
with mpl.rc_context(fname='../.matplotlibrc'):
fig = plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(obs['time'],obs['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('measured data')
plt.subplot(222)
plt.plot(x_consistent1['time'],x_consistent1['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('consistent sample 1')
plt.subplot(223)
plt.plot(x_consistent2['time'],x_consistent2['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('consistent sample 2')
plt.subplot(224)
plt.plot(x_inconsistent['time'],x_inconsistent['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('inconsistent sample');
plt.savefig('svg/traces.svg')
with mpl.rc_context(fname='../.matplotlibrc'):
fig = plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(x_sample1['time'],x_sample1['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('sample 1')
plt.subplot(222)
plt.plot(x_sample2['time'],x_sample2['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('sample 2')
plt.subplot(223)
plt.plot(x_sample3['time'],x_sample3['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('sample 3')
plt.subplot(224)
plt.plot(x_sample4['time'],x_sample4['data'],lw=2)
plt.xlabel('time (ms)')
plt.ylabel('voltage (mV)')
plt.title('sample 4')
plt.savefig('svg/traces_samples.svg')
# -
| 1_goal/01_make_traces.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 3*
#
# ---
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Ridge Regression
#
# ## Assignment
#
# We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
#
# But not just for condos in Tribeca...
#
# - [x] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.
# - [x] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
# - [x] Do one-hot encoding of categorical features.
# - [ ] Do feature selection with `SelectKBest`.
# - [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)
# - [ ] Get mean absolute error for the test set.
# - [ ] As always, commit your notebook to your fork of the GitHub repo.
#
# The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
#
#
# ## Stretch Goals
#
# Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.
#
# - [ ] Add your own stretch goal(s) !
# - [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
# - [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
# - [ ] Learn more about feature selection:
# - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
# - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
# - [mlxtend](http://rasbt.github.io/mlxtend/) library
# - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
# - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
# - [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
# - [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
# - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
# + colab={} colab_type="code" id="o9eSnDYhUGD7"
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + colab={} colab_type="code" id="QJBD4ruICm1m"
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# -
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# +
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
# +
# Use a subset of the data with one family dwellings
# Condition
condition = (
(df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS') &
(df['SALE_PRICE'] > 100000) &
(df['SALE_PRICE'] < 2000000)
)
# One family dwelling subset
ofd = df[condition].copy()
# +
# Convert date column to datetime
ofd['SALE_DATE'] = pd.to_datetime(ofd['SALE_DATE'])
# -
# Convert year to int64
ofd['YEAR_BUILT'] = ofd['YEAR_BUILT'].astype(int)
ofd['LAND_SQUARE_FEET'] = pd.to_numeric(ofd['LAND_SQUARE_FEET'].str.replace(',', ''))
# Convert zip codes to strings
ofd['ZIP_CODE'] = (
ofd['ZIP_CODE']
.astype(int)
.astype(str)
)
# Drop some problematic columns
ofd = ofd.drop(['EASE-MENT', 'APARTMENT_NUMBER',
'TAX_CLASS_AT_TIME_OF_SALE'], axis=1)
# Do train/test split
train = ofd[ofd['SALE_DATE'].dt.month < 4]
test = ofd[ofd['SALE_DATE'].dt.month == 4]
# High cardinality categorical values to exclude
train.describe(exclude='number').T.sort_values(by='unique', ascending=False)
# +
# Do one-hot encoding of categorical features
# Exclude high cardinality or unhelpful
# categorical variables from features
target = 'SALE_PRICE'
high_card = ['ADDRESS',
'SALE_DATE',
'BUILDING_CLASS_CATEGORY','TAX_CLASS_AT_PRESENT',
'YEAR_BUILT', 'BUILDING_CLASS_AT_TIME_OF_SALE',
'RESIDENTIAL_UNITS', 'COMMERCIAL_UNITS','BUILDING_CLASS_AT_PRESENT']
features = train.columns.drop([target] + high_card)
# -
print(features)
print(len(features))
# Define feature matrices and target vectors
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# Shape before encoding
X_train.shape
# Perform one-hot encoding
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train = encoder.fit_transform(X_train)
X_test = encoder.transform(X_test)
# Now there are more columns in X_train and X_test
print(X_train.shape)
print(X_test.shape)
# +
# Find best k and use it to find best MAE with ridge regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
highest_mae = None
alpha_used_for_highest_mae = None
k_used_for_highest_mae = None
lowest_mae = None
best_alpha = None
k_used = None
for k in range(1, (X_train.shape[1]+1)):
selector = SelectKBest(score_func=f_regression, k=k)
X_train_select = selector.fit_transform(X_train, y_train)
X_test_select = selector.transform(X_test)
    for alpha in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
model = Ridge(alpha=alpha, normalize=True)
model.fit(X_train_select, y_train)
y_pred = model.predict(X_test_select)
mae = mean_absolute_error(y_test, y_pred)
        # Track the lowest and highest MAE seen across all (k, alpha) pairs
        if lowest_mae is None or mae < lowest_mae:
            lowest_mae = mae
            best_alpha = alpha
            k_used = k
        if highest_mae is None or mae > highest_mae:
            highest_mae = mae
            alpha_used_for_highest_mae = alpha
            k_used_for_highest_mae = k
# print("K: ", k)
# print(mae)
# print(alpha)
print("Lowest Ridge Regression Error:")
print(f"\tMAE: ${lowest_mae:,.2f}\n\tALPHA: "\
      f"{best_alpha}\n\tK: {k_used} of {X_train.shape[1]}")
print("Highest Ridge Regression Error:")
print(f"\tMAE: ${highest_mae:,.2f}\n\tALPHA: "\
f"{alpha_used_for_highest_mae}\n\tK: {k_used_for_highest_mae} of {X_train.shape[1]}")
| module3-ridge-regression/LS_DS_213_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/NetzaiHernandez/daa_2021_1/blob/master/28septiembre.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="Ia0Mw_svCngp"
# + [markdown] id="PImWYm5OCrGB"
#
#
# + [markdown] id="tF2G5Wu6Czl5"
# # Section 1
#
# + [markdown] id="bl0fCFeZC6r6"
# In this file we will learn to program in Python using the Google Colab Research tool;
# we will also learn how to save our changes to our GitHub repository
# + [markdown] id="QxS5evdhDyUs"
# ## Example code
# `edad = 10 print(edad)`
#
# + id="omHtA7oXG-_H" outputId="9de1112a-3375-435f-e104-e99911f48999" colab={"base_uri": "https://localhost:8080/", "height": 35}
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
# + id="abMnF9lgHwJ8"
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo bb")
archivo.close()
| 28septiembre.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Random Forests Multi-node, Multi-GPU demo
#
# The experimental cuML multi-node, multi-GPU (MNMG) implementation of random forests leverages Dask to do embarrassingly-parallel model fitting. For a random forest with `N` trees being fit by `W` workers, each worker will build `N / W` trees. During inference, predictions from all `N` trees will be combined.
#
# The caller is responsible for partitioning the data efficiently via Dask. To build an accurate model, it's important to ensure that each worker has a representative chunk of the data. This can come by distributing the data evenly after ensuring that it is well shuffled. Or, given sufficient memory capacity, the caller can replicate the data to all workers. This approach will most closely simulate the single-GPU building approach.
#
# **Note:** cuML 0.9 contains the first, experimental preview release of the MNMG random forest model. The API is subject to change in future releases, and some known limitations remain (listed in the documentation).
#
# For more information on MNMG Random Forest models, see the documentation:
# * https://rapidsai.github.io/projects/cuml/en/latest/api.html#cuml.dask.ensemble.RandomForestClassifier
# * https://rapidsai.github.io/projects/cuml/en/latest/api.html#cuml.dask.ensemble.RandomForestRegressor
# +
import numpy as np
import sklearn
import pandas as pd
import cudf
import cuml
from sklearn.metrics import accuracy_score
from sklearn import model_selection, datasets
from cuml.dask.common import utils as dask_utils
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
import dask_cudf
from cuml.dask.ensemble import RandomForestClassifier as cumlDaskRF
from sklearn.ensemble import RandomForestClassifier as sklRF
# -
# ## Start Dask cluster
# +
# This will use all GPUs on the local host by default
cluster = LocalCUDACluster(threads_per_worker=1)
c = Client(cluster)
# Query the client for all connected workers
workers = c.has_what().keys()
n_workers = len(workers)
n_streams = 8 # Performance optimization
# -
# ## Define Parameters
#
# In addition to the number of examples, random forest fitting performance depends heavily on the number of columns in a dataset and (especially) on the maximum depth to which trees are allowed to grow. Lower `max_depth` values can greatly speed up fitting, though going too low may reduce accuracy.
# +
# Data parameters
train_size = 100000
test_size = 1000
n_samples = train_size + test_size
n_features = 20
# Random Forest building parameters
max_depth = 12
n_bins = 16
n_trees = 1000
# -
# ## Generate Data on host
#
# In this case, we generate data on the client (initial process) and pass it to the workers. You could also load data directly onto the workers via, for example, `dask_cudf.read_csv()`. See also the k-means MNMG notebook (kmeans_mnmg_demo.ipynb) for an alternative method of generating data on the worker nodes.
X, y = datasets.make_classification(n_samples=n_samples, n_features=n_features,
n_clusters_per_class=1, n_informative=int(n_features / 3),
random_state=123, n_classes=5)
y = y.astype(np.int32)
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=test_size)
# ## Distribute data to worker GPUs
# +
n_partitions = n_workers
# First convert to cudf (with real data, you would likely load in cuDF format to start)
X_train_cudf = cudf.DataFrame.from_pandas(pd.DataFrame(X_train))
y_train_cudf = cudf.Series(y_train)
# Partition with Dask
# In this case, each worker will train on 1/n_partitions fraction of the data
X_train_dask = dask_cudf.from_cudf(X_train_cudf, npartitions=n_partitions)
y_train_dask = dask_cudf.from_cudf(y_train_cudf, npartitions=n_partitions)
# Persist to cache the data in active memory
X_train_dask, y_train_dask = \
dask_utils.persist_across_workers(c, [X_train_dask, y_train_dask], workers=workers)
# -
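# The "shuffle, then split evenly" idea behind this partitioning can be illustrated with plain NumPy, independent of Dask (the array and partition names here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(20).reshape(10, 2)   # toy feature matrix (10 rows)
y = np.arange(10)                  # toy labels
n_partitions = 4                   # stands in for the number of workers

# Shuffle rows first so that each chunk is a representative sample,
# then split into near-equal chunks, one per worker.
perm = rng.permutation(len(X))
X_chunks = np.array_split(X[perm], n_partitions)
y_chunks = np.array_split(y[perm], n_partitions)

print([len(c) for c in X_chunks])  # → [3, 3, 2, 2]
```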
# ## Build a scikit-learn model (single node)
#
# Dask does not currently have a simple wrapper for scikit-learn's RandomForest, but scikit-learn does offer multi-CPU support via joblib, which we'll use.
# +
# %%time
# Use all available CPU cores
skl_model = sklRF(max_depth=max_depth, n_estimators=n_trees, n_jobs=-1)
skl_model.fit(X_train, y_train)
# -
# ## Train the distributed cuML model
# +
# %%time
cuml_model = cumlDaskRF(max_depth=max_depth, n_estimators=n_trees, n_bins=n_bins, n_streams=n_streams)
cuml_model.fit(X_train_dask, y_train_dask)
wait(cuml_model.rfs) # Allow asynchronous training tasks to finish
# -
# # Predict and check accuracy
# +
skl_y_pred = skl_model.predict(X_test)
cuml_y_pred = cuml_model.predict(X_test)
# Due to randomness in the algorithm, you may see slight variation in accuracies
print("SKLearn accuracy: ", accuracy_score(y_test, skl_y_pred))
print("CuML accuracy: ", accuracy_score(y_test, cuml_y_pred))
| cuml/random_forest_mnmg_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Центр непрерывного образования
#
# # Программа «Python для автоматизации и анализа данных»
#
# # Введение в классы в Python
#
# *<NAME>, НИУ ВШЭ*
#
# ## Объектно-Ориентированное Программирование (ООП)
#
# Объектно-ориентированное программирование (ООП) является методологией разработки программного обеспечения, в основе которой лежит понятие класса и объекта, при этом сама программа создается как некоторая совокупность объектов, которые взаимодействую друг с другом и с внешним миром. Каждый объект является экземпляром некоторого класса. Классы образуют иерархии. Классы, как и функции, создаются и используются для удобства и упрощения разработки программ. Более подробно о понятии ООП можно прочитать на [википедии](https://ru.wikipedia.org/wiki/%D0%9E%D0%B1%D1%8A%D0%B5%D0%BA%D1%82%D0%BD%D0%BE-%D0%BE%D1%80%D0%B8%D0%B5%D0%BD%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%BD%D0%BE%D0%B5_%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5).
#
# OOP rests on three main "pillars": encapsulation, inheritance, and polymorphism.
#
# ### Encapsulation
#
# Encapsulation means hiding implementation details, data, and so on from the outside. For example, we can define a class `refrigerator` with data such as `manufacturer`, `volume`, `number of storage compartments`, `power consumption`, and methods such as `open/close the refrigerator` and `turn on/off`. How the turning on and off actually happens is not visible to the user of your class, so the implementation can be changed without fear of affecting any program that uses the `refrigerator` class. The class thereby becomes a new data type within the program being developed. Variables of this new type can be created, and such variables are called objects.
#
# ### Inheritance
#
# Inheritance is the ability to create a new class based on an existing one. The child class contains the same attributes and methods as the base class, but it can (and should) be extended by adding new methods and attributes.
#
# As an example of a base class demonstrating inheritance, consider a class `car` with attributes mass, engine power, fuel tank volume, and methods start and stop. It may have a descendant, `truck`, containing the same attributes and methods as `car` plus additional properties: number of axles, compressor power, and so on.
#
# ### Polymorphism
#
# Polymorphism lets us treat objects that share an interface uniformly, regardless of each object's internal implementation. For example, an object of class `truck` supports the same operations as an object of class `car`, since the former inherits from the latter, while the reverse is not (always) true. In other words, polymorphism means different implementations of methods with the same name. This is especially useful with inheritance, where a child class can override methods of its parent. A simple example of polymorphism is the method `count()`, which performs the same kind of action on different types of objects: `'abc'.count('a')` and `[1, 2, 'a'].count('a')`. The plus operator is polymorphic too: it adds numbers and concatenates strings (or lists).
1 + 1
[1] + [1]
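# The `count()` example mentioned above can be run directly; both the string and the list answer the same question through a method with the same name:

```python
# count() is polymorphic: the same call works on a str and on a list
print('abc'.count('a'))        # 1
print([1, 2, 'a'].count('a'))  # 1
```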
# ## Creating classes in Python
#
# Creating a class in Python starts with the `class` statement. A minimal class looks like this:
class Car:
    """Optional class docstring"""
    pass
# A class consists of a declaration (the `class` statement), the class name (in our case `Car`), and the class body, which contains attributes and methods (our minimal class holds only a single `pass` statement). It is also considered good style to describe what the class and its methods do right after the declaration.
#
# Even though the body of `Car` is empty, we can already create a distinct object with a unique identifier from it. To create an object of a class, use the following syntax:
audi = Car()
# Having defined a new class, you can create as many objects based on it as you like. As mentioned above, such a data structure can include properties, that is, variables that every instance of the class will carry.
#
# ### Static and dynamic class attributes
#
# As mentioned above, a class can contain `attributes` and `methods`. An `attribute` can be static or dynamic. The difference is that working with a static attribute does not require creating an instance of the class, while working with a dynamic one does. For example, let's create the following class `Car`:
class Car:
    default_color = "green"
    def __init__(self, color, brand, doors_num=4):
        if color == None:
            self.color = self.default_color
        else:
            self.color = color
        self.brand = brand
        self.doors_num = doors_num
c = Car(None, 'BMW')
c.doors_num
# In the class above, the attribute `default_color` is a static attribute and, as noted, it can be accessed without creating a `Car` object:
Car.default_color
# `color`, `brand`, and `doors_num` are dynamic attributes; they were created using the keyword `self`. `self` and the constructor `def __init__` are discussed below. Note also that inside the class we use the static attribute `default_color` to set the car's color when it is not given explicitly:
#
# `if color == None:
#     self.color = self.default_color
# else:
#     self.color = color`
#
# To access `color`, `brand`, and `doors_num`, we first need to create a `Car` object:
bmw = Car(None,"BMW", 2)
print(bmw.brand)
print(bmw.color)
print(bmw.doors_num)
# We created the object without giving it a specific color, so the default one was used.
#
# If we access a dynamic attribute through the class itself, we get an error:
Car.brand
# In other words, a static attribute is a regular class attribute shared by all objects of that class. Let's assign a new value to the color.
Car.default_color = "red"
Car.default_color
# Let's create two objects of class `Car` and check that their `default_color` matches:
bmw = Car(None,"BMW",2)
audi = Car(None,"AUDI", 4)
bmw.default_color
audi.default_color
# If we change the value of default_color through the class name `Car`, everything behaves as expected: the value changes for both `bmw` and `audi`. But if we change it through an instance, the instance gets its own attribute with the same name as the static one, shadowing the latter:
bmw.default_color = "blue"
bmw.default_color
# Meanwhile `audi` and the class itself remain unchanged:
audi.default_color
Car.default_color
# We can picture our class as a car factory. All cars are initially made in one color, `default_color = green`. If, when buying a car, we want it repainted, we pass a `color`: `Car("black", "BMW", 2)` repaints the car black, and if we omit the color it automatically comes in the standard green. After a while the factory changes its standard color, say to red: `Car.default_color = "red"`. From then on, all cars are produced in red by default.
# +
# initially the factory paints cars green
Car.default_color = "green"
car1 = Car(None,"Niva",2)
car2 = Car(None,"Niva",2)
car3 = Car(None,"Niva",4)
car4 = Car("black","Niva",4)  # this car was repainted
print(car1.color,car2.color,car3.color,car4.color)
# The factory switched to a new standard color
Car.default_color = "red"
car5 = Car(None,"Niva",2)
car6 = Car("olive","Niva",2)  # this car was repainted
car7 = Car(None,"Niva",4)
car8 = Car(None,"Niva",4)
print(car1.color,car2.color,car3.color,car4.color)
print(car5.color,car6.color,car7.color,car8.color)
# -
# ## The self argument
#
# Let's look at what `self` means in Python methods and why it is needed. Classes need a way to refer to themselves: a method must read attribute values from its own instance, not someone else's. `self` therefore stands in for the object's identity. It must appear as the first parameter of every instance method so the method can be called on the current object, and through this keyword the method can access the fields of the class.
#
# We already used `self` to access `default_color` in our `Car` class.
# +
class Car:
    default_color = "green"
    def __init__(self, color, brand, doors_num):
        if color == None:
            self.color = self.default_color
        else:
            self.color = color
        self.brand = brand
        self.doors_num = doors_num
fiat = Car(None,"Fiat",5)
fiat.color
# -
fiat.color = 'black'
fiat.color
# If we drop the `self.` prefix when reading `default_color` inside the constructor, attempting to create an object raises an error:
# +
class Car:
    default_color = "green"
    def __init__(self, color, brand, doors_num):
        if color == None:
            self.color = default_color  # no self. prefix: raises NameError
        else:
            self.color = color
        self.brand = brand
        self.doors_num = doors_num
fiat = Car(None,"Fiat",5)
fiat.color
# -
# The class does not know which instance's variable is meant; `self` tells it to use the instance on which the method is being called.
# ## The class constructor
#
# Usually, when creating an object, we want to initialize it with some data right away. For example, when we create a list `a = []`, we can pass values into it immediately: `a = [1,2,3,4,5]`. The same can be done with classes we write ourselves. In OOP this is the job of the constructor, which accepts the required parameters. We have already created one in our class:
# +
class Car:
    default_color = "green"
    def __init__(self, color, brand, doors_num):
        if color == None:
            self.color = self.default_color
        else:
            self.color = color
        self.brand = brand
        self.doors_num = doors_num
ford = Car("yellow", "Ford", 4)
print("A nice " + ford.color + " " + ford.brand + " with " + str(ford.doors_num) + " doors")
# -
# Outwardly the constructor looks like an ordinary method, but it cannot be called explicitly. Instead it runs automatically every time the program creates a new object of the class it belongs to. Every constructor bears the identifier `__init__`. The parameters it receives can be assigned to the future object's fields using the keyword `self`, as in the example above.
#
# Thus the class `Car` contains three fields: `color`, `brand`, and `doors_num`. The constructor takes parameters that set these properties while initializing the new object named `ford`. Every class has at least a default constructor if none is defined explicitly (that is, if we do not write a constructor, an empty default one is used and the class still works).
# ### Methods of a class
#
# Let's add methods to our class. A method is a function defined inside a class that performs a specific job.
#
# Methods come in three kinds: static, class, and instance methods (we will call the latter ordinary methods). A static method is created with the `@staticmethod` decorator; a class method with the `@classmethod` decorator, receiving `cls` (a reference to the class being called) as its first argument; an ordinary method needs no special decorator and receives `self` as its first argument. You can read more about decorators themselves [here](https://pythonworld.ru/osnovy/dekoratory.html).
class Car:
    @staticmethod
    def ex_static_method():
        print("static method")
    @classmethod
    def ex_class_method(cls):
        print("class method")
    def ex_method(self):
        print("method")
# A static method and a class method can be called without creating an instance of the class; calling ex_method() requires an object:
# +
Car.ex_static_method()
Car.ex_class_method()
Car.ex_method()
# -
m = Car()
m.ex_method()
# **Static methods** do not need a special first argument (neither self nor cls). Think of them as methods that `do not know which class they belong to`.
#
# Static methods are attached to the class purely for convenience and cannot change the state of either the class or its instances. That is, static methods cannot access the parameters of the class or of an object; they work only with the data passed to them as arguments.
# **Class methods** receive the class as a parameter, conventionally named `cls`. Here it points to the class `Car` itself, not to an object of that class.
#
# Class methods are bound to the class rather than to an instance. They can change the state of the class, which affects all objects of that class, but they cannot change a particular object.
#
# A built-in example of a class method is `dict.fromkeys()`, which returns a new dictionary with the given elements as keys.
dict.fromkeys('AEIOU') # <- called on the dict class itself
# **Instance methods** are the most commonly used kind. They receive the object of the class as their first argument, conventionally named `self`, which points to the instance itself. The number of method parameters is not limited.
#
# Through the `self` parameter we can change the object's state and access its other methods and parameters. Moreover, through the attribute `self.__class__` we can reach the class's attributes and change the state of the class itself. So instance methods can change both the state of a particular object and that of the class.
#
# A built-in example of an instance method is `str.upper()`:
"welcome".upper() # <- called on string data
# ### When to use each kind of method
#
# Let's look at a more realistic example and see how the method kinds differ.
# +
from datetime import date
class Car:
    def __init__(self, brand, age):
        self.brand = brand
        self.age = age
    @classmethod
    def from_production_year(cls, brand, prod_year):
        return cls(brand, date.today().year - prod_year)
    @staticmethod
    def is_warranty_active(age):
        return age < 3
    def info(self):
        print("Car: " + self.brand)
        print("Age: " + str(self.age))
        if self.is_warranty_active(self.age):
            print("Warranty is ACTIVE")
        else:
            print("Warranty is NOT active")
car1 = Car('Subaru', 5)
car2 = Car.from_production_year('Skoda', 2018)
# -
car1.brand, car1.age
car2.brand, car2.age
Car.is_warranty_active(25)
car1.info()
car2.info()
# The class method `from_production_year` returns a `Car` instance CREATED inside the function, with the age already computed. Since we cannot name the class `Car` inside its own body, we use `cls`.
#
# The static method `is_warranty_active` checks whether the warranty is still valid. As you can see, it does not read the car's age from the class; it takes the age as an argument, `age`.
#
# The instance method `info` accesses its own attributes through `self` and calls the static function, passing it the car's age.
#
#
# Choosing which kind of method to use can seem quite difficult. Still, with experience the choice becomes much easier. Most often a **class method** is used as a factory method that returns an object of the class. As we can see, the class method `from_production_year` is used to create a `Car` object from the car's production year rather than from an explicitly given age.
#
# Static methods are mostly used as helper functions that work with the data passed to them.
#
# To summarize:
# - Instance methods can access the object of the class through the `self` parameter and the class through `self.__class__`.
# - Class methods cannot access a particular object of the class, but have access to the class itself through `cls`.
# - Static methods behave like ordinary functions that live in the class's namespace. They have access to neither the class nor its instances.
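# A minimal sketch (the class name `Counter` is illustrative, not from the lecture) of how an instance method can change class state through `self.__class__` while a class method reads it through `cls`:

```python
class Counter:
    created = 0  # class-level state shared by all instances

    def __init__(self):
        # an instance method (here __init__) changes class state via self.__class__
        self.__class__.created += 1

    @classmethod
    def how_many(cls):
        # a class method reads class state through cls
        return cls.created

Counter()
Counter()
print(Counter.how_many())  # 2
```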
# ## Destructor
#
# Working with a destructor is usually the domain of languages that offer more direct control over memory. Even though the garbage collector competently removes unneeded objects in a timely manner, invoking the destructor is still available. It can be overridden in a class under the name `__del__`.
# +
class Data:
    def __del__(self):
        print("The object is destroyed")
data = Data()
del(data)
# -
# Like the constructor, the destructor may contain user code, for example reporting that the method completed successfully. In this example an instance of class `Data` is created and its destructor is invoked, receiving the object itself as a parameter.
# ## Attribute and method access levels (encapsulation)
#
# In Java, C#, and C++ you can explicitly declare that a variable must not be accessed from outside the class, using keywords such as `private` or `protected`. Python has no such mechanism, and anyone can access your class's attributes and methods if the need arises. This is a notable shortcoming of the language, since it weakens one of the key OOP principles, encapsulation. Good style dictates that reading/changing an attribute should go through special methods called a `getter/setter`; they can be implemented, but nothing prevents changing the attribute directly. There is, however, a convention: a method or attribute whose name starts with a `single underscore` is considered internal and should not be touched from outside the class (although it can be).
#
# Let's make the corresponding changes to the class Car:
class Car:
    def __init__(self, brand, doors_num):
        self._brand = brand
        self._doors_num = doors_num
    def get_brand(self):
        return self._brand
    def set_brand(self, b):
        self._brand = b
    def get_doors(self):
        return self._doors_num
    def set_doors(self, d):
        self._doors_num = d
    def info(self):
        return "Nice car with " + str(self._doors_num) + " doors"
# In the example above, the special methods are used to access `_brand` and `_doors_num`, but nothing stops you from accessing the attributes directly.
mersedes = Car("Mersedes", 6)
mersedes.get_brand()
# Better not to do this:
mersedes._brand
mersedes._brand = 'audi'
mersedes._brand
# Better like this:
mersedes.set_brand('mersedes')
mersedes.get_brand()
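# As a side note (not part of the original lecture), idiomatic Python often replaces explicit getter/setter pairs with the built-in `property` decorator; a minimal sketch on the same `Car` example:

```python
class Car:
    def __init__(self, brand):
        self._brand = brand

    @property
    def brand(self):           # getter: read as car.brand
        return self._brand

    @brand.setter
    def brand(self, value):    # setter: write as car.brand = ...
        self._brand = value

car = Car("Mersedes")
car.brand = "Audi"     # goes through the setter
print(car.brand)       # Audi
```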
# If an attribute or method name starts with two underscores, you can no longer access it directly (at least not in the simple way). Let's modify our class `Car`:
class Car:
    def __init__(self, brand, doors_num):
        self.__brand = brand
        self.__doors_num = doors_num
    def get_brand(self):
        return self.__brand
    def set_brand(self, b):
        self.__brand = b
    def get_doors(self):
        return self.__doors_num
    def set_doors(self, d):
        self.__doors_num = d
    def info(self):
        return "Nice car with " + str(self.__doors_num) + " doors"
# Attempting to access `__brand` directly raises an error; you have to work through get_brand():
mersedes = Car("Mersedes", 6)
mersedes.get_brand()
mersedes.__brand
# In fact it can still be done: due to name mangling, the attribute is available externally under the name `_Car__brand`:
mersedes._Car__brand
# ## Inheritance
#
# The ability of one class to act as an heir of another, taking over its properties and methods, is an important feature of OOP. Thanks to this feature, there is no need to rewrite code for similar or related classes.
#
# When inheriting classes in Python, one condition must always hold: the `child class` must be a more specific case of the `parent class`. The following example shows the class `Car` being inherited by the class `Truck`. When declaring a subclass in Python, the parent class name is written in parentheses.
class Car:
    def __init__(self, brand, doors_num):
        self.__brand = brand
        self.__doors_num = doors_num
    def get_brand(self):
        return self.__brand
    def set_brand(self, b):
        self.__brand = b
    def get_doors(self):
        return self.__doors_num
    def set_doors(self, d):
        self.__doors_num = d
    def info(self):
        return "Nice car with " + str(self.__doors_num) + " doors"
class Truck(Car):
    def __init__(self, brand, doors_num, load_weight, axes):
        # self.__brand = brand
        # self.__doors_num = doors_num
        super().__init__(brand, doors_num)
        self.__load_weight = load_weight
        self.__axes = axes
    def get_load(self):
        return self.__load_weight
    def set_load(self, l):
        self.__load_weight = l
    def get_axes(self):
        return self.__axes
    def set_axes(self, a):
        self.__axes = a
# The parent class is `Car`, which on initialization receives the car's brand and number of doors and exposes them through accessor methods. `Truck` is a class inheriting from `Car`. Note its `__init__` method: the first thing it does is call its parent's constructor: `super().__init__(brand, doors_num)`
#
# `super` is the keyword used to refer to the parent class. An object of class `Truck` now has, besides the familiar properties `brand` and `doors_num`, the properties `load_weight` and `axes`:
# +
truck = Truck("Kamaz",2,13000,6)
truck.get_brand()
# -
# And look: the methods from the parent class work!
truck.get_load()
truck.set_axes(8)
truck.get_axes()
# ### Multiple inheritance
#
# You can inherit not just one class but several at once, thereby gaining the properties and methods of each. In this example the class `Dog` acts as a subclass of both `Animal` and `Pet`, since it can be considered either. From `Animal`, `Dog` gets the ability to sleep (the `sleep` method), while `Pet` gives it the ability to play with its owner (the `play` method). In turn, both parent classes inherited the field `name` from `Creature`; the class `Dog` also receives this property and can use it. Since we define no constructors in the inherited classes, there is nothing to call through `super()`: the parent class constructor is invoked automatically.
# +
class Creature:
    def __init__(self, name):
        self.name = name
class Animal(Creature):
    def sleep(self):
        print(self.name + " is sleeping")
class Pet(Creature):
    def play(self):
        print(self.name + " is playing")
class Dog(Animal, Pet):
    def bark(self):
        print(self.name + " is barking")
beast = Dog("Buddy")
beast.sleep()
beast.play()
beast.bark()
# -
# In the example above we create an object of class `Dog`, which receives its name in the constructor. The methods `sleep`, `play`, and `bark` are then executed in turn, two of which were inherited. The ability to bark is the dog's own special feature, since not every animal or pet can do it.
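# With multiple inheritance it is worth knowing how Python decides which parent supplies a method: the method resolution order (MRO). A small sketch with the same class hierarchy (bodies stripped for brevity):

```python
class Creature: pass
class Animal(Creature): pass
class Pet(Creature): pass
class Dog(Animal, Pet): pass

# The MRO lists the classes Python searches, in order, when looking up an attribute
print([c.__name__ for c in Dog.__mro__])
# ['Dog', 'Animal', 'Pet', 'Creature', 'object']
```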
# ## Polymorphism
#
# As noted in the introduction, within OOP polymorphism is typically used in the sense of overriding methods of a base class in a child class. This is easiest to see in an example. Our base class `Car` has a method `info()` that prints summary information about a `Car` object; let's override this method in the class `Truck` and add extra data to it:
class Car:
    def __init__(self, brand, doors_num):
        self.__brand = brand
        self.__doors_num = doors_num
    def get_brand(self):
        return self.__brand
    def set_brand(self, b):
        self.__brand = b
    def get_doors(self):
        return self.__doors_num
    def set_doors(self, d):
        self.__doors_num = d
    def info(self):
        return "Nice car with " + str(self.__doors_num) + " doors"
class Truck(Car):
    def __init__(self, brand, doors_num, load_weight, axes):
        super().__init__(brand, doors_num)
        self.__load_weight = load_weight
        self.__axes = axes
    def get_load(self):
        return self.__load_weight
    def set_load(self, l):
        self.__load_weight = l
    def get_axes(self):
        return self.__axes
    def set_axes(self, a):
        self.__axes = a
    def info(self):
        return "Nice car with " + str(self.get_doors()) + " doors and can carry " + str(self.__load_weight) + " kg of cargo"
# Let's see how this works
audi = Car("Audi", 4)
audi.info()
scania = Truck("Scania",2,6500,4)
scania.info()
# Thus a child class can extend the functionality of its parent class.
# ## Abstract methods
#
# Since OOP allows inheriting the behavior of a parent class, we sometimes need descendants to provide their own specific implementations of certain methods. Consider the following code, where the classes `Truck` and `Bus` are descendants of the class `Car`. As expected, both inherit the method `honk`, but the parent class provides no implementation for it.
#
# That is because a car is an abstract notion and therefore cannot make any particular horn sound, whereas for a truck and a bus this command usually has a well-defined meaning. In this case we can say that the method `honk` in `Car` is abstract, since it has no implementation body of its own.
# +
class Car:
    def __init__(self, brand):
        self.__brand = brand
    def honk(self):
        pass
class Truck(Car):
    def honk(self):
        print("RRRRrrrr")
class Bus(Car):
    def honk(self):
        print("UUUUUU")
Vanhool = Bus("Vanhool")
Iveco = Truck("Iveco")
Vanhool.honk()
Iveco.honk()
# -
# As the example shows, the descendants `Truck` and `Bus` receive `honk` and then each overrides it in its own way. This is the essence of polymorphism: it lets the behavior of a particular method vary according to the needs of a specific class, while the method's name stays the same for all descendants, which helps avoid naming confusion.
# ## Operator overloading
#
# Programming languages use special operators to handle primitive data types: for example, arithmetic operations are performed with the usual plus, minus, multiply, and divide signs. When working with your own data types, you may well want the help of these operators too. Thanks to special functions, they can be adapted to your own needs.
#
# In this example we create a class `Point` with two fields, `x` and `y`. To compare two different objects of this type, we could write a dedicated method, or simply overload the corresponding operator. To do so we override the function `__eq__` in our class, implementing the new behavior in its body.
1 == 2
# +
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y
print(Point(2, 5) == Point(2, 5))
print(Point(3, 8) == Point(4, 6))
# -
# The overridden method returns the result of comparing the fields of the two objects. This makes it possible to compare two different points using the ordinary operator; the result of the comparison is displayed with `print`.
#
# Similarly, Python lets you overload addition, subtraction, and other arithmetic and logical operations, as well as the standard functions str and len.
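# For instance, a sketch of overloading `+` and the standard function `str` for the same `Point` class (redefined here so the example is self-contained):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __add__(self, other):
        # point1 + point2 adds coordinates component-wise
        return Point(self.x + other.x, self.y + other.y)
    def __str__(self):
        # str(point) and print(point) use this representation
        return "Point({}, {})".format(self.x, self.y)

p = Point(2, 5) + Point(3, 8)
print(p)  # Point(5, 13)
```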
#
# If we do not overload the operator, our class will raise an error or behave incorrectly:
# +
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
print(Point(2, 5) == Point(2, 5))
print(Point(3, 8) == Point(4, 6))
# -
# In the first case the answer should have been `True` (without `__eq__`, the `==` operator falls back to comparing object identities)
| class_7-8/classes/Classes_lecture.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--BOOK_INFORMATION-->
# <a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
# *This notebook contains an excerpt from the book [Machine Learning for OpenCV](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv) by <NAME>.
# The code is released under the [MIT license](https://opensource.org/licenses/MIT),
# and is available on [GitHub](https://github.com/mbeyeler/opencv-machine-learning).*
#
# *Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
# If you find this content useful, please consider supporting the work by
# [buying the book](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv)!*
# <!--NAVIGATION-->
# < [Measuring-Model-Performance-with-Scoring-Functions](03.01-Measuring-Model-Performance-with-Scoring-Functions.ipynb) | [Contents](../README.md) | [Using Regression Models to Predict Continuous Outcomes](03.03-Using-Regression-Models-to-Predict-Continuous-Outcomes.ipynb) >
# # Understanding the k-NN Classifier
#
# The $k$-NN algorithm is arguably one of the simplest machine learning algorithms. The
# reason for this is that we basically only need to store the training dataset. Then, in order to
# make a prediction for a new data point, we only need to find the closest data point in the
# training dataset: its **nearest neighbor**.
#
# In a nutshell, the $k$-NN algorithm argues that a data point probably belongs to the same
# class as its neighbors.
# Of course, some neighborhoods might be a little more complicated. In this case, we would
# not just consider our closest neighbor (where $k=1$), but instead our $k$ nearest neighbors.
#
# That's all there is to it.
# ## Implementing k-NN in OpenCV
#
# Using OpenCV, we can easily create a $k$-NN model via the function `cv2.ml.KNearest_create`. Building the model then involves the following steps:
# - Generate some training data.
# - Create a k-NN object for a given number k.
# - Find the k nearest neighbors of a new data point that we want to classify.
# - Assign the class label of the new data point by majority vote.
# - Plot the result.
#
# We first import all the necessary modules: OpenCV for the $k$-NN algorithm, NumPy for
# data munging, and Matplotlib for plotting. If you are working in a Jupyter Notebook, don't
# forget to call the `%matplotlib inline` magic:
# +
import numpy as np
import cv2
import matplotlib.pyplot as plt
# %matplotlib inline
# -
plt.style.use('ggplot')
# ### Generating the training data
#
# The first step is to generate some training data. For this, we will use NumPy's random
# number generator. As discussed in the previous section, we will fix the seed of the random
# number generator, so that re-running the script will always generate the same values:
np.random.seed(42)
# We can pick a single data point with `0 <= x <= 100` and `0 <= y <= 100`:
single_data_point = np.random.randint(0, 100, 2)
single_data_point
# As shown in the preceding output, this will pick two random integers between 0 and 100.
# We will interpret the first integer as the data point's $x$ coordinate on the map, and the
# second integer as the point's $y$ coordinate. Similarly, let's pick a label for the data point:
single_label = np.random.randint(0, 2)
single_label
# Turns out that this data point would have class 0.
#
# Let's wrap this process in a function that takes as input the number of data points to
# generate (that is, `num_samples`) and the number of features every data point has (that is,
# `num_features`):
def generate_data(num_samples, num_features=2):
    """Randomly generates a number of data points"""
    data_size = (num_samples, num_features)
    train_data = np.random.randint(0, 100, size=data_size)
    labels_size = (num_samples, 1)
    labels = np.random.randint(0, 2, size=labels_size)
    return train_data.astype(np.float32), labels
# Let's put the function to test and generate an arbitrary number of data points, let's say
# eleven, whose coordinates are chosen randomly:
train_data, labels = generate_data(11)
train_data
# As we can see from the preceding output, the `train_data` variable is an 11 x 2 array, where
# each row corresponds to a single data point. We can also inspect the first data point with its
# corresponding label by indexing into the array:
train_data[0], labels[0]
# This tells us that the first data point is a blue square (because it has class 0) and lives at
# location $(x, y) = (71, 60)$ on the town map. If we want, we can plot this data point on
# the town map using Matplotlib:
plt.plot(train_data[0, 0], train_data[0, 1], 'sb')
plt.xlabel('x coordinate')
plt.ylabel('y coordinate')
# But what if we want to visualize the whole training set at once? Let's write a function for
# that. The function should take as input a list of all the data points that are blue squares
# (`all_blue`) and a list of the data points that are red triangles (`all_red`):
def plot_data(all_blue, all_red):
    plt.figure(figsize=(10, 6))
    plt.scatter(all_blue[:, 0], all_blue[:, 1], c='b', marker='s', s=180)
    plt.scatter(all_red[:, 0], all_red[:, 1], c='r', marker='^', s=180)
    plt.xlabel('x coordinate (feature 1)')
    plt.ylabel('y coordinate (feature 2)')
# Let's try it on our dataset! First we have to split all the data points into red and blue sets. We
# can quickly select all the elements of the `labels` array created earlier that are equal to 0,
# using the following command (where `ravel` flattens the array):
labels.ravel() == 0
# All the blue data points are then all the rows of the `train_data` array created earlier,
# whose corresponding label is 0:
blue = train_data[labels.ravel() == 0]
# The same can be done for all the red data points:
red = train_data[labels.ravel() == 1]
# Finally, let's plot all the data points:
plot_data(blue, red)
# ### Training the classifier
#
# Now it's time to train the classifier.
#
# Like all other machine learning functions, the $k$-NN classifier is part of OpenCV 3.1's `ml` module. You can create a new classifier using the following command:
knn = cv2.ml.KNearest_create()
# We then pass our training data to the train method:
knn.train(train_data, cv2.ml.ROW_SAMPLE, labels)
# Here, we have to tell knn that our data is an $N \times 2$ array (that is, every row is a data point).
# Upon success, the function returns True.
# ### Predicting the label of a new data point
#
# The other really helpful method that `knn` provides is called `findNearest`. It can be used to
# predict the label of a new data point based on its nearest neighbors.
#
# Thanks to our `generate_data` function, it is actually really easy to generate a new data
# point! We can think of a new data point as a dataset of size 1:
newcomer, _ = generate_data(1)
newcomer
# Our function also returns a random label, but we are not interested in that. Instead, we
# want to predict it using our trained classifier! We can tell Python to ignore an output value
# with an underscore (`_`).
#
# Let's have a look at our town map again. We will plot the training set as we did earlier, but
# also add the new data point as a green circle (since we don't know yet whether it is
# supposed to be a blue square or a red triangle):
plot_data(blue, red)
plt.plot(newcomer[0, 0], newcomer[0, 1], 'go', markersize=14);
# If you had to guess based on its neighbors, what label would you assign the new data point: blue
# or red?
#
# Well, it depends, doesn't it? If we look at the neighbor closest to it (the one located roughly at
# $(x, y) = (85, 75)$), we would probably assign the new data point to be a red triangle as well.
# This is exactly what our classifier would
# predict for $k=1$:
ret, results, neighbor, dist = knn.findNearest(newcomer, 1)
print("Predicted label:\t", results)
print("Neighbor's label:\t", neighbor)
print("Distance to neighbor:\t", dist)
# Here, knn reports that the nearest neighbor is 250 arbitrary units away, that the neighbor
# has label 1 (which we said corresponds to red triangles), and that therefore the new data
# point should also have label 1. The same would be true if we looked at the $k=2$ nearest
# neighbors, and the $k=3$ nearest neighbors.
#
# But we want to be careful not to pick arbitrary
# even numbers for $k$: an even $k$ can produce a tie in the majority vote. Refer to page 64 for the answer.
# Finally, what would happen if we dramatically widened our search window and classified
# the new data point based on its $k=7$ nearest neighbors (circled with a solid line in the figure
# mentioned earlier)?
#
# Let's find out by calling the `findNearest` method with $k=7$ neighbors:
ret, results, neighbor, dist = knn.findNearest(newcomer, 7)
print("Predicted label:\t", results)
print("Neighbor's label:\t", neighbor)
print("Distance to neighbor:\t", dist)
# Suddenly, the predicted label is 0 (blue square). The reason is that we now have four
# neighbors within the solid circle that are blue squares (label 0), and only three that are red
# triangles (label 1). So the majority vote would suggest making the newcomer a blue square
# as well.
#
# For $k=6$, there is a tie:
ret, results, neighbors, dist = knn.findNearest(newcomer, 6)
print("Predicted label:\t", results)
print("Neighbors' labels:\t", neighbors)
print("Distance to neighbors:\t", dist)
# Alternatively, predictions can be made with the `predict` method. But first, we need to set `k`:
knn.setDefaultK(7)
knn.predict(newcomer)
knn.setDefaultK(6)
knn.predict(newcomer)
# As you can see, the outcome of $k$-NN changes with the number $k$. However, often we do not
# know beforehand which number $k$ is the most suitable. A naive solution to this problem is
# just to try a bunch of values for $k$, and see which one performs best. We will learn more
# sophisticated solutions in later chapters of this book.
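# A minimal pure-NumPy sketch of that naive search over $k$ (a hypothetical illustration,
# independent of OpenCV: the `knn_predict` helper, the candidate values of $k$, and the
# hold-out split are all assumptions, not part of OpenCV's API):

```python
import numpy as np

def knn_predict(train, labels, point, k):
    """Classify `point` by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train - point, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

# Try a bunch of k values and score each one on a small hold-out set
rng = np.random.default_rng(42)
train = rng.integers(0, 100, size=(11, 2)).astype(np.float64)
train_labels = rng.integers(0, 2, size=11)
valid = rng.integers(0, 100, size=(5, 2)).astype(np.float64)
valid_labels = rng.integers(0, 2, size=5)

for k in (1, 3, 5, 7):
    acc = np.mean([knn_predict(train, train_labels, p, k) == t
                   for p, t in zip(valid, valid_labels)])
    print(k, acc)
```

# The k with the highest hold-out accuracy would then be the one passed to `findNearest`.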
# <!--NAVIGATION-->
# < [Measuring-Model-Performance-with-Scoring-Functions](03.01-Measuring-Model-Performance-with-Scoring-Functions.ipynb) | [Contents](../README.md) | [Using Regression Models to Predict Continuous Outcomes](03.03-Using-Regression-Models-to-Predict-Continuous-Outcomes.ipynb) >
| notebooks/03.02-Understanding-the-k-NN-Algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Introduction
#
# Text classification algorithms are at the heart of a variety of software systems that process text data at scale. Email software uses text classification to determine whether incoming mail is sent to the inbox or filtered into the spam folder.
#
# Discussion forums use text classification to determine whether comments should be flagged as inappropriate.
#
# These are two examples of topic classification, categorizing a text document into one of a predefined set of topics. In many topic classification problems, this categorization is based primarily on keywords in the text.
#
# 
# This project involves preliminary text processing, feature extraction, and training classifiers to distinguish spam from non-spam emails.
#
# ### Data
# The raw data we used is from the Enron Corpus, which consists of 5172 training emails and 5857 testing emails in .txt format. Out of the 5172 training emails, there are 1500 spam emails and 3672 ham emails. We are going to train the classification model with the training emails and use it to classify the testing set. Download data.zip in this repo for the email files.
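# Before building the full pipeline, here is a minimal, self-contained sketch of the
# keyword-based spam/ham idea: a tiny Naive Bayes classifier written from scratch. The toy
# emails, the `train_nb`/`classify_nb` helpers, and the Laplace smoothing are illustrative
# assumptions only; the actual project uses scikit-learn's vectorizers and classifiers.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label) pairs; counts words per class."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def classify_nb(text, word_counts, class_counts):
    """Pick the class with the highest log-probability of generating `text`."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the probability
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("win free money now", "spam"),
        ("free prize claim now", "spam"),
        ("meeting agenda for monday", "ham"),
        ("lunch with the project team", "ham")]
wc, cc = train_nb(docs)
print(classify_nb("claim your free prize", wc, cc))  # -> spam
```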
import numpy as np
import pandas as pd
import time
import collections
import re
import random
import scipy.io
import glob
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer,TfidfVectorizer
from sklearn import preprocessing
from sklearn.svm import LinearSVC, SVC
from sklearn.metrics import accuracy_score, confusion_matrix,precision_score,f1_score,recall_score
from sklearn import metrics #Additional sklearn functions
from sklearn.model_selection import cross_val_score,GridSearchCV #Performing grid search
from sklearn.feature_selection import SelectKBest
from sklearn.naive_bayes import BernoulliNB,GaussianNB
from nltk import PorterStemmer # Text Processing
import pickle
pd.set_option('display.max_columns', None)
### Load the dictionary containing the dataset
pickle_in = open("Data/Enron.pkl",'rb')
data_dict = pickle.load(pickle_in)
# +
# dict to dataframe
df = pd.DataFrame.from_dict(data_dict, orient='index')
df.replace('NaN', np.nan, inplace = True)
df.info()
# -
df.head()
df.tail()
len(df[df['poi']])
# There are `146` rows/observations and `21` variables/columns in our dataset: 6 email features, 14 financial features, and 1 POI label.
df.plot.scatter(x = 'salary', y = 'bonus')
df['salary'].idxmax()
df.drop('TOTAL',inplace=True) # Total Row is Deleted
df.plot.scatter(x = 'salary', y = 'bonus')
df['fraction_from_poi'] = df['from_poi_to_this_person'] / df['to_messages']
df['fraction_to_poi'] = df['from_this_person_to_poi'] / df['from_messages']
# + jupyter={"outputs_hidden": true}
ax = df[df['poi']==False].plot.scatter(x='fraction_from_poi',y='fraction_to_poi',color='blue', label='non-poi')
df[df['poi']==True].plot.scatter(x='fraction_from_poi',y='fraction_to_poi',color='red', label='poi',ax=ax)
# -
features_list = ['poi', 'salary', 'bonus', 'long_term_incentive', 'deferred_income', 'deferral_payments',
'loan_advances', 'other', 'expenses', 'director_fees', 'total_payments',
'exercised_stock_options', 'restricted_stock', 'restricted_stock_deferred',
'total_stock_value', 'to_messages', 'from_messages', 'from_this_person_to_poi',
'from_poi_to_this_person', 'shared_receipt_with_poi', 'fraction_from_poi', 'fraction_to_poi']
filled_df = df.fillna(value='NaN')
data_dict = filled_df.to_dict(orient='index')
my_dataset = data_dict
# + jupyter={"outputs_hidden": true}
my_dataset.keys()
# -
from feature_format import featureFormat, targetFeatureSplit
from tester import dump_classifier_and_data
# + jupyter={"outputs_hidden": true}
data = featureFormat(my_dataset, features_list, sort_keys = True)
data
# -
y, X = targetFeatureSplit(data)
X = np.array(X)
y = np.array(y)
# +
### Cross-validation
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit, cross_val_score
from sklearn.preprocessing import StandardScaler
sss = StratifiedShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
SCALER = [None, StandardScaler()]
SELECTOR__K = [10, 13, 15, 18, 'all']
REDUCER__N_COMPONENTS = [2, 4, 6, 8, 10]
# -
def evaluate_model(grid, X, y, cv):
nested_score = cross_val_score(grid, X=X, y=y, cv=cv, n_jobs=-1)
print("Nested f1 score: {}".format(nested_score.mean()))
grid.fit(X, y)
print("Best parameters: {}".format(grid.best_params_))
cv_accuracy = []
cv_precision = []
cv_recall = []
cv_f1 = []
for train_index, test_index in cv.split(X, y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
grid.best_estimator_.fit(X_train, y_train)
pred = grid.best_estimator_.predict(X_test)
cv_accuracy.append(accuracy_score(y_test, pred))
cv_precision.append(precision_score(y_test, pred))
cv_recall.append(recall_score(y_test, pred))
cv_f1.append(f1_score(y_test, pred))
print ("Mean Accuracy: {}".format(np.mean(cv_accuracy)))
print ("Mean Precision: {}".format(np.mean(cv_precision)))
print ("Mean Recall: {}".format(np.mean(cv_recall)))
print ("Mean f1: {}".format(np.mean(cv_f1)))
# +
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
pipe = Pipeline([('scaler',StandardScaler()),
('selector', SelectKBest()),
('reducer',PCA(random_state=42)),
('classifier',GaussianNB())
])
# +
param_grid = {"scaler":SCALER,
"selector__k":SELECTOR__K ,
'reducer__n_components': REDUCER__N_COMPONENTS
}
# + jupyter={"outputs_hidden": true}
gnb_grid = GridSearchCV(pipe,param_grid,scoring='f1',cv=sss)
gnb_grid
# + jupyter={"outputs_hidden": true}
evaluate_model(gnb_grid,X,y,sss)
# -
kbest = gnb_grid.best_estimator_.named_steps['selector']
features_array = np.array(features_list)
features_array = np.delete(features_array, 0)
indices = np.argsort(kbest.scores_)[::-1]
k_features = kbest.get_support().sum()
# +
features = []
for i in range(k_features):
features.append(features_array[indices[i]])
features = features[::-1]
scores = kbest.scores_[indices[range(k_features)]][::-1]
scores
# -
import matplotlib.pyplot as plt
plt.figure(figsize=(30,16))
plt.barh(range(k_features), scores)
plt.yticks(np.arange(0.4, k_features), features)
plt.title('SelectKBest Feature Importances')
plt.show()
| Machine Learning Interview Task/Text classification For Enron Email Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pickle
import numpy as np
import tensorflow as tf
# Load pickled data
with open('small_train_traffic.p', mode='rb') as f:
data = pickle.load(f)
# -
X_train, y_train = data['features'], data['labels']
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
model = Sequential()
model.add(Flatten(input_shape=(32, 32, 3)))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(5))
model.add(Activation('softmax'))
# +
X_normalized = np.array(X_train / 255.0 - 0.5 )
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_one_hot = label_binarizer.fit_transform(y_train)
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, epochs=3, validation_split=0.2)
| Traffic Sign Classifier/KERAS_DNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Source: Rivals
#hide
import core_constants as cc
import functions as fx
import json
import pandas as pd
import sqlite3 as sql
# ## Set Notebook Settings
# +
conference = 'sunbelt'
years = cc.get_defYears()
headers= cc.get_header()
schoolsList = cc.get_schoolsList()
teamDirectory = cc.get_htmlDir('rivals', conference, 'teams')
playerDirectory = cc.get_htmlDir('rivals', conference, 'recruits')
#testDirectory = '..//tests//'
# -
# ## Get & Save the Teams & Players Page HTML
# #### Source: https://maryland.rivals.com/commitments/football/2012
# > This page contains metadata of each player along with the Rivals ranking and stars. Unlike 247Sports, we process the fetch and save of both pages directly from a single function
fx.get_Rivals(conference, schoolsList, years, headers, sleepyTime=6)
# ## Process Local Rivals HTML Files
#
# > All of this processing is done locally, using the files saved in the previous few steps. This creates an exhaustive store of all the fields grabbed from the scrapes.
cc.save_records('scrapedData', 'rivals_' + conference, fx.process_Rivals(playerDirectory, conference, schoolsList))
# ## Save to Database
fx.toDB_Rivals()
# +
conferences = cc.get_availableConferences()
#conferences = ['sunbelt']
for conf in conferences:
print ("working on - " + conf)
conference = conf
years = cc.get_defYears()
headers= cc.get_header()
schoolsList = cc.get_schoolsList()
teamDirectory = cc.get_htmlDir('rivals', conference, 'teams')
playerDirectory = cc.get_htmlDir('rivals', conference, 'recruits')
if (conf == 'acc' or conf == 'pactwelve'):
cc.save_records('scrapedData', 'rivals_' + conference, fx.process_Rivals(playerDirectory, conference, schoolsList, 'utf-8'))
else:
cc.save_records('scrapedData', 'rivals_' + conference, fx.process_Rivals(playerDirectory, conference, schoolsList, 'windows-1252'))
# -
# bigten, big twelve, sec, american, independents, cusa, mac, mwc, sunbelt - windows-1252
# acc, pactwelve - utf-8
| j_notebooks/Rivals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Monte Carlo Simulation
#
# Working with Lennard-Jones potential today. $$U(r) = 4\epsilon \left [ \left( \frac{\sigma}{r} \right )^{12} - \left( \frac{\sigma}{r} \right )^6 \right ]$$
#
# Reduced equation: $$ U^*\left(r_{ij} \right) = 4 \left[\left(\frac{1}{r^*_{ij}}\right)^{12} -\left(\frac{1}{r^*_{ij}}\right)^{6} \right] $$
#
import math
def calculate_LJ(r_ij):
"""
The LJ interaction energy between two particles.
Computes the pairwise Lennard-Jones interaction energy based on the separation distance in reduced units.
Parameters:
```````````
r_ij : float
The separation distance in reduced units.
Returns:
````````
pairwise energy : float
The pairwise Lennard-Jones interaction energy in reduced units.
"""
inverse = 1/r_ij
pairwise_energy = 4 *(math.pow(inverse, 12) - math.pow(inverse, 6))
return pairwise_energy
assert calculate_LJ(1) == 0
assert calculate_LJ(math.pow(2, 1/6)) == -1
a = 1
if a is not None:
print('not none')
# +
def calculate_distance(coord1, coord2, box_length = None):
"""
Calculate the distance between two points. When box_length is set, we use the the minimum image convention to calculate.
Parameters:
```````````
coord1, coord2 : list
The atomic coordinates [x, y, z]
box_length : float, optional
The box length. The function assumes the box is a cube.
Returns:
````````
distance : float
The distance between the two atoms.
"""
distance = 0
for i in range(len(coord1)):
coord_dist = coord1[i] - coord2[i]
if box_length:
if abs(coord_dist) > box_length / 2:
coord_dist = coord_dist - (box_length * round(coord_dist / box_length))
distance += coord_dist**2
distance = math.sqrt(distance)
return distance
def calculate_tail_correction(num_particles, box_length, cutoff):
"""
Calculate the long range tail correction
"""
const1 = (8 * math.pi * num_particles ** 2) / (3 * box_length ** 3)
const2 = (1/3) * (1 / cutoff)**9 - (1 / cutoff) **3
return const1 * const2
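# The expression implemented in `calculate_tail_correction` above is the standard
# long-range tail correction for the truncated Lennard-Jones potential in reduced units
# (assuming a uniform particle density beyond the cutoff $r_c$, with box volume $L^3$):
#
# $$U^*_{tail} = \frac{8 \pi N^2}{3 L^3} \left[ \frac{1}{3} \left( \frac{1}{r_c} \right)^{9} - \left( \frac{1}{r_c} \right)^{3} \right]$$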
# +
point1 = [0,0,0]
point2 = [0,0,8]
dist1 = calculate_distance(point1, point2, box_length = 10)
assert dist1 == 2
point3 = [0,0,0]
point4 = [0,1,1]
dist2 = calculate_distance(point3, point4)
assert dist2 == math.sqrt(2)
# -
coordinates = [[0, 0, 0], [0, math.pow(2, 1/6), 0], [0, 2*math.pow(2, 1/6), 0]]
def calculate_total_energy(coords, cutoff, box_length=None):
"""
Calculates the total interaction energy existing among a set of coordinates.
Parameters:
```````````
coords : list
Nested list of coordinates [x,y,z]
cutoff : float
The cutoff distance for the system
Returns:
````````
total_energy : float
The total interaction energy calculated from LJ potential.
"""
total_energy = 0
num_atoms = len(coords)
for i in range(num_atoms):
for j in range(i+1, num_atoms):
dist = calculate_distance(coords[i], coords[j], box_length=box_length)
if dist < cutoff:
energy = calculate_LJ(dist)
total_energy += energy
return total_energy
calculate_total_energy(coordinates, 3, 10)
len(coordinates)
# ## Calculating energy from NIST data
import os
def read_xyz(filepath):
"""
Reads coordinates from an xyz file.
Parameters
----------
filepath : str
The path to the xyz file to be processed.
Returns
-------
atomic_coordinates : list
A two dimensional list containing atomic coordinates
"""
with open(filepath) as f:
box_length = float(f.readline().split()[0])
num_atoms = float(f.readline())
coordinates = f.readlines()
atomic_coordinates = []
for atom in coordinates:
split_atoms = atom.split()
float_coords = []
# We split this way to get rid of the atom label.
for coord in split_atoms[1:]:
float_coords.append(float(coord))
atomic_coordinates.append(float_coords)
return atomic_coordinates, box_length
filename = os.path.join('lj_sample_configurations', 'lj_sample_config_periodic1.txt')
atomic_coordinates, box_length = read_xyz(filename)
calculate_total_energy(atomic_coordinates,3,10)
# # Flow of Calculations
def accept_or_reject(delta_e, beta):
"""
Accept or reject based on change in energy and temperature.
"""
if delta_e <= 0:
accept = True
else:
random_number = random.random()
p_acc = math.exp(-beta*delta_e)
if random_number < p_acc:
accept = True
else:
accept =False
return accept
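# The rule coded above is the Metropolis acceptance criterion: a move that lowers the
# energy is always accepted, while an uphill move is accepted with probability
#
# $$p_{acc} = \min \left( 1, e^{-\beta \Delta E} \right)$$
#
# where $\beta = 1/T^*$ is the inverse reduced temperature.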
delta_energy = -1
beta = 1
assert accept_or_reject(delta_energy, beta) is True
delta_energy = 0
beta = 1
assert accept_or_reject(delta_energy, beta) is True
# +
import random
random.seed(0)
random.random()
# -
delta_energy = 1
beta = 1
p_acc = math.exp(-1)
print(p_acc)
random.seed(0)
delta_e = 1
beta = 1
assert accept_or_reject(delta_e,beta) is False
random.seed(1)
delta_e = 1
beta = 1
assert accept_or_reject(delta_e,beta) is True
#Unset random seed
random.seed()
def calculate_pair_energy(coordinates, i_particle, box_length, cutoff):
"""
Calculate the interaction energy of a particle with its environment(all other particles in the system)
Parameters
----------
coordinates: list
The coordinates for all the particles in the system.
i_particles: int
The particle index for which to calculate the energy.
box_length: float
The length of the simulation box.
cutoff: float
The simulation cutoff. Beyond this distance, interactions are not calculated.
Returns
-------
e_total: float
The pairwise interaction energy of the i-th particle with all other particles in the system.
"""
e_total = 0
i_position = coordinates[i_particle]
num_atoms = len(coordinates)
for j_particle in range(num_atoms):
if i_particle != j_particle:
j_position = coordinates[j_particle]
dist = calculate_distance(i_position, j_position, box_length)
if dist < cutoff:
energy = calculate_LJ(dist)
e_total += energy
return e_total
# +
coordinates = [[0,0,0],[0,0,2**(1/6)],[0,0,2*(2**(1/6))]]
assert calculate_pair_energy(coordinates,1,10,3) == -2
assert calculate_pair_energy(coordinates,0,10,3) == calculate_pair_energy(coordinates,2,10,3)
# +
# These quantities are not yet defined at this point in the notebook, so we set them
# here for a quick check (the box length and cutoff values are arbitrary choices).
num_particles = len(coordinates)
box_length = 10
cutoff = 3
print(calculate_tail_correction(num_particles, box_length, cutoff))
print(calculate_total_energy(coordinates, cutoff, box_length))
# -
# # Simulation Loop
# +
import os
# Set simulation Parameters
reduced_temperature = 0.9
num_steps = 5000
max_displacement = 0.1
cutoff = 3
# Reporting information
freq = 1000
steps =[]
energies = []
# Calculated quantities
beta = 1/reduced_temperature
# Read initial coordinates
file_path = os.path.join('lj_sample_configurations', 'lj_sample_config_periodic1.txt')
coordinates, box_length = read_xyz(file_path)
num_particles = len(coordinates)
total_energy = calculate_total_energy(coordinates,cutoff,box_length)
print(total_energy)
total_energy += calculate_tail_correction(num_particles,box_length,cutoff)
for step in range(num_steps):
# 1. Randomly pick one of num_particles particles
random_particle = random.randrange(num_particles)
# 2. Calculate the interaction energy of the selected particle with the system. Store this value.
current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
# 3. Generate a random x,y,z displacement range (-max_displacement, max_displacement)-uniform
x_rand = random.uniform(-max_displacement, max_displacement)
y_rand = random.uniform(-max_displacement, max_displacement)
z_rand = random.uniform(-max_displacement, max_displacement)
# 4. Modify the coordinate of selected particle by generated displacements.
coordinates[random_particle][0] += x_rand
coordinates[random_particle][1] += y_rand
coordinates[random_particle][2] += z_rand
# 5. Calculate the new interaction energy of moved particle, store this value.
proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length,cutoff)
# 6. Calculate energy change and decide if we accept the move.
delta_energy = proposed_energy - current_energy
accept = accept_or_reject(delta_energy, beta)
# 7. If accept, keep movement. If not revert to old position.
if accept:
total_energy += delta_energy
else:
# Move is not accepted, roll back coordinates
coordinates[random_particle][0] -= x_rand
coordinates[random_particle][1] -= y_rand
coordinates[random_particle][2] -= z_rand
# 8. Print the energy and store the coordinates at certain intervals
if step % freq == 0:
print(step, total_energy/num_particles)
steps.append(step)
energies.append(total_energy/num_particles)
# -
energies
# +
import matplotlib.pyplot as plt
# %matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(steps, energies)
# -
| day3_hw.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1> EDA & Data Visualization </h1>
# <hr>
# <h3>In this section we will visualize the data and make useful reports and dashboards in order to get more familiar and have a more clear vision of this data.</h3>
# <br><br>
# For this part, we import our data and change the index to datetime so we can work with the time series data. We will start by showing the top products that we sold across the globe. There are two indicators that can show us how much benefit came from each product. In the first plot of the subplot below, we can see the top 20 products that were bought in the largest quantities by the customers, and in the second plot we can see which products have brought us the most monetary benefit.
#importing necessary libraries and the cleaned dataset
import pandas as pd, numpy as np, matplotlib.pyplot as plt, seaborn as sns
# %matplotlib inline
CleanDataset = r'../Cleaned-Dataset/OnlineRetail_Cleaned.csv'
Data_Cleaned = pd.read_csv(CleanDataset, index_col = 'InvoiceDate')
Data_Cleaned.index = pd.to_datetime(Data_Cleaned.index, format = '%Y-%m-%d %H:%M') # the deprecated `box` argument was removed in recent pandas
#top 20 products by quantity and finalprice
sns.set_style('whitegrid')
Top20Quan = Data_Cleaned.groupby('Description')['Quantity'].agg('sum').sort_values(ascending=False)[0:20]
Top20Price = Data_Cleaned.groupby('Description')['FinalPrice'].agg('sum').sort_values(ascending=False)[0:20]
#creating the subplot
fig,axs = plt.subplots(nrows=2, ncols=1, figsize = (12,12))
plt.subplots_adjust(hspace = 0.3)
fig.suptitle('Best Selling Products by Amount and Value', fontsize=15, x = 0.4, y = 0.98)
sns.barplot(x=Top20Quan.values, y=Top20Quan.index, ax= axs[0]).set(xlabel='Total amount of sales')
axs[0].set_title('By Amount', size=12, fontweight = 'bold')
sns.barplot(x=Top20Price.values, y=Top20Price.index, ax= axs[1]).set(xlabel='Total value of sales')
axs[1].set_title('By Value', size=12, fontweight = 'bold')
plt.show()
# The next statistic that we are interested in is which products were most often returned by our customers, and also which customers, and from which countries, had the most returned items in their transactions.
#finding the most returned items and the customers with the corresponding country
ReturnedItems = Data_Cleaned[Data_Cleaned.Quantity<0].groupby('Description')['Quantity'].sum()
ReturnedItems = ReturnedItems.abs().sort_values(ascending=False)[0:10]
ReturnCust = Data_Cleaned[Data_Cleaned.Quantity<0].groupby(['CustomerID','Country'])['Quantity'].sum()
ReturnCust = ReturnCust.abs().sort_values(ascending=False)[0:10]
#creating the subplot
fig, [ax1, ax2] = plt.subplots(nrows=2, ncols=1, figsize=(12,10))
ReturnedItems.sort_values().plot(kind='barh', ax=ax1).set_title('Most Returned Items', fontsize=15)
ReturnCust.sort_values().plot(kind='barh', ax=ax2).set_title('Customers with Most Returns', fontsize=15)
ax1.set(xlabel='Quantity')
ax2.set(xlabel='Quantity')
plt.subplots_adjust(hspace=0.4)
plt.show()
# In the jointplot below, we can see the pairwise comparison between the 'UnitPrice' and the 'Quantity' of the purchased products. It makes sense that as the price of a product increases, the amount of sales of that product would get smaller and customers are more inclined to buy products in larger quantities if they have lower prices.
#plotting the qunatity vs unitprice
Corr = sns.jointplot(x="Quantity", y="UnitPrice", data = Data_Cleaned[Data_Cleaned.FinalPrice>0], height = 7)
Corr.fig.suptitle("UnitPrice and Quantity Comparison", fontsize = 15, y = 1.1)
plt.show()
# In the next chart we are going to see the trend of the sales during the year in a weekly manner. We can get the weekly sales by resampling our time series data to weeks and getting the sum of the values in each week. In the first chart we can see the weekly sales and in the second one the weekly returns by customers. After a sudden decline in January, we can see an almost upward trend in the sales. As for the returns, except for the second week of October, they are almost invariant but with a slight increase.
#resampling to get the weekly sales and returns
WeeklySale = Data_Cleaned[Data_Cleaned['Quantity']>0].Quantity.resample('W').sum()
WeeklyRet = Data_Cleaned[Data_Cleaned['Quantity']<0].Quantity.resample('W').sum().abs()
#creating the subplot
fig,[ax1, ax2] = plt.subplots(nrows=1,ncols=2, figsize = (15,5))
WeeklySale.plot(ax=ax1).set(xlabel="Month", ylabel="Quantity")
ax1.set_title("Weekly Sales Quantity", fontsize = 15)
WeeklyRet.plot(ax=ax2).set(xlabel="Month", ylabel="Quantity")
ax2.set_title("Weekly Returns Quantity", fontsize = 15)
plt.show()
# In the next chart, we are going to see how many items were sold and returned across the foreign countries. Since the United Kingdom has the majority of sales and would not give us any useful information, we will not show it in our chart so that it has a better and more informative look. It looks like our products were mostly sold in the Netherlands and mostly returned in Ireland (EIRE).
#grouping data by the countries(except UK)
ByCountrySale = Data_Cleaned[(Data_Cleaned.Country != 'UNITED KINGDOM') & (Data_Cleaned.Quantity > 0)].groupby('Country')['Quantity'].sum()
ByCountryRet = Data_Cleaned[(Data_Cleaned.Country != 'UNITED KINGDOM') & (Data_Cleaned.Quantity < 0)].groupby('Country')['Quantity'].sum().abs()
#creating the subplot
fig, [ax1,ax2] = plt.subplots(nrows=2,ncols=1,figsize=(10,14))
ByCountrySale.plot(kind='bar', ax=ax1).set(ylabel = 'Quantity',xlabel='')
ax1.set_title('Sales', size=12, fontweight = 'bold')
ByCountryRet.plot(kind='bar', ax=ax2).set(ylabel = 'Quantity',xlabel='')
ax2.set_title('Returns', size=12, fontweight = 'bold')
plt.suptitle('Sales in Foreign Countries', fontsize = 15)
plt.subplots_adjust(hspace = 0.6)
plt.show()
# Since we recorded the day of the week on which the items were sold, we can use it to see the sales value for each day of the week. As is obvious, Thursday has the highest and Sunday the lowest value.
#creating the pie chart
Data_Cleaned.groupby('Day of week')['FinalPrice'].sum().plot(kind = 'pie', autopct = '%.2f%%', figsize=(7,7)).set(ylabel='')
plt.title('Percentages of Sales Value by Day of Week', fontsize = 15)
plt.show()
# We can pick out our best customers based on the value that they brought to the company and also show which countries they come from.
#filtering customers by the total finalprice
Top10Customers = Data_Cleaned.groupby(['CustomerID','Country'])['FinalPrice'].sum().sort_values(ascending=False)[0:10]
#creating the barplot
plt.figure(figsize=(8,5))
sns.barplot(x=Top10Customers.values, y=Top10Customers.index).set(xlabel='Total Value',ylabel='CustomerID')
plt.suptitle('Top10 Customers and Country of Origin by Sales Value', fontsize = 15)
plt.show()
# Another statistic that we could use for the future planning, is how many of our customers are repeat customers, meaning that they bought products from us more than once. In the plot below we can see that almost 70% of the customers are repeat customers. In the other plot we can also see which customers from which countries had the most repeats.
#grouping customers by the number of their visits and separating them
MostRepeat = Data_Cleaned.groupby(['CustomerID','Country'])['InvoiceNo'].nunique().sort_values(ascending=False)
rep = MostRepeat[MostRepeat != 1].values
nrep = MostRepeat[MostRepeat == 1].values
ser = pd.Series([len(rep)/(len(rep)+len(nrep)),len(nrep)/(len(rep)+len(nrep))], index=['Repeat Customers','One-time Customers'])
#creating the subplot
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, figsize= (15,5), gridspec_kw= {'width_ratios':[3,1]})
plt.subplots_adjust(wspace=0.2)
sns.barplot(x=MostRepeat[0:10].values, y=MostRepeat[0:10].index, ax=ax1).set(xlabel='Number of Transactions(Repeats)',ylabel='CustomerID')
ser.plot(kind='pie', autopct='%.2f%%', ax=ax2).set(ylabel='')
plt.suptitle('Top Repeat Customers', fontsize=15)
plt.show()
# In the plots below, we can see the distribution plots of the 'Quantity' and 'UnitPrice' attributes.
#creating distribution plots
fig , [ax1,ax2] = plt.subplots(nrows=1,ncols=2,figsize=(12,4))
with sns.axes_style('dark'):
sns.distplot(Data_Cleaned['Quantity'], ax=ax1)
sns.distplot(Data_Cleaned['UnitPrice'], ax=ax2)
fig.suptitle('UnitPrice and Quantity Distribution', fontsize = 15)
plt.show()
# In the last plot, we will use three features to show how the sales are distributed among different months and days of the week. To show that, we will use seaborn's heatmap. The x-axis shows the day and the y-axis shows the month in which the items were bought. The color scale shows the total value of sales.
HM_Data = Data_Cleaned.pivot_table(index = 'InvoiceMonth',columns = 'Day of week', values = 'FinalPrice', aggfunc='sum')
plt.figure(figsize = (10,6))
sns.heatmap(HM_Data, cmap = 'vlag').set(xlabel='', ylabel='')
plt.title('Sales Value per Month and Day of Week', fontsize = 15)
plt.show()
| Data-Visualization/1-Visualization-and-Reports.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
'''
3.2.3 create_regression_prediction_c2.ipynb
Predicts features for Cluster 2
<NAME>
'''
# +
# uses PyCaret to find the best model and tune it
# check version
from library.common import Core
core = Core()
from pycaret.utils import version
from pycaret.regression import *
version()
# + pycharm={"name": "#%%\n"}
# Set up pyCaret Regression
# load dataset
regions_list = core.list_of_regions
r = 2
cluster = regions_list[r]
print(cluster)
data = core.get_cluster_regression_datas(cluster = cluster, first = core.start_year, last = core.stop_year)
reg0 = setup(data, target = 'co2', session_id=123, log_experiment=True,
normalize = core.regression_normalize, normalize_method = core.normalize_method,
remove_outliers = core.remove_outliers, outliers_threshold = core.outliers_threshold,
verbose = False, silent = core.silent_mode,
experiment_name= f'carbon emission Cluster {r}')
best_model = compare_models(fold= core.regression_cv, sort = core.error_optimise)
# + pycharm={"name": "#%%\n"}
selected_model = 'et'
model = create_model(selected_model)
tuned_model = tune_model(model, n_iter=100, optimize = core.error_optimise)
# + pycharm={"name": "#%%\n"}
# Rejected candidate as the result was not optimal
# selected_model = 'lasso'
# model = create_model(selected_model)
# tuned_model = tune_model(model, n_iter=50, optimize = 'MSE')
# + pycharm={"name": "#%%\n"}
# Rejected candidate as the result was not optimal
# selected_model = 'llar'
# model = create_model(selected_model)
# tuned_model = tune_model(model, n_iter=50, optimize = 'MSE')
# + pycharm={"name": "#%%\n"}
tuned_model
# + pycharm={"name": "#%%\n"}
plot_model(tuned_model)
# + pycharm={"name": "#%%\n"}
plot_model(tuned_model, plot = 'error')
# + pycharm={"name": "#%%\n"}
plot_model(tuned_model, plot = 'feature')
# + pycharm={"name": "#%%\n"}
evaluate_model(tuned_model)
# + pycharm={"name": "#%%\n"}
try:
interpret_model(tuned_model)
except:
print('No plot for this model')
# + pycharm={"name": "#%%\n"}
try:
interpret_model(tuned_model, plot = 'correlation')
except:
print('No plot for this model')
# + pycharm={"name": "#%%\n"}
try:
interpret_model(tuned_model, plot = 'reason', observation = 12)
except:
print('No plot for this model')
# -
#
# + pycharm={"name": "#%%\n"}
final_model = finalize_model(tuned_model)
data = core.get_forecasts()
data = data.drop(columns = ['co2'])
data_unseen = data.loc[data.cluster.eq(cluster),]
# generate predictions on unseen data
predictions = predict_model(final_model, data = data_unseen)
predictions = predictions.rename(columns = {'Label': 'co2'})
predictions[['year', 'co2']]
| notebooks/3.2.3 create_regression_prediction_c2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from oda_api.api import DispatcherAPI
from oda_api.plot_tools import OdaImage,OdaLightCurve
from oda_api.data_products import BinaryData
import os
from astropy.io import fits
import numpy as np
from numpy import sqrt
import matplotlib.pyplot as plt
# %matplotlib inline
# + tags=["parameters"]
source_name='NGC 2110'
ra=88.047400
dec=-7.456247
radius=10.
Tstart='2003-03-15T00:00:00'
Tstop='2018-03-15T00:00:00'
E1_keV=30.
E2_keV=100.
host='www.astro.unige.ch/cdci/astrooda/dispatch-data'
rebin=10 # minimal significance in energy bin, for spectral plotting
# -
try: input = raw_input
except NameError: pass
token=input() # token for restricted access server
cookies=dict(_oauth2_proxy=token)
disp=DispatcherAPI(host=host)
import requests
url="https://www.astro.unige.ch/cdci/astrooda/dispatch-data/gw/timesystem/api/v1.0/scwlist/cons/"
def queryxtime():
params=Tstart+'/'+Tstop+'?&ra='+str(ra)+'&dec='+str(dec)+'&radius='+str(radius)+'&min_good_isgri=1000'
print(url+params)
return requests.get(url+params,cookies=cookies).json()
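The query string above is concatenated by hand; the standard library's `urllib.parse.urlencode` can build the same parameter block. A small sketch using the values from the parameters cell (it only constructs the URL and does not contact the server):

```python
from urllib.parse import urlencode

# same endpoint and parameters as in the cells above, built with urlencode
base = 'https://www.astro.unige.ch/cdci/astrooda/dispatch-data/gw/timesystem/api/v1.0/scwlist/cons/'
params = urlencode({'ra': 88.047400, 'dec': -7.456247,
                    'radius': 10.0, 'min_good_isgri': 1000})
query_url = base + '2003-03-15T00:00:00/2018-03-15T00:00:00?' + params
```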
#if token!='':
scwlist=queryxtime()
m=len(scwlist)
pointings_osa10=[]
pointings_osa11=[]
for i in range(m):
if scwlist[i][-2:]=='10':
if(int(scwlist[i][:4])<1626):
pointings_osa10.append(scwlist[i]+'.001')
else:
pointings_osa11.append(scwlist[i]+'.001')
#else:
# pointings=np.genfromtxt('scws_3C279_isgri_10deg.txt', dtype='str')
m_osa10=len(pointings_osa10)
m_osa11=len(pointings_osa11)
# +
scw_lists_osa10=[]
scw_lists_osa11=[]
count=0
scw_string=''
for i in range(m_osa10):
if count<50:
scw_string=scw_string+str(pointings_osa10[i])+','
count+=1
else:
scw_lists_osa10.append(scw_string[:-1])
count=0
scw_string=str(pointings_osa10[i])+','
scw_lists_osa10.append(scw_string[:-1])
print(len(scw_lists_osa10))
count=0
scw_string=''
for i in range(m_osa11):
if count<50:
scw_string=scw_string+str(pointings_osa11[i])+','
count+=1
else:
scw_lists_osa11.append(scw_string[:-1])
count=0
scw_string=str(pointings_osa11[i])+','
scw_lists_osa11.append(scw_string[:-1])
print(len(scw_lists_osa11))
# -
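The two loops above build comma-separated groups of science windows. Note that after a flush the element that triggered it already sits in the new string while `count` is reset to 0, so chunks after the first actually carry 51 entries. A compact equivalent (with a hypothetical helper name `chunk_scws`) that yields uniform chunks of at most the requested size:

```python
def chunk_scws(pointings, size=50):
    """Join pointings into comma-separated strings of at most `size` entries."""
    return [','.join(str(p) for p in pointings[i:i + size])
            for i in range(0, len(pointings), size)]

# demo with made-up pointing IDs
demo = ['%04d.001' % n for n in range(120)]
chunks = chunk_scws(demo)
```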
data=disp.get_product(instrument='isgri',
product='isgri_image',
scw_list=scw_lists_osa10[0],
E1_keV=E1_keV,
E2_keV=E2_keV,
osa_version='OSA10.2',
RA=ra,
DEC=dec,
detection_threshold=3.5,
product_type='Real')
# +
data.dispatcher_catalog_1.table
# +
FLAG=0
torm=[]
for ID,n in enumerate(data.dispatcher_catalog_1.table['src_names']):
if(n=='NEW_1'):
torm.append(ID)
if(n==source_name):
FLAG=1
data.dispatcher_catalog_1.table.remove_rows(torm)
nrows=len(data.dispatcher_catalog_1.table['src_names'])
print(nrows)
print(FLAG)
# +
if FLAG==0:
    data.dispatcher_catalog_1.table.add_row((0,source_name,0,ra,dec,0,2,0,0))
data.dispatcher_catalog_1.table
# -
api_cat=data.dispatcher_catalog_1.get_api_dictionary()
spectrum_results=[]
for i in range(len(scw_lists_osa10)):
print(i)
data=disp.get_product(instrument='isgri',
product='isgri_spectrum',
scw_list=scw_lists_osa10[i],
query_type='Real',
osa_version='OSA10.2',
RA=ra,
DEC=dec,
product_type='Real',
selected_catalog=api_cat)
spectrum_results.append(data)
# +
d=spectrum_results[0]
for ID,s in enumerate(d._p_list):
if (s.meta_data['src_name']==source_name):
if(s.meta_data['product']=='isgri_spectrum'):
ID_spec=ID
if(s.meta_data['product']=='isgri_arf'):
ID_arf=ID
if(s.meta_data['product']=='isgri_rmf'):
ID_rmf=ID
print(ID_spec, ID_arf, ID_rmf)
# +
d=spectrum_results[0]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
ch=spec['CHANNEL']
rate=spec['RATE']*0.
err=spec['STAT_ERR']*0.
syst=spec['SYS_ERR']*0.
rate.fill(0)
err.fill(0)
syst.fill(0)
qual=spec['QUALITY']
matrix=rmf['MATRIX']*0.
specresp=arf['SPECRESP']*0.
tot_expos=0.
corr_expos=np.zeros(len(rate))
print(len(rate))
for k in range(len(scw_lists_osa10)):
d=spectrum_results[k]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
expos=d._p_list[0].data_unit[1].header['EXPOSURE']
tot_expos=tot_expos+expos
print(k,expos)
for j in range(len(rate)):
if(spec['QUALITY'][j]==0):
rate[j]=rate[j]+spec['RATE'][j]/(spec['STAT_ERR'][j])**2
err[j]=err[j]+1./(spec['STAT_ERR'][j])**2
syst[j]=syst[j]+(spec['SYS_ERR'][j])**2*expos
corr_expos[j]=corr_expos[j]+expos
matrix=matrix+rmf['MATRIX']*expos
specresp=specresp+arf['SPECRESP']*expos
for i in range(len(rate)):
if err[i]>0.:
rate[i]=rate[i]/err[i]
err[i]=1./sqrt(err[i])
matrix=matrix/tot_expos
specresp=specresp/tot_expos
syst=sqrt(syst/(corr_expos+1.))
print('Total exposure:',tot_expos)
# -
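The accumulation loop above combines the per-science-window rates as an inverse-variance weighted mean: rate = Σ(rᵢ/σᵢ²) / Σ(1/σᵢ²) with combined error 1/√Σ(1/σᵢ²). A minimal numpy sketch with made-up rates and statistical errors:

```python
import numpy as np

# inverse-variance weighted combination, as in the loop above
rates = np.array([1.0, 2.0, 3.0])
stat_err = np.array([0.5, 1.0, 0.5])
w = 1.0 / stat_err**2                       # weights = 1/sigma^2
combined_rate = np.sum(rates * w) / np.sum(w)
combined_err = 1.0 / np.sqrt(np.sum(w))
```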
print(rate)
print(err)
d._p_list[ID_spec].data_unit[1].data['RATE']=rate
d._p_list[ID_spec].data_unit[1].data['STAT_ERR']=err
d._p_list[ID_rmf].data_unit[2].data['MATRIX']=matrix
d._p_list[ID_arf].data_unit[1].data['SPECRESP']=specresp
name=source_name.replace(" ", "")
specname=name+'_spectrum_osa10.fits'
arfname=name+'_arf_osa10.fits.gz'
rmfname=name+'_rmf_osa10.fits.gz'
data._p_list[ID_spec].write_fits_file(specname)
data._p_list[ID_arf].write_fits_file(arfname)
data._p_list[ID_rmf].write_fits_file(rmfname)
hdul = fits.open(specname, mode='update')
hdr=hdul[1].header
hdr.set('EXPOSURE', tot_expos)
hdul.close()
# !./spectrum_fit_osa10.sh $name $rebin
spectrum_results1=[]
for i in range(len(scw_lists_osa11)):
print(i)
data=disp.get_product(instrument='isgri',
product='isgri_spectrum',
scw_list=scw_lists_osa11[i],
query_type='Real',
osa_version='OSA11.0',
RA=ra,
DEC=dec,
product_type='Real',
selected_catalog=api_cat)
spectrum_results1.append(data)
# +
d=spectrum_results1[0]
for ID,s in enumerate(d._p_list):
if (s.meta_data['src_name']==source_name):
if(s.meta_data['product']=='isgri_spectrum'):
ID_spec=ID
if(s.meta_data['product']=='isgri_arf'):
ID_arf=ID
if(s.meta_data['product']=='isgri_rmf'):
ID_rmf=ID
print(ID_spec, ID_arf, ID_rmf)
# +
d=spectrum_results1[0]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
ch=spec['CHANNEL']
rate=spec['RATE']*0.
err=spec['STAT_ERR']*0.
syst=spec['SYS_ERR']*0.
rate.fill(0)
err.fill(0)
syst.fill(0)
qual=spec['QUALITY']
matrix=rmf['MATRIX']*0.
specresp=arf['SPECRESP']*0.
tot_expos=0.
corr_expos=np.zeros(len(rate))
print(len(rate))
for k in range(len(scw_lists_osa11)):
d=spectrum_results1[k]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
expos=d._p_list[0].data_unit[1].header['EXPOSURE']
tot_expos=tot_expos+expos
print(k,expos)
for j in range(len(rate)):
if(spec['QUALITY'][j]==0):
rate[j]=rate[j]+spec['RATE'][j]/(spec['STAT_ERR'][j])**2
err[j]=err[j]+1./(spec['STAT_ERR'][j])**2
syst[j]=syst[j]+(spec['SYS_ERR'][j])**2*expos
corr_expos[j]=corr_expos[j]+expos
matrix=matrix+rmf['MATRIX']*expos
specresp=specresp+arf['SPECRESP']*expos
for i in range(len(rate)):
if err[i]>0.:
rate[i]=rate[i]/err[i]
err[i]=1./sqrt(err[i])
matrix=matrix/tot_expos
specresp=specresp/tot_expos
syst=sqrt(syst/(corr_expos+1.))
print('Total exposure:',tot_expos)
# -
print(rate)
print(err)
d._p_list[ID_spec].data_unit[1].data['RATE']=rate
d._p_list[ID_spec].data_unit[1].data['STAT_ERR']=err
d._p_list[ID_rmf].data_unit[2].data['MATRIX']=matrix
d._p_list[ID_arf].data_unit[1].data['SPECRESP']=specresp
name=source_name.replace(" ", "")
specname=name+'_spectrum_osa11.fits'
arfname=name+'_arf_osa11.fits.gz'
rmfname=name+'_rmf_osa11.fits.gz'
data._p_list[ID_spec].write_fits_file(specname)
data._p_list[ID_arf].write_fits_file(arfname)
data._p_list[ID_rmf].write_fits_file(rmfname)
hdul = fits.open(specname, mode='update')
hdr=hdul[1].header
hdr.set('EXPOSURE', tot_expos)
hdul.close()
# !./spectrum_fit_osa11.sh $name $rebin
# +
plt.figure(figsize=(10,7))
spectrum=np.genfromtxt(name+'_spectrum_osa10.txt',skip_header=3)
en=spectrum[:,0]
en_err=spectrum[:,1]
fl=spectrum[:,2]
fl_err=spectrum[:,3]
mo=spectrum[:,4]
plt.errorbar(en,fl,xerr=en_err,yerr=fl_err,linestyle='none',linewidth=8,color='black',alpha=0.5,label='OSA 10.2 (2003-2015)')
plt.plot(en,mo,color='black',linewidth=4)
spectrum=np.genfromtxt(name+'_spectrum_osa11.txt',skip_header=3)
en=spectrum[:,0]
en_err=spectrum[:,1]
fl=spectrum[:,2]
fl_err=spectrum[:,3]
mo=spectrum[:,4]
plt.errorbar(en,fl,xerr=en_err,yerr=fl_err,linestyle='none',linewidth=8,color='blue',alpha=0.5,label='OSA 11.0 (2016-..)')
plt.plot(en,mo,color='blue',linewidth=4)
plt.tick_params(axis='both', which='major', labelsize=16)
plt.xscale('log')
plt.yscale('log')
plt.ylim(1.e-3,6.e-1)
plt.xlim(15,350)
plt.xlabel('$E$, keV',fontsize=16)
plt.ylabel('$E^2F_E$, keV/(cm$^2$s)',fontsize=16)
plt.legend(loc='lower left',fontsize=16)
plt.savefig(name+'_spectrum.pdf',format='pdf',dpi=100)
# + tags=["outputs"]
spectrum_pdf=name+'_spectrum.pdf'
| examples/spectrum_longterm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 02 - Multilayer perceptrons in gluon
import mxnet as mx
import numpy as np
from mxnet import gluon
from tqdm import tqdm
# ## Context
data_ctx = mx.cpu()
model_ctx = mx.cpu()
# ## MNIST Dataset
batch_size = 64
num_inputs = 784
num_outputs = 10
num_examples = 60000
def transform(data, label):
return data.astype(np.float32) / 255, label.astype(np.float32)
train_data = gluon.data.DataLoader(dataset=gluon.data.vision.MNIST(train=True, transform=transform),
batch_size=batch_size,
shuffle=True)
test_data = gluon.data.DataLoader(dataset=gluon.data.vision.MNIST(train=False, transform=transform),
batch_size=batch_size,
shuffle=False)
# ## Define MLP model with gluon.Block
class MLP(gluon.Block):
def __init__(self, **kwargs):
super(MLP, self).__init__(**kwargs)
with self.name_scope():
self.dense0 = gluon.nn.Dense(64)
self.dense1 = gluon.nn.Dense(64)
self.dense2 = gluon.nn.Dense(10)
def forward(self, x):
x = mx.nd.relu(self.dense0(x))
x = mx.nd.relu(self.dense1(x))
x = self.dense2(x)
return x
net = MLP()
net.collect_params().initialize(mx.init.Normal(sigma=.01),
ctx=model_ctx)
# ## Example of a single forward pass
data = mx.nd.ones(shape=[1, 784])
# +
class MLP(gluon.Block):
def __init__(self, **kwargs):
super(MLP, self).__init__(**kwargs)
with self.name_scope():
self.dense0 = gluon.nn.Dense(units=64, activation="relu")
self.dense1 = gluon.nn.Dense(units=64, activation="relu")
self.dense2 = gluon.nn.Dense(units=10)
def forward(self, x):
x = self.dense0(x)
print("-" * 70)
print("Hidden Representation 1: %s" % x)
x = self.dense1(x)
print("-" * 70)
print("Hidden Representation 2: %s" % x)
x = self.dense2(x)
print("-" * 70)
print("Network output: %s" % x)
print("-" * 70)
return x
net = MLP()
net.collect_params().initialize(mx.init.Normal(sigma=.01), ctx=model_ctx)
net(data.as_in_context(model_ctx))
# -
# ## Faster modeling with gluon.nn.Sequential
num_hidden = 64
# Defining a sequential model
net = gluon.nn.Sequential()
with net.name_scope():
net.add(gluon.nn.Dense(units=num_hidden,
activation="relu"))
net.add(gluon.nn.Dense(units=num_hidden,
activation="relu"))
net.add(gluon.nn.Dense(units=num_outputs))
# Parameter initialization
net.collect_params().initialize(mx.init.Normal(sigma=.1),
ctx=model_ctx)
# Softmax cross-entropy
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
# Optimizer
trainer = gluon.Trainer(params=net.collect_params(),
optimizer='sgd',
optimizer_params={'learning_rate': 0.01})
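`trainer.step(batch_size)` applies a plain SGD update, rescaling the accumulated gradient by the batch size before the weight update w ← w − lr·g/batch_size. A minimal numpy sketch of one such step (weights, gradient, and batch size are illustrative):

```python
import numpy as np

# one SGD step as applied by gluon.Trainer with optimizer='sgd'
lr = 0.01
batch_size = 2
w = np.array([1.0, -2.0])       # illustrative weights
grad = np.array([0.5, 0.5])     # illustrative summed gradient over the batch
w_new = w - lr * grad / batch_size
```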
# ## Evaluation
def evaluate_accuracy(data_iterator, net):
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(data_iterator):
data = data.as_in_context(model_ctx).reshape((-1, 784))
label = label.as_in_context(model_ctx)
output = net(data)
predictions = mx.nd.argmax(data=output,
axis=1)
# Updating accuracy metric
acc.update(preds=predictions,
labels=label)
return acc.get()[1]
# ## Training
epochs = 10
smoothing_constant = .01
for e in tqdm(range(epochs)):
cumulative_loss = 0
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx).reshape((-1, 784))
label = label.as_in_context(model_ctx)
with mx.autograd.record():
output = net(data)
loss = softmax_cross_entropy(output, label)
loss.backward()
trainer.step(data.shape[0])
cumulative_loss += mx.nd.sum(loss).asscalar()
test_accuracy = evaluate_accuracy(test_data, net)
train_accuracy = evaluate_accuracy(train_data, net)
print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
(e, cumulative_loss/num_examples, train_accuracy, test_accuracy))
train_accuracy
test_accuracy
| In Progress - Deep Learning - The Straight Dope/03 - Deep Neural Networks/02 - Multilayer perceptrons in gluon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# name: python3
# ---
from matplotlib import pyplot as plt
plt.style.use('classic')
from sklearn import linear_model
import pandas as pd
import numpy as np
data = pd.read_csv("HousePrices.csv")
data.shape
data
data.plot(kind='scatter', x = "area", y="price")
plt.show()
# Correlation coefficient
data.corr()
# change to df
area = pd.DataFrame(data["area"])
price = pd.DataFrame(data["price"])
price
#Build linear model
lm = linear_model.LinearRegression()
model = lm.fit(area, price)
model.coef_
model.intercept_
# Model Evaluation
model.score(area, price)
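For a `LinearRegression`, `score` returns the coefficient of determination R² = 1 − SS_res/SS_tot. A self-contained numpy sketch (with made-up area/price values, fitting the line via `np.polyfit`) verifying the formula:

```python
import numpy as np

# made-up data: nearly linear area -> price relation
area = np.array([500.0, 750.0, 1000.0, 1250.0])
price = np.array([100.0, 150.0, 210.0, 250.0])

slope, intercept = np.polyfit(area, price, 1)   # ordinary least squares
pred = slope * area + intercept
ss_res = np.sum((price - pred) ** 2)            # residual sum of squares
ss_tot = np.sum((price - price.mean()) ** 2)    # total sum of squares
r2 = 1 - ss_res / ss_tot
```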
# Predict the price for a new area value
area_new = 567000
area_new = np.array(area_new).reshape(1, -1)
pricePred = model.predict(area_new)
pricePred
# Predict more values
X = ([672676, 582682])
X = pd.DataFrame(X)
Y = model.predict(X)
Y = pd.DataFrame(Y)
df = pd.concat([X, Y], axis = 1, keys = ['area_new', 'pricePred'])
df
# +
# Visualize the results
data.plot(kind = "scatter", x = 'area', y = "price")
# Regression line
plt.plot(area, model.predict(area), color = "red", linewidth = 2)
plt.legend(loc='upper left',fancybox=True, framealpha=1, shadow=True, borderpad=1)
plt.show()
| 1.Simple Linear Regression/3.Model Accurcay/predictHousePrice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
from skimage.morphology import extrema
from skimage.morphology import watershed as skwater
from PIL import Image
img = Image.open('C:/Users/B<NAME>y/IMAGE PROCESSING/brain.tif')
def ShowImage(img, ctype='bgr', title=''):
    plt.figure(figsize=(10, 10))
    if ctype=='bgr':
        b,g,r = cv2.split(img)       # get b,g,r
        rgb_img = cv2.merge([r,g,b]) # switch it to rgb
        plt.imshow(rgb_img)
    elif ctype=='hsv':
        rgb = cv2.cvtColor(img,cv2.COLOR_HSV2RGB)
        plt.imshow(rgb)
    elif ctype=='gray':
        plt.imshow(img,cmap='gray')
    elif ctype=='rgb':
        plt.imshow(img)
    else:
        raise Exception("Unknown colour type")
    plt.axis('off')
    plt.title(title)
    plt.show()
img
from PIL import Image
im = Image.open('C:/Users/Bipasha Roy/IMAGE PROCESSING/brain.tif')
im.save('test.jpeg')
# +
#im = Image.open('C:/Users/<NAME>/Desktop/test.jpeg')
#rgb_im = im.convert('RGB')
#r, g, b = rgb_im.getpixel((5, 5))
#print(r, g, b)
#(65, 100, 137)
# +
#rgb_im
# +
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('C:/Users/<NAME>/IMAGE PROCESSING/test.jpeg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# -
gray
ret, thresh
# +
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening,cv2.DIST_L2,5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
# -
unknown
ret, sure_fg
# +
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
# -
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,1,1]
img
from numpy import array
from scipy.misc import toimage  # note: removed in SciPy 1.2+; PIL.Image.fromarray is the modern equivalent
imm=toimage(img)
imm
imm1=toimage(markers)
imm1
imm2=toimage(thresh)
imm2
imm.save('C:/Users/Bipasha Roy/IMAGE PROCESSING/out1.png')
imm1.save('C:/Users/Bipasha Roy/IMAGE PROCESSING/out2.png')
imm2.save('C:/Users/<NAME>/IMAGE PROCESSING/out3.png')
im.convert('1').show()
im.convert('L').show()
im.convert('RGB').show()
| SR MAM PROJECT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How Jupyter is used
#
# Rule, Adam, <NAME>, and <NAME>. "[Exploration and Explanation in Computational Notebooks.](http://adamrule.com/files/papers/chi_2018_computational_notebooks_final_web.pdf)" Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018.
#
#
# **ABSTRACT**
# Computational notebooks combine code, visualizations, and
# text in a single document. Researchers, data analysts, and
# even journalists are rapidly adopting this new medium. We
# present three studies of how they are using notebooks to
# document and share exploratory data analyses. In the first,
# we analyzed over 1 million computational notebooks on
# GitHub, finding that one in four had no explanatory text but
# consisted entirely of visualizations or code. In a second
# study, we examined over 200 academic computational
# notebooks, finding that although the vast majority described
# methods, only a minority discussed reasoning or results. In
# a third study, we interviewed 15 academic data analysts,
# finding that most considered computational notebooks personal,
# exploratory, and messy. Importantly, they typically
# used other media to share analyses. These studies demonstrate
# a tension between exploration and explanation in
# constructing and sharing computational notebooks. We
# conclude with opportunities to encourage explanation in
# computational media without hindering exploration
# # Markdown examples
# # h1
# ## h2
#
# Paragraphs are separated by a blank line.
#
# 2nd paragraph. *Italic*, **bold**, and `monospace`. Itemized lists
# look like:
#
# * this one
# * that one
# * the other one
#
# Here's a numbered list:
#
# 1. first item
# 2. second item
# 3. third item
#
# > Block quotes are
# > written like so.
# >
# > They can span multiple paragraphs,
# > if you like.
#
# ~~~
# define foobar() {
# print "Welcome to flavor country!";
# }
# ~~~
#
# Here's a link to [a website](http://foo.bar), to a [local
# doc](local-doc.html), and to a [section heading in the current
# doc](#an-h2-header). Here's a footnote [^1].
#
# [^1]: Some footnote text.
#
# 
#
# Inline math equation: $\omega = d\phi / dt$. Display
# math should get its own line like so:
#
# $$I = \int \rho R^{2} dV$$
#
#
# | Column1 | Column2 |
# |---------|---------|
# | 1 | a |
# | 2 | b |
# | 3 | c |
#
# +
def say_hello(recipient):
return 'Hello, {}!'.format(recipient)
say_hello('Tim')
# -
# # Further reading
# [Jupyter Notebook Tutorial](https://www.dataquest.io/blog/jupyter-notebook-tutorial/)
| Intro to Jupyter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
# %pylab inline
# %matplotlib inline
from IPython.display import display, Markdown, HTML
from curves import Hilbert_Curve, Line_Curve
from demo_helper import animate_curve
# -
SQUARE_SIZE = 32
# ## Line Curve
# +
line_curve = Line_Curve(SQUARE_SIZE, start_coor=(0, 0))
display(animate_curve(line_curve))
# -
# ## Hilbert Curve
# +
line_curve = Hilbert_Curve(SQUARE_SIZE, start_coor=(0, 0))
display(animate_curve(line_curve))
| demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Decision Trees ::
#
# **The class is implemented using only numpy, and visualizations are drawn with matplotlib. The code for the decision-tree class is given in the trees.py file**
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Decision-Trees-::" data-toc-modified-id="Decision-Trees-::-1"><span class="toc-item-num">1 </span>Decision Trees ::</a></span><ul class="toc-item"><li><span><a href="#Import-classifier-and-regressor-from-given-file-trees.py" data-toc-modified-id="Import-classifier-and-regressor-from-given-file-trees.py-1.1"><span class="toc-item-num">1.1 </span>Import classifier and regressor from given file <strong><em>trees.py</em></strong></a></span></li><li><span><a href="#for-dataset-and-spliting,-we-need-sklearn-(Optional,-if-you-have-your-own-data)" data-toc-modified-id="for-dataset-and-spliting,-we-need-sklearn-(Optional,-if-you-have-your-own-data)-1.2"><span class="toc-item-num">1.2 </span>for dataset and spliting, we need sklearn (Optional, if you have your own data)</a></span></li></ul></li><li><span><a href="#Classification-Tree" data-toc-modified-id="Classification-Tree-2"><span class="toc-item-num">2 </span>Classification Tree</a></span><ul class="toc-item"><li><span><a href="#Iris-Dataset" data-toc-modified-id="Iris-Dataset-2.1"><span class="toc-item-num">2.1 </span>Iris Dataset</a></span><ul class="toc-item"><li><span><a href="#Fitting-a-model-(displaying-the-tree-building)" data-toc-modified-id="Fitting-a-model-(displaying-the-tree-building)-2.1.1"><span class="toc-item-num">2.1.1 </span>Fitting a model (displaying the tree building)</a></span></li><li><span><a href="#Plotting-the-resulting-tree" data-toc-modified-id="Plotting-the-resulting-tree-2.1.2"><span class="toc-item-num">2.1.2 </span>Plotting the resulting tree</a></span></li><li><span><a href="#Plotting-Tree-with-color-branches" data-toc-modified-id="Plotting-Tree-with-color-branches-2.1.3"><span class="toc-item-num">2.1.3 </span>Plotting Tree with color branches</a></span></li><li><span><a href="#Prediting-and-computing-Accuracy" data-toc-modified-id="Prediting-and-computing-Accuracy-2.1.4"><span class="toc-item-num">2.1.4 </span>Prediting and 
computing Accuracy</a></span></li></ul></li><li><span><a href="#Iris-data-with-smaller-tree" data-toc-modified-id="Iris-data-with-smaller-tree-2.2"><span class="toc-item-num">2.2 </span>Iris data with smaller tree</a></span></li><li><span><a href="#Breast-Cancer-data" data-toc-modified-id="Breast-Cancer-data-2.3"><span class="toc-item-num">2.3 </span>Breast Cancer data</a></span><ul class="toc-item"><li><span><a href="#Fitting-model-with-displaying-the-details-of-tree-in-process-(verbose=2)" data-toc-modified-id="Fitting-model-with-displaying-the-details-of-tree-in-process-(verbose=2)-2.3.1"><span class="toc-item-num">2.3.1 </span>Fitting model with displaying the details of tree in process (verbose=2)</a></span></li><li><span><a href="#Fitting-model-with-displaying-the-progress-only-(verbose=1)" data-toc-modified-id="Fitting-model-with-displaying-the-progress-only-(verbose=1)-2.3.2"><span class="toc-item-num">2.3.2 </span>Fitting model with displaying the progress only (verbose=1)</a></span></li><li><span><a href="#Plotting-Decison-Tree" data-toc-modified-id="Plotting-Decison-Tree-2.3.3"><span class="toc-item-num">2.3.3 </span>Plotting Decison Tree</a></span></li><li><span><a href="#Predicting-and-computing-MSE" data-toc-modified-id="Predicting-and-computing-MSE-2.3.4"><span class="toc-item-num">2.3.4 </span>Predicting and computing MSE</a></span></li></ul></li></ul></li><li><span><a href="#Regression-Tree" data-toc-modified-id="Regression-Tree-3"><span class="toc-item-num">3 </span>Regression Tree</a></span><ul class="toc-item"><li><span><a href="#Boston-House-price" data-toc-modified-id="Boston-House-price-3.1"><span class="toc-item-num">3.1 </span>Boston House price</a></span></li><li><span><a href="#Boston-Data-with-smaller-tree" data-toc-modified-id="Boston-Data-with-smaller-tree-3.2"><span class="toc-item-num">3.2 </span>Boston Data with smaller tree</a></span></li></ul></li></ul></div>
# -
import numpy as np
import matplotlib.pyplot as plt
# ## Import classifier and regressor from given file ***trees.py***
# + code_folding=[183, 218, 228, 235, 244, 268, 275, 292, 321, 345, 366, 379, 385, 413, 423]
from trees import ClassificationTree, RegressionTree
# -
# ## for dataset and spliting, we need sklearn (Optional, if you have your own data)
from sklearn import datasets
from sklearn.model_selection import train_test_split
# # Classification Tree
# ## Iris Dataset
# Loading and splitting into training and test sets
# +
data = datasets.load_iris()
X = data.data
y = data.target
feature_names = data.feature_names #Optional
Xt,Xs, yt, ys = train_test_split(X,y,test_size=0.3)
print(X.shape,y.shape, Xt.shape, yt.shape, Xs.shape, ys.shape)
# -
# ### Fitting a model (displaying the tree building)
clf = ClassificationTree()
clf.fit(Xt,yt,verbose=2,feature_names=feature_names)
# ### Plotting the resulting tree
plt.figure(figsize=(15,8))
clf.plotTree(show=True)
# ### Plotting Tree with color branches
plt.figure(figsize=(15,8))
clf.plotTree(show=True,DiffBranchColor=True)
# ### Prediting and computing Accuracy
ytp = clf.predict(Xt)
ysp = clf.predict(Xs)
print('Training Accuracy: ',np.mean(ytp==yt))
print('Testing Accuracy: ',np.mean(ysp==ys))
# ## Iris data with smaller tree
clf = ClassificationTree(max_depth=2)
clf.fit(Xt,yt,verbose=1,feature_names=feature_names)
#plt.figure(figsize=(15,8))
plt.figure(figsize=(7,6))
clf.plotTree(show=True)
ytp = clf.predict(Xt)
ysp = clf.predict(Xs)
print('Training Accuracy: ',np.mean(ytp==yt))
print('Testing Accuracy: ',np.mean(ysp==ys))
# ## Breast Cancer data
# +
data = datasets.load_breast_cancer()
X = data.data
y = data.target
feature_names = data.feature_names #Optional
Xt,Xs, yt, ys = train_test_split(X,y,test_size=0.3)
print(X.shape,y.shape, Xt.shape, yt.shape, Xs.shape, ys.shape)
# -
# ### Fitting model with displaying the details of tree in process (verbose=2)
clf = ClassificationTree()
clf.fit(Xt,yt,verbose=2,feature_names=feature_names)
# ### Fitting model with displaying the progress only (verbose=1)
clf = ClassificationTree()
clf.fit(Xt,yt,verbose=1,feature_names=feature_names)
# ### Plotting Decison Tree
# plt.figure(figsize=(15,8))
# clf.plotTree(show=True)
# ### Predicting and computing Accuracy
ytp = clf.predict(Xt)
ysp = clf.predict(Xs)
print('Training Accuracy: ',np.mean(ytp==yt))
print('Testing Accuracy: ',np.mean(ysp==ys))
# **It's overfitting; try smaller trees by decreasing the max_depth of the classifier**
# # Regression Tree
# ## Boston House price
# +
data = datasets.load_boston()  # note: load_boston was removed in scikit-learn 1.2
X = data.data
y = data.target
feature_names = data.feature_names #Optional
Xt,Xs, yt, ys = train_test_split(X,y,test_size=0.3)
print(X.shape,y.shape, Xt.shape, yt.shape, Xs.shape, ys.shape)
# -
rgr = RegressionTree()
rgr.fit(Xt,yt,verbose=1,feature_names = feature_names)
plt.figure(figsize=(15,15))
rgr.plotTree(show=True,scale=True, showtitle =False, showDirection=False,DiffBranchColor=True)
ytp = rgr.predict(Xt)
ysp = rgr.predict(Xs)
print('Training MSE: ',np.mean((ytp-yt)**2))
print('Testing MSE: ',np.mean((ysp-ys)**2))
# ## Boston Data with smaller tree
rgr = RegressionTree(max_depth=3)
rgr.fit(Xt,yt,verbose=1,feature_names = feature_names)
# +
plt.figure(figsize=(15,8))
rgr.plotTree(show=True,scale=True, showtitle =True, showDirection=False,DiffBranchColor=True)
ytp = rgr.predict(Xt)
ysp = rgr.predict(Xs)
print('Training MSE: ',np.mean((ytp-yt)**2))
print('Testing MSE: ',np.mean((ysp-ys)**2))
| Trees/Example-Classification_and_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Process Results
# ## Imports
# +
import pandas as pd
import cufflinks as cf
import pickle as pk
import sys
import numpy as np
sys.path.append('./..')
cf.go_offline()
# -
from auxiliar.VectorizerHelper import vectorizer, vectorizerIdf, preprocessor
from auxiliar import parameters
from auxiliar.HtmlParser import HtmlParser
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import recall_score
colors=['red', 'blue','red', 'blue','red', 'blue','red', 'blue','red', 'blue','red', 'blue','red', 'blue','red', 'blue','red', 'blue','red', 'blue']
def compute_metrics(predictions, real):
metrics = dict()
bin_preds = predictions
metrics['mse'] = mean_squared_error(bin_preds, real)
metrics['recall'] = recall_score(bin_preds, real, average='micro')
metrics['f1'] = f1_score(bin_preds, real, average='micro')
metrics['acc'] = accuracy_score(bin_preds, real)
return metrics
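# Since each sample here carries exactly one label, micro-averaged recall and F1
# collapse to plain accuracy, which is why the scores from `compute_metrics` tend
# to agree. A quick hand-rolled check (a sketch, independent of sklearn):

```python
def micro_f1(pred, real):
    # micro averaging pools TP/FP/FN over all classes; with exactly one
    # label per sample every wrong prediction is both a FP and a FN,
    # so precision == recall == accuracy, and F1 equals them as well
    tp = sum(p == r for p, r in zip(pred, real))
    precision = recall = tp / len(pred)
    return 2 * precision * recall / (precision + recall)

pred = [0, 1, 2, 2, 1]
real = [0, 2, 2, 2, 0]
accuracy = sum(p == r for p, r in zip(pred, real)) / len(pred)
print(micro_f1(pred, real), accuracy)  # both ≈ 0.6
```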
# ## Machine Learning
results_base_line = pd.read_pickle('machine_learning/tweeter/base_line/3-clases/results.pkl').to_dict()
results_grid = pd.read_pickle('machine_learning/tweeter/grid_search/3-clases/results-idf.pkl').to_dict()
results_test = pd.read_pickle('machine_learning/tweeter/grid_search/3-clases/test_results.pkl').to_dict()
with open('machine_learning/tweeter/grid_search/3-clases/grid_results-idf.pkl', 'rb') as fp:
grid = pk.load(fp)
def get_results_df(res, nrange):
keys = res.keys()
results = []
for k in keys:
for i in range(nrange):
results.append(compute_metrics(res[k]['predicted'][i], res[k]['real'][i]))
results_df = pd.DataFrame(results).transpose()
results_df.columns = pd.MultiIndex.from_product([keys, range(nrange)])
results_df = results_df.transpose().reset_index().groupby(['level_0']).mean()
results_df = results_df.drop(columns=['level_1'])
return results_df
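# On a toy frame the reshaping inside `get_results_df` looks like this: the
# columns form a (model, fold) `MultiIndex`, and the transpose/groupby averages
# the folds away (a minimal sketch with made-up numbers):

```python
import pandas as pd

# one metric ('acc') for 2 models x 2 folds
toy = pd.DataFrame(
    [[0.9, 0.8, 0.7, 0.6]],
    index=['acc'],
    columns=pd.MultiIndex.from_product([['lr', 'rf'], range(2)]))

# same pattern as above: the unnamed index levels become 'level_0' (model)
# and 'level_1' (fold); we group on the model and drop the fold column
means = toy.transpose().reset_index().groupby(['level_0']).mean()
means = means.drop(columns=['level_1'])
print(means)
```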
# base line results over test set
def get_results_df_cine(res, nrange):
keys = res.keys()
results = []
for k in keys:
for i in range(nrange):
results.append(compute_metrics(res[k]['cine_predicted'][i], res[k]['cine_real'][i]))
results_df = pd.DataFrame(results).transpose()
results_df.columns = pd.MultiIndex.from_product([keys, range(nrange)])
results_df = results_df.transpose().reset_index().groupby(['level_0']).mean()
results_df = results_df.drop(columns=['level_1'])
return results_df
def get_results_df_2(res, nrange):
keys = res.keys()
results = []
for k in keys:
for i in range(nrange):
results.append(compute_metrics(res[k]['predicted'][i], res[k]['real'][i]))
results_df = pd.DataFrame(results).transpose()
results_df.columns = pd.MultiIndex.from_product([keys, range(nrange)])
# results_df = results_df.transpose().reset_index().groupby(['level_0']).mean()
# results_df = results_df.drop(columns=['level_1'])
return results_df
# ### Base line
get_results_df(results_base_line, 10).style.highlight_max()
print(round(get_results_df(results_base_line, 10) * 100, 2).to_latex())
print(round(get_results_df_cine(results_base_line, 10) * 100, 2).to_latex())
# means of grid search results over k-fold
# ### Grid search - train
get_results_df_2(results_grid, 10).stack().transpose()['f1'].transpose().iplot()
get_results_df(results_grid, 10).style.highlight_max()
print(round(get_results_df(results_grid, 10) * 100, 2).to_latex())
# Grid search results over test set
get_results_df(results_test, 1).style.highlight_max()
print(round(get_results_df(results_test, 1) * 100, 2).to_latex())
print(round(get_results_df_cine(results_test, 1) * 100, 2).to_latex())
# ### Grid search - best params
grid['lr'].best_params_
grid['ls'].best_params_
grid['mb'].best_params_
grid['rf'].best_params_
# ### Grid search - test
pd.DataFrame(results_test['lr']['real'][0]).iplot(kind='histogram')
pd.DataFrame([results_test['lr']['predicted'][0], results_test['ls']['predicted'][0],\
results_test['mb']['predicted'][0], results_test['rf']['predicted'][0]], index=['lr', 'ls' , 'mb', 'rf'])\
.transpose().iplot(kind='histogram')
# We could highlight the Naive Bayes model over the others because it seems to generalize better on the test cases
# ## Deep Learning
# ### Get Results
# +
lstm_base = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_val_lstm.pkl')
lstm_base_evas = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_val_lstm_evas.pkl')
lstm_base_pred = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_val_preds.pkl')
lstm_base_pred_cine = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_valcine_preds.pkl')
lstm_simpler = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_simple_lstm.pkl')
lstm_simpler_evas = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_simple_lstm_evas.pkl')
lstm_simpler_preds = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_simple_preds.pkl')
lstm_simpler_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/lstm_simplecine_preds.pkl')
lstm_dropout = pd.read_pickle('deep_learning/tweeter/3-clases/dropout_lstm_lstm.pkl')
lstm_dropout_evas = pd.read_pickle('deep_learning/tweeter/3-clases/dropout_lstm_lstm_evas.pkl')
lstm_dropout_preds = pd.read_pickle('deep_learning/tweeter/3-clases/dropout_lstm_preds.pkl')
lstm_dropout_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/dropout_lstmcine_preds.pkl')
lstm_dropout2 = pd.read_pickle('deep_learning/tweeter/3-clases/dropout2_lstm_lstm.pkl')
lstm_dropout2_evas = pd.read_pickle('deep_learning/tweeter/3-clases/dropout2_lstm_lstm_evas.pkl')
lstm_dropout2_preds = pd.read_pickle('deep_learning/tweeter/3-clases/dropout2_lstm_preds.pkl')
lstm_dropout2_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/dropout2_lstmcine_preds.pkl')
lstm_bn = pd.read_pickle('deep_learning/tweeter/3-clases/bn_lstm_lstm.pkl')
lstm_bn_evas = pd.read_pickle('deep_learning/tweeter/3-clases/bn_lstm_lstm_evas.pkl')
lstm_bn_preds = pd.read_pickle('deep_learning/tweeter/3-clases/bn_lstm_preds.pkl')
lstm_bn_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/bn_lstmcine_preds.pkl')
lstm_glorot = pd.read_pickle('deep_learning/tweeter/3-clases/glorot_lstm_lstm.pkl')
lstm_glorot_evas = pd.read_pickle('deep_learning/tweeter/3-clases/glorot_lstm_lstm_evas.pkl')
lstm_glorot_preds = pd.read_pickle('deep_learning/tweeter/3-clases/glorot_lstm_preds.pkl')
lstm_glorot_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/glorot_lstmcine_preds.pkl')
lstm_glorot_wo_bn = pd.read_pickle('deep_learning/tweeter/3-clases/glorot__wobn_lstm_lstm.pkl')
lstm_glorot_wo_bn_evas = pd.read_pickle('deep_learning/tweeter/3-clases/glorot__wobn_lstm_lstm_evas.pkl')
lstm_glorot_wo_bn_preds = pd.read_pickle('deep_learning/tweeter/3-clases/glorot__wobn_lstm_preds.pkl')
lstm_glorot_wo_bn_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/glorot__wobn_lstmcine_preds.pkl')
lstm_double = pd.read_pickle('deep_learning/tweeter/3-clases/double_lstm_lstm.pkl')
lstm_double_evas = pd.read_pickle('deep_learning/tweeter/3-clases/double_lstm_lstm_evas.pkl')
lstm_double_preds = pd.read_pickle('deep_learning/tweeter/3-clases/double_lstm_preds.pkl')
lstm_double_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/double_lstmcine_preds.pkl')
lstm_conv = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional_lstm.pkl')
lstm_conv_evas = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional_lstm_evas.pkl')
lstm_conv_preds = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional_preds.pkl')
lstm_conv_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/convolutionalcine_preds.pkl')
lstm_conv1d = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional1d_lstm.pkl')
lstm_conv1d_evas = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional1d_lstm_evas.pkl')
lstm_conv1d_preds = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional1d_preds.pkl')
lstm_conv1d_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/convolutional1dcine_preds.pkl')
lstm_bidirectional = pd.read_pickle('deep_learning/tweeter/3-clases/bidirectional_lstm.pkl')
lstm_bidirectional_evas = pd.read_pickle('deep_learning/tweeter/3-clases/bidirectional_lstm_evas.pkl')
lstm_bidirectional_preds = pd.read_pickle('deep_learning/tweeter/3-clases/bidirectional_preds.pkl')
lstm_bidirectional_preds_cine = pd.read_pickle('deep_learning/tweeter/3-clases/bidirectionalcine_preds.pkl')
# -
# ### Plot results
print('base')
lstm_base.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('simplification')
lstm_simpler.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('dropout')
lstm_dropout.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('dropout 0.2')
lstm_dropout2.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('batch normalization')
lstm_bn.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('glorot initialization')
lstm_glorot.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('glorot initialization without batch normalization')
lstm_glorot_wo_bn.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('double lstm')
lstm_double.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('convolutional lstm')
lstm_conv.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('convolutional 1d lstm')
lstm_conv1d.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('bidirectional lstm')
lstm_bidirectional.loc[:, pd.IndexSlice[:, ['loss', 'val_loss']]].iplot(colors=colors)
print('base')
lstm_base_evas.iplot()
print('simplification')
lstm_simpler_evas.iplot()
print('dropout')
lstm_dropout_evas.iplot()
print('dropout 0.2')
lstm_dropout2_evas.iplot()
print('batch normalization')
lstm_bn_evas.iplot()
print('glorot initialization')
lstm_glorot_evas.iplot()
print('glorot initialization without batch normalization')
lstm_glorot_wo_bn_evas.iplot()
print('double lstm')
lstm_double_evas.iplot()
print('convolutional lstm')
lstm_conv_evas.iplot()
print('convolutional 1d lstm')
lstm_conv1d_evas.iplot()
print('bidirectional lstm')
lstm_bidirectional_evas.iplot()
# ### Table results
# #### Train results
means = pd.concat([\
lstm_base.stack(level=0).mean(),\
lstm_simpler.stack(level=0).mean(),\
lstm_dropout.stack(level=0).mean(),\
lstm_dropout2.stack(level=0).mean(),\
lstm_bn.stack(level=0).mean(),\
lstm_glorot.stack(level=0).mean(),\
lstm_glorot_wo_bn.stack(level=0).mean(),\
lstm_double.stack(level=0).mean(),\
lstm_conv.stack(level=0).mean(),\
lstm_conv1d.stack(level=0).mean(),\
lstm_bidirectional.stack(level=0).mean()
], axis=1)
means.columns = ['base', 'simpler', 'dropout', 'dropout 0.2', 'batch norm', 'glorot', 'glorot_wo_bn', 'double', 'conv', 'conv1d', 'bidirectional']
means.transpose().style.highlight_min(subset=pd.IndexSlice[:, ['val_loss']])
print(round(means.transpose() * 100, 2).to_latex())
# The best results in terms of val_loss (the more robust model) are the ones achieved by the batch norm model, so we can assume this model will generalize better on predictions. The results we get on accuracy could depend highly on how the data is distributed: if a model predicts a high rate of positive texts, it could get a higher accuracy simply because there are many more positive texts in the corpus
# #### validation results
means_test = pd.concat([\
lstm_base_evas.mean(),\
lstm_simpler_evas.mean(),\
lstm_dropout_evas.mean(),\
lstm_dropout2_evas.mean(),\
lstm_bn_evas.mean(),\
lstm_glorot_evas.mean(),\
lstm_glorot_wo_bn_evas.mean(),\
lstm_double_evas.mean(),\
lstm_conv_evas.mean(),\
lstm_conv1d_evas.mean(),\
lstm_bidirectional_evas.mean()
], axis=1)
means_test.columns = ['base', 'simpler', 'dropout', 'dropout 0.2', 'batch norm', 'glorot', 'glorot_wo_bn', 'double', 'conv', 'conv1d', 'bidirectional']
means_test.transpose().style.highlight_max()
means_test.transpose().to_latex()
# #### Test results
# +
lstm_base_pred['f_res'] = lstm_base_pred.apply(lambda x: np.argmax(x), axis=1)
lstm_simpler_preds['f_res'] = lstm_simpler_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_dropout_preds['f_res'] = lstm_dropout_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_dropout2_preds['f_res'] = lstm_dropout2_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_bn_preds['f_res'] = lstm_bn_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_glorot_preds['f_res'] = lstm_glorot_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_glorot_wo_bn_preds['f_res'] = lstm_glorot_wo_bn_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_double_preds['f_res'] = lstm_double_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_conv_preds['f_res'] = lstm_conv_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_conv1d_preds['f_res'] = lstm_conv1d_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_bidirectional_preds['f_res'] = lstm_bidirectional_preds.apply(lambda x: np.argmax(x), axis=1)
lstm_base_pred_cine['f_res'] = lstm_base_pred_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_simpler_preds_cine['f_res'] = lstm_simpler_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_dropout_preds_cine['f_res'] = lstm_dropout_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_dropout2_preds_cine['f_res'] = lstm_dropout2_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_bn_preds_cine['f_res'] = lstm_bn_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_glorot_preds_cine['f_res'] = lstm_glorot_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_glorot_wo_bn_preds_cine['f_res'] = lstm_glorot_wo_bn_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_double_preds_cine['f_res'] = lstm_double_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_conv_preds_cine['f_res'] = lstm_conv_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_conv1d_preds_cine['f_res'] = lstm_conv1d_preds_cine.apply(lambda x: np.argmax(x), axis=1)
lstm_bidirectional_preds_cine['f_res'] = lstm_bidirectional_preds_cine.apply(lambda x: np.argmax(x), axis=1)
# -
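# As an aside, the row-wise `apply(lambda x: np.argmax(x), axis=1)` pattern used
# above can be vectorized; on a plain array the equivalent is a single `argmax`
# call (a small sketch with made-up probabilities):

```python
import numpy as np

# each row holds per-class probabilities; argmax along axis=1 picks the class
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1],
                  [0.2, 0.3, 0.5]])
labels = probs.argmax(axis=1)
print(labels)  # [1 0 2]
```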
lstm_base_pred_cine
# +
preds = pd.concat([\
lstm_base_pred['f_res'],\
lstm_simpler_preds['f_res'],\
lstm_dropout_preds['f_res'],\
lstm_dropout2_preds['f_res'],\
lstm_bn_preds['f_res'],\
lstm_glorot_preds['f_res'],\
lstm_glorot_wo_bn_preds['f_res'],\
lstm_double_preds['f_res'],\
lstm_conv_preds['f_res'],\
lstm_conv1d_preds['f_res'],\
lstm_bidirectional_preds['f_res']\
], axis=1)
preds_cine = pd.concat([\
lstm_base_pred_cine['f_res'],\
lstm_simpler_preds_cine['f_res'],\
lstm_dropout_preds_cine['f_res'],\
lstm_dropout2_preds_cine['f_res'],\
lstm_bn_preds_cine['f_res'],\
lstm_glorot_preds_cine['f_res'],\
lstm_glorot_wo_bn_preds_cine['f_res'],\
lstm_double_preds_cine['f_res'],\
lstm_conv_preds_cine['f_res'],\
lstm_conv1d_preds_cine['f_res'],\
lstm_bidirectional_preds_cine['f_res']\
], axis=1)
# -
preds = preds.applymap(lambda x: 1 if x >= 0.5 else 0)
preds.columns = ['base', 'simpler', 'dropout', 'dropout 0.2', 'batch norm', 'glorot', 'glorot_wo_bn', 'double', 'conv', 'conv1d', 'bidirectional']
preds_cine.columns = ['base', 'simpler', 'dropout', 'dropout 0.2', 'batch norm', 'glorot', 'glorot_wo_bn', 'double', 'conv', 'conv1d', 'bidirectional']
preds.shape
# +
metrics = []
metrics_cine = []
for p in preds.columns:
metrics.append(compute_metrics(preds[p], results_test['ls']['real'][0]))
for p in preds_cine.columns:
metrics_cine.append(compute_metrics(preds_cine[p], results_test['ls']['cine_real'][0]))
# -
pd.DataFrame(metrics, index = preds.columns).style.highlight_max(axis=0)
print(round(pd.DataFrame(metrics, index = preds.columns) * 100, 2).to_latex())
print(round(pd.DataFrame(metrics_cine, index = preds.columns) * 100, 2).to_latex())
pd.concat([pd.DataFrame(results_test['lr']['real'][0]), preds ]).iplot(kind='histogram')
| results/get_results-3-clases.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mrigendra-sudo/Quad_Hackers/blob/main/classified.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="oN3T-l--aTRw"
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier
# + id="LWoISNVOH3rV"
train = pd.read_csv("/content/International students Time management data.csv")
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="Emu71wFjIClL" outputId="ae6ccb57-9d24-48f9-a497-ab420cae8903"
dropc = ["Number","Age","Gender","Nationality","Program","Course","English","Academic","Attendance"]
y = train.drop(dropc,axis=1)
y= y.fillna(3,axis=0)
y
# + colab={"base_uri": "https://localhost:8080/"} id="qkZsbvf3JlDZ" outputId="0f44130b-6aae-428b-f02b-c6f398a4d4ec"
y = np.sum(y,axis=1)
y
# + id="gyWNTbp_LQye"
train = train.drop(["6","7","8","9",'10','11','12','13','14','15','16','17'],axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="GUAqyr8vMNAA" outputId="f2c7d672-cb7a-4d92-b28d-996241ceb6d5"
train['y'] = y
train = train.dropna()
train
# + id="SD3kQbc2MkvN"
train = train.drop(["Number","Nationality","English"],axis=1)
# + id="mPJCshPlOr_A"
y = train['y']
train = train.drop('y',axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 455} id="g5IA_eewOvlM" outputId="e759f4b7-d045-41c5-8960-98aa8e6a57a8"
train = pd.get_dummies(train)
train
# + id="dwQGeuYAPWPX"
pro = y > 45
medium = y>37
needs_improvement = y<37
# + id="Xuvg0E6YQraN"
train['pro'] = pro
train['medium'] = medium
train['needs_improvement'] = needs_improvement
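# Note that the three boolean columns above overlap: a score above 45 also
# satisfies `y > 37`, so every `pro` row is a `medium` row too. If mutually
# exclusive bands are what's intended, `pd.cut` produces them directly — a
# sketch assuming the same 37/45 thresholds:

```python
import pandas as pd

scores = pd.Series([30, 40, 50])
# right-inclusive bins: (-inf, 37], (37, 45], (45, inf]
bands = pd.cut(scores,
               bins=[float('-inf'), 37, 45, float('inf')],
               labels=['needs_improvement', 'medium', 'pro'])
print(list(bands))  # ['needs_improvement', 'medium', 'pro']
```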
# + id="8ThjoQhZShCK"
y = pd.get_dummies(y)
# + id="JegQjw83Ss98"
train = pd.get_dummies(train)
# + id="A404hp53UHWV"
train[['pro','medium','needs_improvement']] = (train[['pro','medium','needs_improvement']]).astype(int)
# + colab={"base_uri": "https://localhost:8080/", "height": 455} id="pzZLpm_wU9CQ" outputId="e0e668a5-7432-4e6a-f372-9f26a13fac94"
train
# + id="vjfPGtezVCZ7"
y = train[['pro','medium','needs_improvement']]
train = train.drop(['pro','medium','needs_improvement'],axis=1)
# + id="Wzlob7RLVL4h"
x_train,x_test,y_train,y_test = train_test_split(train,y,test_size=0.2)
# + id="KbFy_mKrWCUT"
tree = DecisionTreeClassifier(max_features='log2')
# + colab={"base_uri": "https://localhost:8080/"} id="LLkcfVt3WdOz" outputId="74007393-2d5e-4c1c-d1e9-12478d10da58"
tree.fit(x_train,y_train)
# + id="bLn3UgZoWwqK"
ypred = tree.predict(x_test)
# + colab={"base_uri": "https://localhost:8080/"} id="4WNZM3TCWrnI" outputId="bb48a551-ccbd-4e45-ccb6-5fa657e9ad10"
score = f1_score(y_test,ypred,average='micro')
score
# + colab={"base_uri": "https://localhost:8080/"} id="OGPwbIJQaYqx" outputId="620d0e6e-26bb-4f86-8bb8-50eecebeb004"
tree.predict(x_test)
# + id="DuAumiK3aZGU"
# + id="9d_TnqkTafDl"
# + colab={"base_uri": "https://localhost:8080/", "height": 669} id="qyNG7GMlamCe" outputId="380c6a8c-56e4-4531-e6eb-dc75785ab461"
y_test
# + id="Wq5Ce6hMgAML"
ypred1 = tree.predict(x_train)
# + colab={"base_uri": "https://localhost:8080/"} id="V_MubQJAhaGr" outputId="e7ff1268-ab8c-4111-c259-0496fdd16965"
moelval = f1_score(y_train,ypred1,average='micro')
moelval
# + colab={"base_uri": "https://localhost:8080/"} id="NuVMHXSKl7T1" outputId="bda6a62a-2c3b-4189-adc6-a546fe2f52cf"
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn versions
# + colab={"base_uri": "https://localhost:8080/"} id="_9_AkK60m5pM" outputId="c7a546dc-4e09-437e-edac-37f5eed361ea"
joblib.dump(tree, 'model.pkl')
# + colab={"base_uri": "https://localhost:8080/"} id="g2VTtxy1npoq" outputId="603f09c4-0720-4c25-bc78-a316bbc97579"
# + colab={"base_uri": "https://localhost:8080/"} id="51jfHhZ-nVPP" outputId="4712f794-d983-4138-f43c-bfa147c7aea0"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="1PM56Fl3p1N_" outputId="04145a23-035d-4788-c4cf-eb252650a52a"
from google.colab import files
files.download('model.pkl')
# + id="W3dxl9N42n9g"
from sklearn.metrics import confusion_matrix
# + colab={"base_uri": "https://localhost:8080/"} id="97P-g5qN2oA3" outputId="c5f66903-54de-440b-e285-de469eba6b79"
confusion_matrix(y_test.values.argmax(axis=1),ypred.argmax(axis=1))
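# For reference, the matrix sklearn builds above can be reproduced by hand —
# rows index the true class and columns the predicted class (a minimal sketch,
# not tied to this dataset):

```python
def confusion(true_labels, pred_labels, n_classes):
    # m[t][p] counts samples whose true class is t and predicted class is p
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(true_labels, pred_labels):
        m[t][p] += 1
    return m

print(confusion([0, 1, 2, 2], [0, 2, 2, 1], 3))
# [[1, 0, 0], [0, 0, 1], [0, 1, 1]]
```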
# + id="p5sVPbxc2oI5"
# + id="a8L2q_4Z2xbn"
# + id="_3PHJrq85UcS"
# + id="7P33UcHs5UeB"
| classified.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.1 64-bit
# name: python3
# ---
# +
import random
import math
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def dist(P, A, B):
x2 = (A.x - P.x) * (B.y - P.y) - (A.y - P.y) * (B.x - P.x)
area = abs(x2)
AB = ( (A.x - B.x) ** 2 + (A.y - B.y) ** 2 ) ** 0.5
return ( area / AB )
n = 100000
s = 0
for i in range(0, n):
x = 0
y = 0
    for _ in range(1, 16):
p = random.randrange(1,7)
if p <= 2:
x += 3
if p >= 3:
y += 1
p = Point(x,y)
a = Point(0,0)
b = Point(1,-3/4)
r = dist(p,a,b)
s += r
print(s, s/n)
# -
# Answer 3. 17
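# The simulated average can be verified analytically; a short check added here
# (not part of the original notebook):

```python
# Each of the 15 rolls adds 3 to x with probability 2/6 = 1/3, otherwise
# adds 1 to y, so x = 3B and y = 15 - B with B ~ Binomial(15, 1/3).
# The distance from (x, y) to the line through (0, 0) and (1, -3/4) is
# |0.75*x + y| / 1.25, and 0.75*x + y = 1.25*B + 15 > 0, hence
# E[dist] = (1.25 * E[B] + 15) / 1.25 with E[B] = 15 / 3 = 5.
expected = (1.25 * 5 + 15) / 1.25
print(expected)  # 17.0
```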
| 2021_Math/m17.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# # Keypoint regression
# Single keypoint regression consists of localizing a keypoint in an image. Here we'll be training on a head pose dataset, where every image has a person in it and the head of the person is annotated. Since keypoint datasets all have different formats, we have to do a bit more manual work to get the task dataset loaded. First we import everything we'll need:
import CairoMakie; CairoMakie.activate!(type="png")
using DelimitedFiles: readdlm
using FastAI, ImageShow
using FastAI.FilePathsBase, FastAI.StaticArrays
import FastAI.DataAugmentation
# ## Creating a task data container
# [`datasetpath`](#) downloads the files, but it's up to us to load them into a usable format. In the end, the task data container should contain tuples of an image and a keypoint each.
# First we create a [`FileDataset`](#) from the directory where the dataset has been downloaded to:
path = datasetpath("biwi_head_pose")
files = FileDataset(path);
files[1:10]
# Loading a `FileDataset` simply treats every file as a single observation. However, that is not what we want here: each observation consists of one image and one annotation file, and we want to ignore all other files, like the README. To achieve this, we'll create two data containers holding the image paths and the annotation paths respectively, by filtering the container of all paths.
imagefiles = loadfolderdata(path, filterfn=isimagefile)
annotfiles = loadfolderdata(path, filterfn=p -> occursin("pose", pathname(p)))
(getobs(imagefiles, 1), getobs(annotfiles, 1))
# Next we need to map functions over each observation that load the data from the files. An image file can be loaded using the [`loadfile`](#) utility. The keypoints have a custom format, so we write a helper function to parse them from a text file. The details of how the format is loaded aren't important.
# +
readcalibrationfile(p) = readdlm(string(p))[1:3, 1:3]
CAL = readcalibrationfile(joinpath(path, "01", "rgb.cal"))
function loadannotfile(annotpath, cal = CAL)
ctr = readdlm(string(annotpath))[4,:]
cx = ctr[1] * cal[1,1]/ctr[3] + cal[1,3]
cy = ctr[2] * cal[2,2]/ctr[3] + cal[2,3]
return [SVector(cy, cx) .+ 1]
end
# -
# Now we can use [`mapobs`](#) to lazily map the loading function over the container. Note that beside loading the image and keypoint, we also extract the subject ID from the path. We'll use this in a bit for splitting the dataset appropriately and we don't have access to the path information anymore once we have a container of loaded data.
data = (
mapobs(loadfile, imagefiles),
mapobs(loadannotfile, annotfiles)
)
ids = map(p -> parse(Int, pathname(pathparent(p))), imagefiles)
obs = image, ks = getobs(data, 2000)
# We can visualize an observation using [`DataAugmentation.showitems`](#) if we wrap the data in item types:
DataAugmentation.showitems((
DataAugmentation.Image(image),
DataAugmentation.Keypoints(ks, size(image)))
)
# Before we can start using this data container for training, we need to split it into a training and validation dataset. Since there are 13 different persons with many images each, randomly splitting the container does not make sense. The validation dataset would then contain many images that are very similar to those seen in training, and would hence say little about the generalization ability of a model. We instead use the first 12 subjects as a training dataset and validate on the last.
traindata = datasubset(data, (1:nobs(data))[ids .!= 13])
validdata = datasubset(data, (1:nobs(data))[ids .== 13])
nobs(traindata), nobs(validdata)
# ## The learning task
# Next we need to define a [learning task](./learningtasks.md) that encodes and augments each image and keypoint in a form that we can train a model on. We need to create a `LearningTask` struct for which we can define these transformations. Here we make use of [`ProjectiveTransforms`](#) for resizing, cropping and augmenting the image and keypoint and [`ImagePreprocessing`](#) to reshape and normalize the image. Finally, [`KeypointPreprocessing`](#) makes sure keypoints fall between -1 and 1.
sz = (224, 224)
task = SupervisedTask(
(Image{2}(), Keypoints{2}(1)),
(
ProjectiveTransforms(sz, buffered=true, augmentations=augs_projection(max_warp=0)),
ImagePreprocessing(),
KeypointPreprocessing(sz),
)
)
# We can check that each image is resized to `(224, 224)` and the keypoints are normalized:
im, k = getobs(traindata, 1)
x, y = encodesample(task, Training(), (im, k))
summary(x), y
# Decoding the encoded targets should give back a point within the original image bounds:
FastAI.decodeypred(task, Training(), y)
xs, ys = FastAI.makebatch(task, traindata, 1:2)
showbatch(task, (xs, ys))
# That is looking good! We can see that the keypoint is aligned with center of the head even after heavy augmentation. Now it is finally time to train a model.
# ## Training
# We'll use a modified ResNet as a model backbone and add a couple of layers that regress the keypoint. [`taskmodel`](#) knows how to do this by looking at the data blocks used and calling [`blockmodel`](#)`(KeypointTensor{2, Float32}((1,)), KeypointTensor{2, Float32}((1,)), backbone)`.
# The implementation, for reference, looks like this:
# ```julia
# function blockmodel(inblock::ImageTensor{N}, outblock::KeypointTensor{N}, backbone) where N
# outsz = Flux.outputsize(backbone, (ntuple(_ -> 256, N)..., inblock.nchannels, 1))
# outch = outsz[end-1]
# head = Models.visionhead(outch, prod(outblock.sz)*N, p = 0.)
# return Chain(backbone, head)
# end
# ```
backbone = Models.xresnet18()
model = taskmodel(task, backbone);
# Next we create a pair of training and validation data loaders. They take care of batching and loading the data in parallel in the background.
traindl, validdl = FastAI.taskdataloaders(traindata, validdata, task, 16)
# With the addition of an optimizer and a loss function, we can now create a [`Learner`](#) and start training. Just like [`taskmodel`](#), [`tasklossfn`](#) selects the appropriate loss function for a `BlockTask`s blocks. Here both the encoded target block and model output block are `block = KeypointTensor{2, Float32}((1,))`, so `blocklossfn(block, block)` is called which returns Mean Squared Error as a suitable loss function.
import Flux
learner = Learner(
model,
(traindl, validdl),
Flux.ADAM(),
tasklossfn(task),
ToGPU())
fitonecycle!(learner, 5)
# The loss is going down which is a good sign, but visualizing the predictions against the ground truth will give us a better idea of how well the model performs. We'll use [`showoutputs`](#) to compare batches of encoded targets and model outputs. For this we can run the model on a batch from the validation dataset and see how it performs.
showoutputs(task, learner; n = 3, context = Validation())
# We can also see that the trained model generalizes well to the heavy augmentation employed during training. The augmentation also explains why the training loss is so much higher than the validation loss.
showoutputs(task, learner; n = 3, context = Training())
| notebooks/keypointregression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fig 15. Scatter plot of research centres by number of projects and total funding obtained during the whole analysed period (1993-2019)
# #### Import libraries
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib as mpl
import random
plt.style.use('seaborn-muted')
def read_csv(path):
"""
    :param path: path of the csv file
    :return: DataFrame with missing values filled with 0
"""
df = pd.read_csv(path)
filtered_df = df.fillna(0)
return filtered_df
# -
# #### Define projects data path
file_path = "../data/mapeo_proyectos.csv"
# #### Define function to clean projects data
def prepare_data(data, number_of_centres):
data.drop_duplicates(subset ="ID proyecto", keep = "first", inplace = True)
data = data[data["Organismo"] != 0]
mini = data[["ID proyecto","Financiación", "Organismo"]]
grouped = mini.groupby("Organismo", as_index=False)
df2 = grouped.agg({'Financiación':['count', 'sum']})
df2.columns = ["organismo", "count", "funding"]
df2.set_index("organismo")
# df2 = df2.sort_values(by=['count'], ascending= False)
df2 = df2.reset_index(drop=True)
acronimos = read_csv("../data/universidades.csv")
acronimos = acronimos[['Organismo', 'Acrónimo']]
acronimos['Organismo'] = acronimos['Organismo'].str.upper()
acronimos['Organismo'] = acronimos['Organismo'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8')
df2['organismo'] = df2['organismo'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8')
acronimos['Organismo'] = acronimos['Organismo'].str.upper()
acronimos = acronimos.rename(columns={"Organismo": "organismo", "Acrónimo": "acronimo"})
df_def = pd.merge(df2, acronimos, on='organismo', how='inner', validate="one_to_many")
result = df_def.groupby("acronimo", sort=False).sum().reset_index()
result = result.sort_values(by=['count'], ascending=False).reset_index(drop=True)
#result = pd.merge(result, acronimos, on='acronimo', how='inner', validate="one_to_many")
# return df_def.head(number_of_centres), acronimos
return result, acronimos
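# The `normalize('NFKD')` / `encode('ascii', errors='ignore')` round-trip used
# above exists to strip accents so the merge keys match; in isolation it behaves
# like this:

```python
import pandas as pd

s = pd.Series(['Acrónimo', 'Financiación'])
# NFKD decomposes accented characters into base letter + combining mark;
# the ascii encode with errors='ignore' then drops the combining marks
stripped = (s.str.normalize('NFKD')
             .str.encode('ascii', errors='ignore')
             .str.decode('utf-8'))
print(list(stripped))  # ['Acronimo', 'Financiacion']
```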
# #### Run program
# ### You can edit the "number of centres" parameter to show as many centres as you want
number_of_centres = 26
proyectos = read_csv(file_path)
df, acr = prepare_data(proyectos, number_of_centres)
# #### Print graph
# +
df2 = df.head(number_of_centres)
plt.figure()
fig, ax = plt.subplots(figsize=(15,10))
ax.scatter(df2["count"], df2["funding"], alpha=0.5)
ax.set(ylabel="funding (millions)", xlabel="Number of projects")
ax.set_ylim(bottom=0)
ax.set_xlim(left=0)
# annotate each point
n = df2["acronimo"]
for i, txt in enumerate(n):
ax.annotate(txt, (df2["count"][i], df2["funding"][i]), xytext=(10,0), textcoords='offset points', fontsize=10)
plt.grid()
plt.show()
# -
df.head(number_of_centres)
| notebooks/15. Scatter plot of research centres by number of projects and total funding.ipynb |
# # 10 Taking Advantage of First Class Objects
# ### 10.1 First Class Objects
#
# Python exposes many language features and places
# almost no constraints on what types data
# structures can hold.
#
# Here's an example of using a dictionary of functions to create a
# simple calculator. In some languages the only reasonable solution
# would require a `case` or `switch` statement, or a series of `if`
# statements. If you've been using such a language for a while, this
# example may help you expand the range of solutions you can imagine in
# Python.
#
# Let's iteratively write code to get this behaviour:
#
# assert calc('7+3') == 10
# assert calc('9-5') == 4
# assert calc('9/3') == 3
#
7+3
expr = '7+3'
lhs, op, rhs = expr
lhs, op, rhs
lhs, rhs = int(lhs), int(rhs)
lhs, op, rhs
op, lhs, rhs
def perform_operation(op, lhs, rhs):
if op == '+':
return lhs + rhs
if op == '-':
return lhs - rhs
if op == '/':
return lhs / rhs
perform_operation('+', 7, 3) == 10
# The `perform_operation` function has a lot of boilerplate repetition.
# Let's use a data structure instead to use less code and make it easier to extend.
import operator
operator.add(7, 3)
OPERATOR_MAPPING = {
'+': operator.add,
'-': operator.sub,
'/': operator.truediv,
}
OPERATOR_MAPPING['+']
OPERATOR_MAPPING['+'](7, 3)
def perform_operation(op, lhs, rhs):
return OPERATOR_MAPPING[op](lhs, rhs)
perform_operation('+', 7, 3) == 10
def calc(expr):
lhs, op, rhs = expr
lhs, rhs = int(lhs), int(rhs)
return perform_operation(op, lhs, rhs)
calc('7+3')
calc('9-5')
calc('9/3')
calc('3*4')
OPERATOR_MAPPING['*'] = operator.mul
calc('3*4')
# Let's look at another example. Suppose we have data where every
# line is fixed length with fixed length records in it and we want to
# pull fields out of it by name:
#
# PYTHON_RELEASES = [
# 'Python 3.4.0 2014-03-17',
# 'Python 3.3.0 2012-09-29',
# 'Python 3.2.0 2011-02-20',
# 'Python 3.1.0 2009-06-26',
# 'Python 3.0.0 2008-12-03',
# 'Python 2.7.9 2014-12-10',
# 'Python 2.7.8 2014-07-02',
# ]
#
# release34 = PYTHON_RELEASES[0]
#
# release = ReleaseFields(release34) # 3.4.0
# assert release.name == 'Python'
# assert release.version == '3.4.0'
# assert release.date == '2014-03-17'
# This works:
class ReleaseFields:
def __init__(self, data):
self.data = data
@property
def name(self):
return self.data[0:6]
@property
def version(self):
return self.data[7:12]
@property
def date(self):
return self.data[13:23]
release34 = 'Python 3.4.0 2014-03-17'
release = ReleaseFields(release34)
assert release.name == 'Python'
assert release.version == '3.4.0'
assert release.date == '2014-03-17'
# However, the following is better, especially if there are many fields
# or if it's part of a library which handles lots of different record formats:
class ReleaseFields:
slices = {
'name': slice(0, 6),
'version': slice(7, 12),
'date': slice(13, 23)
}
def __init__(self, data):
self.data = data
def __getattr__(self, attribute):
if attribute in self.slices:
return self.data[self.slices[attribute]]
raise AttributeError(
"{!r} has no attribute {!r}"
.format(self, attribute))
release = ReleaseFields(release34)
assert release.name == 'Python'
assert release.version == '3.4.0'
assert release.date == '2014-03-17'
# Confirm that trying to access an attribute that doesn't exist fails
# correctly. (Note it won't fail correctly in Python 2.x unless you add
# `(object)` after `class ReleaseFields`.)
release.foo == 'exception'
# If you find yourself writing lots of boilerplate code as in the
# first versions of the calculator and fixed length record class
# above, you may want to try changing it to use a Python data
# structure with first class objects.
# ### 10.2 Binding Data with Functions
# It is often useful to bind data to a function. A method clearly
# does that, binding the instance's attributes with the method behaviour,
# but it's not the only way.
def log(severity, message):
print('{}: {}'.format(severity.upper(), message))
log('warning', 'this is a warning')
log('error', 'this is an error')
# Create a new function that specifies one argument.
def warning(message):
log('warning', message)
warning('this is a warning')
# Create a closure from a function that specifies an argument.
def create_logger(severity):
def logger(message):
log(severity, message)
return logger
warning2 = create_logger('warning')
warning2('this is a warning')
# Create a partial function.
import functools
warning3 = functools.partial(log, 'warning')
warning3
warning3.func is log
warning3.args, warning3.keywords
warning3('this is a warning')
# Use a bound method.
SENTENCE_PUNCTUATION = '.?!'
sentence = 'This is a sentence!'
sentence[-1] in SENTENCE_PUNCTUATION
'.' in SENTENCE_PUNCTUATION
SENTENCE_PUNCTUATION.__contains__('.')
SENTENCE_PUNCTUATION.__contains__(',')
is_end_of_a_sentence = SENTENCE_PUNCTUATION.__contains__
is_end_of_a_sentence('.')
is_end_of_a_sentence(',')
# Create a class with a `__call__` method.
class SentenceEndsWith:
def __init__(self, characters):
self.punctuation = characters
def __call__(self, sentence):
return sentence[-1] in self.punctuation
is_end_of_a_sentence_dot1 = SentenceEndsWith('.')
is_end_of_a_sentence_dot1('This is a test.')
is_end_of_a_sentence_dot1('This is a test!')
is_end_of_a_sentence_any = SentenceEndsWith('.!?')
is_end_of_a_sentence_any('This is a test.')
is_end_of_a_sentence_any('This is a test!')
# Another way that data, including mutable data, can be bound to a function
# is via default parameter evaluation, which happens once when the function
# is defined and is sometimes relied upon by mistake.
def f1(parameter=print('The parameter is initialized now!')):
if parameter is None:
print('The parameter is None')
return parameter
f1()
f1() is None
f1('Not None')
def f2(parameter=[0]):
parameter[0] += 1
return parameter[0]
f2()
f2()
f2()
f2()
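# The usual way to get a fresh mutable default on every call (rather than one object bound at definition time, as `f2` shows) is a `None` sentinel:

```python
def f3(parameter=None):
    # The list is re-created on every call, so calls don't share state.
    if parameter is None:
        parameter = [0]
    parameter[0] += 1
    return parameter[0]

print(f3())  # 1
print(f3())  # 1 -- unlike f2, each call starts from a fresh list
```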
| speakers/Stuart Williams/PyCon-2016-Python-Epiphanies/Python-Epiphanies-07-10-Taking-Advantage-of-First-Class-Objects.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Econophysics I
# ## Exercise 03 - H07
#
# ### <NAME>
# ### Universität Duisburg-Essen
# 05.05.2020
import numpy as np
from matplotlib import pyplot as plt
# ## Exercise 03. Homework 07. Point 05
#
# Additional task: generate the random numbers for the random walk following $q_{2} \left(\varepsilon\right)$ directly from the random numbers you obtained for $q_{1} \left(\varepsilon\right)$. Use the sign for that. Compare both random walks.
# +
# Constant values
a1 = 3. ** 0.5
b1 = 1. / (2. * a1)
a2 = 1.
b2 = 0.5
# -
# Number of random numbers
N = 10000
# +
# random number between -a1 and a1
eps_1 = 2. * (np.random.random(N) - 0.5) * a1
# random number between -a2 and a2. From the distribution
# the result can only be -a2 or a2.
eps_2 = ((2. * np.random.randint(2, size=N)) - 1) * a2
# Signs of eps_1: since a2 = 1, these follow the q_2 distribution (eps_2)
eps_2_signs = np.sign(eps_1)
# +
fig2 = plt.figure(figsize=(16,9))
plt.plot(np.cumsum(eps_1), label="$q_1$")
plt.plot(np.cumsum(eps_2_signs), label=r'$q_1 \rightarrow q_2$')
plt.xlabel('t', fontsize=40)
plt.ylabel('Random walk', fontsize=40)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.legend(loc='best', fontsize=30)
plt.tight_layout()
plt.grid(True)
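# Both distributions have mean 0 and variance 1 -- $q_1$ is uniform on $[-\sqrt{3}, \sqrt{3}]$ and $q_2$ puts equal weight on $\pm 1$ -- so the two random walks diffuse at the same rate. A quick numerical check (a sketch with its own generator, independent of the arrays above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a1 = 3.0 ** 0.5
eps_1 = 2.0 * (rng.random(n) - 0.5) * a1  # uniform on [-sqrt(3), sqrt(3)]
eps_2_signs = np.sign(eps_1)              # +/-1, derived from eps_1 via the sign

# Both sequences should have mean ~0 and variance ~1.
print(eps_1.mean(), eps_1.var())
print(eps_2_signs.mean(), eps_2_signs.var())
```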
| week_3/Exercise03_H07_05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pprint
from keras.utils.data_utils import get_file
from keras.utils import np_utils, Sequence
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import skipgrams
from keras.models import Sequential, Model
from keras.layers import Dense, Embedding, Dot, Reshape, Activation, Concatenate, Lambda
from keras import Input
import keras.backend as K
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow_datasets as tfds
# +
# path = './alice.txt'
path = "./PrideAndPrejudice.txt"
sentences = [line.strip() for line in open(path) if line != '\n']
sentences = sentences[:5000]
# +
tokenizer = Tokenizer(num_words = 2000)
tokenizer.fit_on_texts(sentences)
corpus = tokenizer.texts_to_sequences(sentences)
V = len(tokenizer.word_index) + 1
dim = 2
window_size = 5
# -
# # Skipgram without negative sampling
# ## Data processing
# +
pair_data, labels = [], []
for words in corpus:
temp_pair_data, temp_labels = skipgrams(words, V, window_size, negative_samples=0, shuffle=True)
pair_data += temp_pair_data
labels += temp_labels
pair_data = [[np_utils.to_categorical(ele1, V), \
np_utils.to_categorical(ele2, V)] \
for ele1, ele2 in pair_data]
X = np.array([ele[0] for ele in pair_data])
y = np.array([ele[1] for ele in pair_data])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
# -
X_train.shape
# ## Build model
model = Sequential()
model.add(Input(shape = (V,)))
model.add(Dense(dim, activation='softmax'))
model.add(Dense(V, activation='softmax'))
model.layers[0]._name = "Hidden_Layer"
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
model.summary()
model.fit(X_train, y_train, batch_size = 64, validation_data = (X_test, y_test), epochs = 2)
# ## Check hidden layer result
vectors = model.get_weights()[0]
import matplotlib.pyplot as plt
word_lst = ['do','did','have','had', 'think', 'thought']
plt.figure(figsize = (7, 7))
for word in word_lst:
plt.plot(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], 'd')
plt.text(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], word)
# +
word_lst = ['he', 'him', 'she', 'her']
plt.figure(figsize = (7, 7))
for word in word_lst:
plt.plot(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], 'd')
plt.text(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], word)
# -
# In this toy example, we trained on the loaded text (*Pride and Prejudice*) with 2 hidden units, reducing the representation from V dimensions down to dim = 2. From the plot above, we can observe that the model learns some of the relations between words.
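# To compare learned embeddings quantitatively rather than by eye, cosine similarity is the standard measure. A minimal sketch with toy 2-d vectors (stand-ins for rows of `vectors`, not the trained weights themselves):

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors; 1.0 means identical direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 2-d "embeddings" for illustration only.
he = np.array([1.0, 0.2])
she = np.array([0.9, 0.3])
table = np.array([-0.5, 1.0])

print(cosine_similarity(he, she) > cosine_similarity(he, table))  # True
```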
# # Skipgram with negative sampling
from keras.layers import Input, Flatten, Embedding, dot, Activation
from tqdm.notebook import tqdm
def build_model(input_dim, embedding_dim):
target_input_token = Input(shape=[1])
context_input_token = Input(shape=[1])
target_embed = Embedding(output_dim = embedding_dim, \
input_dim = input_dim, input_length = 1, \
name = "target_embed")(target_input_token)
context_embed = Embedding(output_dim = embedding_dim, \
input_dim = input_dim, input_length = 1, \
name = "context_embed")(context_input_token)
target_vec = Flatten()(target_embed)
context_vec = Flatten()(context_embed)
y = Activation('sigmoid')(dot([target_vec, context_vec], axes = 1))
model = Model(inputs=[target_input_token, context_input_token], outputs = [y])
model.compile(loss = 'binary_crossentropy', optimizer = 'rmsprop')
return model
model = build_model(V, 10)
model.summary()
def train_model(model, sentences, num_neg_samples = 30, window_size = 5, epochs = 2, V = V):
for epoch in range(epochs):
loss = 0.
for i, sent in enumerate(tqdm(sentences)):
couples, labels = skipgrams(sequence=sent, vocabulary_size = V, \
window_size = window_size, \
negative_samples=num_neg_samples)
if couples:
words, contexts = zip(*couples)
words = np.array(words, dtype=np.int32)
contexts = np.array(contexts, dtype=np.int32)
y = np.array(labels, dtype=np.int32)
loss += model.train_on_batch([words, contexts], y)
print('num epoch: {} loss: {}'.format(epoch, loss))
return model
train_model(model, corpus)
vectors = model.get_weights()[0]
# +
word_lst = ['he', 'him', 'she', 'her']
plt.figure(figsize = (7, 7))
for word in word_lst:
plt.plot(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], 'd')
plt.text(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], word)
# -
word_lst = ['do','did','have','had']
plt.figure(figsize = (7, 7))
for word in word_lst:
plt.plot(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], 'd')
plt.text(vectors[tokenizer.word_index[word]][0], \
vectors[tokenizer.word_index[word]][1], word)
# We can observe results similar to those of the previous method.
| Embedding/.ipynb_checkpoints/word2vec_example-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x**
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
# +
from fastai.conv_learner import *
from fastai.dataset import *
from fastai.models.resnet import vgg_resnet50
import json
# -
torch.cuda.set_device(3)
torch.backends.cudnn.benchmark=True
# ## Data
PATH = Path('data/carvana')
MASKS_FN = 'train_masks.csv'
META_FN = 'metadata.csv'
masks_csv = pd.read_csv(PATH/MASKS_FN)
meta_csv = pd.read_csv(PATH/META_FN)
def show_img(im, figsize=None, ax=None, alpha=None):
if not ax: fig,ax = plt.subplots(figsize=figsize)
ax.imshow(im, alpha=alpha)
ax.set_axis_off()
return ax
TRAIN_DN = 'train'
MASKS_DN = 'train_masks_png'
sz = 128
bs = 64
nw = 16
TRAIN_DN = 'train-128'
MASKS_DN = 'train_masks-128'
sz = 128
bs = 64
nw = 16
class MatchedFilesDataset(FilesDataset):
def __init__(self, fnames, y, transform, path):
self.y=y
assert(len(fnames)==len(y))
super().__init__(fnames, transform, path)
def get_y(self, i): return open_image(os.path.join(self.path, self.y[i]))
def get_c(self): return 0
x_names = np.array([Path(TRAIN_DN)/o for o in masks_csv['img']])
y_names = np.array([Path(MASKS_DN)/f'{o[:-4]}_mask.png' for o in masks_csv['img']])
val_idxs = list(range(1008))
((val_x,trn_x),(val_y,trn_y)) = split_by_idx(val_idxs, x_names, y_names)
aug_tfms = [RandomRotate(4, tfm_y=TfmType.CLASS),
RandomFlip(tfm_y=TfmType.CLASS),
RandomLighting(0.05, 0.05, tfm_y=TfmType.CLASS)]
tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms)
datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH)
md = ImageData(PATH, datasets, bs, num_workers=16, classes=None)
denorm = md.trn_ds.denorm
x,y = next(iter(md.trn_dl))
x.shape,y.shape
# ## Simple upsample
f = resnet34
cut,lr_cut = model_meta[f]
def get_base():
layers = cut_model(f(True), cut)
return nn.Sequential(*layers)
def dice(pred, targs):
pred = (pred>0).float()
return 2. * (pred*targs).sum() / (pred+targs).sum()
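# The `dice` metric above works on PyTorch tensors; the same coefficient can be sanity-checked with plain NumPy (a sketch, not the fastai code path):

```python
import numpy as np

def dice_np(pred, targs):
    # Dice coefficient: 2*|A & B| / (|A| + |B|) for binary masks.
    pred = (pred > 0).astype(float)
    return 2.0 * (pred * targs).sum() / (pred + targs).sum()

pred = np.array([0.9, -0.2, 0.4, -0.7])  # logits; positives at indices 0 and 2
targ = np.array([1.0, 0.0, 0.0, 0.0])    # ground-truth mask
print(dice_np(pred, targ))  # 2*1 / (2+1) = 0.666...
```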
class StdUpsample(nn.Module):
def __init__(self, nin, nout):
super().__init__()
self.conv = nn.ConvTranspose2d(nin, nout, 2, stride=2)
self.bn = nn.BatchNorm2d(nout)
def forward(self, x): return self.bn(F.relu(self.conv(x)))
class Upsample34(nn.Module):
def __init__(self, rn):
super().__init__()
self.rn = rn
self.features = nn.Sequential(
rn, nn.ReLU(),
StdUpsample(512,256),
StdUpsample(256,256),
StdUpsample(256,256),
StdUpsample(256,256),
nn.ConvTranspose2d(256, 1, 2, stride=2))
def forward(self,x): return self.features(x)[:,0]
class UpsampleModel():
def __init__(self,model,name='upsample'):
self.model,self.name = model,name
def get_layer_groups(self, precompute):
lgs = list(split_by_idxs(children(self.model.rn), [lr_cut]))
return lgs + [children(self.model.features)[1:]]
m_base = get_base()
m = to_gpu(Upsample34(m_base))
models = UpsampleModel(m)
learn = ConvLearner(md, models)
learn.opt_fn=optim.Adam
learn.crit=nn.BCEWithLogitsLoss()
learn.metrics=[accuracy_thresh(0.5),dice]
learn.summary()
learn.freeze_to(1)
learn.lr_find()
learn.sched.plot()
lr=4e-2
wd=1e-7
lrs = np.array([lr/100,lr/10,lr])/2
learn.fit(lr,1, wds=wd, cycle_len=4,use_clr=(20,8))
learn.save('tmp')
learn.load('tmp')
learn.unfreeze()
learn.bn_freeze(True)
learn.fit(lrs,1,cycle_len=4,use_clr=(20,8))
learn.save('128')
x,y = next(iter(md.val_dl))
py = to_np(learn.model(V(x)))
show_img(py[0]>0);
show_img(y[0]);
# ## U-net (ish)
class SaveFeatures():
features=None
def __init__(self, m): self.hook = m.register_forward_hook(self.hook_fn)
def hook_fn(self, module, input, output): self.features = output
def remove(self): self.hook.remove()
class UnetBlock(nn.Module):
def __init__(self, up_in, x_in, n_out):
super().__init__()
up_out = x_out = n_out//2
self.x_conv = nn.Conv2d(x_in, x_out, 1)
self.tr_conv = nn.ConvTranspose2d(up_in, up_out, 2, stride=2)
self.bn = nn.BatchNorm2d(n_out)
def forward(self, up_p, x_p):
up_p = self.tr_conv(up_p)
x_p = self.x_conv(x_p)
cat_p = torch.cat([up_p,x_p], dim=1)
return self.bn(F.relu(cat_p))
class Unet34(nn.Module):
def __init__(self, rn):
super().__init__()
self.rn = rn
self.sfs = [SaveFeatures(rn[i]) for i in [2,4,5,6]]
self.up1 = UnetBlock(512,256,256)
self.up2 = UnetBlock(256,128,256)
self.up3 = UnetBlock(256,64,256)
self.up4 = UnetBlock(256,64,256)
self.up5 = UnetBlock(256,3,16)
self.up6 = nn.ConvTranspose2d(16, 1, 1)
def forward(self,x):
inp = x
x = F.relu(self.rn(x))
x = self.up1(x, self.sfs[3].features)
x = self.up2(x, self.sfs[2].features)
x = self.up3(x, self.sfs[1].features)
x = self.up4(x, self.sfs[0].features)
x = self.up5(x, inp)
x = self.up6(x)
return x[:,0]
def close(self):
for sf in self.sfs: sf.remove()
class UnetModel():
def __init__(self,model,name='unet'):
self.model,self.name = model,name
def get_layer_groups(self, precompute):
lgs = list(split_by_idxs(children(self.model.rn), [lr_cut]))
return lgs + [children(self.model)[1:]]
m_base = get_base()
m = to_gpu(Unet34(m_base))
models = UnetModel(m)
learn = ConvLearner(md, models)
learn.opt_fn=optim.Adam
learn.crit=nn.BCEWithLogitsLoss()
learn.metrics=[accuracy_thresh(0.5),dice]
[o.features.size() for o in m.sfs]
learn.freeze_to(1)
learn.lr_find()
learn.sched.plot()
# +
lr=4e-2
wd=1e-7
lrs = np.array([lr/200,lr/20,lr])/2
# -
learn.fit(lr,1,wds=wd,cycle_len=8,use_clr=(5,8))
learn.save('128urn-tmp')
learn.load('128urn-tmp')
learn.unfreeze()
learn.bn_freeze(True)
learn.fit(lrs/2, 1, wds=wd, cycle_len=10,use_clr=(20,10))
learn.save('128urn-0')
learn.load('128urn-0')
x,y = next(iter(md.val_dl))
py = to_np(learn.model(V(x)))
show_img(py[0]>0);
show_img(y[0]);
m.close()
# ## 512x512
TRAIN_DN = 'train'
MASKS_DN = 'train_masks_png'
sz=512
bs=8
x_names = np.array([Path(TRAIN_DN)/o for o in masks_csv['img']])
y_names = np.array([Path(MASKS_DN)/f'{o[:-4]}_mask.png' for o in masks_csv['img']])
val_idxs = list(range(1008))
((val_x,trn_x),(val_y,trn_y)) = split_by_idx(val_idxs, x_names, y_names)
tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms)
datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH)
md = ImageData(PATH, datasets, bs, num_workers=16, classes=None)
denorm = md.trn_ds.denorm
# +
lr=2e-2
wd=1e-7
lrs = np.array([lr/200,lr/20,lr])/2
# -
m_base = get_base()
m = to_gpu(Unet34(m_base))
models = UnetModel(m)
learn = ConvLearner(md, models)
learn.opt_fn=optim.Adam
learn.crit=nn.BCEWithLogitsLoss()
learn.metrics=[accuracy_thresh(0.5),dice]
learn.freeze_to(1)
learn.load('128urn-0')
learn.fit(lr,1,wds=wd, cycle_len=5,use_clr=(5,5))
learn.save('512urn-tmp')
learn.unfreeze()
learn.bn_freeze(True)
learn.load('512urn-tmp')
learn.fit(lrs/2,1,wds=wd, cycle_len=8,use_clr=(20,8))
learn.fit(lrs/2,1,wds=wd, cycle_len=8,use_clr=(20,8))
learn.save('512urn')
learn.load('512urn')
x,y = next(iter(md.val_dl))
py = to_np(learn.model(V(x)))
show_img(py[0]>0);
show_img(y[0]);
m.close()
# ## 1024x1024
sz=1024
bs=4
tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS)
datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH)
md = ImageData(PATH, datasets, bs, num_workers=16, classes=None)
denorm = md.trn_ds.denorm
m_base = get_base()
m = to_gpu(Unet34(m_base))
models = UnetModel(m)
learn = ConvLearner(md, models)
learn.opt_fn=optim.Adam
learn.crit=nn.BCEWithLogitsLoss()
learn.metrics=[accuracy_thresh(0.5),dice]
learn.load('512urn')
learn.freeze_to(1)
learn.fit(lr,1, wds=wd, cycle_len=2,use_clr=(5,4))
learn.save('1024urn-tmp')
learn.load('1024urn-tmp')
learn.unfreeze()
learn.bn_freeze(True)
lrs = np.array([lr/200,lr/30,lr])
learn.fit(lrs/10,1, wds=wd,cycle_len=4,use_clr=(20,8))
learn.fit(lrs/10,1, wds=wd,cycle_len=4,use_clr=(20,8))
learn.sched.plot_loss()
learn.save('1024urn')
learn.load('1024urn')
x,y = next(iter(md.val_dl))
py = to_np(learn.model(V(x)))
show_img(py[0]>0);
show_img(y[0]);
| courses/dl2/carvana-unet-lrg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# 1 - Import libraries
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from keras.models import Sequential
from keras.layers import Conv2D,MaxPooling2D,Flatten,Dense,Activation
import warnings
warnings.filterwarnings('ignore')
# create the network
model = Sequential([
#Create Convolution2d Layer with activation Relu
Conv2D(32,(3,3),input_shape=(64,64,3)),
Activation('relu'),
#Create MaxsPooling2D Function
MaxPooling2D(pool_size=(2,2)),
#Create Convolution 2d Layer again
Conv2D(32,(3,3)),
Activation('relu'),
MaxPooling2D(pool_size=(2,2)),
#Flatten layer !
Flatten(),
#Full connection
Dense(units=128),
Activation('relu'),
Dense(units=1),
Activation('sigmoid'),
])
# For a binary classification problem
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
#We will use ImageDataGenerator to generate batches from the image set.
#Keeping input values small makes the network's computations better conditioned
#and training more stable, so we scale each image by a factor of 1/255 to bring
#all the RGB values into the range [0,1].
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('cat-and-dog/training_set/training_set/',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('cat-and-dog/test_set/test_set/',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
# -
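# The `rescale = 1./255` above simply maps 8-bit pixel values into $[0, 1]$; a quick check on a toy array (plain NumPy, no Keras involved):

```python
import numpy as np

pixels = np.array([0, 127, 255], dtype=np.uint8)  # typical uint8 pixel values
scaled = pixels.astype(np.float32) / 255.0        # what rescale=1./255 does
print(scaled.min(), scaled.max())  # 0.0 1.0
```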
model.summary()
history = model.fit_generator(training_set,
                              steps_per_epoch=8000 // 32,  # 8000 samples / batch size 32
                              epochs=20,
                              validation_data=test_set,
                              validation_steps=50)
# +
# Plot results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# RESULT
print('acc',sum(acc)/len(acc)*100,'%','val_acc',sum(val_acc)/len(val_acc)*100,'%')
# +
import numpy as np
from keras.preprocessing import image
test_image=image.load_img('cat-and-dog/test_set/test_set/dogs/dog.4007.jpg',target_size=(64,64))
test_image=image.img_to_array(test_image)
test_image=np.expand_dims(test_image,axis=0)
result=model.predict_classes(test_image)
if result[0][0] >= 0.5:
    prediction = 'I think image 4007 is a DOG'
else:
    prediction = 'I think image 4007 is a CAT'
print(prediction)
fotodog4007="cat-and-dog/test_set/test_set/dogs/dog.4007.jpg"
plt.imshow(plt.imread(fotodog4007))
# -
# save the trained model for offline prediction purposes
model.save('dogsandcat_.h5')  # writes the file to the working directory
| 12_ Deep-Learning/CNN_Keras_cats_dogs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 2 - Shor's algorithm
# ## Historical background
#
# In computing, we often measure the performance of an algorithm by how it grows with the size of the input problem. For example, addition has an algorithm that grows linearly with the size of the numbers we're adding. There are some computing problems for which the best algorithms we have grow _exponentially_ with the size of the input, and this means inputs with a relatively modest size are too big to solve using any computer on earth. We're so sure of this, much of the internet's security depends on certain problems being unsolvable.
#
# In 1994, <NAME> showed that it’s possible to factor a number into its primes efficiently on a quantum computer.[1] This is big news, as the best classical algorithm we know of is one of these algorithms that grows exponentially. And in fact, [RSA encryption](https://en.wikipedia.org/wiki/RSA_(cryptosystem)) relies on factoring large enough numbers being infeasible. To factor integers that are too big for our current classical computers will require millions of qubits and gates, and these circuits are far too big to run on today’s quantum computers successfully.
#
# So how did <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME> manage to factor 15 on a quantum computer, all the way back in 2001?![2]
#
# The difficulty in creating circuits for Shor’s algorithm is creating the circuit that computes a controlled $ay \bmod N$. While we know how to create these circuits using a polynomial number of gates, these are still too large for today’s computers. Fortunately, if we know some information about the problem a priori, then we can sometimes ‘cheat’ and create more efficient circuits.
#
# To run this circuit on the hardware available to them, the authors of the above paper found a very simple circuit that performed $7y \bmod 15$. This made the circuit small enough to run on their hardware. By the end of this exercise, you will have created a circuit for $13y \bmod 35$ that can be used in Shor’s algorithm and can run on `ibmq_santiago`.
#
# If you want to understand what's going on in this exercise, you should check out the [Qiskit Textbook page on Shor's algorithm](https://qiskit.org/textbook/ch-algorithms/shor.html), but if this is too involved for you, you can complete the exercise without this.
#
# ### References
# 1. Shor, <NAME>. "Algorithms for quantum computation: discrete logarithms and factoring." Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE, 1994.
# 1. Vandersypen, <NAME>, et al. "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance." Nature 414.6866 (2001): 883-887.
# ## tl;dr: Shor’s algorithm
#
# There is an algorithm called [_quantum phase estimation_](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html) that tells us the phase a gate introduces to a certain type of state. For example, inputs to the phase estimation algorithm could be the state $|1\rangle$ and the gate $Z$. If the $Z$-gate acts on the state $|1\rangle$, we get back the same state with an added global phase of $\pi$:
#
# $$
# Z|1\rangle = -|1\rangle = e^{i\pi} |1\rangle
# $$
#
# And the quantum phase estimation algorithm could work this out for us. You can see another example [here](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html#2.-Example:-T-gate-).
#
# Shor showed that if we do phase estimation on a gate, $U$, that has the behavior $U|y\rangle = |a y\bmod N\rangle$, we can quickly get some information about $N$’s factors.
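# The quantum part of the algorithm finds the period $r$ of $a^x \bmod N$; the classical post-processing that turns $r$ into factors can be sketched directly (feasible here only because $N = 35$ is tiny):

```python
from math import gcd

N, a = 35, 13
# Find the period r of a**x mod N by brute force.
r, y = 1, a % N
while y != 1:
    y = (y * a) % N
    r += 1
print(r)  # 4, since 13**4 % 35 == 1

# For even r, gcd(a**(r//2) +/- 1, N) yields non-trivial factors of N.
f1 = gcd(a ** (r // 2) - 1, N)
f2 = gcd(a ** (r // 2) + 1, N)
print(f1, f2)  # 7 5
```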
# ## The problem
#
# In this exercise, we will factor 35 by doing phase estimation on a circuit that implements $13y \bmod 35$. The exercise is to create a circuit that does this, and is also small enough to run on `ibmq_santiago`! This is not an easy task, so the first thing we’re going to do is cheat.
#
# A detail of Shor’s algorithm is that our circuit only needs to work on states we can reach through applying $U$ to the starting state $|1\rangle$. I.e. we can use _any_ circuit that has the behavior:
#
# $$
# \begin{aligned}
# U|1\rangle &= |13\rangle \\
# UU|1\rangle &= |29\rangle \\
# UUU|1\rangle &= |27\rangle \\
# UUUU|1\rangle &= |1\rangle \\
# \end{aligned}
# $$
#
# So how can we make this easier for us? Since we only need to correctly transform 4 different states, we can encode these onto two qubits. For this exercise, we will choose to map the 2-qubit computational basis states to the numbers like so:
#
# $$
# \begin{aligned}
# |1\rangle &\rightarrow |00\rangle \\
# |13\rangle &\rightarrow |01\rangle \\
# |29\rangle &\rightarrow |10\rangle \\
# |27\rangle &\rightarrow |11\rangle \\
# \end{aligned}
# $$
#
# Why is this “cheating”? Well, to take advantage of this optimization, we need to know all the states $U$ is going to affect, which means we have to compute $ay \bmod N$ until we get back to 1 again, and that means we know the period of $a^x \bmod N$ and can therefore get the factors of $N$. Any optimization like this, in which we use information that would tell us the value $r$, is obviously not going to scale to problems that classical computers can’t solve.
#
# But the purpose of this exercise is just to verify that Shor’s algorithm does in fact work as intended, and we’re not going to worry about the fact that we cheated to get a circuit for $U$.
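# The 'cheat' above amounts to pre-computing the orbit of $1$ under $y \mapsto 13y \bmod 35$, which is exactly the four states listed earlier:

```python
N, a = 35, 13
orbit = [1]
while (a * orbit[-1]) % N != 1:
    orbit.append((a * orbit[-1]) % N)
print(orbit)  # [1, 13, 29, 27] -- the only states U needs to handle

# The 2-qubit encoding used in this exercise:
encoding = {1: '00', 13: '01', 29: '10', 27: '11'}
```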
#
# <div id='u-definition'></div>
# <div class="alert alert-block alert-success">
#
# **Exercise 2a:** Create a circuit ($U$) that performs the transformation:
#
# $$
# \begin{aligned}
# U|00\rangle &= |01\rangle \\
# U|01\rangle &= |10\rangle \\
# U|10\rangle &= |11\rangle \\
# U|11\rangle &= |00\rangle \\
# \end{aligned}
# $$
#
# and is controlled by another qubit. The circuit will act on a 2-qubit target register named 'target', and be controlled by another single-qubit register named 'control'. You should assign your finished circuit to the variable '`cu`'.
#
# </div>
# +
from qiskit import QuantumCircuit
from qiskit import QuantumRegister, QuantumCircuit
from math import pi
c = QuantumRegister(1, 'control')
t = QuantumRegister(2, 'target')
cu = QuantumCircuit(c, t, name="Controlled 13^x mod 35")
# WRITE YOUR CODE BETWEEN THESE LINES - START
cu.cx(c[0],t[0])
cu.u(0, 0, pi/2, t[1])
cu.u(pi/2, -pi/2, pi/2, t[1])
cu.u(0, 0, pi/2, t[1])
cu.cx(t[0], t[1])
cu.u(0, 0, -pi/4, t[1])
cu.cx(c[0], t[1])
cu.u(0, 0, pi/4, t[1])
cu.cx(t[0], t[1])
cu.u(0, 0, -pi/4, t[1])
cu.cx(c[0], t[1])
cu.u(0, 0, pi/4,t[0])
cu.u(0, 0, pi/4, t[1])
cu.cx(c[0],t[0])
cu.u(0, 0, pi/2, t[1])
cu.u(0, 0, pi/4, c[0])
cu.u(0, 0, -pi/4,t[0])
cu.u(pi/2, -pi/2, pi/2, t[1])
cu.cx(c[0],t[0])
cu.u(0, 0, pi/2, t[1])
cu.cx(c[0], t[1])
# WRITE YOUR CODE BETWEEN THESE LINES - END
cu.draw('mpl')
# -
# And run the cell below to check your answer:
# Check your answer using following code
from qc_grader import grade_ex2a
grade_ex2a(cu)
# Congratulations! You’ve completed the hard part.
#
# We read the output of the phase estimation algorithm by measuring qubits, so we will need to make sure our 'counting' register contains enough qubits to read off $r$. In our case, $r = 4$, which means we only need $\log_2(4) = 2$ qubits (cheating again because we know $r$ beforehand), but since Santiago has 5 qubits, and we've only used 2 for the 'target' register, we'll use all remaining 3 qubits as our counting register.
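# Once the counting register is measured, the bitstring is interpreted as a phase $s/r$, and the period $r$ is recovered from its denominator. A sketch of that read-out step (the measured value here is hypothetical):

```python
from fractions import Fraction

n_counting = 3        # counting qubits, as above
measured = 2          # hypothetical measurement outcome, i.e. bitstring '010'
phase = measured / 2 ** n_counting   # 2/8 = 0.25
# limit_denominator(35) finds the closest fraction s/r with r <= N = 35
guess_r = Fraction(phase).limit_denominator(35).denominator
print(guess_r)  # 4
```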
#
# To do phase estimation on $U$, we need to create circuits that perform $U^{2^x}$ ($U$ repeated $2^x$ times) for each qubit (with index $x$) in our register of $n$ counting qubits. In our case this means we need three circuits that implement:
#
# $$ U, \; U^2, \; \text{and} \; U^4 $$
#
# So the next step is to create a circuit that performs $U^2$ (i.e. a circuit equivalent to applying $U$ twice).
#
# <div class="alert alert-block alert-success">
#
# **Exercise 2b:** Create a circuit ($U^2$) that performs the transformation:
#
# $$
# \begin{aligned}
# U^2|00\rangle &= |10\rangle \\
# U^2|01\rangle &= |11\rangle \\
# U^2|10\rangle &= |00\rangle \\
# U^2|11\rangle &= |01\rangle \\
# \end{aligned}
# $$
#
# and is controlled by another qubit. The circuit will act on a 2-qubit target register named 'target', and be controlled by another single-qubit register named 'control'. You should assign your finished circuit to the variable '`cu2`'.
# </div>
# +
c = QuantumRegister(1, 'control')
t = QuantumRegister(2, 'target')
cu2 = QuantumCircuit(c, t)
# WRITE YOUR CODE BETWEEN THESE LINES - START
cu2.cx(c[0],t[0])
cu2.u(0, 0, pi/2, t[1])
cu2.u(pi/2, -pi/2, pi/2, t[1])
cu2.u(0, 0, pi/2, t[1])
cu2.cx(t[0], t[1])
cu2.u(0, 0, -pi/4, t[1])
cu2.cx(c[0], t[1])
cu2.u(0, 0, pi/4, t[1])
cu2.cx(t[0], t[1])
cu2.u(0, 0, -pi/4, t[1])
cu2.cx(c[0], t[1])
cu2.u(0, 0, pi/4,t[0])
cu2.u(0, 0, pi/4, t[1])
cu2.cx(c[0],t[0])
cu2.u(0, 0, pi/2, t[1])
cu2.u(0, 0, pi/4, c[0])
cu2.u(0, 0, -pi/4,t[0])
cu2.u(pi/2, -pi/2, pi/2, t[1])
cu2.cx(c[0],t[0])
cu2.u(0, 0, pi/2, t[1])
cu2.cx(c[0], t[1])
cu2.cx(c[0],t[0])
cu2.u(0, 0, pi/2, t[1])
cu2.u(pi/2, -pi/2, pi/2, t[1])
cu2.u(0, 0, pi/2, t[1])
cu2.cx(t[0], t[1])
cu2.u(0, 0, -pi/4, t[1])
cu2.cx(c[0], t[1])
cu2.u(0, 0, pi/4, t[1])
cu2.cx(t[0], t[1])
cu2.u(0, 0, -pi/4, t[1])
cu2.cx(c[0], t[1])
cu2.u(0, 0, pi/4,t[0])
cu2.u(0, 0, pi/4, t[1])
cu2.cx(c[0],t[0])
cu2.u(0, 0, pi/2, t[1])
cu2.u(0, 0, pi/4, c[0])
cu2.u(0, 0, -pi/4,t[0])
cu2.u(pi/2, -pi/2, pi/2, t[1])
cu2.cx(c[0],t[0])
cu2.u(0, 0, pi/2, t[1])
cu2.cx(c[0], t[1])
# WRITE YOUR CODE BETWEEN THESE LINES - END
cu2.draw('mpl')
# -
# And you can check your answer below:
# Check your answer using the following code
from qc_grader import grade_ex2b
grade_ex2b(cu2)
# Finally, we also need a circuit that is equivalent to applying $U$ four times (i.e. we need the circuit $U^4$).
#
# <div class="alert alert-block alert-success">
#
# **Exercise 2c:** Create a circuit ($U^4$) that performs the transformation:
#
# $$
# \begin{aligned}
# U^4|00\rangle &= |00\rangle \\
# U^4|01\rangle &= |01\rangle \\
# U^4|10\rangle &= |10\rangle \\
# U^4|11\rangle &= |11\rangle \\
# \end{aligned}
# $$
#
# and is controlled by another qubit. The circuit will act on a 2-qubit target register named 'target', and be controlled by another single-qubit register named 'control'. You should assign your finished circuit to the variable '`cu4`'. _Hint: The best solution is very simple._
# </div>
# +
c = QuantumRegister(1, 'control')
t = QuantumRegister(2, 'target')
cu4 = QuantumCircuit(c, t)
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
cu4.draw('mpl')
# -
# You can check your answer using the code below:
# Check your answer using the following code
from qc_grader import grade_ex2c
grade_ex2c(cu4)
# <div class="alert alert-block alert-success">
#
# **Exercise 2 final:** Now that we have controlled $U$, $U^2$ and $U^4$ circuits, we can combine them into a circuit that carries out the quantum part of Shor’s algorithm.
#
# The initialization part is easy: we need to put the counting register into the state $|{+}{+}{+}\rangle$ (which we can do with three H-gates) and we need the target register to be in the state $|1\rangle$ (which we mapped to the computational basis state $|00\rangle$, so we don’t need to do anything here). We'll do all this for you.
#
# _Your_ task is to create a circuit that carries out the controlled-$U$s, that will be used in-between the initialization and the inverse quantum Fourier transform. More formally, we want a circuit:
#
#
# $$
# CU_{c_0 t}CU^2_{c_1 t}CU^4_{c_2 t}
# $$
#
# Where $c_0$, $c_1$ and $c_2$ are the three qubits in the ‘counting’ register, $t$ is the ‘target’ register, and $U$ is as <a href="#u-definition">defined in the first part of this exercise</a>. In this notation, $CU_{a b}$ means $CU$ is controlled by $a$ and acts on $b$. An easy solution to this is to simply combine the circuits `cu`, `cu2` and `cu4` that you created above, but you will most likely find a more efficient circuit that has the same behavior!
#
# </div>
# <div class="alert alert-block alert-danger">
#
# Your circuit can only contain [CNOTs](https://qiskit.org/documentation/stubs/qiskit.circuit.library.CXGate.html) and single qubit [U-gates](https://qiskit.org/documentation/stubs/qiskit.circuit.library.UGate.html). Your score will be the number of CNOTs you use (less is better), as multi-qubit gates are usually much more difficult to carry out on hardware than single-qubit gates. If you're struggling with this requirement, we've included a line of code next to the submission that will convert your circuit to this form, although you're likely to do better by hand.
#
# </div>
# Code to combine your previous solutions into your final submission
cqr = QuantumRegister(3, 'control')
tqr = QuantumRegister(2, 'target')
cux = QuantumCircuit(cqr, tqr)
solutions = [cu, cu2, cu4]
for i in range(3):
cux = cux.compose(solutions[i], [cqr[i], tqr[0], tqr[1]])
cux.draw('mpl')
# Check your answer using the following code
from qc_grader import grade_ex2_final
# Uncomment the two lines below if you need to convert your circuit to CNOTs and single-qubit gates
#from qiskit import transpile
#cux = transpile(cux, basis_gates=['cx','u'])
grade_ex2_final(cux)
# Once you're happy with the circuit, you can submit it below:
# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex2_final
submit_ex2_final(cux)
# Congratulations! You've finished the exercise. Read on to see your circuit used to factor 35, and see how it performs.
#
# ## Using your circuit to factorize 35
#
# The code cell below takes your submission for the exercise and uses it to create a circuit that will give us $\tfrac{s}{r}$, where $s$ is a random integer between $0$ and $r-1$, and $r$ is the period of the function $f(x) = 13^x \bmod 35$.
# +
from qiskit.circuit.library import QFT
from qiskit import ClassicalRegister
# Create the circuit object
cr = ClassicalRegister(3)
shor_circuit = QuantumCircuit(cqr, tqr, cr)
# Initialise the qubits
shor_circuit.h(cqr)
# Add your circuit
shor_circuit = shor_circuit.compose(cux)
# Perform the inverse QFT and extract the output
shor_circuit.append(QFT(3, inverse=True), cqr)
shor_circuit.measure(cqr, cr)
shor_circuit.draw('mpl')
# -
# Let's transpile this circuit and see how large it is, and how many CNOTs it uses:
from qiskit import Aer, transpile
from qiskit.visualization import plot_histogram
qasm_sim = Aer.get_backend('aer_simulator')
tqc = transpile(shor_circuit, basis_gates=['u', 'cx'], optimization_level=3)
print(f"circuit depth: {tqc.depth()}")
print(f"circuit contains {tqc.count_ops()['cx']} CNOTs")
# And let's see what we get:
counts = qasm_sim.run(tqc).result().get_counts()
plot_histogram(counts)
# Assuming everything has worked correctly, we should see equal probability of measuring the numbers $0$, $2$, $4$ and $6$. This is because phase estimation gives us $2^n \cdot \tfrac{s}{r}$, where $n$ is the number of qubits in our counting register (here $n = 3$), $s$ is a random integer between $0$ and $r-1$, and $r$ is the number we're trying to calculate. Let's convert these to fractions that tell us $s/r$ (this is something we can easily calculate classically):
from fractions import Fraction
n = 3 # n is number of qubits in our 'counting' register
# Cycle through each measurement string
for measurement in counts.keys():
# Convert the binary string to an 'int', and divide by 2^n
decimal = int(measurement, 2)/2**n
# Use the continued fractions algorithm to convert to form a/b
print(Fraction(decimal).limit_denominator())
# We can see the denominator of some of the results will tell us the correct answer $r = 4$. We can verify $r=4$ quickly:
13**4 % 35
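# Note that `13**4 % 35 == 1` only confirms that $4$ is *a* period of $f$; a quick brute-force search (a classical sketch, feasible only because $N=35$ is tiny — Shor's algorithm matters precisely because this search becomes intractable for large $N$) confirms it is the *smallest* such exponent:

```python
# Brute-force period finding: smallest r > 0 with a**r % N == 1
def find_period(a, N):
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

print(find_period(13, 35))  # 4
```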
# So how do we get the factors from this? There is a high probability that the greatest common divisor of $N$ and either $a^{r/2}-1$ or $a^{r/2}+1$ is a factor of $N$, and the greatest common divisor is something we can easily calculate classically.
from math import gcd # Greatest common divisor
for x in [-1, 1]:
print(f"Guessed factor: {gcd(13**(4//2)+x, 35)}")
# We only need to find one factor, and can use it to divide $N$ to find the other factor. But in this case, _both_ $a^{r/2}-1$ _and_ $a^{r/2}+1$ give us $35$'s factors. We can again verify this is correct:
7*5
# ## Running on `ibmq_santiago`
#
# We promised this would run on Santiago, so here we will show you how to do that. In this example we will use a simulated Santiago device for convenience, but you can switch this out for the real device if you want:
# +
from qiskit.test.mock import FakeSantiago
from qiskit import assemble
from qiskit.visualization import plot_histogram
santiago = FakeSantiago()
real_device = False
## This code block runs on the real device; comment it out (leaving real_device = False) to use the simulated backend instead
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
santiago = provider.get_backend('ibmq_santiago')
real_device = True
# We need to transpile for Santiago
tqc = transpile(shor_circuit, santiago, optimization_level=3)
if not real_device:
tqc = assemble(tqc)
# Run the circuit and print the counts
counts = santiago.run(tqc).result().get_counts()
plot_histogram(counts)
# -
# If your score was low enough, you should see we have a high probability of measuring $0$, $2$, $4$ or $6$ as we saw with the perfect simulation. You will see some extra results due to inaccuracies in the processor and unwanted things interacting with our qubits. This 'noise' gets worse the longer our circuit is, as longer computation time means more time for unwanted interactions, and more gates means more potential errors. This is why we needed to cheat to create the smallest circuit possible.
#
# In the near future, our quantum systems will improve enough that we can start using more advanced error mitigation techniques to overcome these problems, which will mean we can run large enough circuits that we can [perform Shor's algorithm without cheating](https://arxiv.org/pdf/quant-ph/0205095.pdf).
# ## Additional information
#
# **Created by:** <NAME>
#
# **Version:** 1.0.0
| solutions by participants/ex2/ex2-GiacomoDiStasio-24cnot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import packages
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
# %matplotlib inline
# +
# load data
df = pd.read_csv('county_data.csv')
# +
# inspect data
df.head()
# -
df.tail(10)
# +
# check for Missing Values
df.isnull().sum()
# +
# explore data type
df.dtypes
# -
# Since we are only interested in the PNW, let's focus on that region
pnw = df[df['climate_region'] == 'Northwest']
pnw.head()
pnw['county'].unique()
ax = pnw.plot.bar(y='wh', rot=0)
# +
#df.sort_values(by='col1', ascending=False)
pnw.sort_values(by='wh',ascending = False)
# -
pnwdesc = pnw.sort_values(by='wh',ascending = False)
fig = pnwdesc.plot(kind='bar',x='county',y='wh')
| WK1_EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Install ship_mapper
# ----------------------------------
# First, activate the `Tutorials` environment
# In console...
#
# `conda install -c conda-forge basemap basemap-data-hires`
# In console...
#
# `conda install -c anaconda netcdf4`
# ------------------
# In console...
#
# `pip install git+https://github.com/Diego-Ibarra/ship_mapper`
| tutorials/Install_ship_mapper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:myenv]
# language: python
# name: conda-env-myenv-py
# ---
from Sapphire.Utilities.Building import nanoico #Let's build a small example nanocluster
nanoico.Nanoalloy.create_alloy_ico(['Au', 'Au', 'Au', 'Au', 'Au'], 6, 4.07, 'Au561.xyz')
# Using the building tools in Sapphire, we are able to construct a small Au icosahedron of 561 atoms.
# We may now proceed to perform some targeted analysis on this object.
# +
from Sapphire.Post_Process import Adjacent,Kernels,DistFuncs #Some basic tools for geometric analysis
from ase.io import read #Whilst this is a part of the core Sapphire setup, we can demonstrate its utility more easily here
au = read('Au561.xyz')
pos = au.positions
dist = DistFuncs.Euc_Dist(pos) #Get a list of pairwise distances.
#Now why don't we visualise the pair distances as a distribution?
import matplotlib.pyplot as plt
a,b,c = plt.hist(dist, density=True, bins = 200)
# -
# This does not look particularly helpful, and so we shall make use of Sapphire's Kernel Density Estimators (KDEs) to get a smoother approximation on the pair distance distribution function.
#
# We primarily advocate for the use of the Gaussian kernels, though others are supported.
# A bandwidth of 0.05 should be adequate to acquire a good balance between smoothness and detail in the resultant distribution.
#
# As a default, Sapphire only considers the first 6 Å of pair distances for the sake of speed when computing the full KDE. However, this is something which is easily varied, as explored in the PDDF tutorial.
K = Kernels.Gauss(dist, 0.05)
fig,ax = plt.subplots()
ax.plot(K.Space, K.Density, color = 'k', label = 'Gaussian KDE')
ax1 = ax.twinx()
a,b,c = ax1.hist(dist, density=True, bins = 200, color = 'r', label = 'Raw Histogram')
plt.xlim(0,6)
ax.set_xlabel('Pair distance (Å)')
ax.set_ylabel('Freq arb. un.')
fig.legend()
# As we can see, there is a distinct advantage in using the KDE method to compute the pair distance distribution function over the raw histogram. Namely, we are able to extract information more easily, and since the KDE is an analytic function, taking derivatives to find minima is simple!
#
#
# Next, we compute the adjacency matrix which is stored as a sparse scipy matrix to save on memory overheads. This simply requires the atomic positions, the pair distances we computed earlier, and the first minimum of the pair distance distribution function.
A = Adjacent.Adjacency_Matrix(Positions=pos, Distances=dist, R_Cut=K.R_Cut)
adj = A.ReturnAdj()
# We shall introduce the CNA function by running through the main workflow explicitly before hiding much of the machinery behind the curtain of Python.
from Sapphire.CNA.FrameSignature import CNA
Sigs = {} #We shall not presuppose the existence of any CNA signatures.
for i, atom in enumerate(au): #Iterate over all atoms in the cluster
cna = CNA(adj = adj)
    cna.NN(i) #Acquire the nearest neighbours to the reference atom being considered
for neigh in cna.neigh:
sig = tuple((cna.R(i,neigh), cna.S(), cna.T()))
try:
Sigs[sig]+=1
except KeyError:
print(sig)
Sigs[sig] = 1
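# As an aside, the $(r, s, t)$ bookkeeping behind `cna.R`, `cna.S` and `cna.T` — $r$ common neighbours of the bonded pair, $s$ bonds among them, $t$ the longest chain of those bonds — can be illustrated on a small hand-built adjacency, independent of Sapphire (the graph below is a purely hypothetical example, not Sapphire's implementation):

```python
from itertools import combinations

# Hypothetical adjacency as a dict of neighbour sets. The pair (0, 1)
# shares four neighbours {2, 3, 4, 5} with bonds 2-3 and 4-5 among them.
adj = {
    0: {1, 2, 3, 4, 5},
    1: {0, 2, 3, 4, 5},
    2: {0, 1, 3},
    3: {0, 1, 2},
    4: {0, 1, 5},
    5: {0, 1, 4},
}

def cna_signature(adj, a, b):
    common = adj[a] & adj[b]                                      # r
    bonds = [(i, j) for i, j in combinations(common, 2) if j in adj[i]]  # s
    # t: longest simple chain of bonds through the shared-neighbour subgraph
    def longest(node, visited):
        best = 0
        for i, j in bonds:
            for u, v in ((i, j), (j, i)):
                if u == node and v not in visited:
                    best = max(best, 1 + longest(v, visited | {v}))
        return best
    t = max((longest(n, {n}) for n in common), default=0)
    return (len(common), len(bonds), t)

print(cna_signature(adj, 0, 1))  # (4, 2, 1)
```

# The (4, 2, 1) result is combinatorially the FCC-like 421 signature.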
# As we can see, there are 5 CNA signatures associated with the Ih structure which have been identified. This is indeed as it should be.
#
# - 555: indicative of a 5-fold symmetry axis.
# - 422 & 421: indicative of an FCC environment.
# - 322 & 321: generally representative of surfaces with (111) Miller indices. This shall be expanded on later in the tutorial.
#
# With our signatures collected, we may now evaluate their distribution.
# +
import numpy as np
xaxis = np.arange(0,len(Sigs)) #Define the range for the x axis
yaxis = np.array([x for x in Sigs.values()]) #Set the y axis values from the dictionary
xlabels = [ str(x) for x in Sigs.keys() ] #Create a list of labels for the bar chart
plt.bar(xaxis,yaxis)
plt.xticks(xaxis,xlabels, rotation = 45)
plt.xlabel('CNA signature')
plt.ylabel('Counts')
# -
# Now that we have a good feeling for the shape of this distribution, we can begin to evaluate what the bonding environments themselves actually look like.
#
# As such, we provide below two example functions to extract the positional information, given the cna signature, for the atoms and their evaluated bonds.
#
# We may sort the bonds into 3 lists representing the following quantities:
#
# 1. pair_edge = The bonding pair in question.
# 2. s_edge = All of the bonds shared between the reference pair and their shared neighbours.
# 3. t_edge = All of the bonds shared by the shared neighbours only.
#
# We sort these three lists manually so that we may have an easier time distinguishing environments.
#
# We shall then extract five pairs, each of which have a unique cna signature, and plot graphically their bonding environments.
# +
def plotting_tool(cna=None, strut = None, reference = None, friend = None):
node_xyz = np.zeros((2+len(cna.bonds), 3))
node_xyz[0] = strut[int(reference)].position
node_xyz[1] = strut[int(friend)].position
for i,node in enumerate(cna.bonds):
node_xyz[i+2] = strut[int(node)].position
pair_edge = np.zeros((2,3))
s_edge = np.zeros((2*len(cna.bonds),2,3))
t_edge = np.zeros((len(cna.perm),2,3))
pair_edge[0] = node_xyz[0]
pair_edge[1] = node_xyz[1]
for i, bond in enumerate(cna.bonds):
s_edge[2*i][0] = au[int(reference)].position
s_edge[2*i+1][0] = au[int(friend)].position
s_edge[2*i][1] = au[int(bond)].position
s_edge[2*i+1][1] = au[int(bond)].position
for i, bond in enumerate(cna.perm):
t_edge[i][0] = au[int(bond[0])].position
t_edge[i][1] = au[int(bond[1])].position
return node_xyz, pair_edge, s_edge, t_edge
def sig_plot(nodes, pair_edge, s_edges, t_edges, angle_A = 30, angle_B = 210):
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
# Plot the nodes - alpha is scaled by "depth" automatically
options = {"edgecolors": "tab:gray", "node_size": 200, "alpha": 0.9}
for i in [0,1]:
ax.scatter(*nodes[i].T, s=400, ec="w", color = 'r')
for i, node in enumerate(nodes):
if i >1:
ax.scatter(*nodes[i].T, s=400, ec="w", color = 'k')
ax.plot(*pair_edge.T, color="r", linewidth = 10)
for vizedge in s_edges:
ax.plot(*vizedge.T, color="k")
for vizedge in t_edges:
ax.plot(*vizedge.T, color="g")
def _format_axes(ax):
"""Visualization options for the 3D axes."""
# Turn gridlines off
ax.grid(False)
# Suppress tick labels
# Set axes labels
ax.set_xlabel("x (Å)")
ax.set_ylabel("y (Å)")
ax.set_zlabel("z (Å)")
ax.view_init(angle_A, angle_B)
_format_axes(ax)
#fig.tight_layout()
plt.show()
# -
cna = CNA(adj = adj)
cna_555 = cna
cna_555.NN(0)
print(cna_555.neigh)
cna_555.R(0,1)
cna_555.S()
cna_555.T()
a,b,c,d = plotting_tool(cna_555,au,0,1)
sig_plot(a,b,c,d)
cna = CNA(adj = adj)
cna_422= cna
cna_422.NN(10)
print(cna_422.neigh)
r = cna_422.R(10,26)
s = cna_422.S()
t = cna_422.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_422,au,10,26)
sig_plot(a,b,c,d, 15, 180)
cna = CNA(adj = adj)
cna_421 = cna
cna_421.NN(41)
print(cna_421.neigh)
r = cna_421.R(41,51)
s = cna_421.S()
t = cna_421.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_421,au,0,1)
sig_plot(a,b,c,d,45,0)
cna = CNA(adj = adj)
cna_322 = cna
cna_322.NN(500)
print(cna_322.neigh)
r = cna_322.R(500,501)
s = cna_322.S()
t = cna_322.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_322,au,0,1)
sig_plot(a,b,c,d,135,135)
cna = CNA(adj = adj, Fingerprint = True)
cna_311 = cna
cna_311.NN(560)
print(cna_311.neigh)
r = cna_311.R(560,558)
s = cna_311.S()
t = cna_311.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_311,au,560,558)
sig_plot(a,b,c,d, 60, 90)
cna = CNA(adj = adj, Fingerprint = True)
cna.calculate()
pattern_dict = list(set(cna.Fingerprint))
y_axis = np.zeros(len(pattern_dict), int)
x_axis = np.arange(0, len(pattern_dict))
xlabels = [ str(x) for x in pattern_dict ] #Create a list of labels for the bar chart
for pat in cna.Fingerprint:
y_axis[pattern_dict.index(pat)] += 1
plt.barh(x_axis,y_axis)
len(xlabels)
plt.yticks(x_axis,xlabels, rotation = 0)
plt.xlabel('Counts')
plt.ylabel('Pattern')
cna = CNA(adj = adj, Fingerprint = True)
cna.calculate()
cna.Fingerprint
fig = plt.figure()
gs = fig.add_gridspec(1,2, hspace=0, wspace = 0)
axs = gs.subplots(sharex=True, sharey=True)
ax1,ax2 = axs.flatten()
cna = CNA(adj = adj, Fingerprint = True)
cna.NN(560)
r = cna_311.R(560,558)
s = cna_311.S()
t = cna_311.T()
print(tuple((r,s,t)))
a,b,c,d = plotting_tool(cna_311,au,560,558)
ax1 = sig_plot(a,b,c,d, 60, 90)
cna = CNA(adj = adj, Fingerprint = True)
cna.NN(560)
for nn in cna.neigh:
print(nn)
r = cna.R(560,nn)
s = cna.S()
t = cna.T()
print(tuple((r,s,t)))
    #a,b,c,d = plotting_tool(cna,au,560,558)
    #sig_plot(a,b,c,d, 60, 90)
| main/Sapphire/Tutorials/CNA/Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="uAVuwjwIvgAH" executionInfo={"status": "ok", "timestamp": 1625794228641, "user_tz": -330, "elapsed": 2039, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}}
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] id="2b0es7VLvgAM"
# Remember that in week 1 we had generated open-loop commands for a set of manoeuvres such as
# $[("straight", 5), ("right", 90), ("straight", 6), ("left", 90)]$
#
# Let us do repeat, but with a change. Instead of left/ right, simply use turn and a signed angle.
# $[("straight", 5), ("turn", -90), ("straight", 6), ("turn", 90)]$
#
# You can use cubic_spiral() from previous notebook
# + id="bhkPFx35vgAO" executionInfo={"status": "ok", "timestamp": 1625794228759, "user_tz": -330, "elapsed": 96, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}}
v = 1
dt = 0.1
num_st_pts = int(v/dt)
num_pts = 50
def cubic_spiral(theta_i, theta_f, n=10):
x = np.linspace(0, 1, num=n)
#-2*x**3 + 3*x**2
return (theta_f-theta_i)*(-2*x**3 + 3*x**2) + theta_i
def straight(dist, curr_pose, n=num_st_pts):
# the straight-line may be along x or y axis
x0, y0, t0 = curr_pose
xf, yf = x0 + dist*np.cos(t0), y0 + dist*np.sin(t0)
x = (xf - x0) * np.linspace(0, 1, n) + x0
y = (yf - y0) * np.linspace(0, 1, n) + y0
return x, y, t0*np.ones_like(x)
def turn(change, curr_pose, n=num_pts):
# adjust scaling constant for desired turn radius
x0, y0, t0 = curr_pose
theta = cubic_spiral(t0, t0 + np.deg2rad(change), n)
x= x0 + np.cumsum(v*np.cos(theta)*dt)
y= y0 + np.cumsum(v*np.sin(theta)*dt)
return x, y, theta
def generate_trajectory(route, init_pose = (0, 0,np.pi/2)):
curr_pose = init_pose
func = {'straight': straight, 'turn': turn}
x, y, t = np.array([]), np.array([]),np.array([])
for manoeuvre, command in route:
px, py, pt = func[manoeuvre](command, curr_pose)
curr_pose = px[-1],py[-1],pt[-1] # New current pose
x = np.concatenate([x, px])
y = np.concatenate([y, py])
t = np.concatenate([t, pt])
return x, y, t
# + [markdown] id="re-EN4-IvgAP"
# ### Plot the trajectory
# plot the trajectory and the change in orientation in separate plots
# + colab={"base_uri": "https://localhost:8080/", "height": 370} id="jnhR2RXmvgAP" executionInfo={"status": "ok", "timestamp": 1625794229628, "user_tz": -330, "elapsed": 855, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="5175d14a-39d2-4155-9c8c-741c25059b6f"
route = [
("straight", 5),
("turn", -90),
("straight", 6),
("turn", 90)
]
x, y, th = generate_trajectory(route)
plt.figure(figsize=(12, 5), dpi=80)
plt.subplot(1,2,1)
plt.axis('equal')
plt.title("XY plot")
plt.plot(x, y)
plt.grid()
plt.subplot(1,2,2)
plt.title("Theta")
plt.plot(th)
plt.grid()
# + [markdown] id="th3IYgg-vgAP"
# ## Convert
#
# A* or Dijkstra gives a sequence of $\{(x_i, y_i)\}$. We need to convert it to a sequence of {"straight", "turn"} commands if we are to use generate_trajectory()
#
# Let us look at a simple method. Assume that the successive line segments are orthogonal (reasonable in the grid world). If we find the corner point, we can demarcate.
#
# For 3 consecutive points $(x_1,y_1), (x_2, y_2), (x_3, y_3)$, if
# $(x_1 - x_2)(y_3-y_2) - (x_3-x_2)(y_2-y_1) \neq 0$, then $(x_2, y_2)$ is a corner point. This is because the slope $\frac{\Delta Y}{\Delta X}$ has changed.
#
# Think about what is happening if
#
# 1. $(x_1 - x_2)(y_3-y_2) - (x_3-x_2)(y_2-y_1) > 0$
#
# 2. $(x_1 - x_2)(y_3-y_2) - (x_3-x_2)(y_2-y_1) < 0$
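# A quick standalone check of this corner test, using the expression above verbatim (the sample points are hypothetical; for these axis-aligned segments a negative value corresponds to a right turn and a positive value to a left turn):

```python
def corner_value(p1, p2, p3):
    """(x1 - x2)(y3 - y2) - (x3 - x2)(y2 - y1); non-zero means p2 is a corner."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x1 - x2) * (y3 - y2) - (x3 - x2) * (y2 - y1)

print(corner_value((0, 0), (0, 1), (0, 2)))   # 0: collinear, no corner
print(corner_value((0, 0), (0, 1), (1, 1)))   # -1: corner, right turn
print(corner_value((0, 0), (0, 1), (-1, 1)))  # 1: corner, left turn
```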
# + colab={"base_uri": "https://localhost:8080/", "height": 671} id="q4MEijvIvgAQ" executionInfo={"status": "ok", "timestamp": 1625794231161, "user_tz": -330, "elapsed": 1493, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="9ac17aa9-3bb4-4b07-d583-c03bf1ae8f92"
# here is a code to generate 2 orthogonal
# line segments of lengths 6
s1, s2 = 6, 6
y1 = list(range(s1))
x1 = [0]*s1
x2 = list(range(s2))
y2 = [y1[-1]]*s2
x, y = x1[:-1]+x2, y1[:-1]+y2
plt.figure()
plt.title("Path")
plt.plot(x, y)
plt.grid()
#find the corner point and plot it
interest_points = [(x[0], y[0])] # Interest points (corners + start and stop)
for i in range(1, len(x)-1, 1):
x1, y1 = x[i-1], y[i-1]
x2, y2 = x[i], y[i]
x3, y3 = x[i+1], y[i+1]
ang12 = np.arctan2(y2-y1, x2-x1)
ang23 = np.arctan2(y3-y2, x3-x2)
ang_rel = ang23 - ang12
if ang_rel != 0:
interest_points.append((x2, y2))
print(f"Angle {np.rad2deg(ang_rel):.2f} at {(x2, y2)}")
interest_points.append((x[-1], y[-1]))
# Fix a turn radius r
# Shorten the straight segments by r
# convert this into {("straight", s1), ("turn", +/- 90), ("straight", s2)}
turn_radius = 1
path = []
def dist(p1, p2):
x1, y1 = p1
x2, y2 = p2
return ((x2-x1)**2+(y2-y1)**2)**(0.5)
# Start the main thing
turn_radius = 3 * turn_radius
path.append(("straight", dist(interest_points[0], interest_points[1])
- (0 if len(interest_points) == 2 else turn_radius)))
for i in range(1, len(interest_points)-1):
x1, y1 = interest_points[i-1]
x2, y2 = interest_points[i]
x3, y3 = interest_points[i+1]
ang = np.arctan2((y3-y2), (x3-x2)) - np.arctan2((y2-y1), (x2-x1))
path.append(("turn", np.rad2deg(ang))) # Add the turn
path.append(("straight", dist((x2, y2), (x3, y3)) - turn_radius
- (0 if i+1 == len(interest_points)-1 else turn_radius)))
print(f"Path is {path}")
# use generate_trajectory() and plot the smooth path
x, y, th = generate_trajectory(path)
plt.figure(figsize=(12, 5), dpi=80)
plt.subplot(1,2,1)
plt.axis('equal')
plt.title("XY plot")
plt.plot(x, y)
plt.grid()
plt.subplot(1,2,2)
plt.title("Theta")
plt.plot(th)
plt.grid()
# + [markdown] id="Ai-EW-gF2G24"
# Saving the path as a `.npy` file
# + id="dTmEl6EF2P3X" executionInfo={"status": "ok", "timestamp": 1625794231190, "user_tz": -330, "elapsed": 11, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}}
save_path = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1), th.reshape(-1, 1)))
np.save("./data/srs_path.npy", save_path) # Save the path
# + [markdown] id="taT38XfcvgAU"
# # More complex example
# Borrow the Grid world code from week 2 notebook. Get the A* path and smoothen it using the routine from above
# + colab={"base_uri": "https://localhost:8080/"} id="gysa8Fp8iSFo" executionInfo={"status": "ok", "timestamp": 1625794231374, "user_tz": -330, "elapsed": 152, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="4ad3ff50-d9ed-470b-aa54-f9a106cd6180"
# !tree
# + [markdown] id="BN3Ba6gti5rA"
# Import important libraries
# + id="7HJnSABEi8Nr" executionInfo={"status": "ok", "timestamp": 1625794232725, "user_tz": -330, "elapsed": 1337, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}}
import networkx as nx
# + [markdown] id="w8yGMaw0jC-m"
# Load the grid
# + id="4s1VvNw-vgAV" colab={"base_uri": "https://localhost:8080/", "height": 718} executionInfo={"status": "ok", "timestamp": 1625794233447, "user_tz": -330, "elapsed": 669, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="54f0096e-1863-481d-cdcc-43e97728bd0f"
# Load grid
grid = np.load("./data/astar_grid.npy")
print(f"Loaded grid of shape {grid.shape}")
# you can define your own start/ end
start = (0, 0)
goal = (0, 19)
# visualize the start/ end and the robot's environment
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(grid, cmap=plt.cm.Dark2)
ax.scatter(start[1],start[0], marker = "+", color = "yellow", s = 200)
ax.scatter(goal[1],goal[0], marker = "+", color = "red", s = 200)
plt.show()
# + [markdown] id="j7Y_38DPjiSC"
# Remove nodes that are occupied
# + colab={"base_uri": "https://localhost:8080/", "height": 356} id="mQm0puYujGag" executionInfo={"status": "ok", "timestamp": 1625794234074, "user_tz": -330, "elapsed": 574, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="1ddb8e57-3f9c-4740-a325-58466da8155e"
#initialize graph
grid_size = grid.shape
G = nx.grid_2d_graph(*grid_size)
# G.nodes -> (0,0), (0,1), ... (19, 18), (19, 19)
num_nodes = 0 # counter to keep track of deleted nodes
#nested loop to remove nodes that are not connected
#free cell => grid[i, j] = 0
#occupied cell => grid[i, j] = 1
for i in range(grid_size[0]):
for j in range(grid_size[1]):
if grid[i, j] == 1: # If occupied
G.remove_node((i, j))
num_nodes += 1
print(f"Removed {num_nodes} nodes")
print(f"Number of occupied cells in grid {np.sum(grid)}")
pos = {(x,y):(y,-x) for x,y in G.nodes()} # Converting axis
nx.draw(G, pos=pos, node_color='green', node_size=100)
# + [markdown] id="7JoyKBnPjpQ3"
# Create an A* path
# + colab={"base_uri": "https://localhost:8080/", "height": 738} id="46XWEEtBjo2m" executionInfo={"status": "ok", "timestamp": 1625794235244, "user_tz": -330, "elapsed": 1108, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="b51c3c2c-7db8-4da9-aeb1-9c6901a25f35"
def euclidean(node1, node2):
x1, y1 = node1
x2, y2 = node2
return ((x1-x2)**2 + (y1-y2)**2)**0.5
nx.set_edge_attributes(G, {e: 1 for e in G.edges()}, "cost") # All edges have cost = 1
weight = 1.0 # Weight for heuristic
astar_path = nx.astar_path(G, start, goal,
heuristic=lambda n1, n2: weight * euclidean(n1, n2),
weight="cost")
print(astar_path)
# Visualize the path
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(grid, cmap=plt.cm.Dark2)
ax.scatter(start[1],start[0], marker = "+", color = "yellow", s = 200)
ax.scatter(goal[1],goal[0], marker = "+", color = "red", s = 200)
for s in astar_path[1:]:
ax.plot(s[1], s[0],'r+')
# + [markdown] id="2mlzVeZYkE7G"
# Get the path
# + id="SOPcdiYElGXD" executionInfo={"status": "ok", "timestamp": 1625794235267, "user_tz": -330, "elapsed": 18, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}}
a = np.array(astar_path)
y, x = -a[:, 0], a[:, 1]
#find the corner point and plot it
interest_points = [(x[0], y[0])] # Interest points (corners + start and stop)
for i in range(1, len(x)-1, 1):
x1, y1 = x[i-1], y[i-1]
x2, y2 = x[i], y[i]
x3, y3 = x[i+1], y[i+1]
ang12 = np.arctan2(y2-y1, x2-x1)
ang23 = np.arctan2(y3-y2, x3-x2)
ang_rel = ang23 - ang12
if ang_rel != 0:
if ang_rel == 3*np.pi/2:
ang_rel = -np.pi/2
if ang_rel == -3*np.pi/2:
ang_rel = np.pi/2
interest_points.append((x2, y2))
# print(f"Angle {np.rad2deg(ang_rel):.2f} at {(x2, y2)}")
interest_points.append((x[-1], y[-1]))
# Fix a turn radius r
# Shorten the straight segments by r
# convert this into {("straight", s1), ("turn", +/- 90), ("straight", s2)}
turn_radius = 1/3
path = []
def dist(p1, p2):
x1, y1 = p1
x2, y2 = p2
return ((x2-x1)**2+(y2-y1)**2)**(0.5)
# Start the main thing
path.append(("straight", dist(interest_points[0], interest_points[1])
- (0 if len(interest_points) == 2 else turn_radius)))
for i in range(1, len(interest_points)-1):
x1, y1 = interest_points[i-1]
x2, y2 = interest_points[i]
x3, y3 = interest_points[i+1]
ang = np.arctan2((y3-y2), (x3-x2)) - np.arctan2((y2-y1), (x2-x1))
if ang == 3 * np.pi/2:
ang = -np.pi/2
elif ang == -3*np.pi/2:
ang = np.pi/2
path.append(("turn", np.rad2deg(ang))) # Add the turn
path.append(("straight", dist((x2, y2), (x3, y3)) - turn_radius
- (0 if i+1 == len(interest_points)-1 else turn_radius)))
xi, yi = x, y # Backup
v = 1
dt = 0.01
num_st_pts = int(v/dt)
num_pts = 50
# Generate the path
x, y, th = generate_trajectory(path, (0, 0, 0))
# + colab={"base_uri": "https://localhost:8080/", "height": 793} id="JdkIWzA7m8rd" executionInfo={"status": "ok", "timestamp": 1625794236275, "user_tz": -330, "elapsed": 1003, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}} outputId="d66d2354-4c1b-4d74-f0a1-54c2c2b2b71a"
# Visualize the path first
plt.figure(figsize=(12, 12), dpi=80)
plt.title("Path")
plt.plot(xi, yi, label="actual")
plt.plot(x, y, label="smooth")
plt.legend()
plt.grid()
# + [markdown] id="biZObANsvgAV"
# This approach to path planning, where 90 deg turns are interleaved with straight segments, works well in structured environments.
#
# In the general case, where $A^*$/ $RRT^*$ path is a sequence of piecewise linear segments, we will perform a path optimization routine directly.
# + [markdown] id="9fRQo6TOvgAV"
# There are 3 more advanced manoeuvres that you may need:
#
# 1. Lane-change: the robot has to move laterally without changing its orientation
#
# 2. In-place: the robot has to turn about its own axis
#
# 3. Reverse: straights or turns driven in reverse
#
# A lane-change can be composed from two cubic spirals (90 to 0 and 0 to 90). In-place and reverse are situational constructs.
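# As a rough numerical check of the lane-change idea (a sketch assuming circular arcs rather than the cubic spirals mentioned above; `roll_out` is a name introduced here): chaining a +90 deg and a -90 deg arc of radius r leaves the heading unchanged and shifts the robot laterally by 2r, while also advancing 2r.

```python
import math

def roll_out(arcs, r, n=10_000):
    """Integrate a unicycle along ("turn", deg) circular arcs of radius r."""
    x = y = th = 0.0
    for _kind, deg in arcs:
        ang = math.radians(deg)
        ds = r * abs(ang) / n   # arc-length step
        dth = ang / n           # heading step
        for _ in range(n):
            x += math.cos(th) * ds
            y += math.sin(th) * ds
            th += dth
    return x, y, th

r = 1 / 3
x, y, th = roll_out([("turn", 90), ("turn", -90)], r)  # a lane-change
```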
# + id="Ujw-WlgpvgAW" executionInfo={"status": "ok", "timestamp": 1625794236308, "user_tz": -330, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjisMWV7H4tZokgrGXnFa0Vc5YzZAwcUwPjeEk=s64", "userId": "03029354539657630877"}}
| week3/avneesh/Q3 - 2/Attempt1_filesubmission_Avneesh Mishra - Smooth Planning Spirals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p36)
# language: python
# name: conda_tensorflow_p36
# ---
# +
import warnings
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import sys, os, time, gc, random, pickle
from sklearn.neighbors import NearestNeighbors
import requests, shutil
import tensorflow as tf
import keras
from keras import backend as K
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Sequential, load_model, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda, MaxPooling2D
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, ReduceLROnPlateau
from keras.applications.vgg16 import VGG16
# %matplotlib inline
# -
# # Load Data
# +
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_df.shape)
print('Validation:\t', val_df.shape)
print('Test:\t\t', test_df.shape)
print('\nTrain Landmarks:\t', len(train_df['landmark_id'].unique()))
print('Validation Landmarks:\t', len(val_df['landmark_id'].unique()))
print('Test Landmarks:\t\t', len(test_df['landmark_id'].unique()))
# -
train_df.head()
# # Helper Functions
# training set triplet generator
def train_triplet_generator(df, batch_size=100, img_size=(224, 224), seed=42,
prefix='./data/triplet/train/'):
""" training set triplet generator
it will generate 7400 triplet images in total
"""
# get images with only one training image landmark id and the rest landmark ids
np.random.seed(seed)
grouped = df[['landmark_id', 'image_id']].groupby('landmark_id').count().reset_index()
unique_neg_ids = list(grouped[grouped['image_id'] == 1]['landmark_id'].values)
rest_ids = list(grouped[grouped['image_id'] > 1]['landmark_id'].values)
size = 7400 * 2 - len(unique_neg_ids)
zeros = np.zeros((batch_size, 3, 1), dtype=K.floatx())
while True:
# get positive and negative image landmark ids
np.random.shuffle(rest_ids)
candidate_ids = list(np.random.choice(rest_ids, size=size, replace=False))
pos_landmark_ids = candidate_ids[:7400]
neg_landmark_ids = candidate_ids[7400:] + unique_neg_ids
np.random.shuffle(neg_landmark_ids)
# transform landmark id into image id
anc_img_ids = []
pos_img_ids = []
neg_img_ids = []
for i in range(len(pos_landmark_ids)):
tmp_pos_ids = df[df['landmark_id'] == pos_landmark_ids[i]]['image_id'].values
anc_img_ids.append(tmp_pos_ids[0])
pos_img_ids.append(tmp_pos_ids[1])
tmp_neg_ids = df[df['landmark_id'] == neg_landmark_ids[i]]['image_id'].values
neg_img_ids.append(tmp_neg_ids[0])
# iterator to read batch images
for j in range(len(pos_img_ids) // batch_size):
batch_anc_img_ids = anc_img_ids[j * batch_size: (j + 1) * batch_size]
batch_pos_img_ids = pos_img_ids[j * batch_size: (j + 1) * batch_size]
batch_neg_img_ids = neg_img_ids[j * batch_size: (j + 1) * batch_size]
# get images
anc_imgs = []
pos_imgs = []
neg_imgs = []
# iteratively read images
for k in range(batch_size):
anc_path = prefix + str(batch_anc_img_ids[k]) + '.jpg'
pos_path = prefix + str(batch_pos_img_ids[k]) + '.jpg'
neg_path = prefix + str(batch_neg_img_ids[k]) + '.jpg'
tmp_anc_img = load_img(anc_path, target_size=img_size)
tmp_anc_img = img_to_array(tmp_anc_img)
anc_imgs.append(tmp_anc_img)
tmp_pos_img = load_img(pos_path, target_size=img_size)
tmp_pos_img = img_to_array(tmp_pos_img)
pos_imgs.append(tmp_pos_img)
tmp_neg_img = load_img(neg_path, target_size=img_size)
tmp_neg_img = img_to_array(tmp_neg_img)
neg_imgs.append(tmp_neg_img)
# transform list to array
anc_imgs = np.array(anc_imgs, dtype=K.floatx()) / 255.0
pos_imgs = np.array(pos_imgs, dtype=K.floatx()) / 255.0
neg_imgs = np.array(neg_imgs, dtype=K.floatx()) / 255.0
yield [anc_imgs, pos_imgs, neg_imgs], zeros
# validation set triplet generator
def val_triplet_generator(df, batch_size=128, img_size=(224, 224),
seed=42, prefix='./data/triplet/validation/'):
""" validation set triplet generator """
# split landmark ids into those with a single image (usable only as negatives) and the rest
grouped = df[['landmark_id', 'image_id']].groupby('landmark_id').count().reset_index()
unique_neg_ids = list(grouped[grouped['image_id'] == 1]['landmark_id'].values)
rest_ids = list(grouped[grouped['image_id'] > 1]['landmark_id'].values)
size = 3072 * 2 - len(unique_neg_ids)
zeros = np.zeros((batch_size, 3, 1), dtype=K.floatx())
while True:
# get positive and negative image landmark ids
np.random.seed(seed)
candidate_ids = list(np.random.choice(rest_ids, size=size, replace=False))
pos_landmark_ids = candidate_ids[:3072]
neg_landmark_ids = candidate_ids[3072:] + unique_neg_ids
np.random.shuffle(neg_landmark_ids)
# transform landmark id into image id
anc_img_ids = []
pos_img_ids = []
neg_img_ids = []
for i in range(len(pos_landmark_ids)):
tmp_pos_ids = df[df['landmark_id'] == pos_landmark_ids[i]]['image_id'].values
anc_img_ids.append(tmp_pos_ids[0])
pos_img_ids.append(tmp_pos_ids[1])
tmp_neg_ids = df[df['landmark_id'] == neg_landmark_ids[i]]['image_id'].values
neg_img_ids.append(tmp_neg_ids[0])
# iterator to read batch images
for j in range(len(pos_img_ids) // batch_size):
batch_anc_img_ids = anc_img_ids[j * batch_size: (j + 1) * batch_size]
batch_pos_img_ids = pos_img_ids[j * batch_size: (j + 1) * batch_size]
batch_neg_img_ids = neg_img_ids[j * batch_size: (j + 1) * batch_size]
# get images
anc_imgs = []
pos_imgs = []
neg_imgs = []
# iteratively read images
for k in range(batch_size):
anc_path = prefix + str(batch_anc_img_ids[k]) + '.jpg'
pos_path = prefix + str(batch_pos_img_ids[k]) + '.jpg'
neg_path = prefix + str(batch_neg_img_ids[k]) + '.jpg'
tmp_anc_img = load_img(anc_path, target_size=img_size)
tmp_anc_img = img_to_array(tmp_anc_img)
anc_imgs.append(tmp_anc_img)
tmp_pos_img = load_img(pos_path, target_size=img_size)
tmp_pos_img = img_to_array(tmp_pos_img)
pos_imgs.append(tmp_pos_img)
tmp_neg_img = load_img(neg_path, target_size=img_size)
tmp_neg_img = img_to_array(tmp_neg_img)
neg_imgs.append(tmp_neg_img)
# transform list to array
anc_imgs = np.array(anc_imgs, dtype=K.floatx()) / 255.0
pos_imgs = np.array(pos_imgs, dtype=K.floatx()) / 255.0
neg_imgs = np.array(neg_imgs, dtype=K.floatx()) / 255.0
yield [anc_imgs, pos_imgs, neg_imgs], zeros
# # Define Triplet Loss Model
# Define base network for triplet network
def base_net(input_shape=(224, 224, 3)):
""" define triplet network """
# load pre-trained VGG16 model
vgg16 = VGG16(include_top=False, weights='imagenet', input_shape=input_shape, pooling='avg')
# freeze shallow layers; fine-tune from block5_conv1 onward
vgg16.trainable = True
set_trainable = False
for layer in vgg16.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
# define sequential model
model = Sequential(name='base_net')
model.add(vgg16)
model.add(Lambda(lambda x: K.l2_normalize(x, axis=1), name='l2_norm'))
return model
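# The final `l2_norm` layer makes every embedding unit-length, so the squared Euclidean distances used by the triplet loss are interchangeable with cosine similarity (d^2 = 2 - 2*cos). A standalone NumPy check of that identity (verification sketch only):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 512))
u = v / np.linalg.norm(v, axis=1, keepdims=True)  # mimics K.l2_normalize(x, axis=1)

norms = np.linalg.norm(u, axis=1)        # all 1.0 after normalization
d2 = np.sum((u[0] - u[1]) ** 2)          # squared Euclidean distance
cos = float(np.dot(u[0], u[1]))          # cosine similarity (unit vectors)
```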
# Define triplet network
def triplet_net(base_model, input_shape=(224, 224, 3)):
""" function to define triplet networks """
# define input: anchor, positive, negative
anchor = Input(shape=input_shape, name='anchor_input')
positive = Input(shape=input_shape, name='positive_input')
negative = Input(shape=input_shape, name='negative_input')
# extract vector represent using CNN based model
anc_vec = base_model(anchor)
pos_vec = base_model(positive)
neg_vec = base_model(negative)
# stack outputs
stacks = Lambda(lambda x: K.stack(x, axis=1), name='output')([anc_vec, pos_vec, neg_vec])
# define inputs and outputs
inputs=[anchor, positive, negative]
outputs = stacks
# define the triplet model
model = Model(inputs=inputs, outputs=outputs, name='triplet_net')
return model
# Define triplet loss
def triplet_loss(y_true, y_pred):
""" function to compute triplet loss
margin is predefined coded, manually change if needed
"""
# define triplet margin
margin = K.constant(0.5)
zero = K.constant(0.0)
# get the prediction vector
anchor, positive, negative = y_pred[:, 0], y_pred[:, 1], y_pred[:, 2]
# compute distance
pos_distance = K.sum(K.square(anchor - positive), axis=1)
neg_distance = K.sum(K.square(anchor - negative), axis=1)
# compute loss
partial_loss = pos_distance - neg_distance + margin
full_loss = K.sum(K.maximum(partial_loss, zero), axis=0)
return full_loss
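# A standalone NumPy mirror of `triplet_loss` (a verification sketch, not part of the training code; `triplet_loss_np` is a name introduced here) makes the hinge behaviour concrete: the loss is zero once the negative is at least `margin` farther, in squared distance, than the positive.

```python
import numpy as np

def triplet_loss_np(anchor, positive, negative, margin=0.5):
    """NumPy mirror of the Keras triplet_loss above (summed over the batch)."""
    pos_d = np.sum((anchor - positive) ** 2, axis=1)
    neg_d = np.sum((anchor - negative) ** 2, axis=1)
    return np.sum(np.maximum(pos_d - neg_d + margin, 0.0))

a = np.array([[0.0, 0.0]])
p = np.array([[0.0, 0.0]])
easy_n = np.array([[2.0, 0.0]])  # neg_d = 4.0 -> hinge inactive
hard_n = np.array([[0.5, 0.0]])  # neg_d = 0.25 -> loss = 0 - 0.25 + 0.5
easy_loss = triplet_loss_np(a, p, easy_n)  # 0.0
hard_loss = triplet_loss_np(a, p, hard_n)  # 0.25
```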
# # Build Triplet Model
# +
# For reproducibility
seed = 42
K.clear_session()
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(seed)
random.seed(seed)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
tf.set_random_seed(seed)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
# Define Parameters
img_size = (224, 224, 3) # target image size
# triplet image generator
train_generator = train_triplet_generator(train_df, batch_size=100, img_size=img_size[:2],
seed=42, prefix='./data/triplet/train/')
val_generator = val_triplet_generator(val_df, batch_size=64, img_size=img_size[:2],
seed=42, prefix='./data/triplet/validation/')
# -
# Define triplet network model
base_model = base_net(input_shape=img_size)
base_model.summary()
triplet_model = triplet_net(base_model=base_model, input_shape=img_size)
triplet_model.summary()
# # Fit Triplet Model
# +
# define learning scheduler
def lr_schedule(epoch):
""" Learning rate schedule """
lr = 1e-3
if epoch > 80:
lr *= 2e-1
elif epoch > 60:
lr *= 4e-1
elif epoch > 40:
lr *= 6e-1
elif epoch > 20:
lr *= 8e-1
print('Learning rate: ', lr)
return lr
# define optimizer
opt = keras.optimizers.Adam(lr=lr_schedule(0))
# Create call backs
lr_scheduler = LearningRateScheduler(lr_schedule)
lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1), cooldown=0, patience=5, min_lr=0.5e-6)
callbacks = [lr_reducer, lr_scheduler]
# compile the model
triplet_model.compile(optimizer=opt, loss=triplet_loss)
# +
# fit the model
history = triplet_model.fit_generator(train_generator, steps_per_epoch=74, epochs=100,
validation_data=val_generator, validation_steps=48,
verbose=2, callbacks=callbacks)
base_model.save('./models/vgg16-base-0.5-model.h5')
pickle.dump(history.history, open('./models/vgg16-triplet-0.5-history.p', 'wb'))
_ = gc.collect()
# +
# Visualize the training process
train_loss = history.history['loss']
val_loss = history.history['val_loss']
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(train_loss, label='Training Loss')
ax.plot(val_loss, label='Validation Loss')
ax.set_title('Loss vs. Epochs', fontsize=16)
ax.set_xlabel('Epochs', fontsize=14)
ax.set_ylabel('Loss', fontsize=14)
ax.legend(fontsize=14)
ax.grid(True)
plt.show()
# -
# # Extract Features using Triplet Network
# +
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_df.shape)
print('Validation:\t', val_df.shape)
print('Test:\t\t', test_df.shape)
print('\nTrain Landmarks:\t', len(train_df['landmark_id'].unique()))
print('Validation Landmarks:\t', len(val_df['landmark_id'].unique()))
print('Test Landmarks:\t\t', len(test_df['landmark_id'].unique()))
# -
# Load trained model
base_model = load_model('./models/vgg16-base-0.5-model.h5')
base_model.summary()
# Define train_imgs and test_imgs
train_imgs = np.zeros(shape=(len(train_df), 512), dtype=np.float32)
val_imgs = np.zeros(shape=(len(val_df), 512), dtype=np.float32)
test_imgs = np.zeros(shape=(len(test_df), 512), dtype=np.float32)
# Process training images
img_ids = train_df['image_id'].values
steps = 20000
for i in range(0, len(train_df), steps):
tmp_imgs = []
print('\nProcess: {:10d}'.format(i))
start = i
end = min(len(train_df), i + steps)
for idx in range(start, end):
if idx % 250 == 0:
print('=', end='')
img_id = img_ids[idx]
path = './data/triplet/train/' + str(img_id) + '.jpg'
img = load_img(path, target_size=img_size[:2])
img = img_to_array(img)
tmp_imgs.append(img)
tmp_imgs = np.array(tmp_imgs, dtype=np.float32) / 255.0
tmp_prediction = base_model.predict(tmp_imgs)
train_imgs[start: end, ] = tmp_prediction
_ = gc.collect()
# Process validation images
img_ids = val_df['image_id'].values
steps = 4000
for i in range(0, len(val_df), steps):
tmp_imgs = []
print('\nProcess: {:10d}'.format(i))
start = i
end = min(len(val_df), i + steps)
for idx in range(start, end):
if idx % 50 == 0:
print('=', end='')
img_id = img_ids[idx]
path = './data/triplet/validation/' + str(img_id) + '.jpg'
img = load_img(path, target_size=img_size[:2])
img = img_to_array(img)
tmp_imgs.append(img)
tmp_imgs = np.array(tmp_imgs, dtype=np.float32) / 255.0
tmp_prediction = base_model.predict(tmp_imgs)
val_imgs[start: end, ] = tmp_prediction
_ = gc.collect()
# Process test images
img_ids = test_df['image_id'].values
steps = 4000
for i in range(0, len(test_df), steps):
tmp_imgs = []
print('\nProcess: {:10d}'.format(i))
start = i
end = min(len(test_df), i + steps)
for idx in range(start, end):
if idx % 50 == 0:
print('=', end='')
img_id = img_ids[idx]
path = './data/triplet/test/' + str(img_id) + '.jpg'
img = load_img(path, target_size=img_size[:2])
img = img_to_array(img)
tmp_imgs.append(img)
tmp_imgs = np.array(tmp_imgs, dtype=np.float32) / 255.0
tmp_prediction = base_model.predict(tmp_imgs)
test_imgs[start: end, ] = tmp_prediction
_ = gc.collect()
print('Train:\t\t', train_imgs.shape)
print('Validation:\t', val_imgs.shape)
print('Test:\t\t', test_imgs.shape)
# Save to disk
np.save('./data/triplet/train-triplet-vgg16-0.5-features.npy', train_imgs)
np.save('./data/triplet/validation-triplet-vgg16-0.5-features.npy', val_imgs)
np.save('./data/triplet/test-triplet-vgg16-0.5-features.npy', test_imgs)
# # Load Features and Labels
# +
# Already normalized
train_feature = np.load('./data/triplet/train-triplet-vgg16-0.5-features.npy')
val_feature = np.load('./data/triplet/validation-triplet-vgg16-0.5-features.npy')
test_feature = np.load('./data/triplet/test-triplet-vgg16-0.5-features.npy')
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_feature.shape, train_df.shape)
print('Validation:\t', val_feature.shape, val_df.shape)
print('Test:\t\t', test_feature.shape, test_df.shape)
# -
# Helper function
def accuracy(true_label, prediction, top=1):
""" function to calculate the prediction accuracy """
prediction = prediction[:, :top]
count = 0
for i in range(len(true_label)):
if true_label[i] in prediction[i]:
count += 1
return count / len(true_label)
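# A quick worked example of the top-k accuracy helper (duplicated here so the snippet is self-contained):

```python
def accuracy(true_label, prediction, top=1):
    """Fraction of queries whose true label appears among the top-k neighbors."""
    prediction = [row[:top] for row in prediction]
    count = 0
    for i in range(len(true_label)):
        if true_label[i] in prediction[i]:
            count += 1
    return count / len(true_label)

true = [1, 2]
preds = [[1, 9, 9],   # correct at rank 1
         [9, 2, 9]]   # correct only at rank 2
top1 = accuracy(true, preds, top=1)  # 0.5
top2 = accuracy(true, preds, top=2)  # 1.0
```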
# # Implement KNN Model
# Merge train and validation features
train_val_feature = np.concatenate((train_feature, val_feature), axis=0)
train_val_df = pd.concat((train_df, val_df), axis=0)
train_val_df = train_val_df.reset_index(drop=True)
# Implement KNN model
knn = NearestNeighbors(n_neighbors=50, algorithm='auto', leaf_size=30,
metric='minkowski', p=2, n_jobs=-1)
knn.fit(train_val_feature)
# +
# Search the first 50 neighbors
distance, neighbor_index = knn.kneighbors(test_feature, return_distance=True)
# Save the results
np.save('./result/knn-triplet-vgg16-0.5-distance.npy', distance)
np.save('./result/knn-triplet-vgg16-0.5-neighbor.npy', neighbor_index)
# -
# ### Search Neighbors
# +
knn_distance = np.load('./result/knn-triplet-vgg16-0.5-distance.npy')
knn_neighbor = np.load('./result/knn-triplet-vgg16-0.5-neighbor.npy')
# Get the first 50 neighbors
predictions = []
for neighbors in knn_neighbor:
predictions.append(train_val_df.loc[neighbors]['landmark_id'].values)
predictions = np.array(predictions)
np.save('./result/knn-triplet-vgg16-0.5-test-prediction.npy', predictions)
# -
# ### Compute Accuracy
print('Top 1 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=1))
print('Top 5 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=5))
print('Top 10 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=10))
print('Top 20 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=20))
# +
knn_acc = []
for i in range(1, 51):
tmp_acc = accuracy(test_df['landmark_id'].values, predictions, top=i)
knn_acc.append(tmp_acc)
np.save('./result/knn-triplet-vgg16-0.5-accuracy.npy', knn_acc)
# -
| 3-2. VGG16 Triplet Network (margin 0.5).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Ex2 - Filtering and Sorting Data
# This time we are going to pull data directly from the internet.
#
# ### Step 1. Import the necessary libraries
import pandas as pd
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv).
# ### Step 3. Assign it to a variable called euro12.
file = 'Euro_2012_stats_TEAM.csv'
euro12 = pd.read_csv(file)
euro12.head()
# ### Step 4. Select only the Goal column.
euro12[['Goals']]
# ### Step 5. How many teams participated in Euro 2012?
if euro12['Team'].is_unique:
print(euro12.shape[0])
# ### Step 6. What is the number of columns in the dataset?
euro12.shape[1]
# ### Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']]
discipline.head()
# ### Step 8. Sort the teams by Red Cards, then by Yellow Cards
discipline.sort_values(["Red Cards", "Yellow Cards"], ascending=False)
# ### Step 9. Calculate the mean Yellow Cards given per Team
discipline['Yellow Cards'].mean()
# ### Step 10. Filter teams that scored more than 6 goals
# Standard way
euro12[['Team', 'Goals']].loc[euro12['Goals'] > 6]
# Alternate syntax
euro12.loc[euro12.Goals > 6, ['Team', 'Goals']]
# with query
euro12.query('Goals > 6')[['Team', 'Goals']]
# ### Step 11. Select the teams that start with G
euro12[['Team']][euro12['Team'].str.startswith('G')]
# ### Step 12. Select the first 7 columns
euro12.iloc[:, :7]
# ### Step 13. Select all columns except the last 3.
euro12.shape[1]
euro12.iloc[:, :-3]
euro12.iloc[:, :-3].shape[1]
# ### Step 14. Present only the Shooting Accuracy from England, Italy and Russia
countries = ['England', 'Italy', 'Russia']
euro12[['Team', 'Shooting Accuracy']].set_index('Team').loc[countries]
# Alternate method
euro12[['Team', 'Shooting Accuracy']][euro12['Team'].isin(countries)]
# With query, probably the most readable
euro12[['Team', 'Shooting Accuracy']].query('Team in @countries')
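# The filtering styles used throughout this notebook are equivalent; a tiny self-contained check on synthetic data (the values below are illustrative, not from the Euro 2012 file):

```python
import pandas as pd

df = pd.DataFrame({'Team': ['Spain', 'Germany', 'Italy'],
                   'Goals': [12, 10, 6]})
threshold = 6

# Boolean mask on a column subset, .loc with a column list, and .query
mask_style = df[['Team', 'Goals']].loc[df['Goals'] > threshold]
loc_style = df.loc[df.Goals > threshold, ['Team', 'Goals']]
query_style = df.query('Goals > @threshold')[['Team', 'Goals']]
```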
| 02_Filtering_&_Sorting/Euro12/Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
'''
Written by rizawa
Written on 2018-06-08
'''
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
import psycopg2
import json
import os
import re
from datetime import date
from datetime import datetime
with open('/Users/rizawa/Code/config/settings.json') as config_file:
config = json.load(config_file)
CONNECTION_PARAMETERS = config['dbs']
HOST = CONNECTION_PARAMETERS['hostname']
USER = CONNECTION_PARAMETERS['username']
WORD = CONNECTION_PARAMETERS['password']
DTBS = CONNECTION_PARAMETERS['database']
PORT = CONNECTION_PARAMETERS['port']
def execute_query(query):
connection = psycopg2.connect(host=HOST, user=USER, password=WORD, dbname=DTBS, port=PORT)
cur = connection.cursor()
cur.execute(query)
column_list = [c[0] for c in cur.description]
df_query = pd.DataFrame(cur.fetchall(), columns=column_list)
connection.close()
return df_query
# -
def query_edges(incl=0,excl=0):
query_str = """
select distinct
events.account_id as a
,guests.account_id as b
,count(*) as bond
from paperless_public.events
join paperless_public.guests on events.id = guests.event_id
where events.account_id != guests.account_id
and guests.sent_at is not null
and events.account_id in (""" + str(incl) + """)
and guests.account_id not in (""" + str(excl) + """)
group by 1,2
union
select distinct
guests.account_id as a
,events.account_id as b
,count(*) as bond
from paperless_public.events
join paperless_public.guests on events.id = guests.event_id
where events.account_id != guests.account_id
and guests.sent_at is not null
and guests.account_id in (""" + str(incl) + """)
and events.account_id not in (""" + str(excl) + """)
group by 1,2
;
"""
return query_str
def query_paths(incl=0):
query_str = """
SELECT
guests.account_id as a
,events.account_id as b
,count(*) as bond
FROM guests
JOIN events ON events.id = guests.event_id
WHERE guests.sent_at IS NOT null
AND guests.account_id IN (""" + incl + """)
AND events.account_id IN (""" + incl + """)
GROUP BY 1,2
;
"""
return query_str
def query_accounts(incl=0):
query_str = """
SELECT
accounts.id
,accounts.display_name
,email.email_address
FROM accounts
JOIN email_addresses email ON email.account_id = accounts.id
WHERE accounts.id IN (""" + incl + """)
AND email.primary IS true
;
"""
return query_str
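# The query builders above splice values straight into SQL text. Parameter binding (psycopg2's `cur.execute(sql, params)`) is the proper fix; as a minimal guard in this string-building style, a helper (hypothetical, introduced here as `id_list`) can at least refuse anything that is not an integer account id:

```python
def id_list(ids):
    """Join account ids for an SQL IN (...) clause, refusing non-integers."""
    cleaned = [int(i) for i in ids]  # raises ValueError on malformed input
    return ",".join(str(i) for i in cleaned)

clause = id_list([3, 17, "42"])  # "3,17,42"
```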
acct0 = 0; acct1 = 2
def query_first_group():
str_query = """
select
""" + str(acct0) + """ as a
,accounts.id as b
,1 as bond
from paperless_public.accounts
join paperless_public.email_addresses email on accounts.id = email.account_id
where email.email_address like '<EMAIL>'
and email.primary is true
;
"""
return str_query
def query_second_group():
str_query = """
select
""" + str(acct1) + """ as a
,accounts.id as b
,1 as bond
from paperless_public.accounts
join paperless_public.email_addresses email on accounts.id = email.account_id
where email.email_address like '<EMAIL>'
and email.primary is true
;
"""
return str_query
# +
first_query = query_first_group()
first_group = execute_query(first_query)
second_query = query_second_group()
second_group = execute_query(second_query)
# -
print(len(first_group))
print(len(second_group))
def find_path(edges_x, edges_y):
common_nodes = set(edges_x['b']) & set(edges_y['b'])
if not common_nodes:
incl = ",".join(str(e) for e in edges_x['b'])
excl = ",".join(str(e) for e in edges_x['a'])
str_query = query_edges(incl,excl)
edges_q = execute_query(str_query)
incl = ",".join(str(e) for e in edges_y['b'])
excl = ",".join(str(e) for e in edges_y['a'])
str_query = query_edges(incl,excl)
edges_r = execute_query(str_query)
beam = find_path(edges_q, edges_r)
# Merge and rename the columns so they can be merged later
fore = pd.merge(edges_x[['a','b']], beam, on='b', how='inner')
cols = ['a']
for c in range(2,len(fore.columns)):
cols.append(c)
cols.append('b')
fore.columns = cols
# Merge and rename the columns so they can be merged later
ship = pd.merge(fore, edges_y[['a','b']], on='b', how='inner')
cols = ['b']
for c in range(2,len(ship.columns)):
cols.append(c)
cols.append('a')
ship.columns = cols
else:
ship = pd.merge(edges_x, edges_y, on='b', how='inner')[['a_x','b','a_y']]
ship.columns = ['b','x','a']
paths = ship
return paths
# +
start_time = datetime.now()
paths = find_path(first_group, second_group)
# Renaming the columns for ease of use.
cols = []
for c in range(0,len(paths.columns)):
cols.append(c)
paths.columns = cols
# Reducing the accounts to unique account_ids,
# then querying all edges involving these accounts.
unique_accounts = pd.unique(paths.values.ravel('K'))
str_acct = ",".join(str(e) for e in unique_accounts)
str_query = query_paths(str_acct)
real_paths = execute_query(str_query)
list_paths = [first_group, second_group, real_paths]
all_paths = pd.concat(list_paths)
# Building the graph from the queried edges.
G = nx.from_pandas_edgelist(all_paths, 'a', 'b', ['bond'])
print('completed in: ' + str(datetime.now() - start_time))
# +
start_time = datetime.now()
for path in nx.all_simple_paths(G, source=acct0, target=acct1, cutoff=4):
print(path)
#for path in nx.shortest_path(G, source=acct0, target=acct1):
# print(path)
print('completed in: ' + str(datetime.now() - start_time))
# -
# +
# All paths within threshold
start_time = datetime.now()
pos = nx.spring_layout(G)
plt.figure(1,figsize=(15,15))
plt.subplot(111)
# node lists
first_nodes = list(first_group['b'])
second_nodes = list(second_group['b'])
# nodes
nx.draw_networkx_nodes(G,pos,nodelist=first_nodes,node_color='blue',node_size=20,alpha=0.8)
nx.draw_networkx_nodes(G,pos,nodelist=second_nodes,node_color='indigo',node_size=20,alpha=0.8)
# edges
nx.draw_networkx_edges(G,pos,width=1.0,alpha=0.5)
# Create the networkx diagram labels from the list of emails.
str_query = query_accounts(str_acct)
all_accounts = execute_query(str_query)
emails = all_accounts['email_address']
labels = {}
for idx,val in enumerate(all_accounts['id']):
labels[val] = r'$'+emails[idx]+'$'
# Draw the labels
nx.draw_networkx_labels(G,pos,labels)
print('completed in: ' + str(datetime.now() - start_time))
# -
| groups/nx_simple_paths-Copy1.groups.v9.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Feature Extraction
# +
from keras.applications import Xception, xception
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Activation, Dropout, Flatten, Dense, Input, Lambda
from keras.layers.pooling import GlobalAveragePooling2D
from sklearn.model_selection import train_test_split
import numpy as np
import h5py
import cv2
import datetime
print('start')
starttime = datetime.datetime.now()
image_size = (299, 299)
input_shape = image_size + (3,)
x = Input(input_shape)
x = Lambda(xception.preprocess_input)(x)
model = Xception(input_tensor=x, input_shape=input_shape, weights='imagenet', include_top=False, pooling='avg')
print('input shape: ', model.input.shape)
print('output shape: ', model.output.shape)
batch_size = 2
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')  # no rescale here: the model's xception.preprocess_input Lambda already scales inputs
train_generator = datagen.flow_from_directory(
'dataset-mini10/train',
target_size=image_size,
batch_size=batch_size,
shuffle=False,
save_to_dir='save', save_prefix='catpre', save_format='jpeg')
print('train_generator.samples:', train_generator.samples)
print('train_generator.classes:', train_generator.classes)
bottleneck_features_train = model.predict_generator(train_generator, train_generator.samples // batch_size)  # steps are batches, not samples
print('bottleneck_features_train.shape:', bottleneck_features_train.shape)
test_generator = datagen.flow_from_directory(
'dataset-mini10/test',
target_size=image_size,
batch_size=batch_size,
class_mode=None,
shuffle=False)
print('test_generator.samples:', test_generator.samples)
print('test_generator.classes:', test_generator.classes)
bottleneck_features_test = model.predict_generator(test_generator, test_generator.samples // batch_size)  # steps are batches, not samples
print('bottleneck_features_test.shape:', bottleneck_features_test.shape)
with h5py.File("bottleneck_features.h5", 'w') as h:
h.create_dataset('train', data=bottleneck_features_train)
h.create_dataset('labels', data=train_generator.classes)
h.create_dataset('test', data=bottleneck_features_test)
print('complete!')
endtime = datetime.datetime.now()
print (endtime - starttime)
# -
# ## Build the Model
# +
from sklearn.utils import shuffle
with h5py.File('bottleneck_features.h5','r') as h:
X_train = np.array(h['train'])
y_train = np.array(h['labels'])
X_test = np.array(h['test'])
print('type:', type(X_train))
print('X_train', X_train.shape)
print('y_train', y_train.shape)
print('X_test', X_test.shape)
X_train, y_train = shuffle(X_train, y_train)
model = Sequential()
model.add(Dropout(0.5, input_shape=X_train.shape[1:]))  # first layer needs an input shape
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(X_train,
y_train,
epochs=10,
batch_size=batch_size,
validation_split=0.2)
model.save_weights('bottleneck_fc_model.h5')
| with_ImageDataGenerator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/veronicabaron224/Linear-Algebra-58019/blob/main/Eigenvalue_and_Eigenvector.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="7B4R-aqz9WZr" colab={"base_uri": "https://localhost:8080/"} outputId="a6b42674-c68a-4d57-c25c-a93d9cac3b95"
import numpy as np
from numpy.linalg import eig
A = np.array([[-12,3],[4,1]])
print(A)
inv_A = np.linalg.inv(A)
print(inv_A)
B = np.array([[0],[0]])
print(B)
X = np.dot(inv_A,B)
print(X)
X = np.linalg.solve(A,B)
print(X)
# + colab={"base_uri": "https://localhost:8080/"} id="ref8gnUJ_CQU" outputId="7f86ac3f-4c33-4728-ebd5-badd11fdfd04"
#Example 1
A = np.array([[-6,3],[4,5]])
print(A)
w,v = np.linalg.eig(A)
print("The eigenvalue/s is/are:",w)
print("The right eigenvectors are:",v)
#x = v.round()
#print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="TmPsqgsBAVrS" outputId="c48b9cad-901c-4cde-8a70-a91e092fe6dc"
#Example 2
A = np.array([[2,2,4],[1,3,5],[2,3,4]])
print(A)
s,t=np.linalg.eig(A)
print(s.round())
print(t.round())
c = np.dot(A,t.round())
print(c)
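# Each column of `v` pairs with the eigenvalue at the same index in `w`, so the decomposition can be verified directly from the defining property A v = lambda v (checked before any rounding). The sum of the eigenvalues should also equal the trace of A.

```python
import numpy as np

A = np.array([[2, 2, 4], [1, 3, 5], [2, 3, 4]])
w, v = np.linalg.eig(A)

lhs = A @ v   # A applied to every eigenvector (column-wise)
rhs = v * w   # broadcasting scales column i by eigenvalue w[i]
```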
| Eigenvalue_and_Eigenvector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ray Tune - Understanding Hyperparameter Tuning
#
# © 2019-2020, Anyscale. All Rights Reserved
#
# Many of the [Ray RLlib](../ray-rllib/00-Ray-RLlib-Overview.ipynb) lessons used [Ray Tune](http://tune.io) to train policies. This meant we trained _parameters_ that defined the policies. Now we'll learn that Ray Tune was actually designed to determine the best _hyperparameters_ for the problem, before training to determine the _parameters_.
#
# This lesson introduces the concepts of _Hyperparameter Tuning or Optimization_ (HPO) and works through a nontrivial example using Tune.
#
# See also the [Hyperparameter Tuning References](References-Hyperparameter-Tuning.ipynb) notebook and the [Tune documentation](http://tune.io), in particular, the [API reference](https://docs.ray.io/en/latest/tune/api_docs/overview.html).
#
# A recent [Ray Summit Connect](https://anyscale.com/blog/videos-and-slides-for-the-second-ray-summit-connect-june-17-2020/) talk by the creator of Tune, <NAME>, provides an excellent overview of the challenges of hyperparameter tuning and how Tune addresses these challenges.
# ## What Are Hyperparameters?
#
# In _supervised learning_, we train a model with labeled data so the model can properly label new data values. Everything about the model is defined by a set of _parameters_, such as the weights in a linear regression.
#
# In contrast, the _hyperparameters_<sup>1</sup> define structural details about the kind of model itself, like whether or not we are using a linear regression or what architecture is best for a neural network, etc. Other quantities considered hyperparameters include learning rates, discount rates, etc. If we want our training process and resulting model to work well, we first need to determine the optimal or near-optimal set of hyperparameters.
#
# How do we determine the optimal hyperparameters? The most straightforward approach is to perform a loop where we pick a candidate set of values from some reasonably inclusive list of possible values, train a model, compare the results achieved with previous loop iterations, and pick the set that performed best. This process is called _Hyperparameter Tuning_ or _Optimization_ (HPO).
#
# This simple algorithm can quickly become very expensive, however. Training a single neural network can be compute-intensive, and the space of all possible architectures is huge. Hence, much of the research in hyperparameter tuning, especially for neural networks, focuses on ways to optimize HPO, such as early stopping and pruning the search space when some combinations appear to perform poorly.
#
# 1. _Hyperparameter_ is often spelled _hyper parameter_, but we'll use the spelling with no space.
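# The naive loop described above amounts to a grid search over the candidate values. A minimal sketch — the search space and scoring function here are made-up stand-ins for a real training run, not anything from Tune:

```python
import itertools

# hypothetical search space and scoring function, for illustration only
space = {"lr": [0.01, 0.1], "hidden": [20, 40]}

def train_and_score(lr, hidden):
    # stand-in for an expensive training run; higher is better
    return -(lr - 0.1) ** 2 - (hidden - 40) ** 2

# try every combination, keep the best-scoring configuration
best = max(
    (dict(zip(space, combo)) for combo in itertools.product(*space.values())),
    key=lambda cfg: train_and_score(**cfg),
)
print(best)  # → {'lr': 0.1, 'hidden': 40}
```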
# ## A Simple Example: $k$-Means
#
# Let's start with a very simple example of HPO, finding $k$ in $k$-means.
#
# The $k$-means algorithm finds clusters in a data set. It's a canonical example of _unsupervised learning_, where information is extracted from a data set, rather than using labeled data to train a model for labeling new data, as in _supervised learning_. We won't discuss the algorithm details, but the essence of it involves a "guess" for the expected number of clusters, the $k$ value, then calculating $k$ centroids (the coordinates at the center), one per cluster, and determining to which cluster each data point belongs. The details are in the [$k$-means Wikipedia article](https://en.wikipedia.org/wiki/K-means_clustering). The following animation shows the algorithm in action for a two-dimensional data set where three clusters are evident.
#
# 
#
# (source: [Wikipedia](https://en.wikipedia.org/wiki/K-means_clustering). [Larger Image](https://en.wikipedia.org/wiki/K-means_clustering#/media/File:K-means_convergence.gif))
# While it is easy to see the clusters in this two-dimensional data set, that won't be the case for arbitrary datasets, especially those with more than three dimensions. Hence, we should determine the best $k$ value by trying many values and picking the value that appears to be best. In this case, "best" would mean that we minimize the distances between the data points and centroids.
#
# With just one hyperparameter, this problem is comparatively simple, and a brute-force search for the optimal $k$ is usually good enough.
# ## HPO for Neural Networks
#
# Where HPO really becomes a challenge is finding the right neural network architecture for your problem. Why are neural networks a challenge? Consider this image of a typical architecture:
#
# 
#
# Every number you see is a hyperparameter! So are the decisions about how many layers to have, what kind of layer to use for each layer, etc. The space of possible hyperparameters is enormous, too big to explore naively.
#
# So called _neural architecture search_ (NAS) has become a research field in its own right, along with general research in optimizing HPO.
# ## Introduction to Ray Tune
#
# [Ray Tune](http://tune.io) is the Ray-based library for hyperparameter tuning. Tune makes it nearly as easy to run distributed, parallelized HPO as it is to run trials on a single machine manually, one after the other.
#
# Tune is built as an extensible, pluggable framework, with built-in integrations for many frameworks, including [OpenAI Gym environments](https://gym.openai.com/envs/), [PyTorch](https://pytorch.org), [TensorFlow](http://tensorflow.org), and recently, [scikit-learn](https://scikit-learn.org/stable/) (see [this recent blog post](https://medium.com/distributed-computing-with-ray/gridsearchcv-2-0-new-and-improved-ee56644cbabf)).
#
# Tune also integrates implementations of many state-of-the-art [search algorithms](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html) and [schedulers](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html), so it is easy to optimize your HPO process.
# ## Using Ray Tune
#
# We used Ray Tune in several of the reinforcement learning lessons, where we actually didn't optimize any hyperparameters, we just used it to drive the RLlib training process. Now we'll see an example of HPO with Tune.
# First, we import Ray and Tune, making sure Ray is available so we can initialize it in this "driver" program explicitly.
import ray
from ray import tune
# Now we start Ray in this "driver" program (notebook).
ray.init(ignore_reinit_error=True)
# In our [RLlib Simple Multi-Armed Bandit](../ray-rllib/multi-armed-bandits/03-Simple-Multi-Armed-Bandit.ipynb) lesson, we used Tune to train RLlib, but not train hyperparameters.
#
# Now we'll use Tune to train hyperparameters. We'll use the same experimental environment we used in our [RLlib `CartPole` lesson](../ray-rllib/explore-rllib/01-Application-Cart-Pole.ipynb).
#
# > **Aside:** In case you haven't gone through the [Ray RLlib tutorial](../ray-rllib/00-Ray-RLlib-Overview.ipynb), [CartPole](https://gym.openai.com/envs/CartPole-v1/) is an [OpenAI Gym](https://gym.openai.com/envs/) environment that simulates a cart that moves left or right while attempting to balance a vertical pole. A _policy_ is trained to optimize changing the velocity up and down to control right and left movement with the goal of keeping the pole balanced for as long as possible.
#
# The most important hyperparameters for `CartPole` are for the neural network used to learn how to balance the pole. A simple network suffices. We'll use a fully-connected network with two hidden layers, but use Tune to find an optimal choice for the sizes of the layers. To keep the computation tractable for our purposes, we'll just pick each layer's size from the values 20 and 40.
#
# It's probable that some specific number other than these two values is optimal for one layer and a different number is optimal for the other layer. However, the computation required to try all possible combinations grows as $O(n^2)$ in the number of candidate sizes $n$. You should consider trying more numbers if you don't mind waiting!
#
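# Concretely, a full grid over $n$ candidate sizes for each of $L$ layers requires $n^L$ trials:

```python
def n_trials(n_sizes, n_layers):
    # full factorial grid: every combination of candidate sizes across layers
    return n_sizes ** n_layers

print(n_trials(2, 2))  # → 4 trials for two candidate sizes and two hidden layers
print(n_trials(5, 2))  # → 25 trials if we tried [20, 40, 60, 80, 100] instead
```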
# However, the two we've chosen are good enough as we'll see. We'll consider what "good enough" means when we look at the results.
#
# The next cell runs Tune for this purpose. The comments explain what each argument does. We'll do four tries, one for each combination of the two possible values for the two hidden layers.
#
# > **Note:** `tune.run` will handle Ray initialization for us, if it isn't already initialized. To force Tune to throw an error instead, pass the argument `ray_auto_init=False`.
#
# The next cell will take 5-6 minutes to run.
analysis = tune.run(
"PPO", # Use proximal policy optimization to train
stop={"episode_reward_mean": 400}, # Stopping criteria, when average reward over the episodes
# of training equals 400 out of a maximum possible 500 score.
config={
"env": "CartPole-v1", # Tune can associate this string with the environment.
"num_gpus": 0, # If you have GPUs, go for it!
"num_workers": 3, # Number of Ray workers to use; use one LESS than
# the number of cores you want to use (or omit this argument)!
"model": { # The NN model we'll optimize.
'fcnet_hiddens': [ # "Fully-connected network with N hidden layers".
tune.grid_search([20, 40]), # Try these two values for layer one.
tune.grid_search([20, 40]) # Try these two values for layer two.
]
},
"eager": False, # Flag for TensorFlow; don't use eager evaluation.
},
verbose=1
)
# ## Understanding the Results
#
# First, how long did it take?
stats = analysis.stats()
secs = stats["timestamp"] - stats["start_time"]
print(f'{secs:7.2f} seconds, {secs/60.0:7.2f} minutes')
# Which one performed best based on our stopping criteria?
analysis.get_best_config(metric="episode_reward_mean")
# So, the smallest combination is good enough, even the best according to this metric!
#
# Let's look at the data for all of them:
df = analysis.dataframe()
df
# As stated above, all four combinations actually work about equally well, as far as `episode_reward_mean` is concerned. So what's actually best? What's "good enough" in this case?
#
# It's useful to consider the `training_iteration` (roughly proportional to `episodes_total`) and `timesteps_total`. Let's project out the most interesting data so we can see it more clearly:
df[['episode_reward_mean', 'training_iteration', 'timesteps_total', 'config/model']].sort_values('timesteps_total', ascending=True)
# We see from this table that the `[20,20]` hyperparameter set took the *most* training iterations, which is understandable as it is the least powerful network configuration. The corresponding number of timesteps was the longest. In contrast, `[40,20]` and `[40,40]` are the fastest to train with almost the same `episode_reward_mean` value.
#
# Since all four combinations perform equally well, perhaps it's best to choose the largest network as it trains the fastest. If we need to train the neural network frequently, then fast training times might be most important. This also suggests that we should be sure the trial sizes we used are really best. In a real-world application, you would want to spend more time on HPO, trying a larger set of possible values.
# ## Analyzing the Results with TensorBoard
#
# Here is the directory where the training data was written for the "best" hyperparameter set:
analysis.get_best_logdir(metric="episode_reward_mean")
# There is a separate directory for each trial run.
#
# The easiest way to inspect all the training data, including graphs, is to use [TensorBoard](https://www.tensorflow.org/tensorboard).
#
# 1. If you are running on the Anyscale Platform, click the _TensorBoard_ link.
# 2. If you are running this notebook on a laptop, open a terminal window using the `+` under the _Edit_ menu, run the following command, then open the URL shown in the output.
#
# ```
# tensorboard --logdir ~/ray_results
# ```
# Here is a screen shot of TensorBoard after the previous Tune run, showing how _scalars_ like the `episode_reward_mean` evolved. Some data shown is for runs with trial values of 60 and 80:
#
# 
# Here is a table of the hyperparameter data, similar to what we saw above by looking at the `analysis.dataframe()`:
#
# 
# ## Other Useful Information
#
# The `ray.tune.trial.Trial` object ([documentation](https://docs.ray.io/en/latest/tune/api_docs/internals.html?highlight=Trial#trial-docstring)) records lots of information about particular trials:
trial = analysis.get_best_trial(metric="episode_reward_mean")
trial, type(trial)
trial.stopping_criterion
trial.metric_analysis
# How long did it take?
stats = analysis.stats()
stats
secs = stats["timestamp"] - stats["start_time"]
print(f'{secs:7.2f} seconds, {secs/60.0:7.2f} minutes')
analysis.runner_data()
# We used grid search here, which is a naïve approach. In subsequent lessons, we'll explore how to optimize the search process using some of Tune's built-in algorithms for this purpose.
# ## Exercise - Try More Neural Network Sizes
#
# Repeat the experiment above using the sizes `[20, 40, 60, 80, 100]` or some subset of these numbers, depending on how long you are willing to wait. (It takes about 25 minutes on a recent-vintage laptop.) What combination appears to be best, given the considerations we discussed above?
#
# > **Note:** The exercise solution for this tutorial can be found [here](solutions/01-Understanding-Hyperparameter-Tuning-Solutions.ipynb).
ray.shutdown() # "Undo ray.init()".
| ray-tune/01-Understanding-Hyperparameter-Tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import itertools
def _pad_sequence(
sequence,
n,
pad_left = False,
pad_right = False,
left_pad_symbol = None,
right_pad_symbol = None,
):
sequence = iter(sequence)
if pad_left:
sequence = itertools.chain((left_pad_symbol,) * (n - 1), sequence)
if pad_right:
sequence = itertools.chain(sequence, (right_pad_symbol,) * (n - 1))
return sequence
def ngrams(
sequence,
n: int,
pad_left = False,
pad_right = False,
left_pad_symbol = None,
right_pad_symbol = None,
):
sequence = _pad_sequence(
sequence, n, pad_left, pad_right, left_pad_symbol, right_pad_symbol
)
history = []
while n > 1:
try:
next_item = next(sequence)
except StopIteration:
return
history.append(next_item)
n -= 1
for item in sequence:
history.append(item)
yield tuple(history)
del history[0]
# -
list(ngrams(['saya', 'suka', 'makan', 'ayam'], 2))
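# For a non-padded n-gram over an in-memory sequence, the same result can be produced with `zip` over shifted slices — a handy equivalence to keep in mind, though the generator version above is preferable for long or streaming input:

```python
def ngrams_zip(sequence, n):
    # zip n shifted views of the list; zip stops at the shortest, i.e. the last full n-gram
    return list(zip(*(sequence[i:] for i in range(n))))

print(ngrams_zip(['saya', 'suka', 'makan', 'ayam'], 2))
# → [('saya', 'suka'), ('suka', 'makan'), ('makan', 'ayam')]
```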
from glob import glob
languages = glob('*-words.json')
languages
import json
# +
with open(languages[0]) as fopen:
lang = set(json.load(fopen))
longest = 0
for l in lang:
ls = len(l.split())
if ls > longest:
print(l)
longest = ls
longest
# +
with open('social-language.json') as fopen:
malay = json.load(fopen)['malay']
len(malay)
# +
words = []
languages = glob('*-words.json')
for language in languages:
if 'english' in language:
continue
label = language.replace('-words.json', '')
with open(language) as fopen:
lang = json.load(fopen)
words.append(lang)
len(words)
# -
set.intersection(*map(set,words))
with open('malays_word.json') as fopen:
malays = set(json.load(fopen))
languages_dict = {}
longest = 0
languages = glob('*-words.json')
for language in languages:
if 'english' in language:
continue
print(language)
label = language.replace('-words.json', '')
with open(language) as fopen:
lang = set(json.load(fopen))
print(len(lang))
lang = lang - malays
print(len(lang))
for l in lang:
ls = len(l.split())
if ls > longest:
print(l)
longest = ls
languages_dict[label] = lang
# +
from tqdm import tqdm
results = {}
for s in tqdm(malay):
splitted = s.split()
ngs = splitted[:]
for i in range(2, longest):
ngs.extend([' '.join(n) for n in ngrams(splitted, i)])
found = False
for k, v in languages_dict.items():
r = set(ngs) & v
if len(r):
# print(s, k, r)
found = True
if k in results:
results[k].append(s)
else:
results[k] = [s]
break
if not found:
if 'malay' in results:
results['malay'].append(s)
else:
results['malay'] = [s]
# -
for k, v in results.items():
print(k, len(v))
# +
from random import sample
results['malay'] = sample(results['malay'],100000)
# -
with open('sublanguages.json', 'w') as fopen:
json.dump(results, fopen)
# +
# with open('sublanguages-malay.json', 'w') as fopen:
# json.dump(malays, fopen)
# +
import boto3
bucketName = 'malaya-dataset'
Key = 'sublanguages.json'
outPutname = "language-detection/sublanguages.json"
s3 = boto3.client('s3')
s3.upload_file(Key,bucketName,outPutname)
| session/language-detection/7.5.sublanguages.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import sys
sys.path.append('../')
from Trajectory import *
from Optimisation import *
import matplotlib.pyplot as plt
from PyGMO import *
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
model = Point_Lander_Drag()
prob = HSS(model, nsegs=40)
print model
zguess = prob.Guess.Ballistic(model.si, tf=30)
tf, s = prob.Guess.Ballistic(model.si, tf=30, nlp=False)
plt.plot(s[:,0], s[:,1], 'k.--')
plt.axes().set_aspect('equal', 'datalim')
plt.show()
pop = population(prob)
pop.push_back(zguess)
algo = algorithm.scipy_slsqp(max_iter=3000, screen_output=True)
alg1 = algorithm.mbh(algo, stop=1)
pop = alg1.evolve(pop)
tf, sb, cb, s, c = prob.Decode(pop.champion.x)
z = pop.champion.x
save("../Data/HSS/40Seg/HSS_40_Lunar_Base", z)
# +
plt.close('all')
ax1 = plt.subplot(211)
plt.plot(c[:,0], 'k.-')
plt.ylabel("Throttle")
ax2 = plt.subplot(212, sharex=ax1)
plt.plot(c[:,1], 'k.-')
plt.ylabel('Thrust Angle [rad]')
plt.xlabel("Node Index")
plt.figure()
plt.plot(s[:,0], s[:,1], 'k.-')
plt.axes().set_aspect('equal', 'datalim')
plt.xlabel("Cross-Range [m]")
plt.ylabel("Altitude [m]")
plt.figure()
plt.plot(s[:,2], 'k.-')
plt.plot(s[:,3], 'k.--')
plt.legend(["$v_x$", "$v_y$"], loc="best")
plt.xlabel("Node Index")
plt.ylabel("Velocity [m/s]")
plt.show()
# -
| src/Notebook/Point Lander with Drag.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
from dbcm2.DBcm import *
#hide
from dbcm2.DBcm import UseDatabase
# # dbcm2
# > The UseDatabase context manager for working with MySQL/MariaDB. For more information, see Chapters 7, 8, 9, and 11 of the 2nd Edition of Head First Python.
# The use of MariaDB over MySQL is a strong recommendation.
# ## Install
# You can install using your choice of `pip` or `conda` as follows (with dependencies taken care of for you):
# > `pip install dbcm2`
# or:
# > `conda install -c barrypj dbcm2`
# The `dbcm2` package is based on the `DBcm` module from the 2nd edition of Head First Python. It is essentially a thin wrapper around the `mysql-connector-python` module and exists primarily to demonstrate how to create a custom context manager in Python. Despite this primary goal, `DBcm` can be useful when your database interactions are straightforward. An assumption is that you know the basics of SQL (specifically: the CRUD operations).
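# To illustrate the pattern `DBcm` demonstrates, here is a minimal context manager of the same shape, written against the standard library's `sqlite3` so it runs without a MySQL/MariaDB server. This is an illustrative sketch, not `dbcm2`'s actual code:

```python
import sqlite3

class UseSQLite:
    """Illustrative context manager in the spirit of UseDatabase (not dbcm2's real code)."""
    def __init__(self, dbname: str):
        self.dbname = dbname

    def __enter__(self):
        # open the connection and hand the caller a cursor
        self.conn = sqlite3.connect(self.dbname)
        self.cursor = self.conn.cursor()
        return self.cursor

    def __exit__(self, exc_type, exc_value, exc_tb):
        # commit and tidy up, even if the with-block raised
        self.conn.commit()
        self.cursor.close()
        self.conn.close()

with UseSQLite(":memory:") as cursor:
    cursor.execute("create table log (msg text)")
    cursor.execute("insert into log values ('hello')")
    cursor.execute("select * from log")
    print(cursor.fetchall())  # → [('hello',)]
```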
# ## How to use
# Begin by importing what you need from `dbcm2`:
from dbcm2.DBcm import UseDatabase, SQLError
# Note: The use of `dbcm2.DBcm` above is a breaking change from the original `DBcm` as described in the book.
#
# Next, create a configuration dictionary detailing the credentials you are using:
config = {
'host': '127.0.0.1',
'user': 'usernamehere',
'password': '<PASSWORD>',
'database': 'databasenamehere',
}
# Another assumption is that you have previously installed your database server, created a new database, granted rights to the new database to a user, then created one or more tables. See Chapter 7 of Head First Python, 2nd Edition, for more on this.
# You are now ready to use the context manager in your code. Here's a simple example where the SQL query is sent to the server, executed, then the results (if any) are available within the `data` variable:
with UseDatabase(config) as cursor:
try:
_SQL = "select * from log"
cursor.execute(_SQL)
data = cursor.fetchall()
except SQLError as err:
print('Your query broke:', str(err))
# And, that's it! Enjoy, and have fun.
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dlnd
# language: python
# name: dlnd
# ---
# # Training a Shallow Neural Network with Gradient Descent
#
import tensorflow as tf
import numpy as np
# ## Hyperparameter Settings
EPOCHS = 1000
# ## Network Architecture Definition
# ### Shallow neural network
# #### Input layer: 2, hidden layer: 128 (sigmoid activation), output layer: 10 (softmax activation)
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
self.d1 = tf.keras.layers.Dense(128, input_dim=2, activation='sigmoid')
self.d2 = tf.keras.layers.Dense(10, activation='softmax')
def call(self, x, training=None, mask=None):
x = self.d1(x)
return self.d2(x)
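# Before running the Keras model, it can help to see what it computes. Here is a plain-Python sketch of the same forward pass (2 → 128 sigmoid → 10 softmax) with random placeholder weights — an illustration of the math only, not the trained model:

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    m = max(zs)                        # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# random placeholder weights: 2 inputs -> 128 hidden units -> 10 outputs
W1 = [[random.gauss(0, 0.1) for _ in range(2)] for _ in range(128)]
b1 = [0.0] * 128
W2 = [[random.gauss(0, 0.1) for _ in range(128)] for _ in range(10)]
b2 = [0.0] * 10

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    return softmax(z)

probs = forward([0.5, -1.0])
print(len(probs), round(sum(probs), 6))  # 10 class probabilities summing to 1
```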
# ## Training Loop Definition
# - ref: tf.function and AutoGraph in TensorFlow 2.0: https://www.tensorflow.org/guide/function?hl=ko
# - ref: Summary of major changes in TensorFlow 2.0: https://tensorflow.blog/2019/02/21/%EC%9D%B4%ED%8E%99%ED%8B%B0%EB%B8%8C-%ED%85%90%EC%84%9C%ED%94%8C%EB%A1%9C-2-0/
@tf.function
def train_step(model, inputs, labels, loss_object, optimizer, train_loss, train_metric):
with tf.GradientTape() as tape:
predictions = model(inputs)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables) # df(x)/dx
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_metric(labels, predictions)
# ## Dataset Creation and Preprocessing
# +
np.random.seed(0)
pts = list()
labels = list()
center_pts = np.random.uniform(-8.0, 8.0, (10, 2))
for label, center_pt in enumerate(center_pts):
for _ in range(100):
pts.append(center_pt + np.random.randn(*center_pt.shape))
labels.append(label)
pts = np.stack(pts, axis=0).astype(np.float32)
labels = np.stack(labels, axis=0)
train_ds = tf.data.Dataset.from_tensor_slices((pts, labels)).shuffle(1000).batch(32)
# -
# ## Model Creation
model = MyModel()
# ## Loss Function and Optimizer Settings
# ### CrossEntropy, Adam Optimizer
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
# ## Evaluation Metric Settings
# ### Accuracy
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
# ## Training Loop
for epoch in range(EPOCHS):
for x, label in train_ds:
train_step(model, x, label, loss_object, optimizer, train_loss, train_accuracy)
template = 'Epoch {}, Loss: {}, Accuracy: {}'
print(template.format(epoch + 1,
train_loss.result(),
train_accuracy.result() * 100))
train_loss.reset_states()
train_accuracy.reset_states()
# ## Saving the Dataset and Trained Parameters
# +
np.savez_compressed('ch2_dataset.npz', inputs=pts, labels=labels)
W_h, b_h = model.d1.get_weights()
W_o, b_o = model.d2.get_weights()
W_h = np.transpose(W_h)
W_o = np.transpose(W_o)
np.savez_compressed('ch2_parameters.npz',
W_h=W_h,
b_h=b_h,
W_o=W_o,
b_o=b_o)
# -
| RNN_TF20/03. Shallow Neural Network Training with Gradient Descent-Antonio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inductive Miner
# ## Step 1: Handling and import event data
# + pycharm={"name": "#%%\n"}
import pm4py
from pm4py.objects.log.importer.xes import importer as xes_importer
from pm4py.algo.discovery.inductive import algorithm as inductive_miner
log = xes_importer.apply("BPIC2012-6-Filtering.xes")
# -
# ## Step 2: Mining event log - Process Discovery
# + pycharm={"name": "#%%\n"}
from pm4py.algo.discovery.inductive import algorithm as inductive_miner
from pm4py.visualization.process_tree import visualizer as pt_visualizer
net, initial_marking, final_marking = inductive_miner.apply(log)
tree = inductive_miner.apply_tree(log)
# -
# ## Step 3: Visualize Petri net and Process Tree of Mined Process from log
# + pycharm={"name": "#%%\n"}
pm4py.view_petri_net(net, initial_marking, final_marking)
# + pycharm={"name": "#%%\n"}
gviz = pt_visualizer.apply(tree)
pt_visualizer.view(gviz)
# -
# ## Step 4: Convert Petri Net to BPMN
# + pycharm={"name": "#%%\n"}
bpmn_graph = pm4py.convert_to_bpmn(net, initial_marking, final_marking)
pm4py.view_bpmn(bpmn_graph, "png")
# -
# ## Step 5: Log-Model Evaluation
# ### Replay Fitness
# + pycharm={"name": "#%%\n"}
from pm4py.algo.evaluation.replay_fitness import algorithm as replay_fitness_evaluator
fitness = replay_fitness_evaluator.apply(log, net, initial_marking, final_marking, variant=replay_fitness_evaluator.Variants.TOKEN_BASED)
# + pycharm={"name": "#%%\n"}
fitness
# -
# ### Precision
# + pycharm={"name": "#%%\n"}
from pm4py.algo.evaluation.precision import algorithm as precision_evaluator
prec = precision_evaluator.apply(log, net, initial_marking, final_marking, variant=precision_evaluator.Variants.ETCONFORMANCE_TOKEN)
# + pycharm={"name": "#%%\n"}
prec
# -
# ### F-measure
# + pycharm={"name": "#%%\n"}
def f_measure(f, p):
return (2*f*p)/(f+p)
f_measure(fitness['average_trace_fitness'], prec)
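# As a quick numeric check of the harmonic mean (function repeated here so the cell stands alone): a fitness of 0.9 and a precision of 0.8 give an F-measure of about 0.847 — always between the two inputs, and pulled toward the smaller one.

```python
def f_measure(f, p):
    # harmonic mean of fitness and precision
    return (2 * f * p) / (f + p)

print(round(f_measure(0.9, 0.8), 3))  # → 0.847
```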
# + pycharm={"name": "#%%\n"}
# %reset -f
# + pycharm={"name": "#%%\n"}
| src/BPIC 2012/InductiveMiner_after.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="AruwmQxE5Xm_"
# In this assignment you will continue to make some plots on the [Coronavirus Source Data](https://ourworldindata.org/coronavirus-source-data). For plotting you will use Seaborn library.
# + [markdown] id="ZXTkIVcs5XnI"
# **(1)** Plot a line plot with seaborn for total deaths four the four countries (Spain, France, Germany, Italy) after April 1, 2020.
# + id="QDS91EJh5XnK" colab={"base_uri": "https://localhost:8080/", "height": 609} executionInfo={"status": "ok", "timestamp": 1636749995529, "user_tz": -180, "elapsed": 2162, "user": {"displayName": "Serhan \u00<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiHrfv372wn98VVRXVKRE9HrNOFRYFJ236z3hwSSQ=s64", "userId": "17824916347375562391"}} outputId="5a7806d6-e5fc-4f30-9eaf-3ac9d5dba1b7"
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('owid-covid-data.csv', parse_dates=["date"], low_memory=False)
df
# + colab={"base_uri": "https://localhost:8080/"} id="Au1YhaEMCCkU" executionInfo={"status": "ok", "timestamp": 1636750200868, "user_tz": -180, "elapsed": 243, "user": {"displayName": "Serhan \u00d<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiHrfv372wn98VVRXVKRE9HrNOFRYFJ236z3hwSSQ=s64", "userId": "17824916347375562391"}} outputId="62a96500-88a9-4273-be43-5ce329b32f07"
df.loc[df['location'] == 'Spain', 'date'].tail(218)
# + colab={"base_uri": "https://localhost:8080/"} id="71CnBH81ECRW" executionInfo={"status": "ok", "timestamp": 1636750928292, "user_tz": -180, "elapsed": 214, "user": {"displayName": "Serhan \u0<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiHrfv372wn98VVRXVKRE9HrNOFRYFJ236z3hwSSQ=s64", "userId": "17824916347375562391"}} outputId="5af0c7b5-e154-49b3-b4ea-5915e0863c86"
spain_deaths = df.loc[df['location'] == 'Spain', 'total_deaths'].tail(218)
france_deaths = df.loc[df['location'] == 'France', 'total_deaths'].tail(218)
germany_deaths = df.loc[df['location'] == 'Germany', 'total_deaths'].tail(218)
italy_deaths = df.loc[df['location'] == 'Italy', 'total_deaths'].tail(218)
spain_deaths
# + colab={"base_uri": "https://localhost:8080/"} id="EwVmA7wJ69n3" executionInfo={"status": "ok", "timestamp": 1636753222558, "user_tz": -180, "elapsed": 257, "user": {"displayName": "Serhan \u00<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiHrfv372wn98VVRXVKRE9HrNOFRYFJ236z3hwSSQ=s64", "userId": "17824916347375562391"}} outputId="abee5b4a-fe81-4174-e196-830517d03676"
df[df.location == 'Spain'].total_deaths.tail(218)
# + colab={"base_uri": "https://localhost:8080/"} id="5lQnxjoSGDmY" executionInfo={"status": "ok", "timestamp": 1636754956622, "user_tz": -180, "elapsed": 266, "user": {"displayName": "Serhan \u<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiHrfv372wn98VVRXVKRE9HrNOFRYFJ236z3hwSSQ=s64", "userId": "17824916347375562391"}} outputId="4a0175f5-de71-4ab8-c852-da85df834066"
df.loc[df['location'] == 'Spain', 'date'].tail(218)
# + id="cx6nFFyY5XnM" colab={"base_uri": "https://localhost:8080/", "height": 511} executionInfo={"status": "ok", "timestamp": 1636755562443, "user_tz": -180, "elapsed": 804, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiHrfv372wn98VVRXVKRE9HrNOFRYFJ236z3hwSSQ=s64", "userId": "17824916347375562391"}} outputId="24c6374b-2b3b-44bd-a430-3d7143e249c1"
plt.figure(figsize=(10, 5), dpi = 100)
x_ax = df.loc[df['location'] == 'Spain', 'date'].tail(218)
spain_deaths = df.loc[df['location'] == 'Spain', 'total_deaths'].tail(218)
france_deaths = df.loc[df['location'] == 'France', 'total_deaths'].tail(218)
germany_deaths = df.loc[df['location'] == 'Germany', 'total_deaths'].tail(218)
italy_deaths = df.loc[df['location'] == 'Italy', 'total_deaths'].tail(218)
# plot positionally (.values) so the four countries' differing row indexes don't misalign the lines
sns.lineplot(x = x_ax.values, y = spain_deaths.values, lw = 3, color = 'blue')
sns.lineplot(x = x_ax.values, y = france_deaths.values, lw = 3, color = 'green')
sns.lineplot(x = x_ax.values, y = germany_deaths.values, lw = 3, color = 'red')
sns.lineplot(x = x_ax.values, y = italy_deaths.values, lw = 3, color = 'cyan')
title_style = {'color': 'darkred', 'size': 20 }
axis_style = {'color': 'darkblue', 'size': 15 }
plt.title('Total Deaths for 4 Countries', fontdict = title_style)
plt.xlabel('Dates', fontdict= axis_style)
plt.ylabel('Total Deaths', fontdict= axis_style)
plt.legend(['Spain', 'France', 'Germany', 'Italy'])
plt.xticks(rotation = 45, fontsize = 9)
plt.show()
# + [markdown] id="sBQJjzu85XnN"
# **(2)** Plot a bar plot with seaborn for average death number that compares continents.
# + id="9N27Z1Kc5XnO" outputId="2314a1fa-1a37-4f1e-e695-a4d25cdf13c5"
# + [markdown] id="9_i1Zglu5XnO"
# **(3)** Plot a histogram for daily deaths for any country you choose. Make four subplots for different bin counts and `kde` arguments.
# + id="t1EA-xO65XnP" outputId="f8c6766f-17cf-4bc2-d88a-cc71134207d7"
# + [markdown] id="aBLXdDSE5XnQ"
# **(4)** Create a figure and three subplots containing boxplot, violin plot and swarm plot for daily deaths of two countries you choose.
# + id="P1sZ2LcK5XnR" outputId="7694a44d-6681-43a1-c445-5e5d845f06d9"
| Visualization_Assignments/Copy of A_06_VisualizationWithSeaborn_en.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Structures
# Python provides several compound data types that group multiple values: lists (`list`), dictionaries (`dict`), tuples (`tuple`), and sets (`set`).
# ## Lists (`list`)
# A list is written by enumerating its elements, separated by commas, between square brackets:
v = [1, 2, 3, 4, 5]
print(v)
# An empty list can be created with square brackets or with the built-in `list()` function:
ures_lista_1 = []
print(ures_lista_1)
ures_lista_2 = list()
print(ures_lista_2)
# In Python, a list's elements can have different types (which is typically not true in other languages):
v = [1, 2, "cica", 5, True, [5, 6, 7]]
print(v)
# Just as with strings, we can extract an element of the list by indexing.
#
# The element at index three, i.e. the fourth element of the list:
print(v[3])
# The first three elements of the list:
print(v[:3])
# Negative indices also work, just as with strings:
v[-2]
# ### Some operations defined on lists
# Addition is defined on lists: it concatenates the lists it receives into a single list:
v = [1, 2, 3] + [4, 5, 6]
print(v)
w = v + [100, 1000, 10000]
print(w)
# Lists, unlike strings, are *mutable* objects, i.e. their contents can be changed.
#
# Let's replace the fourth element of the list `v` with 25:
v[3] = 25
print(v)
# The `append()` method adds a new element to the end of the list:
print(v) # before appending
v.append("137")
print(v)
# The length of a list can be obtained with the built-in `len()` function:
len(v)
# The index of a given element can be obtained with the `index()` method:
v.index(3)
# If we pass an element that is not in the list, we get an error:
v.index(137)
# We met the `in` and `not in` operators when discussing Boolean expressions. They test whether a given element is in the list or not:
4 in v # is the number 4 in the list `v`?
12 not in v # is the number 12 absent from the list `v`?
# Egy listát lemásolhatunk a `copy()` metódussal. Ez a lemásolt lista tartalmával megegyező, új listát ad vissza:
v1 = v.copy()
v1
# The order of a list's elements can be reversed with the `reverse()` method. It does not create a new list; it reverses the element order in place, in the list it was called on:
print(v)
v.reverse() # does not create a new list!
print(v)
# The `count()` method counts how many times the element passed as a parameter occurs in the list:
print(v.count(3))
# ## Dictionaries (`dict`)
# The structure of a dictionary differs fundamentally from that of a list. A dictionary's elements are indexed by keys (*key*), which must be immutable (e.g., a string or an integer). Each key has an associated value (*value*), with the restriction that a key can have only one value (within a single dictionary).
# Let's create a dictionary that stores people and their phone numbers. The elements of the dictionary, i.e., the key-value pairs, are given between curly braces (`{}`), each key separated from its value by a colon (`:`):
tel = {"döncike": 123, "zénó": 456, "zebulon": 789}
# **Important**: in Python versions before 3.7 dictionary keys were stored unordered, i.e., the elements did not necessarily come back in insertion order!
# An empty dictionary can be created with curly braces or with the built-in `dict()` function:
ures_dict_1 = {}
print(ures_dict_1)
ures_dict_2 = dict()
print(ures_dict_2)
# The value belonging to a specific key of the dictionary can be looked up as follows: after the dictionary's name, write the key whose value we are interested in between square brackets. Zénó's phone number can thus be queried like this:
print(tel["zénó"])
# Similarly to lists, asking for the value of a nonexistent key raises an error:
print(tel["kázmér"])
# Of course, new key-value pairs can be added to the dictionary. Kázmér's phone number is 137, and we would like to add it to our dictionary. We can do this with a simple assignment whose left-hand side is a new key of the dictionary and whose right-hand side is the value belonging to that key:
tel["kázmér"] = 137
print(tel["kázmér"])
# It is also possible to print the entire contents of the dictionary:
print(tel)
# The value of one key in the dictionary may differ in type from the other values:
tel["huba"] = "345"
print(tel)
# We can also delete the value belonging to a specific key, together with the key itself, using the `del` keyword:
del tel["döncike"]
print(tel)
# Using the `del` keyword can sometimes cause errors if we don't pay attention to what happens in the surrounding code (loops are the typical trouble spot). Use it with caution!
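A minimal sketch of the loop pitfall hinted at above: deleting entries while iterating directly over a dictionary raises a `RuntimeError`, so iterate over a snapshot of the keys instead (the names and numbers here are illustrative, reusing the phone-book example):

```python
tel = {"zénó": 456, "zebulon": 789, "huba": 345}

# Safe pattern: take a snapshot of the keys with list() before looping,
# so deleting from the dictionary doesn't disturb the iteration.
for nev in list(tel.keys()):
    if tel[nev] > 400:
        del tel[nev]

print(tel)  # only the entries with values <= 400 remain
```

Looping with `for nev in tel:` and deleting inside the loop body would fail, because the dictionary changes size during iteration.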
# ### Operations defined on dictionaries
# The keys stored in a dictionary can be queried with the `keys()` method:
print(tel.keys())
# The values can be queried with the `values()` method:
print(tel.values())
# And the key-value pairs with the `items()` method:
print(tel.items())
# The outputs of the above methods are view objects containing the requested information. They can be converted to lists so that they are more convenient to use later:
nevek = list(tel.keys())
print(nevek)
# ## Tuples (`tuple`)
# A third data structure is the ordered pair, or tuple. In fact it can store not just a pair but any number of elements.
# A tuple's elements are given between parentheses, separated by commas:
t = (1, "hello", True)
print(t)
# The type of a tuple is `tuple`:
type(t)
# In simpler cases the parentheses can be omitted, for example when assigning to a variable, so the tuple below is exactly the same as the one above:
tt = 1, "hello", True
print(tt)
# Tuple objects can be indexed and sliced the same way as strings and lists.
# Like strings, tuples are immutable objects. If we try to change, say, their first element, we get an error:
t[0] = 137
# Although a `tuple` itself is immutable, it may contain mutable objects, for example lists:
t3 = ([1, 2, 3], 137)
print(t3)
# How can we create a `tuple` with a single element? Writing one element between parentheses does not produce a `tuple`:
egyelem = (1)
type(egyelem)
# Note that the type of the variable `egyelem` is `int`! Writing a trailing comma gives the object the correct type:
egyelem = (1, ) # note the comma!
type(egyelem)
# And how do we create a zero-element `tuple`? A pair of parentheses does the job:
nullaelem = ()
type(nullaelem)
| teljes/05-Adatstrukturak.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp data.text2text.core
# -
#hide
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# # data.text2text.core
#
# > This module contains the core text2text (e.g., language modeling, summarization, translation) bits required to use the fastai DataBlock API and/or mid-level data processing pipelines to organize your data in a way modelable by huggingface transformer implementations.
# +
#export
from functools import reduce
import torch, nlp, pdb
from transformers import *
from fastai.text.all import *
from blurr.utils import *
from blurr.data.core import *
logging.set_verbosity_error()
# +
#hide
import pdb
from nbdev.showdoc import *
from fastcore.test import *
from fastai import __version__ as fa_version
from torch import __version__ as pt_version
from transformers import __version__ as hft_version
print(f'Using pytorch {pt_version}')
print(f'Using fastai {fa_version}')
print(f'Using transformers {hft_version}')
# + tags=[]
#cuda
torch.cuda.set_device(1)
print(f'Using GPU #{torch.cuda.current_device()}: {torch.cuda.get_device_name()}')
# -
# ## Base tokenization, batch transform, and DataBlock methods
# +
#export
class HF_Text2TextAfterBatchTransform(HF_AfterBatchTransform):
def decodes(self, encoded_samples):
input_ids = encoded_samples['input_ids'] if (isinstance(encoded_samples, dict)) else encoded_samples
return self.input_return_type(input_ids, hf_tokenizer=self.hf_tokenizer)
class HF_Text2TextBlock(HF_TextBlock):
def __init__(self, hf_arch=None, hf_tokenizer=None, before_batch_tfms=None, after_batch_tfms=None,
max_length=None, padding=True, truncation=True, is_split_into_words=False,
n_tok_inps=1, tok_kwargs={}, input_return_type=HF_BaseInput, dl_type=SortedDL,
before_batch_kwargs={}, after_batch_kwargs={}, **kwargs):
if (after_batch_tfms is None):
after_batch_tfms = HF_Text2TextAfterBatchTransform(hf_tokenizer, input_return_type,
**after_batch_kwargs.copy())
return super().__init__(hf_arch=hf_arch, hf_tokenizer=hf_tokenizer,
before_batch_tfms=before_batch_tfms, after_batch_tfms=after_batch_tfms,
max_length=max_length, padding=padding, truncation=truncation,
is_split_into_words=is_split_into_words, n_tok_inps=n_tok_inps,
tok_kwargs=tok_kwargs, input_return_type=input_return_type, dl_type=dl_type,
before_batch_kwargs=before_batch_kwargs, after_batch_kwargs=after_batch_kwargs,
**kwargs)
# -
# We include a new batch `Transform` and `TransformBlock` specific to text-2-text tasks.
# ## Cleanup
# + tags=[]
#hide
from nbdev.export import notebook2script
notebook2script()
# -
| nbs/01za_data-text2text-core.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# + id="izDHCRIt8gqQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="93429536-a3a5-4c4c-b06a-e8f9ef61881b" executionInfo={"status": "ok", "timestamp": 1539141008640, "user_tz": -540, "elapsed": 1831, "user": {"displayName": "\u6b66\u85e4\u7199\u9e9f", "photoUrl": "", "userId": "16762842130569802091"}}
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import MobileNet
# + id="gQOTVgXb8W9a" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 7174} outputId="56830c86-494a-4cb1-e80a-a1cf267e311f" executionInfo={"status": "ok", "timestamp": 1539167453237, "user_tz": -540, "elapsed": 26444440, "user": {"displayName": "\u6b66\u85e4\u7199\u9e9f", "photoUrl": "", "userId": "16762842130569802091"}}
batch_size = 32
classes = 10
epochs = 200
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
Y_train = keras.utils.to_categorical(y_train, classes)
Y_test = keras.utils.to_categorical(y_test, classes)
img_input = keras.layers.Input(shape=(32, 32, 3))
model = MobileNet(input_tensor=img_input, alpha=0.5, weights=None, classes=classes)
model.compile(loss='categorical_crossentropy', optimizer="nadam", metrics=['accuracy'])
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images horizontally
vertical_flip=False) # do not flip images vertically
datagen.fit(X_train)
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),
steps_per_epoch=X_train.shape[0] // batch_size,
epochs=epochs,
validation_data=(X_test, Y_test))
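`keras.utils.to_categorical` above turns the integer CIFAR-10 labels into one-hot vectors before training. A pure-Python sketch of the idea (not the actual Keras implementation; the function name is ours):

```python
def to_categorical_sketch(labels, num_classes):
    """Return a one-hot row of length num_classes for each integer label."""
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

# Label 3 out of 10 classes becomes a length-10 vector with a single
# 1.0 at position 3, which is what categorical_crossentropy expects.
print(to_categorical_sketch([3, 0], 10))
```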
# + id="rwjCH9BX8o9Y" colab_type="code" colab={}
| mobile_net_ssd.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Q#
# language: qsharp
# name: iqsharp
# ---
# # Graph Coloring Kata Workbook
#
# **What is this workbook?**
# A workbook is a collection of problems, accompanied by solutions to them.
# The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required.
#
# Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.
#
# This workbook describes the solutions to the problems offered in the [Graph Coloring kata](./GraphColoring.ipynb).
# Since the tasks are offered as programming problems, the explanations also cover some elements of Q# that might be non-obvious for a first-time user.
# ## Part I. Colors Representation and Manipulation
# ### Task 1.1. Initialize register to a color
#
# **Inputs:**
#
# 1. An integer $C$ ($0 \leq C \leq 2^{N} - 1$).
#
# 2. An array of $N$ qubits in the $|0...0\rangle$ state.
#
# **Goal:**
#
# Prepare the array in the basis state which represents the binary notation of $C$.
# Use little-endian encoding (i.e., the least significant bit should be stored in the first qubit).
#
# **Example:** For $N = 2$ and $C = 2$ the state should be $|01\rangle$.
# ### Solution
#
# We first need to convert the integer C to its binary representation. In Q#, we can use the [IntAsBoolArray](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.convert.intasboolarray) function to convert the input integer to its equivalent binary representation in little-endian `binaryC`.
#
# Next we need to use `binaryC` as a bit mask: whenever `binaryC[i]` is 1 (or `true` if stored as an array of boolean values), we need to flip the qubit by applying an X gate. We can do this using a `for` loop:
# +
%kata T11_InitializeColor
open Microsoft.Quantum.Convert;
operation InitializeColor (C : Int, register : Qubit[]) : Unit is Adj {
let N = Length(register);
// Convert C to an array of bits in little endian format
let binaryC = IntAsBoolArray(C, N);
// Value "true" corresponds to bit 1 and requires applying an X gate
for (i in 0 .. N - 1) {
if (binaryC[i]) {
X(register[i]);
}
}
}
# -
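The little-endian bit mask that `IntAsBoolArray` produces can be sketched classically in Python (the function name is ours, not part of Q#):

```python
def int_as_bool_array(c, n):
    """Little-endian binary representation of c, padded to n bits."""
    return [bool((c >> i) & 1) for i in range(n)]

# For N = 2 and C = 2 the mask is [False, True]: only the second
# qubit gets an X gate, preparing the state |01>.
print(int_as_bool_array(2, 2))
```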
# Alternatively, we can use a helpful library operation [ApplyPauliFromBitString](https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.canon.applypaulifrombitstring).
# It takes as input a Pauli operator $P \in \{I,X,Y,Z\}$, a boolean value, a boolean array and a qubit register and applies the Pauli operator to the register using the boolean array as a bit mask: the operator is applied to the qubits that correspond to array elements equal to the given boolean value.
#
# We can think of `ApplyPauliFromBitString` as the following unitary transformation:
#
# $$P^{b_0} \otimes P^{b_1} \otimes ... \otimes P^{b_{n-1}}$$
#
# Here $P^0=I, P^1 =P$ (the given Pauli operator), and $b_i \in \{0,1\}$ are the elements of the given boolean array if the given boolean value is `true` or their negations if it is `false`.
#
# In our case, `ApplyPauliFromBitString(PauliX, true, binaryC, register)` represents the following transformation of `register`:
#
# $$|\psi\rangle \xrightarrow{} X^{c_0} \otimes X^{c_1} \otimes ... \otimes X^{c_{n-1}}|\psi\rangle$$
#
# where $c_0c_1...c_{n-1}$ is the binary representation of $C$.
#
# When the input qubit register is in the state $|0...0\rangle$, `ApplyPauliFromBitString` operation will convert it into a basis state representing the boolean array, i.e., little-endian binary encoding of $C$:
#
# $$|0...0\rangle \xrightarrow{} X^{c_0} \otimes X^{c_1} \otimes ... \otimes X^{c_{n-1}}|0...0\rangle = |c_0c_1...c_{n-1}\rangle = |C\rangle$$
# +
%kata T11_InitializeColor
open Microsoft.Quantum.Convert;
operation InitializeColor (C : Int, register : Qubit[]) : Unit is Adj {
let N = Length(register);
// Convert C to an array of bits in little endian format
let binaryC = IntAsBoolArray(C, N);
// Value "true" corresponds to bit 1 and requires applying an X gate
ApplyPauliFromBitString(PauliX, true, binaryC, register);
}
# -
# [Return to task 1.1 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-1.1.-Initialize-register-to-a-color)
# ### Task 1.2. Read color from a register
#
# **Input:** An array of $N$ qubits which are guaranteed to be in one of the $2^{N}$ basis states.
#
# **Output:**
#
# An $N$-bit integer that represents this basis state, in little-endian encoding.
# The operation should not change the state of the qubits.
#
# **Example:** For $N = 2$ and the qubits in the state $|01\rangle$ return 2 (and keep the qubits in $|01\rangle$).
# ### Solution
#
# Since we are guaranteed that `register` is in one of the $2^N$ basis states,
# simply measuring it (without resetting the qubits to the $|0\rangle$ state after the measurement) will not destroy any superposition and leave the state of the qubits unchanged, while giving us the necessary information.
#
# We can use the [MultiM](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.measurement.multim) operation to measure each of the qubits and store the result in an array of the type `Result[]`.
#
# We now need to convert these bits into an integer. We can do this directly using the [ResultArrayAsInt](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.convert.resultarrayasint) function which converts an array of `Result` values representing a little-endian encoding of an integer into the equivalent integer.
# +
%kata T12_MeasureColor
open Microsoft.Quantum.Convert;
open Microsoft.Quantum.Measurement;
operation MeasureColor (register : Qubit[]) : Int {
let measurements = MultiM(register);
return ResultArrayAsInt(measurements);
}
# -
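Classically, `ResultArrayAsInt` is just a weighted sum of the measured bits; a sketch with 0/1 standing in for `Zero`/`One` (the helper name is ours):

```python
def result_array_as_int(bits):
    """Interpret a little-endian list of 0/1 measurement results as an integer."""
    return sum(bit << i for i, bit in enumerate(bits))

# Measuring |01> gives the bits [0, 1], which decodes to the integer 2,
# matching the example in the task statement.
print(result_array_as_int([0, 1]))
```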
# [Return to task 1.2 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-1.2.-Read-color-from-a-register)
# ### Task 1.3. Read coloring from a register
#
# **Inputs:**
#
# 1. The number of elements in the coloring $K$.
#
# 2. An array of $K * N$ qubits which are guaranteed to be in one of the $2^{KN}$ basis states.
#
# **Output:**
#
# An array of $K$ $N$-bit integers that represent this basis state.
# $i$-th integer of the array is stored in qubits with indices $i * N$, $i * N + 1$, ..., $i * N + N - 1$ in little-endian format.
# The operation should not change the state of the qubits.
#
# **Example:**
# For $N = 2$, $K = 2$ and the qubits in the state $|0110\rangle$ return `[2, 1]`.
# ### Solution
#
# This can be considered as $K$-copies version of the previous task. In that task we read the color from a register of size $N$.
# Here we have to return $K$ integers representing the $K$ colors from each of the $N$-bit registers.
#
# We are given $K$ and hence we can find out $N$ by dividing the `Length` of the register by $K$.
#
# Next we need to divide $KN$-qubit register into $K$ $N$-qubit registers.
# We use the Q# function [Chunks](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.arrays.chunks) which partitions the given array into chunks of the given length.
#
# Finally, we use the [ForEach](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.arrays.foreach) operation to apply the `MeasureColor` operation from task 2.2 to each element of `colorPartitions` and assemble the results of each application into the resulting array `coloring`.
# +
%kata T13_MeasureColoring
open Microsoft.Quantum.Arrays;
operation MeasureColoring (K : Int, register : Qubit[]) : Int[] {
let N = Length(register) / K;
let colorPartitions = Chunks(N, register);
let coloring = ForEach(MeasureColor, colorPartitions);
return coloring;
}
# -
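The `Chunks` + `ForEach` pipeline has a direct classical analogue: split the bit list into $N$-bit chunks and decode each one (helper names are ours):

```python
def chunks(n, xs):
    """Partition xs into consecutive chunks of length n."""
    return [xs[i:i + n] for i in range(0, len(xs), n)]

def decode_le(bits):
    """Little-endian bits -> integer."""
    return sum(bit << i for i, bit in enumerate(bits))

# For N = 2, K = 2 and the state |0110>, the chunks [0, 1] and [1, 0]
# decode to the coloring [2, 1], matching the example above.
bits = [0, 1, 1, 0]
coloring = [decode_le(chunk) for chunk in chunks(2, bits)]
print(coloring)
```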
# [Return to task 1.3 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-1.3.--Read-coloring-from-a-register)
# ### Task 1.4. 2-bit color equality oracle
#
# **Inputs:**
#
# 1. An array of 2 qubits in an arbitrary state $|c_{0}\rangle$ representing the first color.
#
# 2. An array of 2 qubits in an arbitrary state $|c_{1}\rangle$ representing the second color.
#
# 3. A qubit in an arbitrary state $|y\rangle$ (target qubit).
#
# **Goal:**
#
# Transform state $|c_{0}\rangle|c_{1}\rangle|y\rangle$ into state $|c_{0}\rangle|c_{1}\rangle|y \oplus f(c_{0},c_{1})\rangle$ ($\oplus$ is addition modulo 2),
# where $f(x) = 1$ if $c_{0}$ and $c_{1}$ are in the same state, and 0 otherwise.
# Leave the query register in the same state it started in.
#
# In this task you are allowed to allocate extra qubits.
# ### Solution
#
# We are given that $f(c_0,c_1)=1$ if and only if $c_0=c_1$. Let's express this using XOR (or $\oplus$) - a binary operation defined as follows:
#
# $$c_0 = c_1 \Leftrightarrow c_0 \oplus c_1 =0 $$
#
# We can express $f(c_0,c_1)$ as $1$ if $c_0\oplus c_1=0$ and $0$ otherwise. The advantage of this representation over the previous one is that we can calculate the XOR of two bits using the $\textrm{CNOT}$ operator.
# To do this, we allocate an extra qubit in the $|0\rangle$ state and do two $\textrm{CNOT}$s with each of the input bits as the control and the extra qubit as target. Since the effect of $\textrm{CNOT}$ is $|x\rangle|y\rangle \rightarrow |x\rangle|y \oplus x\rangle$, the effect of such a pair of $\textrm{CNOT}$s will be
#
# $$|b_0b_1\rangle|0\rangle \rightarrow |b_0b_1\rangle|(0 \oplus b_0) \oplus b_1\rangle = |b_0b_1\rangle|b_0 \oplus b_1\rangle$$
#
# Thus, we can compute bitwise XOR of bit strings $c_0$ and $c_1$ by allocating two auxiliary qubits $|a\rangle$ in the initial state $|00\rangle$ and applying the XOR computation procedure described above to pairs of corresponding bits in $c_0$ and $c_1$.
#
# We now need to flip the target qubit $|y\rangle$ only if the auxiliary qubits $|a\rangle$ are in the $|00\rangle$ state.
# This can be done by using zero-controlled $X$ gate, i.e., `ControlledOnInt(0, X)`.
# Finally, we need to uncompute the bitwise XOR to ensure the auxiliary qubits are again in the $|00\rangle$ state before releasing them.
# +
%kata T14_ColorEqualityOracle_2bit
operation ColorEqualityOracle_2bit (c0 : Qubit[], c1 : Qubit[], target : Qubit) : Unit is Adj+Ctl {
using (a = Qubit[2]) {
within {
// Compute bitwise XOR of c0 and c1 and store it in a
CNOT(c0[0],a[0]);
CNOT(c0[1],a[1]);
CNOT(c1[0],a[0]);
CNOT(c1[1],a[1]);
} apply {
// If all XORs are 0, c0 = c1, and our function is 1
(ControlledOnInt(0, X))(a, target);
}
}
}
# -
# [Return to task 1.4 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-1.4.-2-bit-color-equality-oracle)
# ### Task 1.5. N-bit color equality oracle (no extra qubits)
#
# This task is the same as task 1.4, but in this task you are NOT allowed to allocate extra qubits.
# ### Solution
#
# Since this task is the generalized $N$-bit version of the previous task, we can approach it in a similar manner: compute the bitwise XOR of $N$-bit registers $c_0$ and $c_1$, and flip the target qubit if the XOR is $0$.
# However, this time we are not allowed to allocate extra qubits and thus must compute (and uncompute) XOR in-place.
#
# We can do this by storing $c_1 \oplus c_0$ in $c_1$ itself: this is exactly the effect of the $\textrm{CNOT}$ gate!
# We'll use $N$ $\textrm{CNOT}$ gates, with each of the qubits of $c_0$ acting as the control and the respective qubits of $c_1$ acting as the target.
# The remaining procedure is exactly the same as the solution to task 1.4.
# +
%kata T15_ColorEqualityOracle_Nbit
operation ColorEqualityOracle_Nbit (c0 : Qubit[], c1 : Qubit[], target : Qubit) : Unit is Adj+Ctl {
within {
// Compute bitwise XOR of c0 and c1 in place (storing it in c1)
for (i in 0 .. Length(c0) - 1) {
CNOT(c0[i], c1[i]);
}
} apply {
// If all XORs are 0, c0 = c1, and our function is 1
(ControlledOnInt(0, X))(c1, target);
}
}
# -
# [Return to task 1.5 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-1.5.-N-bit-color-equality-oracle-(no-extra-qubits))
# ## Part II. Vertex coloring problem
# ### Task 2.1. Classical verification of vertex coloring
#
# **Inputs:**
#
# 1. The number of vertices in the graph $V$ ($V \leq 6$).
#
# 2. An array of $E$ tuples of integers, representing the edges of the graph ($E \leq 12$).
# Each tuple gives the indices of the start and the end vertices of the edge.
# The vertices are indexed $0$ through $V - 1$.
#
# 3. An array of $V$ integers, representing the vertex coloring of the graph.
# $i$-th element of the array is the color of the vertex number $i$.
#
# **Output:**
#
# True if the given vertex coloring is valid (i.e., no pair of vertices connected by an edge have the same color), and false otherwise.
#
# **Example:**
#
# Graph 0 -- 1 -- 2 would have $V = 3$ and `edges = [(0, 1), (1, 2)]`.
# Some of the valid colorings for it would be `[0, 1, 0]` and `[-1, 5, 18]`.
# ### Solution
#
# A graph coloring is valid when the two vertices connected by each edge have different colors. This means we have to check every edge, see whether its endpoints have the same color, and if so, report that the coloring is invalid. If every edge passes the test, we can safely say the coloring is valid.
#
# Since the color of vertex $n$ is the $n$-th element of the `colors` array, we simply loop through every edge, which is a pair of vertices, and compare corresponding colors.
# +
%kata T21_IsVertexColoringValid
function IsVertexColoringValid (V : Int, edges: (Int, Int)[], colors: Int[]) : Bool {
// Loop through every edge
for ((v0, v1) in edges){
// Compare the colors of vertices
if (colors[v0] == colors[v1]){
// A return statement stops the execution of the function
return false;
}
}
// If the code reaches this point, every edge was correct
return true;
}
# -
# [Return to task 2.1 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-2.1.-Classical-verification-of-vertex-coloring)
# ### Task 2.2. Oracle for verifying vertex coloring
#
# **Inputs:**
#
# 1. The number of vertices in the graph $V$ ($V \leq 6$).
#
# 2. An array of $E$ tuples of integers, representing the edges of the graph ($E \leq 12$).
# Each tuple gives the indices of the start and the end vertices of the edge.
# The vertices are indexed $0$ through $V - 1$.
#
# 3. An array of $2V$ qubits `colorsRegister` that encodes the color assignments.
#
# 4. A qubit in an arbitrary state $|y\rangle$ (target qubit).
#
# **Goal:**
#
# Transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2),
# where $f(x) = 1$ if the given vertex coloring is valid, and 0 otherwise.
# Leave the query register in the same state it started in.
#
# Each color in `colorsRegister` is represented as a 2-bit integer in little-endian format.
# See task 1.3 for a more detailed description of color assignments.
# ### Solution
#
# The oracle will rely on the `ColorEqualityOracle_Nbit` operation from task 1.5.
# We will allocate an array of qubits, of the same size as the number of edges, that will be flipped if the vertices connected by the corresponding edge have the same color.
# This can easily be done with the operation defined in task 1.5. We will then check if the array is still in state $|0...0\rangle$; if it is, the coloring is valid.
#
# Since the coloring is provided as an array of qubits, with 2 qubits per vertex (2 qubits = 4 basis states = 4 colors), we have to take the correct chunks of the coloring. We can deduce that the coloring of vertex $n$ is encoded in qubits in positions $2*n$ and $2*n+1$.
#
# Also, do not forget to uncompute to leave the qubits clean.
#
# > In Q#, a sub-array of array elements between indices $a$ and $b$, inclusive, is written as `array[a..b]` (see [array slicing documentation](https://docs.microsoft.com/quantum/user-guide/language/expressions#array-slices)).
# >
# > The uncomputing of the temporarily allocated qubits can be done using the `within ... apply ...` structure (see [conjugations documentation](https://docs.microsoft.com/quantum/user-guide/using-qsharp/control-flow#conjugations)).
# +
%kata T22_VertexColoringOracle
operation VertexColoringOracle (V : Int,
edges : (Int, Int)[],
colorsRegister : Qubit[],
target : Qubit) : Unit is Adj+Ctl {
// Store the number of edges
let edgesNumber = Length(edges);
// Allocate the array of qubits storing the conflicts of the coloring
using (conflicts = Qubit[edgesNumber]) {
within {
// Iterate over every edge
for (i in 0 .. edgesNumber - 1) {
// Deconstruct the edge tuple into two separate vertex indices
let (v0, v1) = edges[i];
// Use the operation from task 1.5 to track conflicts
ColorEqualityOracle_Nbit(colorsRegister[2*v0 .. 2*v0+1],
colorsRegister[2*v1 .. 2*v1+1], conflicts[i]);
}
} apply {
// If all the edges are colored properly, conflicts should be in state |0...0⟩
(ControlledOnInt(0, X))(conflicts, target);
}
}
}
# -
# [Return to task 2.2 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-2.2.-Oracle-for-verifying-vertex-coloring)
# ### Task 2.3. Using Grover's search to find vertex coloring
#
# **Inputs:**
#
# 1. The number of vertices in the graph $V$ ($V \leq 6$).
#
# 2. A marking oracle which implements vertex coloring verification, as implemented in task 2.2.
#
# **Output:**
#
# A valid vertex coloring for the graph, in a format used in task 2.1.
# ### Solution
#
# First we have to implement the generic Grover's algorithm. However, learning the implementation details is not the goal of this kata, so we will assume that you are already familiar with it; if not, please refer to the [Grover's Algorithm kata](./../GroversAlgorithm/GroversAlgorithm.ipynb).
#
# The first code cell implements the phase kickback trick to turn a marking oracle into a phase oracle. The second one is the actual implementation of Grover's algorithm.
operation OracleConverter (markingOracle : ((Qubit[], Qubit) => Unit is Adj), register : Qubit[]) : Unit is Adj {
using (target = Qubit()) {
within {
// Put the target qubit in the |-⟩ state
X(target);
H(target);
} apply {
// Apply the marking oracle
markingOracle(register, target);
}
}
}
# +
open Microsoft.Quantum.Arrays;
operation GroverAlgorithmLoop (markingOracle : ((Qubit[], Qubit) => Unit is Adj), register : Qubit[], iterations : Int) : Unit is Adj {
# Convert the marking oracle into a phase oracle
let phaseOracle = OracleConverter(markingOracle, _);
// Prepare an equal superposition of all basis states
ApplyToEachA(H, register);
// Apply Grover iterations
for (_ in 1..iterations) {
// Apply phase oracle
phaseOracle(register);
// Apply "reflection about the mean"
within {
ApplyToEachA(H, register);
ApplyToEachA(X, register);
} apply {
(Controlled Z)(Most(register), Tail(register));
}
}
}
# -
# The main part of this task is running Grover's algorithm to find a solution.
#
# To do this, we'll need to take some extra steps to use the generic implementation. Here is the flow:
# - Allocate an array of qubits to store graph coloring with 2 qubits per vertex and one more qubit to use when we verify that the solution we found is indeed correct.
# - Try running the algorithm with different numbers of Grover's iterations, starting with 1 iteration and increasing the number each time we don't find a solution.
# We will use two mutable variables for this, `iterations` to store iteration count and `correct` to indicate whether we found a correct solution, and the [repeat-until-success loop](https://docs.microsoft.com/quantum/user-guide/using-qsharp/control-flow#repeat-until-success-loop).
# - In the body of the loop we'll do the following steps:
# - Use Grover's algorithm loop implemented in a previous code cell with the current number of iterations.
# - Measure the qubit array to read out the result of Grover's algorithm. We can do that using the [`MultiM` operation](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.measurement.multim).
# - Check whether the result is a valid graph coloring by applying the marking oracle to the qubit array that stores the coloring (remember that after the measurement the state of these qubits collapsed to the basis state that corresponds to the measurement results) and the extra qubit.
# - Measure the extra qubit in the Pauli Z basis and reset it using the [`MResetZ` operation](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.measurement.mresetz):
# - If the measurement result is `One`, the algorithm result is indeed a solution to the problem we're solving; we need to set `correct` variable to `true` and to decode the result into a graph coloring using the `MeasureColoring` operation from task 1.3.
# - If the measurement result is `Zero`, the algorithm result is not a solution to our problem, and we do nothing.
# - Reset the array that stored the coloring to prepare it for the next iteration using the [`ResetAll` operation](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic.resetall).
# - We stop the loop if we found the solution or if we are running too many iterations (in this case we throw an exception to indicate that we didn't find a solution).
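The incremental-iterations strategy above is needed because the optimal number of Grover iterations depends on the (unknown) number of solutions $M$. For reference, when $M$ is known, standard Grover analysis (not code from this kata) gives roughly $\frac{\pi}{4}\sqrt{2^n/M}$ iterations:

```python
from math import pi, sqrt, floor

def grover_iterations(n_states, n_solutions):
    """Approximate optimal Grover iteration count for a search space
    of n_states basis states containing n_solutions marked states."""
    return floor(pi / 4 * sqrt(n_states / n_solutions))

# With 16 basis states and a single solution, about 3 iterations
# maximize the success probability.
print(grover_iterations(16, 1))
```

Running past this optimum "overshoots" and lowers the success probability, which is why the repeat-until-success loop re-checks each candidate with the oracle.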
# +
%kata T23_GroversAlgorithm
open Microsoft.Quantum.Measurement;
operation GroversAlgorithm (V : Int, oracle : ((Qubit[], Qubit) => Unit is Adj)) : Int[] {
mutable coloring = new Int[V];
using ((register, output) = (Qubit[2 * V], Qubit())) {
mutable correct = false;
mutable iterations = 1;
repeat {
Message($"Trying iteration {iterations}");
GroverAlgorithmLoop(oracle, register, iterations);
let temp = MultiM(register);
oracle(register, output);
if (MResetZ(output) == One) {
set correct = true;
set coloring = MeasureColoring(V, register);
}
ResetAll(register);
}
until (correct or iterations > 10)
fixup {
set iterations += 1;
}
if (not correct) {
fail "No valid coloring was found";
}
}
return coloring;
}
# -
# [Return to task 2.3 of the Graph Coloring kata.](./GraphColoring.ipynb#Task-2.3.-Using-Grover's-search-to-find-vertex-coloring)
| GraphColoring/Workbook_GraphColoring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# ============================================================================
# Decoding in time-frequency space data using the Common Spatial Pattern (CSP)
# ============================================================================
#
#
# The time-frequency decomposition is estimated by iterating over raw data that
# has been band-passed at different frequencies. This is used to compute a
# covariance matrix over each epoch or a rolling time-window and extract the CSP
# filtered signals. A linear discriminant classifier is then applied to these
# signals.
#
# +
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne import Epochs, create_info, events_from_annotations
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
from mne.time_frequency import AverageTFR
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
# -
# Set parameters and read data
#
#
# +
event_id = dict(hands=2, feet=3) # motor imagery: hands vs feet
subject = 1
runs = [6, 10, 14]
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
# Extract information from the raw file
sfreq = raw.info['sfreq']
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
raw.pick_types(meg=False, eeg=True, stim=False, eog=False, exclude='bads')
# Assemble the classifier using scikit-learn pipeline
clf = make_pipeline(CSP(n_components=4, reg=None, log=True, norm_trace=False),
LinearDiscriminantAnalysis())
n_splits = 5 # how many folds to use for cross-validation
cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
# Classification & Time-frequency parameters
tmin, tmax = -.200, 2.000
n_cycles = 10. # how many complete cycles: used to define window size
min_freq = 5.
max_freq = 25.
n_freqs = 8 # how many frequency bins to use
# Assemble list of frequency range tuples
freqs = np.linspace(min_freq, max_freq, n_freqs) # assemble frequencies
freq_ranges = list(zip(freqs[:-1], freqs[1:])) # make freqs list of tuples
# Infer window spacing from the max freq and number of cycles to avoid gaps
window_spacing = (n_cycles / np.max(freqs) / 2.)
centered_w_times = np.arange(tmin, tmax, window_spacing)[1:]
n_windows = len(centered_w_times)
# Instantiate label encoder
le = LabelEncoder()
# -
# Loop through frequencies, apply classifier and save scores
#
#
# +
# init scores
freq_scores = np.zeros((n_freqs - 1,))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
    # Infer window size based on the frequency being used
    w_size = n_cycles / ((fmax + fmin) / 2.)  # in seconds

    # Apply band-pass filter to isolate the specified frequencies
    raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
                                   skip_by_annotation='edge')

    # Extract epochs from filtered data, padded by window size
    epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
                    proj=False, baseline=None, preload=True)
    epochs.drop_bad()
    y = le.fit_transform(epochs.events[:, 2])

    X = epochs.get_data()

    # Save mean scores over folds for each frequency and time window
    freq_scores[freq] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
                                                scoring='roc_auc', cv=cv,
                                                n_jobs=1), axis=0)
# -
# Plot frequency results
#
#
plt.bar(freqs[:-1], freq_scores, width=np.diff(freqs)[0],
align='edge', edgecolor='black')
plt.xticks(freqs)
plt.ylim([0, 1])
plt.axhline(len(epochs['feet']) / len(epochs), color='k', linestyle='--',
label='chance level')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('Decoding Scores')
plt.title('Frequency Decoding Scores')
# Loop through frequencies and time, apply classifier and save scores
#
#
# +
# init scores
tf_scores = np.zeros((n_freqs - 1, n_windows))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
    # Infer window size based on the frequency being used
    w_size = n_cycles / ((fmax + fmin) / 2.)  # in seconds

    # Apply band-pass filter to isolate the specified frequencies
    raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
                                   skip_by_annotation='edge')

    # Extract epochs from filtered data, padded by window size
    epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
                    proj=False, baseline=None, preload=True)
    epochs.drop_bad()
    y = le.fit_transform(epochs.events[:, 2])

    # Roll covariance, csp and lda over time
    for t, w_time in enumerate(centered_w_times):
        # Center the min and max of the window
        w_tmin = w_time - w_size / 2.
        w_tmax = w_time + w_size / 2.

        # Crop data into time-window of interest
        X = epochs.copy().crop(w_tmin, w_tmax).get_data()

        # Save mean scores over folds for each frequency and time window
        tf_scores[freq, t] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
                                                     scoring='roc_auc', cv=cv,
                                                     n_jobs=1), axis=0)
# -
# Plot time-frequency results
#
#
# +
# Set up time frequency object
av_tfr = AverageTFR(create_info(['freq'], sfreq), tf_scores[np.newaxis, :],
centered_w_times, freqs[1:], 1)
chance = np.mean(y) # set chance level to white in the plot
av_tfr.plot([0], vmin=chance, title="Time-Frequency Decoding Scores",
cmap=plt.cm.Reds)
| stable/_downloads/c252c7618c44ac318ad1e9e85d5b7059/plot_decoding_csp_timefreq.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Data for the distance-size ratio (D), the number of discretized tesseroids (N) and the CPU time
#
# This notebook generates the data relating D to N, and D to CPU time.
# %matplotlib inline
from __future__ import division
from tempfile import NamedTemporaryFile
from StringIO import StringIO
import numpy as np
from IPython.display import Image
from matplotlib.font_manager import FontProperties
from fatiando.mesher import Tesseroid
from fatiando.constants import MEAN_EARTH_RADIUS
from fatiando.vis import myv, mpl
import fatiando
import matplotlib.pyplot as plt
import pandas as pd
print('Fatiando a Terra version: {}'.format(fatiando.__version__))
# !tessgzzz --version
def discretize(tesseroid, point, ratio):
    d2r = np.pi/180
    lon, lat, h = point
    lon *= d2r
    lat *= d2r
    sinlat = np.sin(lat)
    coslat = np.cos(lat)
    r = h + MEAN_EARTH_RADIUS
    result = []
    stack = [tesseroid]
    while stack:
        tess = stack.pop()
        # compute the distance to the center of the tesseroid
        rt = 0.5*(tess.top + tess.bottom) + MEAN_EARTH_RADIUS
        latt = d2r*0.5*(tess.s + tess.n)
        lont = d2r*0.5*(tess.w + tess.e)
        cospsi = sinlat*np.sin(latt) + coslat*np.cos(latt)*np.cos(lon - lont)
        distance = np.sqrt(r**2 + rt**2 - 2*r*rt*cospsi)
        # compute the sizes of the tesseroid
        r2 = tess.top + MEAN_EARTH_RADIUS
        dlon = (r2*np.arccos(np.sin(latt)**2
                             + (np.cos(latt)**2)*np.cos(d2r*(tess.e - tess.w))))
        dlat = (r2*np.arccos(np.sin(d2r*tess.n)*np.sin(d2r*tess.s)
                             + np.cos(d2r*tess.n)*np.cos(d2r*tess.s)))
        dr = tess.top - tess.bottom
        nlon, nlat, nr = 1, 1, 1
        if distance/dlon < ratio:
            nlon = 2
        if distance/dlat < ratio:
            nlat = 2
        if distance/dr < ratio:
            nr = 2
        if nlon == 1 and nlat == 1 and nr == 1:
            result.append(tess)
        else:
            stack.extend(tess.split(nlon, nlat, nr))
    return result
tesseroid = Tesseroid(-55, -25, -80, -70, 500e3, 0)
point = [-40, -70, 800e3]
D = np.arange(0, 52.5, 2.5)

with open("a.txt", "w") as f:
    for split in D:
        model = discretize(tesseroid, point, split)
        a = str(len(model))
        print "The number of tesseroids is N:", a
        f.write(a)
        f.write('\n')

import time

with open("b.txt", "w") as f:
    for split in D:
        start = time.clock()
        model = discretize(tesseroid, point, split)
        b = str(time.clock() - start)
        f.write(b)
        f.write('\n')
| Experiments/D&N&Time/.ipynb_checkpoints/D&N&Time-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bOChJSNXtC9g" colab_type="text"
# # Advanced RNNs
# + [markdown] id="OLIxEDq6VhvZ" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
#
# In this notebook we're going to cover some advanced topics related to RNNs.
#
# 1. Conditioned hidden state
# 2. Char-level embeddings
# 3. Encoder and decoder
# 4. Attentional mechanisms
# 5. Implementation
#
#
#
# + [markdown] id="41r7MWJnY0m8" colab_type="text"
# # Set up
# + id="EJDhjHCHY0_a" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="beb1c764-e47f-41a6-f8cf-e4150ee3befd"
# Load PyTorch library
# !pip3 install torch
# + id="p0FbOd6IZmzX" colab_type="code" colab={}
import os
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import torch
# + id="bOsqAo4XZpXQ" colab_type="code" colab={}
# Set Numpy and PyTorch seeds
def set_seeds(seed, cuda):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if cuda:
        torch.cuda.manual_seed_all(seed)

# Creating directories
def create_dirs(dirpath):
    if not os.path.exists(dirpath):
        os.makedirs(dirpath)
# + id="QHfvEzQ9ZweF" colab_type="code" outputId="a69944ff-021d-4d04-e920-cfc49112a34c" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Arguments
args = Namespace(
seed=1234,
cuda=True,
batch_size=4,
condition_vocab_size=3, # vocabulary for condition possibilities
embedding_dim=100,
rnn_hidden_dim=100,
hidden_dim=100,
num_layers=1,
bidirectional=False,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)

# Check CUDA
if not torch.cuda.is_available():
    args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
# + [markdown] id="VoMq0eFRvugb" colab_type="text"
# # Conditioned RNNs
# + [markdown] id="ZUsj7HjBp69f" colab_type="text"
# Conditioning an RNN means adding extra information that will be helpful towards a prediction. We can encode (embed) this information and feed it along with the sequential input into our model. For example, suppose in our document classification example from the previous notebook we knew the publisher of each news article (NYTimes, ESPN, etc.). We could have encoded that information to help with the prediction. There are several different ways of creating a conditioned RNN.
#
# **Note**: If the conditioning information is novel for each input in the sequence, just concatenate it along with each time step's input.
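The per-time-step variant mentioned in the note can be sketched as follows. This is a shape-level illustration in NumPy rather than the notebook's PyTorch tensors, and the dimensions are illustrative assumptions:

```python
import numpy as np

# Illustrative shapes (not from the notebook's args)
batch_size, seq_len, input_dim, cond_dim = 4, 5, 10, 3

x = np.random.randn(batch_size, seq_len, input_dim)    # sequential input
cond = np.random.randn(batch_size, seq_len, cond_dim)  # conditioning info, novel at each time step

# Concatenate the conditioning vector onto each time step's input
x_conditioned = np.concatenate([x, cond], axis=-1)
print(x_conditioned.shape)  # (4, 5, 13)
```

The RNN then simply consumes the wider input vector at every time step, so no change to the hidden state handling is needed.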
# + [markdown] id="Kc8H9JySmtLa" colab_type="text"
# 1. Make the initial hidden state the encoded information instead of using the initial zeroed hidden state. Make sure that the size of the encoded information is the same as the hidden state size of the RNN.
#
# + [markdown] id="pKlb9SjfpbED" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn1.png" width=400>
# + id="jbrlQHx2x8Aa" colab_type="code" colab={}
import torch.nn as nn
import torch.nn.functional as F
# + id="cFoiV-fqmvRo" colab_type="code" outputId="9843f756-8d71-4686-b479-b521df9b6f3c" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Condition
condition = torch.LongTensor([0, 2, 1, 2]) # batch size of 4 with a vocab size of 3
condition_embeddings = nn.Embedding(
embedding_dim=args.embedding_dim, # should be same as RNN hidden dim
num_embeddings=args.condition_vocab_size) # of unique conditions
# Initialize hidden state
num_directions = 1
if args.bidirectional:
    num_directions = 2
# If using multiple layers and directions, the hidden state needs to match that size
hidden_t = condition_embeddings(condition).unsqueeze(0).repeat(
    args.num_layers * num_directions, 1, 1).to(args.device)  # initial state to RNN
print (hidden_t.size())
# Feed into RNN
# y_out, _ = self.rnn(x_embedded, hidden_t)
# + [markdown] id="REgyaMDgmtHw" colab_type="text"
# 2. Concatenate the encoded information with the hidden state at each time step. Do not replace the hidden state because the RNN needs that to learn.
# + [markdown] id="yUIg5o-dpiZF" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn2.png" width=400>
# + id="eQ-h28o-pi4X" colab_type="code" outputId="4143190d-c452-48cc-cc96-1a2f0f7fc5ee" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Initialize hidden state
hidden_t = torch.zeros((args.num_layers * num_directions, args.batch_size, args.rnn_hidden_dim))
print (hidden_t.size())
# + id="2Z6hYSIdqBQ4" colab_type="code" colab={}
def concat_condition(condition_embeddings, condition, hidden_t, num_layers, num_directions):
    condition_t = condition_embeddings(condition).unsqueeze(0).repeat(
        num_layers * num_directions, 1, 1)
    hidden_t = torch.cat([hidden_t, condition_t], 2)
    return hidden_t
# + id="Tjyzq_s5pixL" colab_type="code" outputId="f4f62742-044e-46ef-cc46-fc21a3c52c78" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Loop through the input's time steps
hiddens = []
seq_size = 1
for t in range(seq_size):
    hidden_t = concat_condition(condition_embeddings, condition, hidden_t,
                                args.num_layers, num_directions).to(args.device)
    print (hidden_t.size())
    # Feed into RNN
    # hidden_t = rnn_cell(x_in[t], hidden_t)
    ...
# + [markdown] id="A-0_81jMXg_J" colab_type="text"
# # Char-level embeddings
# + [markdown] id="w0yUKKpq3pu_" colab_type="text"
# Our conv operations will have inputs that are words in a sentence represented at the character level, $\in \mathbb{R}^{N \times S \times W \times E}$, and outputs that are embeddings for each word (based on convolutions applied at the character level).
#
# **Word embeddings**: capture the temporal correlations among
# adjacent tokens so that similar words have similar representations. Ex. "New Jersey" is close to "NJ" is close to "Garden State", etc.
#
# **Char embeddings**: create representations that map words at a character level. Ex. "toy" and "toys" will be close to each other.
# + [markdown] id="-SZgVuwebm_4" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/char_embeddings.png" width=450>
# + id="QOdIvz0G3O8C" colab_type="code" colab={}
# Arguments
args = Namespace(
seed=1234,
cuda=False,
shuffle=True,
batch_size=64,
vocab_size=20, # vocabulary
seq_size=10, # max length of each sentence
word_size=15, # max length of each word
embedding_dim=100,
num_filters=100, # filters per size
)
# + id="raztXIeYXYJT" colab_type="code" colab={}
class Model(nn.Module):
    def __init__(self, embedding_dim, num_embeddings, num_input_channels,
                 num_output_channels, padding_idx):
        super(Model, self).__init__()

        # Char-level embedding
        self.embeddings = nn.Embedding(embedding_dim=embedding_dim,
                                       num_embeddings=num_embeddings,
                                       padding_idx=padding_idx)

        # Conv weights
        self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_output_channels,
                                             kernel_size=f) for f in [2, 3, 4]])

    def forward(self, x, channel_first=False, apply_softmax=False):
        # x: (N, seq_len, word_len)
        input_shape = x.size()
        batch_size, seq_len, word_len = input_shape
        x = x.view(-1, word_len)  # (N*seq_len, word_len)

        # Embedding
        x = self.embeddings(x)  # (N*seq_len, word_len, embedding_dim)

        # Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)
        if not channel_first:
            x = x.transpose(1, 2)

        # Convolution
        z = [F.relu(conv(x)) for conv in self.conv]

        # Pooling
        z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
        z = [zz.view(batch_size, seq_len, -1) for zz in z]  # (N, seq_len, embedding_dim)

        # Concat to get char-level embeddings
        z = torch.cat(z, 2)  # join conv outputs
        return z
# + id="MzHVs8Xe0Zph" colab_type="code" outputId="ff91c1ac-5bc4-446c-9047-8b4b58570e13" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Input
input_size = (args.batch_size, args.seq_size, args.word_size)
x_in = torch.randint(low=0, high=args.vocab_size, size=input_size).long()
print (x_in.size())
# + id="0B_Xscby2PMQ" colab_type="code" outputId="05b0c3ac-429f-47aa-9526-718e55dfc897" colab={"base_uri": "https://localhost:8080/", "height": 153}
# Initial char-level embedding model
model = Model(embedding_dim=args.embedding_dim,
num_embeddings=args.vocab_size,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
padding_idx=0)
print (model.named_modules)
# + id="8DIgeEZFXYR2" colab_type="code" outputId="ffdbfabf-5f60-4045-be84-23dfb65fd424" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Forward pass to get char-level embeddings
z = model(x_in)
print (z.size())
# + [markdown] id="nzTscaE10HFA" colab_type="text"
# There are several different ways you can use these char-level embeddings:
#
# 1. Concat char-level embeddings with word-level embeddings, since we have an embedding for each word (at a char-level) and then feed it into an RNN.
# 2. You can feed the char-level embeddings into an RNN to processes them.
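Option 1 above (concatenating char-level embeddings with word-level embeddings before the RNN) amounts to a single concatenation along the feature dimension. A NumPy sketch with illustrative dimensions (the char embedding size of 300 assumes three filter sizes with 100 filters each, as in the model above):

```python
import numpy as np

batch_size, seq_len = 64, 10
word_embedding_dim = 100
char_embedding_dim = 300  # 3 filter sizes * 100 filters per size

# Stand-ins for the outputs of a word embedding layer and the char-level conv model
word_embeddings = np.random.randn(batch_size, seq_len, word_embedding_dim)
char_embeddings = np.random.randn(batch_size, seq_len, char_embedding_dim)

# One combined representation per word, then fed into the RNN
combined = np.concatenate([word_embeddings, char_embeddings], axis=-1)
print(combined.shape)  # (64, 10, 400)
```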
# + [markdown] id="nyCQ13_ckV_c" colab_type="text"
# # Encoder and decoder
# + [markdown] id="_sixbu74kbJk" colab_type="text"
# So far we've used RNNs to `encode` a sequential input and generate hidden states. We use these hidden states to `decode` the predictions. So far, the encoder was an RNN and the decoder was just a few fully connected layers followed by a softmax layer (for classification). But the encoder and decoder can assume other architectures as well. For example, the decoder could be an RNN that processes the hidden state outputs from the encoder RNN.
# + id="kfK1mAp1dlpT" colab_type="code" colab={}
# Arguments
args = Namespace(
batch_size=64,
embedding_dim=100,
rnn_hidden_dim=100,
hidden_dim=100,
num_layers=1,
bidirectional=False,
dropout=0.1,
)
# + id="p_OJFyY97bF_" colab_type="code" colab={}
class Encoder(nn.Module):
    def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim,
                 num_layers, bidirectional, padding_idx=0):
        super(Encoder, self).__init__()

        # Embeddings
        self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,
                                            num_embeddings=num_embeddings,
                                            padding_idx=padding_idx)

        # GRU weights
        self.gru = nn.GRU(input_size=embedding_dim, hidden_size=rnn_hidden_dim,
                          num_layers=num_layers, batch_first=True,
                          bidirectional=bidirectional)

    def forward(self, x_in, x_lengths):
        # Word level embeddings
        z_word = self.word_embeddings(x_in)

        # Feed into RNN
        out, h_n = self.gru(z_word)

        # Gather the last relevant hidden state
        # (gather_last_relevant_hidden is defined elsewhere in the notebook)
        out = gather_last_relevant_hidden(out, x_lengths)
        return out
# + id="HRXtaGPlpyH7" colab_type="code" colab={}
class Decoder(nn.Module):
    def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):
        super(Decoder, self).__init__()

        # FC weights
        self.dropout = nn.Dropout(dropout_p)
        self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, encoder_output, apply_softmax=False):
        # FC layers
        z = self.dropout(encoder_output)
        z = self.fc1(z)
        z = self.dropout(z)
        y_pred = self.fc2(z)

        if apply_softmax:
            y_pred = F.softmax(y_pred, dim=1)
        return y_pred
# + id="SnKyCPVj-OVi" colab_type="code" colab={}
class Model(nn.Module):
    def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim,
                 hidden_dim, num_layers, bidirectional, output_dim, dropout_p,
                 padding_idx=0):
        super(Model, self).__init__()
        self.encoder = Encoder(embedding_dim, num_embeddings, rnn_hidden_dim,
                               num_layers, bidirectional, padding_idx=0)
        self.decoder = Decoder(rnn_hidden_dim, hidden_dim, output_dim, dropout_p)

    def forward(self, x_in, x_lengths, apply_softmax=False):
        encoder_outputs = self.encoder(x_in, x_lengths)
        y_pred = self.decoder(encoder_outputs, apply_softmax)
        return y_pred
# + id="hfeoErsc-Tum" colab_type="code" outputId="8faa37ab-4c38-4ace-bb96-e5dc7e1483bf" colab={"base_uri": "https://localhost:8080/", "height": 204}
model = Model(embedding_dim=args.embedding_dim, num_embeddings=1000,
rnn_hidden_dim=args.rnn_hidden_dim, hidden_dim=args.hidden_dim,
num_layers=args.num_layers, bidirectional=args.bidirectional,
output_dim=4, dropout_p=args.dropout, padding_idx=0)
print (model.named_parameters)
# + [markdown] id="LAsOI6jEmTd0" colab_type="text"
# # Attentional mechanisms
# + [markdown] id="vJN5ft5Sc_kb" colab_type="text"
# When processing an input sequence with an RNN, recall that at each time step we process the input and the hidden state at that time step. For many use cases, it's advantageous to have access to the inputs at all time steps and pay selective attention to them at each time step. For example, in machine translation, it's advantageous to have access to all the words when translating to another language because translations aren't necessarily word for word.
# + [markdown] id="jb6A6WfbXje6" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention1.jpg" width=650>
# + [markdown] id="mNkayU0rf-ua" colab_type="text"
# Attention can sound a bit confusing so let's see what happens at each time step. At time step $j$, the model has processed inputs $x_0, x_1, x_2, ..., x_j$ and has generated hidden states $h_0, h_1, h_2, ..., h_j$. The idea is to use all the processed hidden states to make the prediction and not just the most recent one. There are several approaches to how we can do this.
#
# With **soft attention**, we learn a vector of floating points (probabilities) to multiply with the hidden states to create the context vector.
#
# Ex. [0.1, 0.3, 0.1, 0.4, 0.1]
#
# With **hard attention**, we can learn a binary vector to multiply with the hidden states to create the context vector.
#
# Ex. [0, 0, 0, 1, 0]
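The soft-attention computation described above (weights over hidden states, then a weighted sum to form the context vector) can be sketched in NumPy. The scores here are random stand-ins; in the actual model they come from a small learned layer:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

seq_len, hidden_dim = 5, 8
hiddens = np.random.randn(seq_len, hidden_dim)  # h_0 ... h_j from the RNN

scores = np.random.randn(seq_len)  # unnormalized attention scores (learned in practice)
alphas = softmax(scores)           # soft attention weights, e.g. [0.1, 0.3, 0.1, 0.4, 0.1]

# Context vector: attention-weighted sum of the hidden states
context = (alphas[:, None] * hiddens).sum(axis=0)
print(context.shape)  # (8,)
```

The `alphas` vector is what gets visualized for interpretability: it shows how much each time step contributed to the prediction.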
# + [markdown] id="gYSIAVQqu3Ab" colab_type="text"
# We're going to focus on soft attention because it's more widely used and we can visualize how much each hidden state helps with the prediction, which is great for interpretability.
# + [markdown] id="9Ch21nZNvDHO" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention2.jpg" width=650>
# + [markdown] id="o_jPXuT8xlqd" colab_type="text"
# We're going to implement attention in the document classification task below.
# + [markdown] colab_type="text" id="0iNnQzdxnGvn"
# # Document classification with RNNs
# + [markdown] colab_type="text" id="n38ZJoVZnGaE"
# We're going to implement the same document classification task as in the previous notebook but we're going to use an attentional interface for interpretability.
#
# **Why not machine translation?** Normally, machine translation is the go-to example for demonstrating attention, but it's not very practical here. How many situations can you think of that require one sequence to generate another sequence? Instead, we're going to apply attention to our document classification example to see which input tokens are most influential towards predicting the genre.
# + [markdown] colab_type="text" id="Fu7HgEqbnGFY"
# ## Set up
# + colab_type="code" id="elL6BxtCmNGf" colab={}
import os
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import torch
# + id="DCf2fLmPbKKI" colab_type="code" colab={}
def set_seeds(seed, cuda):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if cuda:
        torch.cuda.manual_seed_all(seed)

# Creating directories
def create_dirs(dirpath):
    if not os.path.exists(dirpath):
        os.makedirs(dirpath)
# + colab_type="code" outputId="291c03d4-6143-4395-b5c9-ab386b061737" id="TTwkuoZdmMlF" colab={"base_uri": "https://localhost:8080/", "height": 34}
args = Namespace(
seed=1234,
cuda=True,
shuffle=True,
data_file="news.csv",
split_data_file="split_news.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="news",
train_size=0.7,
val_size=0.15,
test_size=0.15,
pretrained_embeddings=None,
cutoff=25,
num_epochs=5,
early_stopping_criteria=5,
learning_rate=1e-3,
batch_size=128,
embedding_dim=100,
kernels=[3,5],
num_filters=100,
rnn_hidden_dim=128,
hidden_dim=200,
num_layers=1,
bidirectional=False,
dropout_p=0.25,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)

# Create save dir
create_dirs(args.save_dir)

# Expand filepaths
args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir, args.model_state_file)

# Check CUDA
if not torch.cuda.is_available():
    args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
# + [markdown] colab_type="text" id="xfiWhgX5mMQ5"
# ## Data
# + colab_type="code" id="baAsxXNFmMCF" colab={}
import urllib
# + colab_type="code" id="3tJi_HyOmLw-" colab={}
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/news.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(args.data_file, 'wb') as fp:
    fp.write(html)
# + colab_type="code" outputId="a51463a7-f37e-41e7-aca4-74038c7c6e8e" id="wrI_df4bmLjB" colab={"base_uri": "https://localhost:8080/", "height": 204}
df = pd.read_csv(args.data_file, header=0)
df.head()
# + colab_type="code" outputId="36145f0d-7316-4341-f270-1d8c8037c661" id="TreK7nqEmLTN" colab={"base_uri": "https://localhost:8080/", "height": 85}
by_category = collections.defaultdict(list)
for _, row in df.iterrows():
    by_category[row.category].append(row.to_dict())
for category in by_category:
    print ("{0}: {1}".format(category, len(by_category[category])))
# + colab_type="code" id="35nb3LxLmLCA" colab={}
final_list = []
for _, item_list in sorted(by_category.items()):
    if args.shuffle:
        np.random.shuffle(item_list)
    n = len(item_list)
    n_train = int(args.train_size*n)
    n_val = int(args.val_size*n)
    n_test = int(args.test_size*n)

    # Give data point a split attribute
    for item in item_list[:n_train]:
        item['split'] = 'train'
    for item in item_list[n_train:n_train+n_val]:
        item['split'] = 'val'
    for item in item_list[n_train+n_val:]:
        item['split'] = 'test'

    # Add to final list
    final_list.extend(item_list)
# + colab_type="code" outputId="3b188412-5c0a-4e71-ef50-20c4ba18082b" id="Y48IvuSfmK07" colab={"base_uri": "https://localhost:8080/", "height": 85}
split_df = pd.DataFrame(final_list)
split_df["split"].value_counts()
# + colab_type="code" id="RWuNBxAXmKk2" colab={}
def preprocess_text(text):
    text = ' '.join(word.lower() for word in text.split(" "))
    text = re.sub(r"([.,!?])", r" \1 ", text)
    text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)
    text = text.strip()
    return text

split_df.title = split_df.title.apply(preprocess_text)
# + colab_type="code" outputId="7bb68022-5848-44ac-f90c-7cdf6a7eb988" id="fG9n77eLmKWB" colab={"base_uri": "https://localhost:8080/", "height": 204}
split_df.to_csv(args.split_data_file, index=False)
split_df.head()
# + [markdown] colab_type="text" id="m-a0OpqhmKJc"
# ## Vocabulary
# + colab_type="code" id="RUMQ_MwumJ8F" colab={}
class Vocabulary(object):
    def __init__(self, token_to_idx=None):

        # Token to index
        if token_to_idx is None:
            token_to_idx = {}
        self.token_to_idx = token_to_idx

        # Index to token
        self.idx_to_token = {idx: token \
                             for token, idx in self.token_to_idx.items()}

    def to_serializable(self):
        return {'token_to_idx': self.token_to_idx}

    @classmethod
    def from_serializable(cls, contents):
        return cls(**contents)

    def add_token(self, token):
        if token in self.token_to_idx:
            index = self.token_to_idx[token]
        else:
            index = len(self.token_to_idx)
            self.token_to_idx[token] = index
            self.idx_to_token[index] = token
        return index

    def add_tokens(self, tokens):
        return [self.add_token(token) for token in tokens]

    def lookup_token(self, token):
        return self.token_to_idx[token]

    def lookup_index(self, index):
        if index not in self.idx_to_token:
            raise KeyError("the index (%d) is not in the Vocabulary" % index)
        return self.idx_to_token[index]

    def __str__(self):
        return "<Vocabulary(size=%d)>" % len(self)

    def __len__(self):
        return len(self.token_to_idx)
# + id="1LtYf3lpExBb" colab_type="code" outputId="0870e7a9-d843-4549-97ae-d8cf5c3e7e3e" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Vocabulary instance
category_vocab = Vocabulary()
for index, row in df.iterrows():
    category_vocab.add_token(row.category)
print (category_vocab) # __str__
print (len(category_vocab)) # __len__
index = category_vocab.lookup_token("Business")
print (index)
print (category_vocab.lookup_index(index))
# + [markdown] id="Z0zkF6CsE_yH" colab_type="text"
# ## Sequence vocabulary
# + [markdown] id="QtntaISyE_1c" colab_type="text"
# Next, we're going to create our Vocabulary classes for the article's title, which is a sequence of words.
# + id="ovI8QRefEw_p" colab_type="code" colab={}
from collections import Counter
import string
# + id="4W3ZouuTEw1_" colab_type="code" colab={}
class SequenceVocabulary(Vocabulary):
    def __init__(self, token_to_idx=None, unk_token="<UNK>",
                 mask_token="<MASK>", begin_seq_token="<BEGIN>",
                 end_seq_token="<END>"):
        super(SequenceVocabulary, self).__init__(token_to_idx)

        self.mask_token = mask_token
        self.unk_token = unk_token
        self.begin_seq_token = begin_seq_token
        self.end_seq_token = end_seq_token

        self.mask_index = self.add_token(self.mask_token)
        self.unk_index = self.add_token(self.unk_token)
        self.begin_seq_index = self.add_token(self.begin_seq_token)
        self.end_seq_index = self.add_token(self.end_seq_token)

        # Index to token
        self.idx_to_token = {idx: token \
                             for token, idx in self.token_to_idx.items()}

    def to_serializable(self):
        contents = super(SequenceVocabulary, self).to_serializable()
        contents.update({'unk_token': self.unk_token,
                         'mask_token': self.mask_token,
                         'begin_seq_token': self.begin_seq_token,
                         'end_seq_token': self.end_seq_token})
        return contents

    def lookup_token(self, token):
        return self.token_to_idx.get(token, self.unk_index)

    def lookup_index(self, index):
        if index not in self.idx_to_token:
            raise KeyError("the index (%d) is not in the SequenceVocabulary" % index)
        return self.idx_to_token[index]

    def __str__(self):
        return "<SequenceVocabulary(size=%d)>" % len(self.token_to_idx)

    def __len__(self):
        return len(self.token_to_idx)
# + id="g5UHjpi3El37" colab_type="code" outputId="75875a36-e34f-4e25-aa96-656bdfe4f210" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Get word counts
word_counts = Counter()
for title in split_df.title:
    for token in title.split(" "):
        if token not in string.punctuation:
            word_counts[token] += 1

# Create SequenceVocabulary instance
title_word_vocab = SequenceVocabulary()
for word, word_count in word_counts.items():
    if word_count >= args.cutoff:
        title_word_vocab.add_token(word)
print (title_word_vocab) # __str__
print (len(title_word_vocab)) # __len__
index = title_word_vocab.lookup_token("general")
print (index)
print (title_word_vocab.lookup_index(index))
# + [markdown] id="1_wja0EfQNpA" colab_type="text"
# We're also going to create an instance of SequenceVocabulary that processes the input on a character level.
# + id="5SpfS0BXP9pz" colab_type="code" outputId="383414b5-1274-499a-cd2f-d83cfc17bec6" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Create SequenceVocabulary instance
title_char_vocab = SequenceVocabulary()
for title in split_df.title:
    for token in title:
        title_char_vocab.add_token(token)
print (title_char_vocab) # __str__
print (len(title_char_vocab)) # __len__
index = title_char_vocab.lookup_token("g")
print (index)
print (title_char_vocab.lookup_index(index))
# + [markdown] id="4Dag6H0SFHAG" colab_type="text"
# ## Vectorizer
# + [markdown] id="VQIfxcUuKwzz" colab_type="text"
# Something new that we introduce in this Vectorizer is calculating the length of our input sequence. We will use this later on to extract the last relevant hidden state for each input sequence.
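The "last relevant hidden state" extraction mentioned here (used by the `Encoder` above as `gather_last_relevant_hidden`) just selects, for each sequence in the batch, the RNN output at its final non-padded time step. A NumPy sketch of that behavior (the notebook's actual helper operates on PyTorch tensors):

```python
import numpy as np

def gather_last_relevant_hidden(hiddens, x_lengths):
    """For each sequence in the batch, pick the hidden state at its last real (non-padded) step."""
    batch_size = hiddens.shape[0]
    return np.stack([hiddens[i, x_lengths[i] - 1] for i in range(batch_size)])

hiddens = np.arange(2 * 4 * 3).reshape(2, 4, 3)  # (batch, max_seq_len, hidden_dim)
x_lengths = [2, 4]                               # true lengths of the two sequences
last = gather_last_relevant_hidden(hiddens, x_lengths)
print(last.shape)  # (2, 3)
```

Without this step, padded sequences would contribute hidden states computed on `<MASK>` tokens.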
# + id="tsNtEnhBEl6s" colab_type="code" colab={}
class NewsVectorizer(object):
def __init__(self, title_word_vocab, title_char_vocab, category_vocab):
self.title_word_vocab = title_word_vocab
self.title_char_vocab = title_char_vocab
self.category_vocab = category_vocab
def vectorize(self, title):
# Word-level vectorization
word_indices = [self.title_word_vocab.lookup_token(token) for token in title.split(" ")]
word_indices = [self.title_word_vocab.begin_seq_index] + word_indices + \
[self.title_word_vocab.end_seq_index]
title_length = len(word_indices)
word_vector = np.zeros(title_length, dtype=np.int64)
word_vector[:len(word_indices)] = word_indices
# Char-level vectorization
word_length = max([len(word) for word in title.split(" ")])
char_vector = np.zeros((len(word_vector), word_length), dtype=np.int64)
        char_vector[0, :] = self.title_char_vocab.mask_index  # <BEGIN>
        char_vector[-1, :] = self.title_char_vocab.mask_index  # <END>
        for i, word in enumerate(title.split(" ")):
            char_vector[i+1, :len(word)] = [self.title_char_vocab.lookup_token(char)
                                            for char in word]  # i+1 b/c of <BEGIN> token
return word_vector, char_vector, len(word_indices)
def unvectorize_word_vector(self, word_vector):
tokens = [self.title_word_vocab.lookup_index(index) for index in word_vector]
title = " ".join(token for token in tokens)
return title
def unvectorize_char_vector(self, char_vector):
title = ""
for word_vector in char_vector:
for index in word_vector:
if index == self.title_char_vocab.mask_index:
break
title += self.title_char_vocab.lookup_index(index)
title += " "
return title
@classmethod
def from_dataframe(cls, df, cutoff):
# Create class vocab
category_vocab = Vocabulary()
for category in sorted(set(df.category)):
category_vocab.add_token(category)
# Get word counts
word_counts = Counter()
for title in df.title:
for token in title.split(" "):
word_counts[token] += 1
# Create title vocab (word level)
title_word_vocab = SequenceVocabulary()
for word, word_count in word_counts.items():
if word_count >= cutoff:
title_word_vocab.add_token(word)
# Create title vocab (char level)
title_char_vocab = SequenceVocabulary()
for title in df.title:
for token in title:
title_char_vocab.add_token(token)
return cls(title_word_vocab, title_char_vocab, category_vocab)
@classmethod
def from_serializable(cls, contents):
title_word_vocab = SequenceVocabulary.from_serializable(contents['title_word_vocab'])
title_char_vocab = SequenceVocabulary.from_serializable(contents['title_char_vocab'])
category_vocab = Vocabulary.from_serializable(contents['category_vocab'])
return cls(title_word_vocab=title_word_vocab,
title_char_vocab=title_char_vocab,
category_vocab=category_vocab)
def to_serializable(self):
return {'title_word_vocab': self.title_word_vocab.to_serializable(),
'title_char_vocab': self.title_char_vocab.to_serializable(),
'category_vocab': self.category_vocab.to_serializable()}
# + id="JtRRXU53El9Y" colab_type="code" outputId="659ad7a1-38a4-46ca-98b8-a72ba0c9fff0" colab={"base_uri": "https://localhost:8080/", "height": 340}
# Vectorizer instance
vectorizer = NewsVectorizer.from_dataframe(split_df, cutoff=args.cutoff)
print (vectorizer.title_word_vocab)
print (vectorizer.title_char_vocab)
print (vectorizer.category_vocab)
word_vector, char_vector, title_length = vectorizer.vectorize(preprocess_text(
"<NAME> wins the Wimbledon tennis tournament."))
print ("word_vector:", np.shape(word_vector))
print ("char_vector:", np.shape(char_vector))
print ("title_length:", title_length)
print (word_vector)
print (char_vector)
print (vectorizer.unvectorize_word_vector(word_vector))
print (vectorizer.unvectorize_char_vector(char_vector))
# + [markdown] id="uk_QvpVfFM0S" colab_type="text"
# ## Dataset
# + id="oU7oDdelFMR9" colab_type="code" colab={}
from torch.utils.data import Dataset, DataLoader
# + id="pB7FHmiSFMXA" colab_type="code" colab={}
class NewsDataset(Dataset):
def __init__(self, df, vectorizer):
self.df = df
self.vectorizer = vectorizer
# Data splits
self.train_df = self.df[self.df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.df[self.df.split=='val']
self.val_size = len(self.val_df)
self.test_df = self.df[self.df.split=='test']
self.test_size = len(self.test_df)
self.lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.val_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
# Class weights (for imbalances)
class_counts = df.category.value_counts().to_dict()
def sort_key(item):
return self.vectorizer.category_vocab.lookup_token(item[0])
sorted_counts = sorted(class_counts.items(), key=sort_key)
frequencies = [count for _, count in sorted_counts]
self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)
@classmethod
def load_dataset_and_make_vectorizer(cls, split_data_file, cutoff):
df = pd.read_csv(split_data_file, header=0)
train_df = df[df.split=='train']
return cls(df, NewsVectorizer.from_dataframe(train_df, cutoff))
@classmethod
def load_dataset_and_load_vectorizer(cls, split_data_file, vectorizer_filepath):
df = pd.read_csv(split_data_file, header=0)
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(df, vectorizer)
    @staticmethod
    def load_vectorizer_only(vectorizer_filepath):
with open(vectorizer_filepath) as fp:
return NewsVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
with open(vectorizer_filepath, "w") as fp:
json.dump(self.vectorizer.to_serializable(), fp)
def set_split(self, split="train"):
self.target_split = split
self.target_df, self.target_size = self.lookup_dict[split]
def __str__(self):
return "<Dataset(split={0}, size={1})".format(
self.target_split, self.target_size)
def __len__(self):
return self.target_size
def __getitem__(self, index):
row = self.target_df.iloc[index]
title_word_vector, title_char_vector, title_length = \
self.vectorizer.vectorize(row.title)
category_index = self.vectorizer.category_vocab.lookup_token(row.category)
return {'title_word_vector': title_word_vector,
'title_char_vector': title_char_vector,
'title_length': title_length,
'category': category_index}
def get_num_batches(self, batch_size):
return len(self) // batch_size
def generate_batches(self, batch_size, collate_fn, shuffle=True,
drop_last=False, device="cpu"):
dataloader = DataLoader(dataset=self, batch_size=batch_size,
collate_fn=collate_fn, shuffle=shuffle,
drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
# + id="_Dpb6ZHJFMeb" colab_type="code" outputId="f87f31eb-c1d1-4269-ea4d-4f93826bd0df" colab={"base_uri": "https://localhost:8080/", "height": 272}
# Dataset instance
dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,
args.cutoff)
print (dataset) # __str__
input_ = dataset[10] # __getitem__
print (input_['title_word_vector'])
print (input_['title_char_vector'])
print (input_['title_length'])
print (input_['category'])
print (dataset.vectorizer.unvectorize_word_vector(input_['title_word_vector']))
print (dataset.vectorizer.unvectorize_char_vector(input_['title_char_vector']))
print (dataset.class_weights)
# + [markdown] id="_IUIqtbvFUAG" colab_type="text"
# ## Model
# + [markdown] id="xJV5WlDiFVVz" colab_type="text"
# embed → encoder → attend → predict
# + id="rZCzdZZ9FMhm" colab_type="code" colab={}
import torch.nn as nn
import torch.nn.functional as F
# + id="c9wipRZt7feC" colab_type="code" colab={}
class NewsEncoder(nn.Module):
def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,
kernels, num_input_channels, num_output_channels,
rnn_hidden_dim, num_layers, bidirectional,
word_padding_idx=0, char_padding_idx=0):
super(NewsEncoder, self).__init__()
self.num_layers = num_layers
self.bidirectional = bidirectional
# Embeddings
self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_word_embeddings,
padding_idx=word_padding_idx)
self.char_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_char_embeddings,
padding_idx=char_padding_idx)
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels,
num_output_channels,
kernel_size=f) for f in kernels])
# GRU weights
self.gru = nn.GRU(input_size=embedding_dim*(len(kernels)+1),
hidden_size=rnn_hidden_dim, num_layers=num_layers,
batch_first=True, bidirectional=bidirectional)
def initialize_hidden_state(self, batch_size, rnn_hidden_dim, device):
"""Modify this to condition the RNN."""
num_directions = 1
if self.bidirectional:
num_directions = 2
        hidden_t = torch.zeros(self.num_layers * num_directions,
                               batch_size, rnn_hidden_dim).to(device)
        return hidden_t
def get_char_level_embeddings(self, x):
# x: (N, seq_len, word_len)
input_shape = x.size()
batch_size, seq_len, word_len = input_shape
x = x.view(-1, word_len) # (N*seq_len, word_len)
# Embedding
x = self.char_embeddings(x) # (N*seq_len, word_len, embedding_dim)
# Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)
x = x.transpose(1, 2)
# Convolution
z = [F.relu(conv(x)) for conv in self.conv]
# Pooling
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)
# Concat to get char-level embeddings
z = torch.cat(z, 2) # join conv outputs
return z
def forward(self, x_word, x_char, x_lengths, device):
"""
x_word: word level representation (N, seq_size)
x_char: char level representation (N, seq_size, word_len)
"""
# Word level embeddings
z_word = self.word_embeddings(x_word)
# Char level embeddings
z_char = self.get_char_level_embeddings(x=x_char)
# Concatenate
z = torch.cat([z_word, z_char], 2)
# Feed into RNN
initial_h = self.initialize_hidden_state(
batch_size=z.size(0), rnn_hidden_dim=self.gru.hidden_size,
device=device)
out, h_n = self.gru(z, initial_h)
return out
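# To make the `get_char_level_embeddings` step concrete, here is a standalone sketch of the convolve-then-max-pool operation on one batch of words. All dimensions below are illustrative assumptions, not the notebook's configuration:

```python
import torch
import torch.nn.functional as F

# 6 words, each padded to 10 chars, char embedding dim 16
x = torch.randn(6, 16, 10)                   # (N*seq_len, embedding_dim, word_len)
conv = torch.nn.Conv1d(in_channels=16, out_channels=16, kernel_size=3)
z = F.relu(conv(x))                          # (6, 16, 8): 10 - 3 + 1 valid positions
z = F.max_pool1d(z, kernel_size=z.size(2))   # (6, 16, 1): max over positions
z = z.squeeze(2)                             # (6, 16): one fixed-size vector per word
```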
# + id="zeEcdA287gz4" colab_type="code" colab={}
class NewsDecoder(nn.Module):
def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):
super(NewsDecoder, self).__init__()
# Attention FC layer
self.fc_attn = nn.Linear(rnn_hidden_dim, rnn_hidden_dim)
self.v = nn.Parameter(torch.rand(rnn_hidden_dim))
# FC weights
self.dropout = nn.Dropout(dropout_p)
self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, encoder_outputs, apply_softmax=False):
# Attention
z = torch.tanh(self.fc_attn(encoder_outputs))
z = z.transpose(2,1) # [B*H*T]
v = self.v.repeat(encoder_outputs.size(0),1).unsqueeze(1) #[B*1*H]
z = torch.bmm(v,z).squeeze(1) # [B*T]
attn_scores = F.softmax(z, dim=1)
context = torch.matmul(encoder_outputs.transpose(-2, -1),
attn_scores.unsqueeze(dim=2)).squeeze()
if len(context.size()) == 1:
context = context.unsqueeze(0)
# FC layers
z = self.dropout(context)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return attn_scores, y_pred
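# The attention block in `NewsDecoder` scores each encoder output with a learned vector `v` and takes the softmax-weighted sum. The same computation written out step by step, with illustrative shapes (a sketch, not the notebook's module):

```python
import torch
import torch.nn.functional as F

N, T, H = 2, 5, 8                                   # batch, time steps, hidden dim
encoder_outputs = torch.randn(N, T, H)
fc_attn = torch.nn.Linear(H, H)                     # stand-in for self.fc_attn
v = torch.rand(H)                                   # stand-in for self.v

z = torch.tanh(fc_attn(encoder_outputs))            # (N, T, H)
scores = z @ v                                      # (N, T): one score per time step
attn = F.softmax(scores, dim=1)                     # weights sum to 1 per example
context = (attn.unsqueeze(2) * encoder_outputs).sum(dim=1)  # (N, H)
```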
# + id="yVDftS-G7gwy" colab_type="code" colab={}
class NewsModel(nn.Module):
def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,
kernels, num_input_channels, num_output_channels,
rnn_hidden_dim, hidden_dim, output_dim, num_layers,
bidirectional, dropout_p, word_padding_idx, char_padding_idx):
super(NewsModel, self).__init__()
self.encoder = NewsEncoder(embedding_dim, num_word_embeddings,
num_char_embeddings, kernels,
num_input_channels, num_output_channels,
rnn_hidden_dim, num_layers, bidirectional,
word_padding_idx, char_padding_idx)
self.decoder = NewsDecoder(rnn_hidden_dim, hidden_dim, output_dim,
dropout_p)
def forward(self, x_word, x_char, x_lengths, device, apply_softmax=False):
encoder_outputs = self.encoder(x_word, x_char, x_lengths, device)
        attn_scores, y_pred = self.decoder(encoder_outputs, apply_softmax)
        return attn_scores, y_pred
# + [markdown] id="jHPYCPd7Fl3M" colab_type="text"
# ## Training
# + id="D3seBMA7FlcC" colab_type="code" colab={}
import torch.optim as optim
# + id="HnRKWLekFlnM" colab_type="code" colab={}
class Trainer(object):
def __init__(self, dataset, model, model_state_file, save_dir, device,
shuffle, num_epochs, batch_size, learning_rate,
early_stopping_criteria):
self.dataset = dataset
self.class_weights = dataset.class_weights.to(device)
self.device = device
self.model = model.to(device)
self.save_dir = save_dir
self.device = device
self.shuffle = shuffle
self.num_epochs = num_epochs
self.batch_size = batch_size
self.loss_func = nn.CrossEntropyLoss(self.class_weights)
self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)
self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
optimizer=self.optimizer, mode='min', factor=0.5, patience=1)
self.train_state = {
'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'early_stopping_criteria': early_stopping_criteria,
'learning_rate': learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': model_state_file}
def update_train_state(self):
# Verbose
print ("[EPOCH]: {0:02d} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format(
self.train_state['epoch_index'], self.train_state['learning_rate'],
self.train_state['train_loss'][-1], self.train_state['train_acc'][-1],
self.train_state['val_loss'][-1], self.train_state['val_acc'][-1]))
# Save one model at least
if self.train_state['epoch_index'] == 0:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
self.train_state['stop_early'] = False
# Save model if performance improved
elif self.train_state['epoch_index'] >= 1:
loss_tm1, loss_t = self.train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= self.train_state['early_stopping_best_val']:
# Update step
self.train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < self.train_state['early_stopping_best_val']:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
# Reset early stopping step
self.train_state['early_stopping_step'] = 0
# Stop early ?
self.train_state['stop_early'] = self.train_state['early_stopping_step'] \
>= self.train_state['early_stopping_criteria']
return self.train_state
def compute_accuracy(self, y_pred, y_target):
_, y_pred_indices = y_pred.max(dim=1)
n_correct = torch.eq(y_pred_indices, y_target).sum().item()
return n_correct / len(y_pred_indices) * 100
def pad_word_seq(self, seq, length):
vector = np.zeros(length, dtype=np.int64)
vector[:len(seq)] = seq
vector[len(seq):] = self.dataset.vectorizer.title_word_vocab.mask_index
return vector
def pad_char_seq(self, seq, seq_length, word_length):
vector = np.zeros((seq_length, word_length), dtype=np.int64)
vector.fill(self.dataset.vectorizer.title_char_vocab.mask_index)
for i in range(len(seq)):
char_padding = np.zeros(word_length-len(seq[i]), dtype=np.int64)
vector[i] = np.concatenate((seq[i], char_padding), axis=None)
return vector
def collate_fn(self, batch):
# Make a deep copy
batch_copy = copy.deepcopy(batch)
processed_batch = {"title_word_vector": [], "title_char_vector": [],
"title_length": [], "category": []}
# Max lengths
get_seq_length = lambda sample: len(sample["title_word_vector"])
get_word_length = lambda sample: len(sample["title_char_vector"][0])
max_seq_length = max(map(get_seq_length, batch))
max_word_length = max(map(get_word_length, batch))
# Pad
for i, sample in enumerate(batch_copy):
padded_word_seq = self.pad_word_seq(
sample["title_word_vector"], max_seq_length)
padded_char_seq = self.pad_char_seq(
sample["title_char_vector"], max_seq_length, max_word_length)
processed_batch["title_word_vector"].append(padded_word_seq)
processed_batch["title_char_vector"].append(padded_char_seq)
processed_batch["title_length"].append(sample["title_length"])
processed_batch["category"].append(sample["category"])
# Convert to appropriate tensor types
processed_batch["title_word_vector"] = torch.LongTensor(
processed_batch["title_word_vector"])
processed_batch["title_char_vector"] = torch.LongTensor(
processed_batch["title_char_vector"])
processed_batch["title_length"] = torch.LongTensor(
processed_batch["title_length"])
processed_batch["category"] = torch.LongTensor(
processed_batch["category"])
return processed_batch
def run_train_loop(self):
for epoch_index in range(self.num_epochs):
self.train_state['epoch_index'] = epoch_index
# Iterate over train dataset
# initialize batch generator, set loss and acc to 0, set train mode on
self.dataset.set_split('train')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# zero the gradients
self.optimizer.zero_grad()
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute gradients using loss
loss.backward()
# use optimizer to take a gradient step
self.optimizer.step()
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['train_loss'].append(running_loss)
self.train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# initialize batch generator, set loss and acc to 0, set eval mode on
self.dataset.set_split('val')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.
running_acc = 0.
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.to("cpu").item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['val_loss'].append(running_loss)
self.train_state['val_acc'].append(running_acc)
self.train_state = self.update_train_state()
self.scheduler.step(self.train_state['val_loss'][-1])
if self.train_state['stop_early']:
break
def run_test_loop(self):
# initialize batch generator, set loss and acc to 0, set eval mode on
self.dataset.set_split('test')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['test_loss'] = running_loss
self.train_state['test_acc'] = running_acc
def plot_performance(self):
# Figure size
plt.figure(figsize=(15,5))
# Plot Loss
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(trainer.train_state["train_loss"], label="train")
plt.plot(trainer.train_state["val_loss"], label="val")
plt.legend(loc='upper right')
# Plot Accuracy
plt.subplot(1, 2, 2)
plt.title("Accuracy")
plt.plot(trainer.train_state["train_acc"], label="train")
plt.plot(trainer.train_state["val_acc"], label="val")
plt.legend(loc='lower right')
# Save figure
plt.savefig(os.path.join(self.save_dir, "performance.png"))
# Show plots
plt.show()
def save_train_state(self):
with open(os.path.join(self.save_dir, "train_state.json"), "w") as fp:
json.dump(self.train_state, fp)
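# The `collate_fn` above pads every sequence to that batch's maximum length, not a global maximum. The word-level padding step in isolation (the mask index is assumed to be 0, the slot `SequenceVocabulary` typically reserves for it):

```python
import numpy as np

def pad_word_seq(seq, length, mask_index=0):
    # Fill with the mask index, then copy the real tokens over the front
    vector = np.full(length, mask_index, dtype=np.int64)
    vector[:len(seq)] = seq
    return vector

batch = [[4, 9, 2], [7, 3]]
max_seq_length = max(len(seq) for seq in batch)
padded = np.stack([pad_word_seq(seq, max_seq_length) for seq in batch])
# padded is [[4, 9, 2], [7, 3, 0]]
```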
# + id="ICkiOaGtFlk-" colab_type="code" outputId="18174034-ce3e-444a-a968-aba51eb03b3e" colab={"base_uri": "https://localhost:8080/", "height": 306}
# Initialization
dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,
args.cutoff)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.vectorizer
model = NewsModel(embedding_dim=args.embedding_dim,
num_word_embeddings=len(vectorizer.title_word_vocab),
num_char_embeddings=len(vectorizer.title_char_vocab),
kernels=args.kernels,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
rnn_hidden_dim=args.rnn_hidden_dim,
hidden_dim=args.hidden_dim,
output_dim=len(vectorizer.category_vocab),
num_layers=args.num_layers,
bidirectional=args.bidirectional,
dropout_p=args.dropout_p,
word_padding_idx=vectorizer.title_word_vocab.mask_index,
char_padding_idx=vectorizer.title_char_vocab.mask_index)
print (model.named_modules)
# + id="tuaRZ4DiFlh1" colab_type="code" outputId="6496aa05-de58-4913-a56a-9885bd60d9ad" colab={"base_uri": "https://localhost:8080/", "height": 102}
# Train
trainer = Trainer(dataset=dataset, model=model,
model_state_file=args.model_state_file,
save_dir=args.save_dir, device=args.device,
shuffle=args.shuffle, num_epochs=args.num_epochs,
batch_size=args.batch_size, learning_rate=args.learning_rate,
early_stopping_criteria=args.early_stopping_criteria)
trainer.run_train_loop()
# + id="mzRJIz88Flfe" colab_type="code" outputId="dece6240-57ab-4abc-f9cc-ecd11dabcdc6" colab={"base_uri": "https://localhost:8080/", "height": 335}
# Plot performance
trainer.plot_performance()
# + id="4EmFhiX-FMaV" colab_type="code" outputId="29ef6d38-6258-429b-841f-7345b7cd0695" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Test performance
trainer.run_test_loop()
print("Test loss: {0:.2f}".format(trainer.train_state['test_loss']))
print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc']))
# + id="zVU1zakYFMVF" colab_type="code" colab={}
# Save all results
trainer.save_train_state()
# + [markdown] id="qLoKfjSpFw7t" colab_type="text"
# ## Inference
# + id="ANrPcS7Hp_CP" colab_type="code" colab={}
class Inference(object):
def __init__(self, model, vectorizer):
self.model = model
self.vectorizer = vectorizer
def predict_category(self, title):
# Vectorize
word_vector, char_vector, title_length = self.vectorizer.vectorize(title)
title_word_vector = torch.tensor(word_vector).unsqueeze(0)
title_char_vector = torch.tensor(char_vector).unsqueeze(0)
title_length = torch.tensor([title_length]).long()
# Forward pass
self.model.eval()
attn_scores, y_pred = self.model(x_word=title_word_vector,
x_char=title_char_vector,
x_lengths=title_length,
device="cpu",
apply_softmax=True)
# Top category
y_prob, indices = y_pred.max(dim=1)
index = indices.item()
# Predicted category
        category = self.vectorizer.category_vocab.lookup_index(index)
probability = y_prob.item()
return {'category': category, 'probability': probability,
'attn_scores': attn_scores}
def predict_top_k(self, title, k):
# Vectorize
word_vector, char_vector, title_length = self.vectorizer.vectorize(title)
title_word_vector = torch.tensor(word_vector).unsqueeze(0)
title_char_vector = torch.tensor(char_vector).unsqueeze(0)
title_length = torch.tensor([title_length]).long()
# Forward pass
self.model.eval()
_, y_pred = self.model(x_word=title_word_vector,
x_char=title_char_vector,
x_lengths=title_length,
device="cpu",
apply_softmax=True)
# Top k categories
y_prob, indices = torch.topk(y_pred, k=k)
probabilities = y_prob.detach().numpy()[0]
indices = indices.detach().numpy()[0]
# Results
results = []
for probability, index in zip(probabilities, indices):
category = self.vectorizer.category_vocab.lookup_index(index)
results.append({'category': category, 'probability': probability})
return results
# + id="W6wr68o2p_Eh" colab_type="code" outputId="87886e24-350d-433e-981d-b2907b0c95cf" colab={"base_uri": "https://localhost:8080/", "height": 306}
# Load the model
dataset = NewsDataset.load_dataset_and_load_vectorizer(
args.split_data_file, args.vectorizer_file)
vectorizer = dataset.vectorizer
model = NewsModel(embedding_dim=args.embedding_dim,
num_word_embeddings=len(vectorizer.title_word_vocab),
num_char_embeddings=len(vectorizer.title_char_vocab),
kernels=args.kernels,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
rnn_hidden_dim=args.rnn_hidden_dim,
hidden_dim=args.hidden_dim,
output_dim=len(vectorizer.category_vocab),
num_layers=args.num_layers,
bidirectional=args.bidirectional,
dropout_p=args.dropout_p,
word_padding_idx=vectorizer.title_word_vocab.mask_index,
char_padding_idx=vectorizer.title_char_vocab.mask_index)
model.load_state_dict(torch.load(args.model_state_file))
model = model.to("cpu")
print (model.named_modules)
# + id="JPKgHxsfN954" colab_type="code" outputId="0445e3a7-24a9-4c77-829d-a25681768ab1" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Inference
inference = Inference(model=model, vectorizer=vectorizer)
title = input("Enter a title to classify: ")
prediction = inference.predict_category(preprocess_text(title))
print("{} → {} (p={:0.2f})".format(title, prediction['category'],
prediction['probability']))
# + id="JRdz4wzuQR4N" colab_type="code" outputId="f2c91b24-a36a-4e35-b06a-f6618497d64f" colab={"base_uri": "https://localhost:8080/", "height": 102}
# Top-k inference
top_k = inference.predict_top_k(preprocess_text(title), k=len(vectorizer.category_vocab))
print ("{}: ".format(title))
for result in top_k:
print ("{} (p={:0.2f})".format(result['category'],
result['probability']))
# + [markdown] id="R3jrZ6ZkxN4r" colab_type="text"
# # Interpretability
# + [markdown] id="qrAieHoHxOt2" colab_type="text"
# We can inspect the attention weights to visualize how much each input token's hidden state contributed to the model's prediction.
# + id="k6uZY4J8vYgw" colab_type="code" colab={}
import seaborn as sns
import matplotlib.pyplot as plt
# + id="2PNuY7GLoEi4" colab_type="code" outputId="24b2e48f-da5b-4251-c2eb-81e72603a6f4" colab={"base_uri": "https://localhost:8080/", "height": 330}
attn_matrix = prediction['attn_scores'].detach().numpy()
ax = sns.heatmap(attn_matrix, linewidths=2, square=True)
tokens = ["<BEGIN>"]+preprocess_text(title).split(" ")+["<END>"]
ax.set_xticklabels(tokens, rotation=45)
ax.set_xlabel("Token")
ax.set_ylabel("Importance\n")
plt.show()
# + [markdown] id="1YHneO3SStOp" colab_type="text"
# # TODO
# + [markdown] id="gGHaKTe1SuEk" colab_type="text"
# - attn visualization isn't always great
# - bleu score
# - ngram-overlap
# - perplexity
# - beamsearch
# - hierarchical softmax
# - hierarchical attention
# - Transformer networks
# - attention interpretability is hit/miss
#
# notebooks/14_Advanced_RNNs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Selecting Optimal Locations for Public Daycare Centers | [Seoul Big Data Campus Competition](https://bigdata.seoul.go.kr/cntst/selectCntst.do?r_id=P610&cntst_seq=42&cntst_se_code=&use_type_code=30&file_id=&sch_type=&sch_text=&currentPage=1)
# ## About
# The supply of public (national and municipal) daycare centers in Seoul is growing, but public centers account for only 23.4% of all daycare centers, still low compared with developed countries where the share of public facilities exceeds 30%. Our team classified and predicted the potential child-user population and the areas underserved by childcare facilities, to help guide the future expansion of public centers.
# ## Tech
# - Python, KNN, K-means, RNN LSTM, ArcGIS, PowerBI
# ## Data Fields
# - Status of daycare centers in Seoul: [Seoul Open Data Plaza](https://data.seoul.go.kr/dataList/datasetView.do?infId=OA-242&srvType=S&serviceKind=1&currentPageNo=1)
# - Postal code addresses: [Korea Post postal code DB](https://www.epost.go.kr/search/zipcode/areacdAddressDown.jsp)
# - Population aged 5 and under: [Ministry of the Interior and Safety & Seoul Big Data Campus](https://www.mois.go.kr/frt/a01/frtMain.do)
# - Officially assessed land prices: [Public Data Portal Open API, individual assessed land price service](https://www.data.go.kr/)
# - Actual transaction prices: [Public Data Portal Open API, individual actual transaction price service](https://www.data.go.kr/)
# ## Copyright
# Source code: [MIT License](LICENSE)
# The data are licensed under [KOGL Type 1](http://www.kogl.or.kr/info/license.do)
# * Attribution required
# * Commercial and non-commercial use permitted
# * Modification and derivative works permitted
# ## Data Preprocessing
# Only the fields needed for the analysis are kept; after preprocessing, the data are combined and saved as CSV
# - Extract the administrative-dong address, latitude, and longitude from the Seoul daycare center data
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
all_data = pd.read_csv("d:/project_data/all_data2.csv", encoding="euc-kr", index_col=0)
all_ad = pd.read_csv("d:/project_data/address_ad.txt", sep="|", encoding="cp949", header=None)
all_data.head()  # in its initial state the population data has no X, Y, or old_add columns.
all_ad.head()
# +
# Drop centers in suspended status #
all_data = all_data[all_data['Open'] == '정상']
all_data['Code'] = '0' + all_data['Code'].astype(str)
# Drop duplicate rows #
all_ad[3] = '0' + all_ad[3].astype(str)  # prepend the leading 0
idx = all_ad[3].duplicated()
all_ad = all_ad[idx == False]
## Convert legacy addresses to administrative-dong addresses ##
for i in range(len(all_data['old_add'])):
    try:
        all_data['old_add'].iloc[i] = all_ad[2][all_ad[3] == all_data['Code'].iloc[i]].iloc[0]
        # print(all_data['old_add'].iloc[i])
    except IndexError:  # no matching code found
        # print("error: ", all_data['old_add'].iloc[i])
        continue
# all_data.to_csv("d:/project_data/all_data3.csv", encoding="euc-kr")
all_data.head()
# -
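# The row-by-row `iloc` loop above is O(n·m); the same legacy-code → administrative-dong mapping can be done with a single left merge. A sketch with made-up codes, mirroring the column positions of `all_ad` (column 2 = address, column 3 = code):

```python
import pandas as pd

data = pd.DataFrame({"Code": ["011", "012", "099"],
                     "old_add": ["?", "?", "?"]})
lookup = pd.DataFrame({2: ["Dong A", "Dong B"], 3: ["011", "012"]})

# Left merge keeps the row order of `data`; unmatched codes become NaN
merged = data.merge(lookup.rename(columns={3: "Code", 2: "new_add"}),
                    on="Code", how="left")
data["old_add"] = merged["new_add"].fillna(data["old_add"])
# data["old_add"] is now ["Dong A", "Dong B", "?"]
```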
# - Preprocess the population data at the administrative-dong level
# Load the raw files
# Trim off the rows that differ between files and save.
# 2010 - 2012 and 2012 - 2014 are handled the same way
def openFile01():
    ## Extract total population by dong and by month ##
    global data_all
    data01 = pd.read_csv('data/2014/19.csv', encoding='cp949')
    data01.columns = ["0", "2", "3", "4", "5", "6", "7", "8", "9"]
    data02 = data01[['0', '3']]
    data02["City"] = data02['0']
    for j in range(len(data02)):
        data02['City'].values[j] = data02['0'].values[j][:5]
    data03 = data02[data02['City'] == '서울특별시']
    data04 = data03[['0', '3']]
    data_all = data04
    for i in range(20, 58):
        data01 = pd.read_csv('data/2014/' + str(i) + '.csv', encoding='cp949')
        data01.columns = ["0", "2", "3", "4", "5", "6", "7", "8", "9"]
        data02 = data01[['0', '3']]
        data02["City"] = data02['0']
        for j in range(len(data02)):
            data02['City'].values[j] = data02['0'].values[j][:5]
        data03 = data02[data02['City'] == '서울특별시']
        # print(data03)
        data04 = data03[['3']]
        data_all = pd.concat((data_all, data04), axis=1)
    # Save the result #
    # return data_all.to_csv('data/data_people_2014.csv', encoding='cp949', index=False)
# openFile01()
# +
data= pd.read_csv('data/data_people_2010.csv',encoding='cp949')
# Consolidated administrative dongs: check by index and merge the rows.
data.iloc[13] = data.iloc[13][:] + data.iloc[14][:]
data = data.drop(14)
data01 = pd.read_csv('data/data_people_2012.csv',encoding='cp949')
data03 = pd.merge(data,data01,on='0')
# -
data02 = pd.read_csv('data/data_people_2014.csv',encoding='cp949')
data02.iloc[423] = data02.iloc[423][:] + data02.iloc[424][:]  # merge rows for a consolidated administrative dong
data02 = data02.drop(424)
data04 = pd.merge(data03, data02, on='0')
data04.head(2)  # combined 2010 ~ 2018
data04['Gue'] = 0
for i in range(len(data04['0'])):
data04['Gue'][i] = data04['0'].values[i][6:-12]
data04['Gue01'] = 0
for i in range(len(data04['0'])):
data04['Gue01'][i] = data04['Gue'][i].split(" ")[1]
data05 = data04[data04['Gue01']!=""]
data05['0']= data05['Gue']
del data05['Gue']
del data05['Gue01']
# transpose #
data06 = data05.T
data07=data06.reset_index(drop=True)
data07.head()
# - Preprocess the official land price Open API data into CSV
# Load the land price coordinate files (in units of 10,000 records)
data=pd.read_csv('data/file_clean01.add',encoding='cp949',sep="\t")
data1=pd.read_csv('data/file_clean02.add',encoding='cp949',sep="\t")
data2=pd.read_csv('data/file_clean03.add',encoding='cp949',sep="\t")
data.head()
data01=data[['소재지','X','Y']]
data02=data1[['소재지','X','Y']]
data03=data2[['소재지','X','Y']]
clean_data01=pd.concat([data01,data02,data03],axis=0)
clean_data01.head()
house_clean = pd.read_csv('data/house_clean.csv', encoding='cp949')  # official land price data
house_clean.head()
all_house_clean=pd.merge(house_clean,clean_data01,on='소재지')
all_house_clean01 = all_house_clean.drop(["시도명",'형상명','일련번호','용도지역2','지리적위치2','시군구','읍면동리','지번구분','본번지','부번지'], 1)
check= all_house_clean01['지목']=='대'
all_house_clean02=all_house_clean01[check]
all_house_clean03 = all_house_clean02.drop(['지목','도로교통','지리적위치1'], 1)
check=all_house_clean03['이용상황']!='자연림'
all_house_clean04 = all_house_clean03[check]
all_house_clean04.head()
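# The three successive boolean-mask filters above can be combined into one expression with identical results. A sketch with toy rows (column names from the cells above, values invented):

```python
import pandas as pd

df = pd.DataFrame({"지목": ["대", "전", "대"],
                   "이용상황": ["주거용", "주거용", "자연림"],
                   "공시지가": [100, 50, 70]})
# Keep lots whose land category is '대' and whose use is not '자연림'
filtered = df[(df["지목"] == "대") & (df["이용상황"] != "자연림")]
# filtered contains only the first row
```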
# - Preprocess the row-house actual transaction price Open API data into CSV and XLS
import requests
from bs4 import BeautifulSoup

## Get lot-number addresses for row houses ##
def get_code():
    ## Function to obtain lot-number addresses from the row-house data ##
code = {'종로구': '11110', '중구': '11140', '용산구': '11170', '성동구': '11200',
'광진구': '11215', '동대문구': '11230', '중랑구': '11260', '성북구': '11290',
'강북구': '11305', '도봉구': '11320', '노원구': '11350', '은평구': '11380',
'서대문구': '11410', '마포구': '11440', '양천구': '11470', '강서구': '11500',
'구로구': '11530', '금천구': '11545', '영등포구': '11560', '동작구': '11590',
'관악구': '11620', '서초구': '11650', '강남구': '11680', '송파구': '11710', '강동구': '11740'}
dateList01 = ["201601","201602","201603","201604","201605","201606","201607","201608","201609","201610","201611","201612",
"201701","201702","201703","201704","201705","201706","201707","201708","201709","201710","201711","201712",
"201801","201802","201803","201804","201805","201806","201807","201808"]
## URL request --> 받아오기 ## --> 하루 1000트래픽 한정(1 계정당)
url = 'http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcRHTrade?'
# 서비스키 --> 공공데이터포털에서 오픈API로 받은 인증키를 입력 #
serviceKey = 'serviceKey=' + "0d8fGluCLeDwmtW310ls9LnNRS582k2fwYEnmtr25HJ8Iv%2Bwcjd4D%2B<KEY>LTrDHSawkREI6gD0uHlYGA%3D%3D" + "&"
list = code.keys()
list01=[]
for i in list:
list01.append(i)
data_list=[]
for k in dateList01:
for i in list01:
LAWD_CD = 'LAWD_CD=' + code[i] + '&' # 법정 코드 번호
DEAL_YMD = 'DEAL_YMD=' + k # 기간
url_all = url + serviceKey + LAWD_CD + DEAL_YMD
res = requests.get(url_all)
text = res.text
soup = BeautifulSoup(text,'lxml-xml')
for item in soup.select('item'):
if item.지번 : # 지번이 없을 경우
add = item.법정동.text
zep = item.지번.text
data_list.append(['서울시',i,add+zep])
data_pd=pd.DataFrame(data_list)
data_pd.columns =['Seoul','Gue','Add']
return data_pd.to_csv('Townhouse_code.csv',index=False,encoding='cp949')
# get_code()
data02 = pd.read_csv('data/Townhouse_code.csv',encoding='cp949')
data02.head()  # only the codes, to be converted into coordinates
## drop duplicate addresses ##
# saved as xls for the geocoding tool #
def clean():
    ## lightly preprocess the addresses ##
    open_code = pd.read_csv('Townhouse_code.csv', encoding="cp949")
    clean_code = open_code.drop_duplicates(['Add'])
    clean_code.to_excel('clean_Townhouse.xls', encoding='cp949', index=False)  # saved as xls to extract coordinates with an external program
# clean()
# merge the files #
def add_data():
    ## attach the computed latitude and longitude ##
    open_data = pd.read_csv('data/Townhouse.csv', encoding='cp949')
    data = pd.read_csv('data/townHouse_code_all.csv', encoding='cp949')  # coordinates extracted by the external program
    data_clean = data[['Add', 'X', 'Y']]
    data_hap = pd.merge(open_data, data_clean)
    return data_hap.head()
add_data()
# - Preprocessing the apartment actual-transaction open API into CSV
## get apartment addresses ##
def get_data():
    ## helper that extracts lot-number addresses from the apartment data ##
    code = {'종로구': '11110', '중구': '11140', '용산구': '11170', '성동구': '11200',
            '광진구': '11215', '동대문구': '11230', '중랑구': '11260', '성북구': '11290',
            '강북구': '11305', '도봉구': '11320', '노원구': '11350', '은평구': '11380',
            '서대문구': '11410', '마포구': '11440', '양천구': '11470', '강서구': '11500',
            '구로구': '11530', '금천구': '11545', '영등포구': '11560', '동작구': '11590',
            '관악구': '11620', '서초구': '11650', '강남구': '11680', '송파구': '11710', '강동구': '11740'}
    dateList01 = ["201601","201602","201603","201604","201605","201606","201607","201608","201609","201610","201611","201612",
                  "201701","201702","201703","201704","201705","201706","201707","201708","201709","201710","201711","201712",
                  "201801","201802","201803","201804","201805","201806","201807","201808"]
    ## URL request ## --> limited to 1,000 requests per day (per account)
    url = 'http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcAptTrade?'
    # service key --> paste the open-API key #
    serviceKey = 'serviceKey=' + "hhX5tQfth7qK%2BISZ%2BUuun3EQ7SrYG3omxFSIgC0mmsn%2BS<KEY>P%2Fuv4jUJDda7eaYR5PY3hDmig%3D%3D" + "&"
    gu_list = list(code.keys())  # renamed from `list` to avoid shadowing the builtin
    data_list = []
    for k in dateList01:
        for i in gu_list:
            LAWD_CD = 'LAWD_CD=' + code[i] + '&'  # legal district code --> only the code varies (see the dict above)
            DEAL_YMD = 'DEAL_YMD=' + k  # transaction month --> the collection period is up to us
            url_all = url + serviceKey + LAWD_CD + DEAL_YMD
            res = requests.get(url_all)
            text = res.text
            soup = BeautifulSoup(text, 'lxml-xml')
            for item in soup.select('item'):
                price = item.거래금액.text
                apt = item.아파트.text
                add = item.법정동.text
                howbig = item.전용면적.text
                zep = item.지번.text
                floor = item.층.text
                data_list.append([apt, add + zep, price, howbig, floor + "층"])
    data_pd = pd.DataFrame(data_list)
    data_pd.columns = ['House', 'Add', 'Price', 'Howbig', 'Floor']
    return data_pd.to_csv('clean_APT.csv', encoding='cp949')
# get_data()
data01 = pd.read_csv('data/clean_APT.csv',encoding='cp949')
data01.head()
# ## **Data analysis: KNN Regressor**
# - Standardize the apartment and row-house real-estate data (about 400,000 records)
# - Build clusters from the latitude/longitude of the real-estate data with the KNN regressor algorithm
# - Assign each daycare center to a cluster and estimate neighborhood income from the cluster's average official land price
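# Before the real data is loaded, the core idea can be sketched on synthetic points: a KNN regressor predicts the mean target of the k nearest neighbors (all names below are made up, not the project's data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
coords = rng.uniform(size=(200, 2))   # stand-in for (longitude, latitude)
price = coords.sum(axis=1)            # stand-in for price per area
knn = KNeighborsRegressor(n_neighbors=5, p=2, metric="minkowski")
knn.fit(coords, price)
# The prediction is the mean target of the 5 nearest training points
print(knn.predict([[0.5, 0.5]]))
```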
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, classification_report
import sklearn.neighbors as neg
import matplotlib.pyplot as plt
import json
import sklearn.preprocessing as pp
# - Data preprocessing: outlier removal and standardization required
all_data = pd.read_csv("d:/project_data/house_clean02.csv", dtype=str, encoding='euc-kr')  # encoding: 'euc-kr'
all_data.head()
# add official land price per area # --> columns are strings; convert with astype
all_data['y_price'] = all_data['공시지가'].astype(np.float32) / all_data['면적'].astype(np.float32)
# X: (x, y) coordinates / y: official land price per area #
X = all_data.iloc[:, 9:11].astype(np.float32)  # shape (28046, 2)
y = all_data['y_price']  # shape (28046, )
all_data['y_price'].head()
## Robust scaling ## --> outlier-robust scaling (median/IQR, unlike min-max)
rs = pp.RobustScaler()
y_scale = rs.fit_transform(np.array(y).reshape(-1, 1))
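# Unlike min-max scaling, `RobustScaler` centers on the median and divides by the interquartile range, so a single extreme value does not compress the rest of the data; a quick sketch:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

prices = np.array([1.0, 2.0, 3.0, 4.0, 1000.0]).reshape(-1, 1)  # one extreme outlier
scaled = RobustScaler().fit_transform(prices)
# The median (3.0) maps to 0 and the bulk of the data stays near [-1, 1];
# only the outlier itself ends up far from 0
print(scaled.ravel())
```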
## preprocess apartment transaction data ## --> shape (281684, 7)
all_data_apt = pd.read_csv("d:/project_data/total_Apt.csv", sep=",", encoding='euc-kr')
all_data_apt['price_big'] = all_data_apt['Price'] / all_data_apt['Howbig']
X_apt = all_data_apt.iloc[:, -3:-1]  # shape (281684, 2)
y_apt_scale = rs.fit_transform(np.array(all_data_apt['price_big']).reshape(-1, 1))  # shape (281684, 1)
all_data_apt.head()
## preprocess row-house transaction data ##
all_data_town = pd.read_csv("d:/project_Data/total_Townhouse01.csv", sep=",", encoding="cp949")
all_data_town['price_big'] = all_data_town['Price'] / all_data_town['Howbig']
X_town = all_data_town.iloc[:, -3:-1]  # (lon, lat) columns
y_town_scale = rs.fit_transform(np.array(all_data_town['price_big']).reshape(-1, 1))
all_data_town.head()
## preprocess daycare-center data ##
all_center = pd.read_csv("d:/project_data/all_center9.csv", encoding="euc-kr")
# keep only public daycare centers #
x_test = all_center[all_center['Type'] == "국공립"]  # public (national/municipal) only
x_test.head()
# - KNN regressor
# +
k_list = [i for i in range(15, 26, 2)]
# minkowski with p=2 (Euclidean) // averaging regression --> regressor #
knn_fit = neg.KNeighborsRegressor(n_neighbors=k_list[0], p=2, metric='minkowski')
# note: each fit() call overwrites the previous one, so only the last dataset is actually used
knn_fit.fit(X, y_scale)
knn_fit.fit(X_apt, y_apt_scale)
knn_fit.fit(X_town, y_town_scale)
## predict --> apply the neighborhood mean price ##
pred = knn_fit.predict(x_test.iloc[:, 14:16])
x_test['소득추정'] = pred
for i in range(len(x_test['Gue'])):
    x_test['Gue'].values[i] = x_test['Gue'].values[i][:-1]  # strip the trailing '구' (district suffix)
## estimate mean income per district via groupby ##
mean = x_test.groupby(['Gue'], as_index=False).mean()
mean.head()
# -
# - Visualization
# fix broken Korean font rendering in matplotlib #
from matplotlib import font_manager, rc
font_name = font_manager.FontProperties(fname="c:/Windows/Fonts/malgun.ttf").get_name()
rc('font', family=font_name)
plt.figure(figsize=(16, 4))
sortList = []
for i in range(len(k_list)):
    knn_fit = neg.KNeighborsRegressor(n_neighbors=k_list[i], p=2, metric='minkowski')
    knn_fit.fit(X, y_scale)
    knn_fit.fit(X_apt, y_apt_scale)
    knn_fit.fit(X_town, y_town_scale)
    x_test["predK%i" % k_list[i]] = knn_fit.predict(x_test.iloc[:, 14:16])
    mean = x_test.groupby(['Gue'], as_index=False).mean()
    price_pred = pd.DataFrame(mean.iloc[:, -1])
    price_pred.index = mean['Gue']
    sortList.append(price_pred)
    plt.plot(price_pred)
plt.legend(k_list)
plt.rcParams['axes.grid'] = True
plt.show()
# K=25 chosen #
# - Save results
#x_test.to_csv("d:/project_data/KNN_data.csv", encoding='euc-kr', index=False)
x_test.iloc[:, :19].head()
# ## **Data analysis: RNN LSTM**
# - Analyzed the monthly time-series pattern of the child population (ages 0-5) in each Seoul administrative dong from January 2010 to September 2018, and projected the population for 2021
# - The age range of children using daycare centers follows prior research
# - Children attending daycare centers are on average about 52 months old; from age 5 they are assumed to mostly attend kindergarten instead
# +
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from tensorflow.contrib import rnn
tf.set_random_seed(777)
tf.reset_default_graph()
# +
time = ["%i" % (i) + "-%i" % (3) for i in range(2010, 2022)]
## parameters ##
seq_length = 5  # sequence length (window of related observations) -> output rows
data_dim = 1  # input dimension --> 1 (population count per dong)
output_dim = 1  # output dimension --> 1 (predicted value)
#hidden_size = 30  # output columns after the cell computation
learning_rate = 0.07
iteration = 8000
m = 105  # --> None
MSE_list = []
pop_2103 = []
# training parameters #
predict_list = []
is_training = True
l2norm = 0.0001
# -
### data preprocessing ###
all_data = pd.read_csv("d:/project_data/peopleDataAll01.csv", sep=",", encoding='cp949')
# - Train-test split (model evaluation)
for k in [-18]:  # loop over every dong to build a model --> one dong shown as an example
    tf.reset_default_graph()
    keep_prob = tf.placeholder(dtype=tf.float32)
    test1 = all_data.iloc[:, [k]]  # shape (105, 1), m = 105
    # train scaling #
    mm1 = StandardScaler()
    test1 = mm1.fit_transform(test1)
    ## split ## --> time-ordered (no shuffling)
    train_size = int(len(test1) * 0.8)
    train_set = test1[:train_size, :]  # shape (84, 1)
    test_set = test1[train_size:, :]  # shape (21, 1)
# - RNN data building
def build(time_series, seq_length):
    x_data = []
    y_data = []
    for i in range(0, len(time_series) - seq_length):
        x_tmp = time_series[i: i + seq_length, :]
        y_tmp = time_series[i + seq_length, [-1]]
        x_data.append(x_tmp)
        y_data.append(y_tmp)
    return np.array(x_data), np.array(y_data)
x_train, y_train = build(train_set, seq_length)
x_test, y_test = build(test_set, seq_length)
predict_x = test_set[-seq_length:].reshape(1, seq_length, 1)
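# The window builder can be checked on a toy series: with N observations and window length L it yields N − L samples of shape (L, 1). A self-contained copy of the same logic:

```python
import numpy as np

def build(time_series, seq_length):
    # Sliding window: X = seq_length consecutive values, y = the next value
    x_data, y_data = [], []
    for i in range(len(time_series) - seq_length):
        x_data.append(time_series[i: i + seq_length, :])
        y_data.append(time_series[i + seq_length, [-1]])
    return np.array(x_data), np.array(y_data)

series = np.arange(10, dtype=np.float32).reshape(-1, 1)  # toy monthly series
x, y = build(series, seq_length=3)
print(x.shape, y.shape)        # (7, 3, 1) (7, 1)
print(x[0].ravel(), y[0])      # [0. 1. 2.] [3.]
```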
# - RNN building
# cell #
def lstm_cell(hidden_size):
    cell = tf.nn.rnn_cell.LSTMCell(num_units=hidden_size, activation=tf.tanh)
    return cell
# drop-out / multi-cell #
cell1 = rnn.DropoutWrapper(lstm_cell(15), input_keep_prob=keep_prob, output_keep_prob=keep_prob, seed=77)
cell2 = rnn.DropoutWrapper(lstm_cell(10), input_keep_prob=keep_prob, output_keep_prob=keep_prob, seed=77)
cell = rnn.MultiRNNCell([cell1, cell2], state_is_tuple=True)  # two stacked dropout-wrapped cells
X = tf.placeholder(dtype=tf.float32, shape=[None, seq_length, data_dim])
y = tf.placeholder(dtype=tf.float32, shape=[None, 1])
# initialization #
output, _state = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
Y_pred = tf.contrib.layers.fully_connected(output[:, -1], output_dim, activation_fn=None)  # last cell output --> next-step prediction
# dense network on top # --> 2 layers / Xavier init / dropout / L2 reg / batch normalization
init = tf.contrib.layers.xavier_initializer(seed=77)
W1 = tf.Variable(init([1, 100]), name='weight1')
b1 = tf.Variable(init([100]), name='bias1')
layer1 = tf.matmul(Y_pred, W1) + b1
l1 = tf.contrib.layers.batch_norm(layer1, center=True, scale=True,
                                  is_training=is_training)
L1 = tf.nn.relu(l1, name='relu1')
L1 = tf.nn.dropout(L1, keep_prob=keep_prob)
W2 = tf.Variable(init([100, 1]), name='weight2')
b2 = tf.Variable(init([1]), name='bias2')
hypothesis = tf.matmul(L1, W2) + b2
## tf.trainable --> L2 norm ##
var = tf.trainable_variables()
l2reg = tf.add_n([tf.nn.l2_loss(v) for v in var if 'bias' not in v.name]) * l2norm
# cost #
cost = tf.reduce_mean(tf.square(Y_pred - y))  # squared loss for numeric prediction
opt = tf.train.AdamOptimizer(learning_rate=learning_rate)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # batch_norm
with tf.control_dependencies(update_ops):
    train = opt.minimize(cost)
# RMSE # --> root mean squared error
targets = tf.placeholder(tf.float32, [None, 1])
predicts = tf.placeholder(tf.float32, [None, 1])
MSE = tf.sqrt(tf.reduce_mean(tf.square(predicts - targets)))
## session ##
# training #
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
sess.run(tf.global_variables_initializer())
for i in range(iteration):
    cost_val, _, out = sess.run([cost, train, output], feed_dict={X: x_train, y: y_train, keep_prob: 0.8})
    # if i % 1000 == 0:
    #     print(cost_val)
# predict #
is_training = False
y_hat_train = sess.run(Y_pred, feed_dict={X: x_train, keep_prob: 1.0})
y_hat = sess.run(Y_pred, feed_dict={X: x_test, keep_prob: 1.0})
# y_hat = mm1.inverse_transform(y_hat)
# y_test = mm1.inverse_transform(y_test)
RMSE_train = sess.run(MSE, feed_dict={targets: y_train, predicts: y_hat_train, keep_prob: 1.0})
RMSE = sess.run(MSE, feed_dict={targets: y_test, predicts: y_hat, keep_prob: 1.0})
print("RMSE_train: ", RMSE_train)
print("RMSE: ", RMSE)
predict_hat = sess.run(Y_pred, feed_dict={X: predict_x, keep_prob: 1.0})
# - Visualization
MSE_list.append(RMSE)
predict_list.append(mm1.inverse_transform(predict_hat)[0,0])
plt.figure(figsize=(8,3))
plt.plot(y_train, 'r-')
plt.plot(y_hat_train, 'b-')
plt.xlabel("Time")
plt.ylabel("Population")
plt.show()
plt.figure(figsize=(8,3))
plt.plot(y_test, 'r-')
plt.plot(y_hat, 'b-')
plt.xlabel("Time")
plt.ylabel("Population")
plt.show()
sess.close()
# - RNN_LSTM: predictive modeling
## LSTM prediction ## --> train on all the data, then predict (because the training set is small)
for k in [-18]:  # -> run for all 454 dongs via this loop
    tf.reset_default_graph()
    test1 = all_data.iloc[:, [k]]  # shape (105, 1), m = 105
    keep_prob = tf.placeholder(tf.float32)
    # train scaling #
    mm1 = StandardScaler()
    test1 = mm1.fit_transform(test1)
# RNN data building #
def build(time_series, seq_length):
    x_data = []
    y_data = []
    for i in range(0, len(time_series) - seq_length):
        x_tmp = time_series[i: i + seq_length, :]
        y_tmp = time_series[i + seq_length, [-1]]
        x_data.append(x_tmp)
        y_data.append(y_tmp)
    return np.array(x_data), np.array(y_data)
x_train, y_train = build(test1, seq_length)
predict_x = test1[-seq_length*2+1:-seq_length+1].reshape(1, seq_length, 1)
## RNN building ##
# cell #
def lstm_cell(hidden_size):
    cell = tf.nn.rnn_cell.LSTMCell(num_units=hidden_size, activation=tf.tanh)
    return cell
cell1 = rnn.DropoutWrapper(lstm_cell(15), input_keep_prob=keep_prob, output_keep_prob=keep_prob, seed=77)
cell2 = rnn.DropoutWrapper(lstm_cell(10), input_keep_prob=keep_prob, output_keep_prob=keep_prob, seed=77)
# list of cells for TensorBoard summaries #
cells = []
cells.append(cell1)
cells.append(cell2)
cell = rnn.MultiRNNCell([cell1, cell2], state_is_tuple=True)  # two stacked dropout-wrapped cells
## tensor board ##
for one_lstm_cell in cells:
    one_kernel = one_lstm_cell.variables
    tf.summary.histogram("Kernel", one_kernel)
# initialization #
X = tf.placeholder(dtype=tf.float32, shape=[None, seq_length, data_dim])
y = tf.placeholder(dtype=tf.float32, shape=[None, 1])
output, _state = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
Y_pred = tf.contrib.layers.fully_connected(output[:, -1], output_dim, activation_fn=None)  # last cell output --> next-step prediction
# cost #
cost = tf.reduce_sum(tf.square(Y_pred - y))  # sum of squared errors for numeric prediction
opt = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = opt.minimize(cost)
# RMSE # --> root mean squared error
targets = tf.placeholder(tf.float32, [None, 1])
predicts = tf.placeholder(tf.float32, [None, 1])
MSE = tf.sqrt(tf.reduce_mean(tf.square(predicts - targets)))
summary_op = tf.summary.merge_all()
## session ##
# training #
sess = tf.Session()
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter("d:/project_data/logdir/", graph=tf.get_default_graph())
for i in range(iteration):
    cost_val, _, out, step_summary = sess.run([cost, train, output, summary_op], feed_dict={X: x_train, y: y_train, keep_prob: 0.7})
    # if i % 100 == 0: print(cost_val)
    train_writer.add_summary(step_summary)
# predict # --> 30 months after 201809 --> 202103
for t in range(30):
    tmp_arr = sess.run(Y_pred, feed_dict={X: predict_x, keep_prob: 1.0})
    test1 = np.concatenate((test1, tmp_arr))
    predict_x = np.concatenate((predict_x[:, 1:, :], tmp_arr.reshape(1, 1, 1)), axis=1)
sess.close()
# - Visualization
if k % 1 == 0:
    data_concat = mm1.inverse_transform(test1)
    data_concat = pd.DataFrame(data_concat)
    plt.figure(figsize=(16, 8))
    plt.plot(data_concat.iloc[:106, :], 'r-')
    plt.plot(data_concat.iloc[105:, :].index, data_concat.iloc[105:, :], 'b-')
    plt.xlabel("Time")
    plt.ylabel("Population")
    plt.xticks(ticks=np.arange(0, 135, 12), labels=list(time))
    plt.show()
pop_2103.append(int(data_concat.iloc[-1][0]))
# - Save results
plist = pd.DataFrame(pop_2103).T
#plist.to_csv("d:/project_data/pop_2103.csv")
plist.head()  # predicted population for March 2021
# ## **Data analysis: K-means**
# Areas with the lowest facility-access and user-access indices (densely populated, low-income areas) are interpreted as the areas that need public daycare centers
# - **Facility-access analysis: based on each cluster's center location and the daycare-center locations**
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from scipy.spatial.distance import cdist, pdist
np.random.seed(777)
# -
# - Daycare-center data preprocessing
all_center = pd.read_csv("d:/project_data/test/all_test_1.csv", sep=",", encoding="euc-kr")
x_test = all_center[all_center['Type'] == "국공립"]  # public (national/municipal) centers only
X = x_test.iloc[:-15, 15:17]
X_test = x_test.iloc[:, 15:17]
# - Finding the optimal number of clusters k: elbow curve
K = 150
# +
def k_search():
    ## find the optimal k with an elbow plot ##
    K = [25, 50, 75, 100, 125, 150, 175, 200]
    KM = [KMeans(n_clusters=k).fit(X) for k in K]  # fit k-means for each candidate k
    ss = [silhouette_score(X, k.labels_, metric='euclidean') for k in KM]
    centroids = [k.cluster_centers_ for k in KM]  # cluster centers for each fit
    D_k = [cdist(X, centrds, 'euclidean') for centrds in centroids]  # distances from X to the centers
    cIdx = [np.argmin(D, axis=1) for D in D_k]  # index of the nearest center
    dist = [np.min(D, axis=1) for D in D_k]  # distance to the nearest center
    avgWithinSS = [sum(d) / X.shape[0] for d in dist]  # average within-cluster distance
    wcss = [sum(d**2) for d in dist]  # within-cluster sum of squares
    tss = sum(pdist(X)**2 / X.shape[0])  # total sum of squares (mean pairwise squared distance)
    bss = tss - wcss  # between-cluster sum of squares
    fig, axs = plt.subplots(2, 1, constrained_layout=True)
    axs[0].plot(K, avgWithinSS, 'o-')
    axs[0].set_title('Average within-cluster sum of squares')
    axs[0].set_xlabel('Number of clusters')
    axs[0].set_ylabel('avgWithinSS')
    fig.suptitle('Elbow Curve for finding K value', fontsize=16)
    ## variance explained ##
    axs[1].plot(K, bss/tss*100, '--')
    axs[1].set_title('Analysis of variance')
    axs[1].set_xlabel('Number of clusters')
    axs[1].set_ylabel('variance explained(%)')
    plt.show()
    return ss
ss = k_search()  # k --> proceeding with 25 per district / 100 overall
# -
# 
# n_cluster = 150, max_iter=3000 #
k_means = KMeans(n_clusters=K, max_iter=3000, random_state=77)
k_means.fit(X)
k_cluster = k_means.predict(X_test)
x_test['k_cluster'] = k_cluster
# - Visualization: GIS spatial analysis
# 
# silhouette score --> cluster-cohesion index (-1 to 1) --> higher is better
ss = silhouette_score(X, k_means.labels_, metric='euclidean')
ss
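# As a sanity check on the metric itself: two tight, well-separated blobs should score close to 1 (synthetic data, not the project's coordinates):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.RandomState(0)
blobs = np.vstack([rng.normal(0, 0.1, (50, 2)),
                   rng.normal(5, 0.1, (50, 2))])  # two distant clusters
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(blobs)
print(silhouette_score(blobs, km.labels_, metric="euclidean"))  # close to 1
```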
# 
center = k_means.cluster_centers_  # the 150 cluster centers
center = pd.DataFrame(center)
groupby = x_test.sort_values(['k_cluster'])
def distance(a, b):
    ## great-circle distance between two coordinates in km ## --> latitude/longitude in degrees must be converted to radians
    lon1, lat1 = a[0], a[1]
    lon2, lat2 = float("%.6f" % b[0]), float("%.6f" % b[1])
    R = 6378.137  # radius of the earth in km
    dlat = (lat2 - lat1) * (np.pi / 180)
    dlon = (lon2 - lon1) * (np.pi / 180)
    a = np.sin((dlat/2))**2 + np.cos(lat1 * np.pi / 180) * np.cos(lat2 * np.pi / 180) * (np.sin(dlon/2))**2
    c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))
    d = R * c
    return d
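# The helper implements the haversine formula; a self-contained check with approximate coordinates for two Seoul landmarks (Seoul City Hall and Gangnam Station, which are roughly 8-9 km apart):

```python
import numpy as np

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance on a sphere with the same radius as above
    R = 6378.137
    dlat = np.radians(lat2 - lat1)
    dlon = np.radians(lon2 - lon1)
    a = (np.sin(dlat / 2) ** 2
         + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.sin(dlon / 2) ** 2)
    return R * 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))

# Approximate coordinates: City Hall (126.978, 37.566), Gangnam Station (127.028, 37.498)
print(haversine_km(126.978, 37.566, 127.028, 37.498))  # ~8.8 km
```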
def center_access(center_col, pop):
    # facility-access score #
    global k_means, center, K, groupby
    groupby[center_col] = 0.01
    xy = np.array(groupby.iloc[:, 15:17])
    center_xy = np.array(center.iloc[:, 0:2])
    tmp = np.zeros_like(groupby[center_col])
    for j in range(len(groupby)):
        tmpList = []
        for i in range(len(center)):
            gb = groupby[groupby['k_cluster'] == i]
            e = int(np.mean(gb[pop]))
            dist = distance(xy[j], center_xy[i])
            tmpList.append(e * (dist*1000) ** -1)
        tmp[j] = np.sum(tmpList)
    groupby[center_col] = tmp
# - **User-access analysis: comparing facility access against user density**
center_access('center_access', '201809')
# 
def people_access(people_col, center_col):
    global k_means, center, K, groupby
    center[people_col] = 0.01
    groupby[people_col] = 0.01  # initialize the column before the row-wise assignment below
    xy = np.array(groupby.iloc[:, 15:17])
    center_xy = np.array(center.iloc[:, 0:2])
    tmp = np.zeros_like(center[people_col])
    for j in range(len(center)):
        # if j % 100 == 0: print("people continue..")
        tmpList = []
        for i in range(len(groupby)):
            center_acc = groupby[center_col].iloc[i]
            limit = groupby['Max'].iloc[i]
            dist = distance(xy[i], center_xy[j])
            tmpList.append((limit * (dist*1000) ** -1) / center_acc)
        tmp[j] = np.sum(tmpList)
    center[people_col] = tmp
    for i in range(len(groupby)):
        groupby[people_col].iloc[i] = center[people_col][groupby['k_cluster'].iloc[i]]
# - User-access analysis based on the current population
people_access('people_access', 'center_access')
# ![](https://github.com/rosa-yuri/BigCampus_project/blob/master/img/7.png)
# - Facility/user-access analysis based on the 2021 population projection
center_access('center_access_2', '202104')
people_access('people_access_2', 'center_access_2')
# - Save results
# groupby.to_csv("d:/project_data/test/test_1.csv", encoding="euc-kr", index=0)
groupby.head(3)
# ## **In-depth analysis - numeric ranking**
# Module computing each dong's ranking and percentile
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
# +
all_data = pd.read_csv("d:/project_data/test/test_11.csv", encoding='euc-kr')
# scaling #
mm = MinMaxScaler()
scale_m = mm.fit_transform(all_data.iloc[:, -5:])
summation = pd.DataFrame(np.mean(scale_m, axis=1))
data = pd.concat((all_data['Name'], all_data['old_add'], summation), axis=1)
mean = data.groupby(['old_add'], as_index=False).mean()
mean.columns = ['old_add', 'ranking']
mean = mean.sort_values(by=['ranking'])
mean['rank'] = mean.iloc[:,[-1]].rank() / len(mean) * 100
# -
## save results ##
#mean.to_csv("d:/project_data/test/ranking.csv", encoding="euc-kr")
mean.head()
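# The `rank() / len * 100` step above converts each district's mean score into a percentile; a minimal sketch with hypothetical districts:

```python
import pandas as pd

mean = pd.DataFrame({"old_add": ["A", "B", "C", "D"],   # hypothetical districts
                     "ranking": [0.2, 0.8, 0.5, 0.1]})
mean = mean.sort_values(by=["ranking"])
# Position among all districts, expressed as a percentage
mean["rank"] = mean["ranking"].rank() / len(mean) * 100
print(mean["rank"].tolist())  # [25.0, 50.0, 75.0, 100.0]
```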
# ## **In-depth analysis - collaborative filtering**
# Cosine similarity
# - Finding daycare centers similar to those in the lowest-ranked clusters
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import MinMaxScaler
# +
## data preprocessing ##
all_data = pd.read_csv("d:/project_data/KK_k150_2021.csv", sep=",", encoding='cp949')
# vectorize the required features #
data = pd.concat((all_data['predK25'], all_data['center_access'], all_data['center_access_2'],
all_data['people_access'], all_data['people_access_2']), axis=1)
data.index = all_data['Name']  # use center names as the index
# -
# scaling #
mm = MinMaxScaler()
data_scale = mm.fit_transform(data)
ana = cosine_similarity(data_scale)
# +
# groupby and sorting for each underserved daycare center #
data_259 = pd.DataFrame(ana[259], index=all_data['Name'], columns=['봄빛'])
#data_259 = data_259.sort_values(by='봄빛', ascending=False)
data_261 = pd.DataFrame(ana[261], index=all_data['Name'], columns=['상일'])
#data_261 = data_261.sort_values(by='상일', ascending=False)
data_270 = pd.DataFrame(ana[270], index=all_data['Name'], columns=['한마을'])
#data_270 = data_270.sort_values(by='한마을', ascending=False)
data_824 = pd.DataFrame(ana[824], index=all_data['Name'], columns=['늘사랑'])
#data_824 = data_824.sort_values(by='늘사랑', ascending=False)
data_686 = pd.DataFrame(ana[686], index=all_data['Name'], columns=['노원'])
#data_686 = data_686.sort_values(by='노원', ascending=False)
cos_sim = pd.concat((data_259, data_261, data_270, data_824, data_686), axis=1)
cos_sim = cos_sim[cos_sim > 0.9]
cos_sim = cos_sim.dropna(axis=0)
#cos_sim.to_csv("d:/project_data/cos_sim.csv", encoding="cp949")
cos_sim.head()
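# Cosine similarity compares the direction of the feature vectors, not their magnitude, which is why min-max scaling the features first matters; a tiny sketch:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Three hypothetical centers: the first two point the same way, the third is orthogonal
features = np.array([[1.0, 0.0],
                     [2.0, 0.0],
                     [0.0, 1.0]])
sim = cosine_similarity(features)
print(np.round(sim, 2))
# [[1. 1. 0.]
#  [1. 1. 0.]
#  [0. 0. 1.]]
```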
# -
# ## **In-depth analysis - related variables and overall results**
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
# - Clustering by income quartile
# +
all_data = pd.read_csv("d:/project_data/test/test_1.csv", sep=",", encoding='cp949')
def income_cluster(col):
    quarter = range(25, 101, 25)
    k = 0
    p = np.min(col)
    for i in quarter:
        q = np.percentile(col, i)
        idx = all_data[(all_data['predK25'] <= q) & (all_data['predK25'] > p)].index
        for j in idx:
            all_data.iloc[j, -1] = k
        k += 1
        p = q
# -
# - Clustering by facility-access rank
def center_cluster(all_data, colname, new_col):
    mean = all_data[colname].groupby(all_data['old_add']).mean()
    mean = mean.sort_values()
    for i in range(len(mean)):
        mean[i] = i
    for i in range(len(all_data)):
        all_data.iloc[i, -1] = int(mean[all_data['old_add'][i]])
# - Clustering by user-access rank
def people_cluster(colname, new_col):
    global all_data
    sort = all_data[colname].sort_values().index
    k = 0
    j = all_data[colname][sort[0]]
    for i in sort:
        if all_data[colname][i] == j:
            all_data.iloc[i, -1] = k
        else:
            k += 1
            all_data.iloc[i, -1] = k
        j = all_data[colname][i]
all_data['income_cluster_test'] = 0
income_cluster(all_data['predK25'])
all_data['center_cluster1_test'] = 0
center_cluster(all_data, 'center_access', 'center_cluster1_test')
all_data['people_cluster1_test'] = 0
people_cluster('people_access', 'people_cluster1_test')
all_data['center_cluster2_test'] = 0
center_cluster(all_data, 'center_access_2', 'center_cluster2_test')
all_data['people_cluster2_test'] = 0
people_cluster('people_access_2', 'people_cluster2_test')
## save results ##
#all_data.to_csv("d:/project_data/test/test2(상계1동, 강일동).csv", encoding='cp949')
all_data.iloc[:, 15:31].head(3)
# ## **Selecting optimal sites based on the in-depth analysis**
# Sites are selected excluding entertainment facilities, industrial zones, and other undesirable areas
# 
# 
# 
# ## **Improvement-effect analysis**
# Assume additional daycare centers and capacity, enter the new figures, and re-run the analysis
# 
# 
# 
| Project_md_kernel_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %autosave 2
# %matplotlib inline
import numpy as np
import pandas
# ?np.loadtxt
filename = 'freddi.dat'
with open(filename) as f:
print(f.readline())
table = np.genfromtxt(filename, names=True, usecols=(0,5))
print(table.shape)
print(table.dtype)
print(table[0])
print(table['t'][:5])
table = np.genfromtxt(
    filename,
    usecols=(0, 5),
    skip_header=10,
    skip_footer=100,
    comments='#',
    dtype=[('a', float), ('b', '|U10')],  # np.float was removed from NumPy; use the builtin float
    converters={5: lambda s: s.replace(b'0', b'U')}
)
print(table.shape)
print(table.dtype)
print(table[:5])
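# A converter receives each raw field of its column before type conversion (typically as `str` when an `encoding` is given), so it can patch malformed values in place; a self-contained sketch with an in-memory table:

```python
import io
import numpy as np

raw = io.StringIO("1.5 hello\n2.5 w0rld\n")
table = np.genfromtxt(
    raw,
    dtype=[('a', float), ('b', 'U10')],
    converters={1: lambda s: s.replace('0', 'U')},  # fix column 1 before parsing
    encoding='utf-8',
)
print(table['b'])  # ['hello' 'wUrld']
```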
| misc/jupyter_notebooks/17.10.13/numpy_table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Introduction to labeled datasets
#
# Labeled datasets are output from Azure Machine Learning [labeling projects](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-create-labeling-projects). A labeled dataset captures references to the data (e.g. image files) together with their labels.
#
# This tutorial introduces the capabilities of labeled datasets and how to use them in training.
#
# Learn how-to:
#
# > * Set up your development environment
# > * Explore labeled datasets
# > * Train a simple deep learning neural network on a remote cluster
#
# ## Prerequisite:
# * Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
# * Go through Azure Machine Learning [labeling projects](https://docs.microsoft.com/azure/machine-learning/service/how-to-create-labeling-projects) and export the labels as an Azure Machine Learning dataset
# * Go through the [configuration notebook](../../../configuration.ipynb) to:
# * install the latest version of azureml-sdk
# * install the latest version of azureml-contrib-dataset
# * install [PyTorch](https://pytorch.org/)
# * create a workspace and its configuration file (`config.json`)
# ## Set up your development environment
#
# All the setup for your development work can be accomplished in a Python notebook. Setup includes:
#
# * Importing Python packages
# * Connecting to a workspace to enable communication between your local computer and remote resources
# * Creating an experiment to track all your runs
# * Creating a remote compute target to use for training
#
# ### Import packages
#
# Import Python packages you need in this session. Also display the Azure Machine Learning SDK version.
# +
import os
import azureml.core
import azureml.contrib.dataset
from azureml.core import Dataset, Workspace, Experiment
from azureml.contrib.dataset import FileHandlingOption
# check core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
print("Azure ML Contrib Version", azureml.contrib.dataset.VERSION)
# -
# ### Connect to workspace
#
# Create a workspace object from the existing workspace. `Workspace.from_config()` reads the file **config.json** and loads the details into an object named `workspace`.
# load workspace
workspace = Workspace.from_config()
print('Workspace name: ' + workspace.name,
'Azure region: ' + workspace.location,
'Subscription id: ' + workspace.subscription_id,
'Resource group: ' + workspace.resource_group, sep='\n')
# ### Create experiment and a directory
#
# Create an experiment to track the runs in your workspace and a directory to deliver the necessary code from your computer to the remote resource.
# +
# create an ML experiment
exp = Experiment(workspace=workspace, name='labeled-datasets')
# create a directory
script_folder = './labeled-datasets'
os.makedirs(script_folder, exist_ok=True)
# -
# ### Create or Attach existing compute resource
# By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you will create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.
#
# **Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process.
# +
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "openhack"
try:
    compute_target = ComputeTarget(workspace=workspace, name=cluster_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                           max_nodes=4)
    # create the cluster
    compute_target = ComputeTarget.create(workspace, cluster_name, compute_config)
    # can poll for a minimum number of nodes and for a specific timeout.
    # if no min node count is provided it uses the scale settings for the cluster
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
# -
# ## Explore labeled datasets
#
# **Note**: How to create labeled datasets is not covered in this tutorial. To create labeled datasets, you can go through [labeling projects](https://docs.microsoft.com/azure/machine-learning/service/how-to-create-labeling-projects) and export the output labels as Azure Machine Learning datasets.
#
# `animal_labels` used in this tutorial section is the output from a labeling project, with the task type of "Object Identification".
# get animal_labels dataset from the workspace
animal_labels = Dataset.get_by_name(workspace, 'animal_labels')
animal_labels
# You can load labeled datasets into a pandas DataFrame. There are 3 file handling options to choose from when loading the data files referenced by a labeled dataset:
# * Streaming: The default option to load data files.
# * Download: Download your data files to a local path.
# * Mount: Mount your data files to a mount point. Mount only works for Linux-based compute, including Azure Machine Learning notebook VM and Azure Machine Learning Compute.
animal_pd = animal_labels.to_pandas_dataframe(file_handling_option=FileHandlingOption.DOWNLOAD, target_path='./download/', overwrite_download=True)
animal_pd
# +
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# read images from downloaded path
img = mpimg.imread(animal_pd.loc[0,'image_url'])
imgplot = plt.imshow(img)
# -
# You can also load labeled datasets into [torchvision datasets](https://pytorch.org/docs/stable/torchvision/datasets.html), so that you can leverage the open-source libraries provided by PyTorch for image transformation and training.
# +
from torchvision.transforms import functional as F
# load animal_labels dataset into torchvision dataset
pytorch_dataset = animal_labels.to_torchvision()
img = pytorch_dataset[0][0]
print(type(img))
# use methods from torchvision to transform the img into grayscale
pil_image = F.to_pil_image(img)
gray_image = F.to_grayscale(pil_image, num_output_channels=3)
imgplot = plt.imshow(gray_image)
# -
# ## Train an image classification model
#
# The `crack_labels` dataset used in this tutorial section is the output from a labeling project, with the task type of "Image Classification Multi-class". We will use this dataset to train an image classification model that classifies whether an image has cracks.
# get crack_labels dataset from the workspace
crack_labels = Dataset.get_by_name(workspace, 'crack_labels')
crack_labels
# ### Configure training job
#
# You can ask the system to build a conda environment based on your dependency specification. Once the environment is built, and if you don't change your dependencies, it will be reused in subsequent runs.
# +
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
conda_env = Environment('conda-env')
conda_env.python.conda_dependencies = CondaDependencies.create(pip_packages=['azureml-sdk',
'azureml-contrib-dataset',
'torch','torchvision',
'azureml-dataset-runtime[pandas]'])
# -
# A ScriptRunConfig object is used to submit the run. Create a ScriptRunConfig by specifying
#
# * The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
# * The training script name, train.py
# * The input dataset for training
# * The compute target. In this case you will use the AmlCompute you created
# * The environment for the experiment
# +
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
script='train.py',
arguments=[crack_labels.as_named_input('crack_labels')],
compute_target=compute_target,
                      environment=conda_env)
# -
# ### Submit job to run
#
# Submit the ScriptRunConfig to the Azure ML experiment to kick off the execution.
run = exp.submit(src)
run.wait_for_completion(show_output=True)
| how-to-use-azureml/work-with-data/datasets-tutorial/labeled-datasets/labeled-datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torchvision
import torch
import torch.optim as optim
import torch.nn as nn
# +
import matplotlib.pyplot as plt
# %matplotlib inline
from PIL import Image
import numpy as np
# -
import cv2
from torchvision import models, transforms
vgg = models.vgg19(pretrained=True).features
for param in vgg.parameters():
param.requires_grad_(False)
def load_image(image_path, max_size=400, shape=None):
image = Image.open(image_path).convert("RGB")
if max(image.size)>max_size:
size=max_size
else:
size = max(image.size)
if shape is not None:
size=shape
tfms = transforms.Compose([
transforms.Resize(size),
transforms.ToTensor(),
transforms.Normalize((.5,.5,.5),(.5,.5,.5))
])
image = tfms(image).unsqueeze(0)
return image
content = load_image("sofi_fam.jpeg")
style = load_image("kandinsky_style.jpg", shape = content.shape[-2:])
import pathlib as p
def im_convert(tensor):
image = tensor.cpu().clone().detach().numpy()
image = image.squeeze()
image = image.transpose(1, 2, 0)
image = image * np.array((0.5, 0.5, 0.5)) + np.array((0.5, 0.5, 0.5))
image = image.clip(0, 1)
return image
fig,(ax1,ax2)= plt.subplots(1,2,figsize=(20,10))
ax1.imshow(im_convert(content))
ax1.axis('off')
ax2.imshow(im_convert(style))
ax2.axis('off')
def get_features(image,model):
layers = {
"0":"conv1_1",
"5":"conv2_1",
"10":"conv3_1",
"19":"conv4_1",
"21":"conv4_2", #content extraction
"28":"conv5_1"
}
features={}
for name, layer in model._modules.items():
image = layer(image)
if name in layers:
features[layers[name]] = image
return features
content_features = get_features(content,vgg)
style_features = get_features(style, vgg)
def gram_matrix(tensor):
    b, d, h, w = tensor.size()           # assumes batch size b == 1
    tensor = tensor.view(d, h * w)       # flatten each channel's spatial dims
    gram = torch.mm(tensor, tensor.t())  # pairwise channel dot products
    return gram
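# The Gram matrix is just the dot product of every pair of flattened channels, which is why it captures texture statistics rather than spatial layout. As a minimal pure-Python illustration of the same computation (a hypothetical 2-channel feature map, no torch):

```python
# Each row is one channel of a feature map, flattened to h*w values.
def gram(features):
    # G[i][j] = dot product of channel i and channel j
    return [[sum(a * b for a, b in zip(ci, cj)) for cj in features]
            for ci in features]

fm = [[1.0, 2.0, 3.0],   # channel 0
      [0.0, 1.0, 0.0]]   # channel 1
print(gram(fm))  # [[14.0, 2.0], [2.0, 1.0]] -- symmetric by construction
```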
style_grams = {layer:gram_matrix(style_features[layer]) for layer in style_features}
style_weights = {"conv1_1":1.0,
"conv2_1":.75,
"conv3_1":.2,
"conv4_1":.2,
"conv5_1":.2}
content_weight=1
style_weight = 1e6
target = content.clone().requires_grad_(True)
show_every=500
optimizer = optim.Adam([target],lr=.003)
steps =4000
height, width, channels = im_convert(target).shape
image_array = np.empty(shape=(300,height,width, channels))
capture_frame = steps/300
counter = 0
for ii in range(1,steps+1):
target_features = get_features(target,vgg)
content_loss = torch.mean((target_features['conv4_2']- content_features['conv4_2'])**2)
style_loss =0
for layer in style_weights:
target_feature = target_features[layer]
target_gram = gram_matrix(target_feature)
style_gram = style_grams[layer]
layer_style_loss = style_weights[layer]* torch.mean((target_gram - style_gram)**2)
_,d,h,w = target_feature.shape
style_loss += layer_style_loss / (d*h*w)
total_loss = content_weight*content_loss + style_weight*style_loss
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
if ii%show_every==0:
print("Total loss ",total_loss.item())
print("Iteration: ",ii)
plt.imshow(im_convert(target))
plt.axis("off")
plt.show()
    # capture_frame = steps/300 is non-integral, so round it down and guard
    # the buffer index to stay within the 300 pre-allocated frames
    if counter < image_array.shape[0] and ii % max(1, int(capture_frame)) == 0:
        image_array[counter] = im_convert(target)
        counter += 1
fig,(ax1,ax2,ax3) = plt.subplots(1,3,figsize = (20,10))
ax1.imshow(im_convert(content))
ax1.axis("off")
ax2.imshow(im_convert(style))
ax2.axis("off")
ax3.imshow(im_convert(target))
ax3.axis("off")
final = Image.fromarray((im_convert(target) * 255).astype(np.uint8))  # convert the tensor back to an image array first
final.save("final.png","PNG")
| AI Painting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/facebookresearch/vissl/blob/v0.1.6/tutorials/Benchmark_Linear_Image_Classification_on_ImageNet_1K_V0_1_6.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="zL8iG7SJ7Hrk"
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
# + [markdown] id="6XzxTZfKwFNo"
# # Benchmark Linear Image Classification on ImageNet-1K
#
# In this tutorial, we look at a simple example of how to use VISSL to run a linear image classification benchmark for [ResNet-50 Torchvision pre-trained model](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L16). This benchmark freezes the model trunk and attaches a linear head on top of the trunk features.
#
# You can make a copy of this tutorial by `File -> Open in playground mode` and make changes there. Please do NOT request access to this tutorial.
#
# **NOTE:** Please ensure your Colab notebook has a GPU available. To ensure this, simply follow: `Edit -> Notebook Settings -> select GPU.`
# + [markdown] id="VohdWhBSw69e"
# ## Install VISSL
#
# Installing VISSL is straightforward. We will install VISSL from source using pip, following the instructions from [here](https://github.com/facebookresearch/vissl/blob/master/INSTALL.md#install-vissl-pip-package). Note, you can also install VISSL in a conda environment or from our conda/pip binaries.
# + id="R5ISg59KTOqU"
# Install pytorch version 1.8
# !pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# install Apex by checking system settings: cuda version, pytorch version, and python version
import sys
import torch
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
print(version_str)
# install apex (pre-compiled with optimizer C++ extensions and CUDA kernels)
# !pip install apex -f https://dl.fbaipublicfiles.com/vissl/packaging/apexwheels/{version_str}/download.html
# # clone vissl repository and checkout latest version.
# !git clone --recursive https://github.com/facebookresearch/vissl.git
# %cd vissl/
# !git checkout v0.1.6
# !git checkout -b v0.1.6
# install vissl dependencies
# !pip install --progress-bar off -r requirements.txt
# !pip install opencv-python
# update classy vision install to commit compatible with v0.1.6
# !pip uninstall -y classy_vision
# !pip install classy-vision@https://github.com/facebookresearch/ClassyVision/tarball/4785d5ee19d3bcedd5b28c1eb51ea1f59188b54d
# install vissl dev mode (e stands for editable)
# !pip install -e .[dev]
# + [markdown] id="u6Fxe3MWxqsI"
# VISSL should be successfully installed by now and all the dependencies should be available.
# + id="Np6atgoOTPrA"
import vissl
import tensorboard
import apex
import torch
# + [markdown] id="IxMXLYLpsJXj"
# ## Download the ResNet-50 weights from Torchvision
#
# We download the weights from the [torchvision ResNet50 model](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L16):
# + id="mv0quZwFsWxs"
# !wget https://download.pytorch.org/models/resnet50-19c8e357.pth -P /content/
# + [markdown] id="J0hng2EPY7pr"
# ## Creating a custom dataset
#
# For the purpose of this tutorial, since we don't have ImageNet on disk, we will create a dummy dataset by copying an image from the COCO dataset into an ImageNet-style folder layout as below:
# + id="5-sy6nD-RfwB"
# !mkdir -p /content/dummy_data/train/class1
# !mkdir -p /content/dummy_data/train/class2
# !mkdir -p /content/dummy_data/val/class1
# !mkdir -p /content/dummy_data/val/class2
# create 2 classes in train and add 5 images per class
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class1/img1.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class1/img2.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class1/img3.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class1/img4.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class1/img5.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class2/img1.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class2/img2.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class2/img3.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class2/img4.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/train/class2/img5.jpg
# create 2 classes in val and add 5 images per class
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class1/img1.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class1/img2.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class1/img3.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class1/img4.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class1/img5.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class2/img1.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class2/img2.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class2/img3.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class2/img4.jpg
# !wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O /content/dummy_data/val/class2/img5.jpg
# + [markdown] id="KPGCiTsXZeW3"
# ## Using the custom data in VISSL
#
# The next step for us is to register the dummy data we created above with VISSL. Registering the dataset involves telling VISSL about the dataset name and the paths for the dataset. For this, we create a simple json file with the metadata and save it to the `configs/config/dataset_catalog.json` file.
#
# **NOTE**: VISSL uses the specific `dataset_catalog.json` under the path `configs/config/dataset_catalog.json`.
# + id="M8Q6LCqaWjl1"
json_data = {
"dummy_data_folder": {
"train": [
"/content/dummy_data/train", "/content/dummy_data/train"
],
"val": [
"/content/dummy_data/val", "/content/dummy_data/val"
]
}
}
# use VISSL's api to save or you can use your custom code.
from vissl.utils.io import save_file
save_file(json_data, "/content/vissl/configs/config/dataset_catalog.json", append_to_json=False)
# + [markdown] id="otN1pB32cBHK"
# Next, we verify that the dataset is registered with VISSL. For that we query VISSL's dataset catalog as below:
# + colab={"base_uri": "https://localhost:8080/"} id="wZBhH-s5bcHd" outputId="0b6b40da-9751-43ec-9766-6e78a1619e35"
from vissl.data.dataset_catalog import VisslDatasetCatalog
# list all the datasets that exist in catalog
print(VisslDatasetCatalog.list())
# get the metadata of dummy_data_folder dataset
print(VisslDatasetCatalog.get("dummy_data_folder"))
# + [markdown] id="YaUMDwMdzYHN"
# ## Training linear classifier on trunk output
#
# VISSL provides yaml configuration files for all benchmark tasks including linear image classification on ImageNet [here](https://github.com/facebookresearch/vissl/tree/master/configs/config/benchmark).
#
#
# VISSL also provides yaml configuration files that reproduce the training of all self-supervised approaches [here](https://github.com/facebookresearch/vissl/tree/master/configs/config/pretrain). For the purpose of this tutorial, we will use a benchmark config that trains the linear classifier on 1 GPU.
#
# VISSL provides a [helper python tool](https://github.com/facebookresearch/vissl/blob/main/tools/run_distributed_engines.py) that allows you to train models based on our configuration system. This tool allows:
# - training and feature extraction, both using VISSL.
# - training on 1 GPU, multi-GPU, or multiple machines using PyTorch DDP or FairScale FSDP.
#
# We are ready to train the linear classifier now. For the purpose of this tutorial, we will use the dummy dataset and train on dummy images. VISSL supports training on a wide range of datasets and allows adding custom datasets; please see the VISSL documentation on how to use them. To train on ImageNet instead, assuming your ImageNet dataset folder path is `/path/to/my/imagenet/folder/`, you can add the following command-line
# input to your training command:
# ```
# config.DATA.TRAIN.DATASET_NAMES=[imagenet1k_folder] \
# config.DATA.TRAIN.DATA_SOURCES=[disk_folder] \
# config.DATA.TRAIN.DATA_PATHS=["/path/to/my/imagenet/folder/train"] \
# config.DATA.TRAIN.LABEL_SOURCES=[disk_folder]
# ```
# + [markdown] id="fM7IigSpONW0"
# The training command looks like:
# + colab={"base_uri": "https://localhost:8080/"} id="6v0HvauIj9S2" outputId="8ea324ad-9c7b-45d2-81d8-339f6efe2a00"
# !python3 tools/run_distributed_engines.py \
# hydra.verbose=true \
# config=test/integration_test/quick_eval_in1k_linear_imagefolder_head.yaml \
# config.DATA.TRAIN.DATA_SOURCES=[disk_folder] \
# config.DATA.TRAIN.LABEL_SOURCES=[disk_folder] \
# config.DATA.TRAIN.DATASET_NAMES=[dummy_data_folder] \
# config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=2 \
# config.DATA.TRAIN.DATA_LIMIT=-1 \
# config.DATA.TEST.DATA_SOURCES=[disk_folder] \
# config.DATA.TEST.LABEL_SOURCES=[disk_folder] \
# config.DATA.TEST.DATASET_NAMES=[dummy_data_folder] \
# config.DATA.TEST.BATCHSIZE_PER_REPLICA=2 \
# config.DATA.TEST.DATA_LIMIT=-1 \
# config.DISTRIBUTED.NUM_NODES=1 \
# config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
# config.CHECKPOINT.DIR="/content/checkpoints" \
# config.MODEL.WEIGHTS_INIT.PARAMS_FILE="/content/resnet50-19c8e357.pth" \
# config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk._feature_blocks." \
# config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME=""
# + [markdown] id="A8fILq7VzyOu"
# And we are done!! We have the linear classifier trained on the trunk output, and the `metrics.json` containing the `top-1` and `top-5` accuracy on the validation set is available at `checkpoints/metrics.json`.
# + colab={"base_uri": "https://localhost:8080/"} id="otUmgl4ms96M" outputId="bf4229ed-bb9b-4ea0-f948-23a3497663d3"
# ls /content/checkpoints/
# + colab={"base_uri": "https://localhost:8080/"} id="vDMLjudpya2I" outputId="195ed6da-af0e-4396-c27d-e84a550a6bed"
# cat /content/checkpoints/metrics.json
# + [markdown] id="DJKSyC0txO4x"
# ## Training linear classifiers on several trunk features
#
# VISSL also supports training linear classifiers on several features of the trunk. For the purpose of this tutorial, we will use [this](https://github.com/facebookresearch/vissl/blob/master/configs/config/test/integration_test/quick_eval_in1k_linear_imagefolder.yaml) config file.
# + [markdown] id="U8hMOAgBxptv"
# Now, let's re-run the previous command with the new config.
# + colab={"base_uri": "https://localhost:8080/"} id="cwEg7GLoxvr5" outputId="a8f0905a-44b0-4c26-cbc1-2c9ce4c9f39f"
# !python3 tools/run_distributed_engines.py \
# hydra.verbose=true \
# config=test/integration_test/quick_eval_in1k_linear_imagefolder.yaml \
# config.DATA.TRAIN.DATA_SOURCES=[disk_folder] \
# config.DATA.TRAIN.LABEL_SOURCES=[disk_folder] \
# config.DATA.TRAIN.DATASET_NAMES=[dummy_data_folder] \
# config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=2 \
# config.DATA.TRAIN.DATA_LIMIT=-1 \
# config.DATA.TEST.DATA_SOURCES=[disk_folder] \
# config.DATA.TEST.LABEL_SOURCES=[disk_folder] \
# config.DATA.TEST.DATASET_NAMES=[dummy_data_folder] \
# config.DATA.TEST.BATCHSIZE_PER_REPLICA=2 \
# config.DATA.TEST.DATA_LIMIT=-1 \
# config.DISTRIBUTED.NUM_NODES=1 \
# config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
# config.CHECKPOINT.DIR="/content/checkpoints_trunk_eval" \
# config.MODEL.WEIGHTS_INIT.PARAMS_FILE="/content/resnet50-19c8e357.pth" \
# config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model._feature_blocks." \
# config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME=""
# + [markdown] id="GvUJcklKyI5F"
# And we are done!! We have linear classifiers trained on the trunk features `res5` and `res5avg`, and the `metrics.json` containing the `top-1` and `top-5` accuracy for each feature is available at `checkpoints_trunk_eval/metrics.json`.
# + colab={"base_uri": "https://localhost:8080/"} id="DpQ1__UqyQgj" outputId="caac81fc-2ca3-48b3-95d5-4436a11a8b7b"
# ls /content/checkpoints_trunk_eval/
# + colab={"base_uri": "https://localhost:8080/"} id="gsvStuaWySlY" outputId="b200dffb-7110-4aaa-9850-c1577a06eecb"
# cat /content/checkpoints_trunk_eval/metrics.json
# + [markdown] id="9xFUcTj00B_a"
# # Loading Pre-trained models in VISSL
#
# VISSL supports Torchvision models out of the box. Generally, for loading any non-VISSL model, one needs to correctly set the following configuration options:
#
# ```yaml
# WEIGHTS_INIT:
# # path to the .torch weights files
# PARAMS_FILE: ""
# # name of the state dict. checkpoint = {"classy_state_dict": {layername:value}}. Options:
# # 1. classy_state_dict - if model is trained and checkpointed with VISSL.
# # checkpoint = {"classy_state_dict": {layername:value}}
# # 2. "" - if the model_file is not a nested dictionary for model weights i.e.
# # checkpoint = {layername:value}
# # 3. key name that your model checkpoint uses for state_dict key name.
# # checkpoint = {"your_key_name": {layername:value}}
# STATE_DICT_KEY_NAME: "classy_state_dict"
# # specify what layer should not be loaded. Layer names with this key are not copied
# # By default, set to BatchNorm stats "num_batches_tracked" to be skipped.
# SKIP_LAYERS: ["num_batches_tracked"]
# ####### If loading a non-VISSL trained model, set the following two args carefully #########
# # to make the checkpoint compatible with VISSL, if you need to remove some names
# # from the checkpoint keys, specify the name
# REMOVE_PREFIX: ""
# # In order to load the model (if not trained with VISSL) with VISSL, there are 2 scenarios:
# # 1. If you are interested in evaluating the model features and freeze the trunk.
# # Set APPEND_PREFIX="trunk.base_model." This assumes that your model is compatible
# # with the VISSL trunks. The VISSL trunks start with "_feature_blocks." prefix. If
# # your model doesn't have these prefix you can append them. For example:
# # For TorchVision ResNet trunk, set APPEND_PREFIX="trunk.base_model._feature_blocks."
# # 2. where you want to load the model simply and finetune the full model.
# # Set APPEND_PREFIX="trunk."
# # This assumes that your model is compatible with the VISSL trunks. The VISSL
# # trunks start with "_feature_blocks." prefix. If your model doesn't have these
# # prefix you can append them.
# # For TorchVision ResNet trunk, set APPEND_PREFIX="trunk._feature_blocks."
# # NOTE: the prefix is appended to all the layers in the model
# APPEND_PREFIX: "trunk._feature_blocks."
# ```
#
# **NOTE:** The above configuration will only load the TRUNK of a torchvision model. If you wish to load the HEAD and TRUNK of a torchvision model, you will have to convert the torchvision model to a VISSL supported checkpoint.
| tutorials/Benchmark_Linear_Image_Classification_on_ImageNet_1K_V0_1_6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import RPi.GPIO as GPIO # Import Raspberry Pi GPIO library
from time import sleep # Import the sleep function from the time module
GPIO.cleanup()
GPIO.setwarnings(False) # Ignore warnings for now
GPIO.setmode(GPIO.BCM)  # Use Broadcom (BCM) pin numbering
lst = (2,3,4,5,6,7,8,9,10,12,11,13,14,15,16,17,18,19,20,21,23,22,24,25,26,27)
GPIO.setup(lst, GPIO.OUT, initial=GPIO.HIGH) # Set every pin in lst as an output, initially HIGH
# -
channel = 21  # hypothetical example pin (BCM numbering); the original left this undefined
GPIO.setup(channel, GPIO.OUT, initial=GPIO.HIGH)
from uptime import uptime
uptime()
| bottle_recognition/tries/test_GPIO.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (dd_1)
# language: python
# name: pycharm-89f95e86
# ---
# ### What's New in Python 3.6
# Here are just a few highlights.
#
# For the full monty (python), read it here:
#
# `https://docs.python.org/3/whatsnew/3.6.html`
# I will create some additional videos looking at each of these in more detail.
# #### Dictionaries maintain sort order
# Dictionaries will maintain their key order based on when items were inserted.
# In 3.6 this is an implementation detail: the behavior happens because of the new dict implementation, but it is not guaranteed, in the sense that a future release could change it. However, there is official confirmation from GvR that starting in 3.7 it will be guaranteed.
# If you use the `pprint` module, be careful: it sorts the dictionary keys lexicographically before printing, and this will still be the case in 3.7 (after much debate over it in the Python dev community!)
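# For example, a dict built in 3.6 iterates in the order its keys were inserted:

```python
d = {}
d['zebra'] = 1
d['apple'] = 2
d['mango'] = 3
print(list(d))  # ['zebra', 'apple', 'mango'] -- insertion order, not sorted
```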
# #### Preserving Order of **kwargs in Function Arguments
# This is actually unrelated to the dict key order preservation.
# The order in which `**kwarg` arguments are passed to a function is retained when you iterate the `kwarg` parameter inside the called function.
#
# This is actually a big deal. I'll show you how this allows us to easily create a named tuple factory function with default values for example.
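# As a quick sketch of that idea (a hypothetical factory; the name and fields here are made up), the preserved `**kwargs` order lets the keyword arguments define both the field order and the defaults:

```python
from collections import namedtuple

def defaulted_namedtuple(type_name, **fields):
    # Field order follows the order the keyword arguments were passed in,
    # which is only reliable thanks to the 3.6 **kwargs ordering guarantee.
    nt = namedtuple(type_name, fields.keys())
    nt.__new__.__defaults__ = tuple(fields.values())
    return nt

Point = defaulted_namedtuple('Point', x=0, y=0, color='red')
print(Point())          # Point(x=0, y=0, color='red')
print(Point(10, y=20))  # Point(x=10, y=20, color='red')
```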
# #### Underscores in Numeric Literals
# This one is also really nice to help readability. You can now use underscores in numeric literals.
#
# For example:
1_000_000
0x_FFFF_FFFF
# #### f-Strings
# A much simpler string interpolation. You can embed Python expressions directly into strings (including multi-line) as well as the usual string formatting directives just by preceding the string with an `f` or `F`.
#
# For example:
numerator = 10
denominator = 3
response = f'{numerator}/{denominator} = {numerator / denominator:0.3f}'
print(response)
# #### Type Annotations
# Basically allows us to provide type information to function arguments, class variables, etc.
#
# More info in PEP 484 and PEP 526.
# According to PEP 484:
#
# "It should also be emphasized that **Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention**."
# Tools that make use of these type annotations include `mypy`, `PyCharm`. Probably others too.
# Here's an example of type annotations:
from typing import List
def squares(l: List[int]) -> List[int]:
return [e ** 2 for e in l]
my_list: List[int] = [1, 2, 3, 4]
my_list, squares(my_list)
# But Python does not use those annotations - here I am passing a dictionary to the function (with integer keys so the function still works properly). Obviously dictionaries are not lists!
d = {1: 'a', 2: 'b', 3:'c', 4:'d'}
squares(d)
# The type annotations **do not affect** Python itself - it is simply a set of syntactic additions that allows **external** tools such as `mypy` or `PyCharm` to perform static type checking.
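# Under the hood, the annotations are simply stored in the function's `__annotations__` attribute; that is what tools like `mypy` and `PyCharm` read (redefining the same `squares` as above for a self-contained check):

```python
from typing import List

def squares(l: List[int]) -> List[int]:
    return [e ** 2 for e in l]

# The interpreter records the annotations but never enforces them.
print(squares.__annotations__)
```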
# #### Asynchronous Enhancements
# This is beyond the scope of Part 1, but really useful additions worth at least mentioning have been added to the Python language to better support asynchronous programming (I'll cover that in detail in subsequent parts of the course series - most likely Part 4)
# #### Plus a LOT more
# There's a lot more to really love in 3.6, and you can read all about it here:
#
# `https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep526`
| dd_1/Part 1/Section 10 - Extras/01 - Python 3.6 Highlights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="w35zPOcMyNwJ" outputId="9a9f7a32-b1c7-499f-c28a-dff8f26528df"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + id="ph-qNIaajXGT"
#drive.flush_and_unmount(timeout_ms=24)
# + id="9xc7MvTxyQd7"
import numpy as np
import pandas as pd
import pickle
import random
import time
import os
#os.environ["OPENCV_IO_MAX_IMAGE_PIXELS"] = pow(2,40).__str__()
import cv2
from tqdm import tqdm
import tensorflow as tf
from tensorflow.python.keras import Sequential
from tensorflow.keras import layers, optimizers
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, LearningRateScheduler
from IPython.display import display
from tensorflow.keras import backend as K
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
#from keras import optimizers  # shadows tensorflow.keras optimizers imported above; avoid mixing keras and tf.keras
#from sklearn.metrics import classification_report, confusion_matrix
import sklearn
import seaborn as sn
from keras.callbacks import CSVLogger, LambdaCallback
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# + id="LLy6y_e1yQhC"
Dataset_Name = "Bark_101"
base_dir = 'drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/'
work_dir = "drive/My Drive/Texture/Bark-101-anonymized/Records/"
#dataset_dir = "drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/MK/D2/"
train_dir = os.path.join(base_dir, 'train')
test_dir = os.path.join(base_dir, 'test')
#data_instance = 64 # 64 256
color_type = 'rgb' # rgb, grayscale
# + id="LnYytOrbKGMH"
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
# + id="3mgmPRJEKk3r"
BATCH_SIZE = 16
# + colab={"base_uri": "https://localhost:8080/"} id="rga2byZPKPQt" outputId="681d8b5e-bc04-4500-dc0d-597a7aa8c43a"
train_generator = train_datagen.flow_from_directory(
train_dir,
#target_size=(800, 804), # target images are automatically resized to (256, 256)
batch_size=BATCH_SIZE,
color_mode=color_type, # grayscale, rgb
class_mode='categorical')
# + colab={"base_uri": "https://localhost:8080/"} id="uwy6M8sgK1KT" outputId="1a556563-4e45-4d30-d2fe-148ca5953bae"
num_classes = train_generator.num_classes
total_train_data = train_generator.samples
print(f"total_train_data = {total_train_data}")
print(f"train_generator.image_shape = {train_generator.image_shape}")
print(f"num_classes = {num_classes}")
# + colab={"base_uri": "https://localhost:8080/"} id="YsvrZgrALXMl" outputId="39f15287-c5a9-431e-d1f1-8b120f25ea39"
test_generator = test_datagen.flow_from_directory(
test_dir,
#target_size=(800, 804), # target images are automatically resized to (256, 256)
batch_size=BATCH_SIZE,
shuffle=False,
color_mode=color_type, # grayscale, rgb
class_mode='categorical')
# + colab={"base_uri": "https://localhost:8080/"} id="HxU0D6TuOOOL" outputId="780bbf15-8518-4904-fb52-8c320f833577"
total_test_data = test_generator.samples
print(f"total_test_data = {total_test_data}")
# + id="roidu5RmFRJq"
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="XRwynF9xPBWy" outputId="f62b2552-138d-41e0-dbc2-cfd96c2db68a"
# DenseNet121 ResNet101 ResNet50 DenseNet201 InceptionV3 Xception NASNetLarge ResNet152V2 InceptionResNetV2 EfficientNetB7
impl_type = "TransferLearning3D.DenseNet201" # TransferLearning3D
dataset = f"{Dataset_Name}.Original.{color_type}.{train_generator.image_shape[:2]}.DataFlow" # +str(img_size)+"p"
dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="rLVsewD-O3gl" outputId="dd9dddb0-676e-4fde-9c7b-84826028551b"
'''
count_no_improvement = 0
epoch_initial = True
#'''
# + colab={"base_uri": "https://localhost:8080/"} id="OxppPm7hO3l8" outputId="1ee78971-c4c1-4a92-dd5c-bc1c48b6cf9c"
#NUM_NEURONS = 16
#NUM_LAYERS = 3
#BATCH_SIZE = 16 # 10
NUM_EPOCHS = 300
epochs_completed = 0
LEARNING_RATE = 0.00001
EPSILON = 1e-4
early_stop_after_epochs = 5
DROPOUT = 0.8 # 0.5 0.0
pad = 0
LOSS = 'categorical_crossentropy'
ACTIVATION_FUNCTION = 'elu' # relu sigmoid elu
FINAL_ACTIVATION_FUNCTION = 'softmax'
validation_split = 0.1
kernel_size=(1,1)
pointTrainableAfter = "allDefault" # "allDefault" 160 170
OPTIMIZER = "Adam" # Adam SGD RMSProp
init_weights = "imagenet" # "imagenet" None
modelExt = ".Dense.1024.1024.2048" # .Dense.128.256.512, .512.512.512 .Dense.512.512.512.512.Res
l2_val = 0.01
# # +"_kernel"+str(kernel_size)+"_lr"+str(LEARNING_RATE)+"_batch"+str(BATCH_SIZE)+"_epochs"+str(NUM_EPOCHS)
#checkpointer_name = "weights_"+dataset+"_"+impl_type+"_nLayers"+str(NUM_LAYERS)+"_nNeurons"+str(NUM_NEURONS)+".hdf5"
ext = f".Flatten.l2.{str(l2_val)}.run_1" # run_1 run_2 .DropAfter .momentum0.9
#'''
checkpointer_name = "weights."+dataset+".pad"+str(pad)+"."+impl_type+".wInit."+str(init_weights)+".TrainableAfter."+str(pointTrainableAfter)+\
modelExt+".actF."+ACTIVATION_FUNCTION+".opt."+OPTIMIZER+".drop."+str(DROPOUT)+".batch"+str(BATCH_SIZE)+ext+".hdf5"
log_name = "log."+checkpointer_name[8:-5]+".log"
print('checkpointer_name =', checkpointer_name)
print('log_name =', log_name)
#'''
# + colab={"base_uri": "https://localhost:8080/"} id="hcB3WVaWO6BV" outputId="534d5603-5871-4dc6-f32b-cf900f9d7e4d"
train_generator.image_shape
# + colab={"base_uri": "https://localhost:8080/"} id="XyK6ZbJOOsjV" outputId="5dc7b660-7812-40a3-8230-0d3cca0f16f4"
#'''
#base_model=DenseNet121(weights=None, include_top=False, input_shape=np_train_dataset2.shape[1:]) # `None` (random initialization)
#base_model=ResNet152V2(weights=None, include_top=False, input_shape=np_train_dataset2.shape[1:])
# ResNet152V2 ResNet50 ResNet101 ResNet152 DenseNet201 InceptionV3 Xception NASNetLarge 'imagenet' ResNet152V2 DenseNet121
#inputs = Input(final_train_imageset.shape[1:])
#x = ZeroPadding2D(padding=(pad,pad))(inputs)
#base_model=tf.keras.applications.ResNet50(weights=init_weights, include_top=False, input_tensor=x)
base_model=tf.keras.applications.DenseNet201(weights=init_weights, include_top=False, input_shape=train_generator.image_shape)
x=base_model.output
x = Flatten()(x)
#'''
x = Dense(1024, kernel_regularizer=tf.keras.regularizers.l2(l2_val), activation=ACTIVATION_FUNCTION)(x)
#x_copy = x
x = Dropout(DROPOUT)(x)
x = Dense(1024, kernel_regularizer=tf.keras.regularizers.l2(l2_val), activation=ACTIVATION_FUNCTION)(x)
x = Dropout(DROPOUT)(x)
x = Dense(2048, kernel_regularizer=tf.keras.regularizers.l2(l2_val), activation=ACTIVATION_FUNCTION)(x)
x = Dropout(DROPOUT)(x)
#x = Add()([x,x_copy])
#'''
outputs=Dense(num_classes,activation='softmax')(x)
model=Model(inputs=base_model.input,outputs=outputs)
model.summary()
#'''
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="WgomlDGDqn6-" outputId="7300f07b-8df4-4052-8e42-02731c2feec3"
'''
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=True, show_dtype=False,
show_layer_names=True, rankdir='TB', expand_nested=True, dpi=64
)
#'''
# + colab={"base_uri": "https://localhost:8080/"} id="ozx7Z-ZiUE-2" outputId="fa79edac-a77d-4bb0-fd12-8dcbc21710df"
count_trainable = 0
count_non_trainable = 0
#'''
if pointTrainableAfter == "allDefault":
for layer in model.layers:
layer.trainable=True
count_trainable += 1
elif pointTrainableAfter > 0:
for layer in model.layers[:pointTrainableAfter]: # [:-pointTrainableAfter]
layer.trainable=False
count_non_trainable += 1
for layer in model.layers[pointTrainableAfter:]: # [-pointTrainableAfter:]
layer.trainable=True
count_trainable += 1
#'''
'''
for layer in model.layers:
layer.trainable=True
count_trainable += 1
#'''
print("count_non_trainable =", count_non_trainable)
print("count_trainable =", count_trainable)
print("Total number of layers =", count_non_trainable+count_trainable)
# + colab={"base_uri": "https://localhost:8080/", "height": 72} id="aqwYD5TGPxyV" outputId="843aa7ed-0eaf-4227-ebd3-f81fda4f0fcb"
'''
checkpointer_name = "weights."+dataset+".pad"+str(pad)+"."+impl_type+".wInit."+str(init_weights)+".TrainableAfter."+str(pointTrainableAfter)+\
modelExt+".opt."+OPTIMIZER+".drop."+str(DROPOUT)+".batch"+str(BATCH_SIZE)+ext+".hdf5"
log_name = "log."+checkpointer_name[8:-5]+".log"
print('checkpointer_name =', checkpointer_name)
print('log_name =', log_name)
#'''
# + colab={"base_uri": "https://localhost:8080/"} id="3T7w_lC1QCPh" outputId="633bee94-a60c-4af3-fbb3-1e2d404c1f34"
# "RMSProp" "SGD" "Adam" "Adamax" "Adadelta" "Adagrad" "SGD"
#optimizer = tf.keras.optimizers.RMSprop(lr = LEARNING_RATE, epsilon=EPSILON)
if OPTIMIZER == "RMSProp":
optimizer = tf.keras.optimizers.RMSprop(lr = LEARNING_RATE, epsilon=EPSILON)
elif OPTIMIZER == "Adam":
optimizer = tf.keras.optimizers.Adam(lr = LEARNING_RATE, epsilon=EPSILON, beta_1=0.9, beta_2=0.999)
elif OPTIMIZER == "Adamax":
optimizer = tf.keras.optimizers.Adamax(lr = LEARNING_RATE, epsilon=EPSILON, beta_1=0.9, beta_2=0.999)
elif OPTIMIZER == "Adadelta":
optimizer = tf.keras.optimizers.Adadelta(lr = LEARNING_RATE, epsilon=EPSILON, rho=0.95)
elif OPTIMIZER == "Adagrad":
optimizer = tf.keras.optimizers.Adagrad(lr = LEARNING_RATE, epsilon=EPSILON, initial_accumulator_value=0.1)
elif OPTIMIZER == "SGD":
optimizer = tf.keras.optimizers.SGD(lr = LEARNING_RATE, momentum=0.9)
model.compile(
#optimizer=OPTIMIZER,
optimizer=optimizer,
loss=LOSS,
metrics=['accuracy']
)
print("OPTIMIZER =", OPTIMIZER)
# + id="OdySVEG3QCpv"
# save the best model with least validation loss
checkpointer = ModelCheckpoint(filepath = work_dir+checkpointer_name,
#monitor='val_accuracy',
monitor='val_loss',
save_weights_only=False,
mode='auto',
verbose = 1, # 0 = silent, 1 = progress bar, 2 = one line per epoch
save_best_only =False
)
checkpointer_best = ModelCheckpoint(filepath = work_dir+"best_"+checkpointer_name,
monitor='val_loss',
save_weights_only=False,
mode='auto',
verbose = 1,
save_best_only = True
)
early_stopping = EarlyStopping(monitor='loss', patience=early_stop_after_epochs)
# + colab={"base_uri": "https://localhost:8080/"} id="Fq3iXmYXQHNL" outputId="7b2cba1c-7016-46fc-b76a-7cb4097dea28"
'''
if 'count_no_improvement' not in globals():
count_no_improvement = 0
print("count_no_improvement =", count_no_improvement)
#'''
#'''
count_no_improvement = 0
epoch_initial = False
#'''
min_delta = 0.0009
print("count_no_improvement =", count_no_improvement)
def checkBestPerformance(epoch, logs):
save_filepath = work_dir+"best_"+checkpointer_name
global epoch_initial
if epoch_initial == True:
epoch_initial = False
model.save(filepath = save_filepath)
print(". Model saved!")
elif epoch_initial == False:
global count_no_improvement
log_data = pd.read_csv(work_dir+log_name, sep=',', usecols=['val_loss', 'val_accuracy'], engine='python')
min_val_loss = float(str(min(log_data.val_loss.values))[:6])
max_val_acc = float(str(max(log_data.val_accuracy.values))[:6])
current_val_acc = float(str(logs['val_accuracy'])[:6])
current_val_loss = float(str(logs['val_loss'])[:6])
if (current_val_loss < min_val_loss) and (abs(current_val_loss-min_val_loss) >= min_delta):
count_no_improvement = 0
model.save(filepath = save_filepath)
print("\nval_loss decreased from",min_val_loss," to",current_val_loss,"( val_accuracy =",current_val_acc,").")
elif (current_val_loss==min_val_loss) and (current_val_acc>max_val_acc):
count_no_improvement = 0
model.save(filepath = save_filepath)
print("\nval_accuracy increased to", current_val_acc, ".")
else:
count_no_improvement += 1
print(". count_no_improvement =", count_no_improvement)
if count_no_improvement >= early_stop_after_epochs:
global list_callbacks
del list_callbacks, count_no_improvement
#print("count_no_improvement =", count_no_improvement, "... list_callbacks =", list_callbacks)
# + colab={"base_uri": "https://localhost:8080/"} id="YYewyuDiREFZ" outputId="dc20447b-1696-4ba1-83ac-c737e43360e4"
epochs_completed = 0
list_callbacks = []
csv_logger = CSVLogger(work_dir+log_name, separator=',', append=True)
#if 'list_callbacks' in globals():
# del list_callbacks
try:
log_data = pd.read_csv(work_dir+log_name, sep=',', usecols=['epoch'], engine='python')
epochs_completed = log_data.shape[0]
#if epochs_completed > 0:
model = load_model(work_dir+checkpointer_name)
list_callbacks = [checkpointer, LambdaCallback(on_epoch_end=checkBestPerformance), csv_logger]
print("epochs_completed =", epochs_completed)
except Exception as error:
if epochs_completed == 0:
# list_callbacks = [checkpointer, checkpointer_best, csv_logger, early_stopping]
list_callbacks = [checkpointer, LambdaCallback(on_epoch_end=checkBestPerformance), csv_logger]
print("epochs_completed =", epochs_completed)
elif epochs_completed > 0:
print(error)
print('checkpointer_name =', checkpointer_name)
# + colab={"base_uri": "https://localhost:8080/"} id="6JKqrHnwRGGz" outputId="47282e89-b74b-4f8b-d5bb-c2c9da6f5350"
print('checkpointer_name =', checkpointer_name)
print("Previously completed epochs =", epochs_completed)
print("count_no_improvement =", count_no_improvement, "\n")
#'''
try:
start_time = time.time()
history = model.fit(train_generator,
steps_per_epoch=total_train_data // BATCH_SIZE,
shuffle=True,
epochs = NUM_EPOCHS - epochs_completed,
validation_data=test_generator,
validation_steps=total_test_data // BATCH_SIZE,
callbacks=list_callbacks
)
elapsed_time = time.time() - start_time
print("\nTime elapsed: ", elapsed_time)
except Exception as error:
print("\nError:", error)
#'''
# + id="UJsgsZFzQCs2"
# weights.Bark_101.Original.rgb.(256, 256).DataFlow.pad0.TransferLearning3D.DenseNet201.wInit.imagenet.TrainableAfter.allDefault.Dense.1024.1024.2048.actF.elu.opt.Adam.drop.0.5.batch16.Flatten.l2.0.001.run_1.hdf5
# weights.Bark_101.Original.rgb.(256, 256).DataFlow.pad0.TransferLearning3D.DenseNet201.wInit.imagenet.TrainableAfter.allDefault.Dense.1024.1024.2048.actF.elu.opt.Adam.drop.0.8.batch16.Flatten.l2.0.01.run_1.hdf5
# + id="l_xKnoFvFQ4F"
'''
Record: Bark_101_impl_1_Original_RGB_Dense201_Custom_withImageNet_DataFlow: (41.9%)
;
---
Test Acc: 0.4734, Test Loss: 5.7897: ep77, weights.Bark_101.Original.rgb.(256, 256).DataFlow.pad0.TransferLearning3D.DenseNet201.wInit.imagenet.TrainableAfter.allDefault.Dense.1024.1024.2048.actF.elu.opt.Adam.drop.0.5.batch16.Flatten.l2.0.001.run_1.hdf5
#'''
csv_logger = CSVLogger(work_dir+log_name, separator=',', append=True)
log_data = pd.read_csv(work_dir+log_name, sep=',', usecols=['epoch'], engine='python')
epochs_completed = log_data.shape[0]
result = model.evaluate(test_generator, steps=total_test_data // BATCH_SIZE)
print("Test Acc: {}, Test Loss: {}: ep{}, {}\n".format(round(result[1],4), round(result[0],4), epochs_completed, checkpointer_name))
# + id="IQM7JieEFQ1G"
#checkpointer_name = "weights.Fashion.DenseNet121.wInit.None.TrainableAfterallDefault.opt.SGD.drop.0.0.batch32.Flatten.run_1.hdf5"
model_loaded = load_model(work_dir+"best_"+checkpointer_name)
print("Loaded "+work_dir+"best_"+checkpointer_name+".")
# + id="chvVsOEgRfsO"
'''
Record: Bark_101_impl_1_Original_RGB_Dense201_Custom_withImageNet_DataFlow: (41.9%)
;
---
Test Acc: 0.4906, Test Loss: 5.7325: ep77, best_weights.Bark_101.Original.rgb.(256, 256).DataFlow.pad0.TransferLearning3D.DenseNet201.wInit.imagenet.TrainableAfter.allDefault.Dense.1024.1024.2048.actF.elu.opt.Adam.drop.0.5.batch16.Flatten.l2.0.001.run_1.hdf5
#'''
'''
csv_logger = CSVLogger(work_dir+log_name, separator=',', append=True)
log_data = pd.read_csv(work_dir+log_name, sep=',', usecols=['epoch'], engine='python')
epochs_completed = log_data.shape[0]
#'''
result2 = model_loaded.evaluate(test_generator, steps=total_test_data // BATCH_SIZE)
#print("nLayers: {}, nNeurons: {}, DROPOUT: {}, Test Acc: {}, Test Loss: {}".format(NUM_LAYERS, NUM_NEURONS, DROPOUT, round(result2[1], 4), round(result2[0], 4)))
print("Test Acc: {}, Test Loss: {}: ep{}, {}\n".format(round(result2[1],4), round(result2[0],4), epochs_completed, "best_"+checkpointer_name))
# + id="z1GzwpODRlRf"
import csv
with open(work_dir+'Records.csv', "a") as fp:
wr = csv.writer(fp, dialect='excel')
try:
wr.writerow([checkpointer_name[8:-5], round(result2[1], 4), round(result2[0], 4), elapsed_time])
except:
wr.writerow([checkpointer_name[8:-5], round(result2[1], 4), round(result2[0], 4)])
print("Saved results.")
# + id="Dv35-hm1Rfv5"
# + id="r41Tx24EuA8A"
# Confusion Matrix and Classification Report
#'''
Y_pred = model_loaded.predict_generator(test_generator, verbose=1)
#'''
#'''
save_predictions_filename = f"Y_pred.{checkpointer_name[8:-5]}"
np.save(f"{work_dir}{save_predictions_filename}", Y_pred, allow_pickle=True)
print(f"Saved: {work_dir}{save_predictions_filename}")
#'''
# + id="RCOqGtOtSVG5"
'''
save_predictions_filename = f"Y_pred.{checkpointer_name[8:-5]}"
np.save(f"{work_dir}{save_predictions_filename}", Y_pred, allow_pickle=True)
print(f"Saved: {work_dir}{save_predictions_filename}")
#'''
# + id="f5DLKM2YTGiR"
#Y_pred_loaded = np.load(f"{work_dir}{save_predictions_filename_2}.npy", allow_pickle=True)
Y_pred_loaded = np.load(f"{work_dir}{save_predictions_filename}.npy", allow_pickle=True)
print(f"Y_pred_loaded.shape = {Y_pred_loaded.shape}")
# + id="sKVblYYLTR8O"
# + id="vPVcI8W8uVxT"
y_pred = np.argmax(Y_pred_loaded, axis=1)
# + id="EWjMUxOC_idw"
y_true = test_generator.classes
# + id="9AnENPzD_XCi"
list_class_names_in_generator = list(test_generator.class_indices.keys())
list_class_names_in_generator[:5]
# + id="g9ECGTlXdLEY"
len(list_class_names_in_generator)
# + id="7c1QqkvSB4rw"
list_y_true_rearranged = []
list_y_pred_rearranged = []
for true_class,pred_class in zip(y_true,y_pred):
#print(f"true_class = {true_class}; pred_class = {pred_class}")
#y_true_rearranged = int(list_class_names_in_generator[true_class][5:])
#y_pred_rearranged = int(list_class_names_in_generator[pred_class][5:])
y_true_rearranged = int(list_class_names_in_generator[true_class])
y_pred_rearranged = int(list_class_names_in_generator[pred_class])
list_y_true_rearranged.append(y_true_rearranged)
list_y_pred_rearranged.append(y_pred_rearranged)
# + id="twUaOkzO_lYu"
np_y_true_rearranged = np.array(list_y_true_rearranged)
np_y_pred_rearranged = np.array(list_y_pred_rearranged)
print(f"np_y_true_rearranged.shape = {np_y_true_rearranged.shape}")
print(f"np_y_pred_rearranged.shape = {np_y_pred_rearranged.shape}")
print(f"np_y_true_rearranged: {np_y_true_rearranged}")
print(f"np_y_pred_rearranged: {np_y_pred_rearranged}")
# + id="LxgBOkM5Es08"
print(f"np_y_true_rearranged.shape = {np_y_true_rearranged.shape}\n")
index = -5
print(f"y_true[{index}:] = {y_true[index:]}")
print(f"y_pred[{index}:] = {y_pred[index:]}\n")
print(f"np_y_true_rearranged[{index}:] = {np_y_true_rearranged[index:]}")
print(f"np_y_pred_rearranged[{index}:] = {np_y_pred_rearranged[index:]}\n")
print(f"np.unique(np_y_true_rearranged) = {np.unique(np_y_true_rearranged)}")
print(f"np.unique(np_y_pred_rearranged) = {np.unique(np_y_pred_rearranged)}")
# + id="yo2maYGREHa1"
# + id="ATQ8O5enuJcA"
conf_matrix = sklearn.metrics.confusion_matrix(np_y_true_rearranged, np_y_pred_rearranged)
print(f"Confusion Matrix:\n{conf_matrix}")
# + id="iWt92DPh8BIG"
#plt.figure(figsize = (30,30))
plt.matshow(conf_matrix)
# + id="qYbVekfx9Qk-"
df_conf_matrix = pd.DataFrame(conf_matrix, index = [f"Class {i+1}" for i in range(num_classes)],
columns = [f"Class {i+1}" for i in range(num_classes)])
# + id="bhhb_ykf7mGr"
title = "Confusion matrix for "+dataset+" "+impl_type+"\n"
plt.figure(figsize = (30,15))
plt.title(title)
sn.heatmap(df_conf_matrix, annot=True)
img_path = work_dir+'Images/conf_matrix_'+checkpointer_name[8:-5]+'.png'
plt.savefig(img_path, dpi=600)
print(f"img_path = {img_path}")
# + id="OyvlKAMW-q4-"
# + id="hgY4v21DyQwL"
# Confusion Matrix and Classification Report
'''
Y_pred = model_loaded.predict_generator(final_test_imageset, len(final_test_imageset))
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
print(sklearn.metrics.confusion_matrix(np_test_label, y_pred))
#'''
# + id="U5j7b3KcRvwj"
# Precision [TP/(TP+FP)] = The ratio of correctly predicted positive observations to the total predicted positive observations.
# Recall (Sensitivity) [TP/(TP+FN)] = The ratio of correctly predicted positive observations to all observations in the actual positive class.
# F1 score [2*(Recall * Precision) / (Recall + Precision)] = The harmonic mean of Precision and Recall.
# Support = The number of samples of the true response that lie in that class.
'''
print('Classification Report:')
print(sklearn.metrics.classification_report(test_generator.classes, y_pred))
#'''
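# A quick toy check of the metric definitions above (made-up labels, pure Python; the real report below uses sklearn on the generator's classes):

```python
# Illustrative only: y_true_toy / y_pred_toy are invented binary labels.
y_true_toy = [1, 1, 1, 0, 0, 1]
y_pred_toy = [1, 0, 1, 1, 0, 1]

# Count true positives, false positives, false negatives
tp = sum(t == 1 and p == 1 for t, p in zip(y_true_toy, y_pred_toy))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true_toy, y_pred_toy))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true_toy, y_pred_toy))

precision = tp / (tp + fp)                          # 3 / 4 = 0.75
recall = tp / (tp + fn)                             # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # 0.75
print(precision, recall, f1)
```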
# + id="jTwSA-kRYavk"
# + id="AaZZkLXW6Vb8"
test_generator.class_indices.keys()
# + id="ceGRitu9RvzV"
#'''
print('Classification Report')
print(sklearn.metrics.classification_report(np_y_true_rearranged, np_y_pred_rearranged, target_names=test_generator.class_indices.keys()))
#'''
# + id="ujLQjTf2Rv11"
log_data = pd.read_csv(work_dir+log_name, sep=',', engine='python')
# + id="To55jgGSRv4a"
# Getting the model history keys
#history.history.keys()
log_data.head()
# + id="aytAYSJ4Rv7T"
# plot the training artifacts
title = "Val loss for "+dataset+" "+impl_type+"\n"
plt.plot(log_data['loss'])
plt.plot(log_data['val_loss'])
plt.title(title)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train_loss','val_loss'], loc = 'best')
plt.grid(b=True, which='major', axis='both')
img_path = work_dir+'Images/vLoss_'+checkpointer_name[8:-5]+'.png'
plt.savefig(img_path, dpi=600)
plt.show()
print('img_path =', img_path)
# + id="<KEY>"
title = "Val acc for "+dataset+" "+impl_type+"\n"
plt.plot(log_data['accuracy'])
plt.plot(log_data['val_accuracy'])
plt.title(title)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_accuracy','val_accuracy'], loc = 'best')
plt.grid(b=True, which='major', axis='both')
img_path = work_dir+'Images/vAcc_'+checkpointer_name[8:-5]+'.png'
plt.savefig(img_path, dpi=600)
plt.show()
print('img_path =', img_path)
# + id="w4msuVKMSk8E"
| Bark 101/Bark_101_impl_1_1_Original_RGB_Dense201_Custom_withImageNet_DataFlow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bayesian Network
from IPython.display import Image
# ## Bayesian Models
# 1. What are Bayesian Models
# 2. Independencies in Bayesian Networks
# 3. How a Bayesian Model encodes the Joint Distribution
# 4. How to do inference with Bayesian Models
# 5. Types of methods for inference
# ### 1. What are Bayesian Models
# A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are mostly used when we want to represent causal relationship between the random variables. Bayesian Networks are parameterized using Conditional Probability Distributions (CPD). Each node in the network is parameterized using $P(node | Pa(node))$ where $Pa(node)$ represents the parents of node in the network.
#
# We can take the example of the student model:
Image('../images/2/student_full_param.png')
# In pgmpy we define the network structure and the CPDs separately and then associate them with the structure. Here's an example for defining the above model:
# +
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
# Defining the model structure. We can define the network by just passing a list of edges.
model = BayesianModel([('D', 'G'), ('I', 'G'), ('G', 'L'), ('I', 'S')])
# Defining individual CPDs.
cpd_d = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]])
cpd_i = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]])
# The representation of CPD in pgmpy is a bit different than the CPD shown in the above picture. In pgmpy the columns
# are the evidence variables and the rows are the states of the variable. So the grade CPD is represented like this:
#
# +---------+---------+---------+---------+---------+
# | diff | intel_0 | intel_0 | intel_1 | intel_1 |
# +---------+---------+---------+---------+---------+
# | intel | diff_0 | diff_1 | diff_0 | diff_1 |
# +---------+---------+---------+---------+---------+
# | grade_0 | 0.3 | 0.05 | 0.9 | 0.5 |
# +---------+---------+---------+---------+---------+
# | grade_1 | 0.4 | 0.25 | 0.08 | 0.3 |
# +---------+---------+---------+---------+---------+
# | grade_2 | 0.3 | 0.7 | 0.02 | 0.2 |
# +---------+---------+---------+---------+---------+
cpd_g = TabularCPD(variable='G', variable_card=3,
values=[[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['I', 'D'],
evidence_card=[2, 2])
cpd_l = TabularCPD(variable='L', variable_card=2,
values=[[0.1, 0.4, 0.99],
[0.9, 0.6, 0.01]],
evidence=['G'],
evidence_card=[3])
cpd_s = TabularCPD(variable='S', variable_card=2,
values=[[0.95, 0.2],
[0.05, 0.8]],
evidence=['I'],
evidence_card=[2])
# Associating the CPDs with the network
model.add_cpds(cpd_d, cpd_i, cpd_g, cpd_l, cpd_s)
# check_model checks for the network structure and CPDs and verifies that the CPDs are correctly
# defined and sum to 1.
model.check_model()
# +
# CPDs can also be defined using the state names of the variables. If the state names are not provided
# like in the previous example, pgmpy will automatically assign names as: 0, 1, 2, ....
cpd_d_sn = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]], state_names={'D': ['Easy', 'Hard']})
cpd_i_sn = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]], state_names={'I': ['Dumb', 'Intelligent']})
cpd_g_sn = TabularCPD(variable='G', variable_card=3,
values=[[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['I', 'D'],
evidence_card=[2, 2],
state_names={'G': ['A', 'B', 'C'],
'I': ['Dumb', 'Intelligent'],
'D': ['Easy', 'Hard']})
cpd_l_sn = TabularCPD(variable='L', variable_card=2,
values=[[0.1, 0.4, 0.99],
[0.9, 0.6, 0.01]],
evidence=['G'],
evidence_card=[3],
state_names={'L': ['Bad', 'Good'],
'G': ['A', 'B', 'C']})
cpd_s_sn = TabularCPD(variable='S', variable_card=2,
values=[[0.95, 0.2],
[0.05, 0.8]],
evidence=['I'],
evidence_card=[2],
state_names={'S': ['Bad', 'Good'],
'I': ['Dumb', 'Intelligent']})
# These defined CPDs can be added to the model. Since the model already has CPDs associated with the variables, it will
# show a warning that pgmpy is now replacing those CPDs with the new ones.
model.add_cpds(cpd_d_sn, cpd_i_sn, cpd_g_sn, cpd_l_sn, cpd_s_sn)
model.check_model()
# -
# We can now call some methods on the BayesianModel object.
model.get_cpds()
# Printing a CPD which doesn't have state names defined.
print(cpd_g)
# Printing a CPD with its state names defined.
print(model.get_cpds('G'))
model.get_cardinality('G')
# ### 2. Independencies in Bayesian Networks
#
# Independencies implied by the network structure of a Bayesian Network can be categorized in 2 types:
#
# 1. __Local Independencies:__ Any variable in the network is independent of its non-descendants given its parents. Mathematically it can be written as: $$ (X \perp NonDesc(X) \mid Pa(X)) $$
# where $NonDesc(X)$ is the set of variables which are not descendants of $X$ and $Pa(X)$ is the set of variables which are parents of $X$.
#
# 2. __Global Independencies:__ For discussing global independencies in Bayesian Networks we need to look at the various network structures possible.
# Starting with the case of 2 nodes, there are only 2 possible ways for it to be connected:
Image('../images/2/two_nodes.png')
# In the above two cases it is fairly obvious that change in any of the node will affect the other. For the first case we can take the example of $difficulty \rightarrow grade$. If we increase the difficulty of the course the probability of getting a higher grade decreases. For the second case we can take the example of $SAT \leftarrow Intel$. Now if we increase the probability of getting a good score in SAT that would imply that the student is intelligent, hence increasing the probability of $i_1$. Therefore in both the cases shown above any change in the variables leads to change in the other variable.
#
# Now, there are four possible ways of connection between 3 nodes:
Image('../images/2/three_nodes.png')
# Now in the above cases we will see the flow of influence from $A$ to $C$ under various cases.
#
# 1. __Causal:__ In the general case when we make any changes in the variable $A$, it will have an effect on variable $B$ (as we discussed above) and this change in $B$ will change the values in $C$. One other possible case can be when $B$ is observed i.e. we know the value of $B$. So, in this case any change in $A$ won't affect $B$ since we already know the value. And hence there won't be any change in $C$ as it depends only on $B$. Mathematically we can say that: $(A \perp C | B)$.
# 2. __Evidential:__ Similarly in this case also observing $B$ renders $C$ independent of $A$. Otherwise when $B$ is not observed the influence flows from $A$ to $C$. Hence $(A \perp C | B)$.
# 3. __Common Evidence:__ This case is a bit different from the others. When $B$ is not observed any change in $A$ reflects some change in $B$ but not in $C$. Let's take the example of $D \rightarrow G \leftarrow I$. In this case if we increase the difficulty of the course the probability of getting a higher grade reduces but this has no effect on the intelligence of the student. But when $B$ is observed let's say that the student got a good grade. Now if we increase the difficulty of the course this will increase the probability of the student to be intelligent since we already know that he got a good grade. Hence in this case $(A \perp C)$ and $( A \not\perp C | B)$. This structure is also commonly known as V structure.
# 4. __Common Cause:__ The influence flows from $A$ to $C$ when $B$ is not observed. But when $B$ is observed, a change in $A$ doesn't affect $C$ since $C$ is only dependent on $B$. Hence here also $( A \perp C | B)$.
#
# Let's now see a few examples of finding the independencies in a network using pgmpy:
# Getting the local independencies of a variable.
model.local_independencies('G')
# Getting all the local independencies in the network.
model.local_independencies(['D', 'I', 'S', 'G', 'L'])
# Active trail: For any two variables A and B in a network if any change in A influences the values of B then we say
# that there is an active trail between A and B.
# In pgmpy active_trail_nodes gives a set of nodes which are affected (i.e. correlated) by any
# change in the node passed in the argument.
model.active_trail_nodes('D')
model.active_trail_nodes('D', observed='G')
# ### 3. How does this Bayesian Network represent the Joint Distribution over the variables?
# Till now we have just been assuming that the Bayesian Network can represent the Joint Distribution without any proof. Now let's see how to compute the Joint Distribution from the Bayesian Network.
#
# From the chain rule of probability we know that:
#
# $P(A, B) = P(A | B) * P(B)$
#
# Now in this case:
#
# $P(D, I, G, L, S) = P(L| S, G, D, I) * P(S | G, D, I) * P(G | D, I) * P(D | I) * P(I)$
#
# Applying the local independence conditions in the above equation we will get:
#
# $P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G| D, I) * P(D) * P(I)$
#
# From the above equation we can clearly see that the Joint Distribution over all the variables is just the product of all the CPDs in the network. Hence encoding the independencies in the Joint Distribution in a graph structure helped us in reducing the number of parameters that we need to store.
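# A pure-Python sanity check of this factorization (a sketch, not part of the original tutorial), using the CPD values defined earlier in this notebook:

```python
# CPD values copied from the TabularCPD definitions above.
p_d = [0.6, 0.4]                      # P(D)
p_i = [0.7, 0.3]                      # P(I)
p_g = [[0.3, 0.05, 0.9, 0.5],         # P(G | I, D); columns ordered
       [0.4, 0.25, 0.08, 0.3],        # (I=0,D=0), (I=0,D=1), (I=1,D=0), (I=1,D=1)
       [0.3, 0.7, 0.02, 0.2]]
p_l = [[0.1, 0.4, 0.99],              # P(L | G)
       [0.9, 0.6, 0.01]]
p_s = [[0.95, 0.2],                   # P(S | I)
       [0.05, 0.8]]

def joint(d, i, g, l, s):
    """P(D, I, G, L, S) as the product of the five CPD entries."""
    return p_l[l][g] * p_s[s][i] * p_g[g][2 * i + d] * p_d[d] * p_i[i]

# One entry of the joint: 0.6 * 0.3 * 0.9 * 0.9 * 0.8 ≈ 0.11664
print(joint(d=0, i=1, g=0, l=1, s=1))

# All 2*2*3*2*2 = 48 joint entries sum to 1, as a distribution must:
total = sum(joint(d, i, g, l, s)
            for d in range(2) for i in range(2) for g in range(3)
            for l in range(2) for s in range(2))
print(round(total, 10))  # 1.0
```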
# ### 4. Inference in Bayesian Models
# Till now we have only discussed representing Bayesian Networks. Now let's see how we can do inference in a Bayesian Model and use it to predict values for new data points in machine learning tasks. In this section we will assume that we already have our model; we will talk about constructing models from data in later parts of this tutorial.
#
# In inference we try to answer probability queries over the network given some other variables. So, we might want to know the probable grade of an intelligent student in a difficult class given that they scored well on the SAT. To compute these values from the Joint Distribution we would have to reduce over the given variables, that is $I = 1$, $D = 1$, $S = 1$, and then marginalize over the other variables, that is $L$, to get $P(G | I=1, D=1, S=1)$.
# But carrying out marginalize and reduce operations on the complete Joint Distribution is computationally expensive, since we need to iterate over the whole table for each operation and the table is exponential in size in the number of variables. But in Graphical Models we exploit the independencies to break these operations into smaller parts, making them much faster.
#
# One of the very basic methods of inference in Graphical Models is __Variable Elimination__.
# #### Variable Elimination
# We know that:
#
# $P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
#
# Now let's say we just want to compute the probability of G. For that we will need to marginalize over all the other variables.
#
# $P(G) = \sum_{D, I, L, S} P(D, I, G, L, S)$
#
# $P(G) = \sum_{D, I, L, S} P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
#
# $P(G) = \sum_D \sum_I \sum_L \sum_S P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
#
# Now since not all the conditional distributions depend on all the variables we can push the summations inside:
#
# $P(G) = \sum_D P(D) \sum_I P(G|D, I) * P(I) \sum_S P(S|I) \sum_L P(L|G)$
#
# So, by pushing the summations inside we have saved a lot of computation because we have to now iterate over much smaller tables.
#
# Let's take an example for inference using Variable Elimination in pgmpy:
from pgmpy.inference import VariableElimination
infer = VariableElimination(model)
g_dist = infer.query(['G'])
print(g_dist)
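# As a sketch (not part of the original tutorial), the same marginal can be verified with plain NumPy: the sums over $L$ and $S$ each evaluate to 1, so $P(G) = \sum_{D,I} P(G|D,I) P(D) P(I)$, using the CPD values defined above.

```python
import numpy as np

p_d = np.array([0.6, 0.4])  # P(D)
p_i = np.array([0.7, 0.3])  # P(I)
# P(G | I, D) reshaped to axes (G, I, D); the flat column order of the CPD
# above is (I=0,D=0), (I=0,D=1), (I=1,D=0), (I=1,D=1), so D varies fastest.
p_g_id = np.array([[0.3, 0.05, 0.9, 0.5],
                   [0.4, 0.25, 0.08, 0.3],
                   [0.3, 0.7, 0.02, 0.2]]).reshape(3, 2, 2)

# Eliminate I and D by summing them out of the product of factors
p_g = np.einsum('gid,i,d->g', p_g_id, p_i, p_d)
print(p_g)  # ≈ [0.362, 0.2884, 0.3496]; should agree with the pgmpy query above
```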
# There can be cases in which we want to compute the conditional distribution let's say $P(G | D=0, I=1)$. In such cases we need to modify our equations a bit:
#
# $P(G | D=0, I=1) = \sum_L \sum_S P(L|G) * P(S| I=1) * P(G| D=0, I=1) * P(D=0) * P(I=1)$
#
# $P(G | D=0, I=1) = P(D=0) * P(I=1) * P(G | D=0, I=1) * \sum_L P(L | G) * \sum_S P(S | I=1)$
#
# In pgmpy we will just need to pass an extra argument in the case of conditional distributions:
print(infer.query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent'}))
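# A hand check of the conditional query above (a sketch using the CPD values defined earlier): with $D$ and $I$ observed, the sums over $L$ and $S$ are each 1 and the $P(D=0)$, $P(I=1)$ factors cancel in the normalization, so $P(G \mid D=Easy, I=Intelligent)$ is just the matching column of the $G$ CPD.

```python
# Columns of the G CPD defined earlier, keyed by (I, D) state indices,
# where D='Easy' -> 0 and I='Intelligent' -> 1.
p_g_cols = {  # (I, D) -> [P(G='A'), P(G='B'), P(G='C')]
    (0, 0): [0.3, 0.4, 0.3],
    (0, 1): [0.05, 0.25, 0.7],
    (1, 0): [0.9, 0.08, 0.02],
    (1, 1): [0.5, 0.3, 0.2],
}
print(p_g_cols[(1, 0)])  # [0.9, 0.08, 0.02]
```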
# #### Predicting values from new data points
# Predicting values for new data points is quite similar to computing the conditional probabilities. We need to query for the variable that we need to predict given all the other features. The only difference is that rather than getting the probability distribution we are interested in getting the most probable state of the variable.
#
# In pgmpy this is known as MAP query. Here's an example:
infer.map_query(['G'])
infer.map_query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent'})
infer.map_query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent', 'L': 'Good', 'S': 'Good'})
# ### 5. Other methods for Inference
# Even though exact inference algorithms like Variable Elimination optimize the inference task, it is still computationally quite expensive in the case of large models. For such cases we can use approximate algorithms like Message Passing Algorithms, Sampling Algorithms etc. We will talk about a few other exact and approximate algorithms in later parts of the tutorial.
| notebooks/2. Bayesian Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="pBQsZEJmubLs"
# <img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
# <br></br>
#
# # Neural Network Framework (Keras)
#
# ## *Data Science Unit 4 Sprint 2 Assignment 3*
#
# ## Use the Keras Library to build a Multi-Layer Perceptron Model on the Boston Housing dataset
#
# - The Boston Housing dataset comes with the Keras library so use Keras to import it into your notebook.
# - Normalize the data (all features should have roughly the same scale)
# - Import the type of model and layers that you will need from Keras.
# - Instantiate a model object and use `model.add()` to add layers to your model
# - Since this is a regression model you will have a single output node in the final layer.
# - Use activation functions that are appropriate for this task
# - Compile your model
# - Fit your model and report its accuracy in terms of Mean Squared Error
# - Use the history object that is returned from model.fit to make graphs of the model's loss or train/validation accuracies by epoch.
# - Run this same data through a linear regression model. Which achieves higher accuracy?
# - Do a little bit of feature engineering and see how that affects your neural network model. (you will need to change your model to accept more inputs)
# - After feature engineering, which model sees a greater accuracy boost due to the new features?
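# One of the bullets above asks for a linear-regression baseline. A minimal sketch via least squares follows; the synthetic `X`/`y` here are stand-ins (the Boston download is environment-dependent), so swap in the normalized `X_train`/`y_train` for the real comparison.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))                 # stand-in for normalized features
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=100)

Xb = np.hstack([np.ones((100, 1)), X])        # prepend an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)    # least-squares fit
mse = np.mean((Xb @ w - y) ** 2)              # same metric the NN reports
print(w.round(2), round(mse, 4))
```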
# + colab_type="code" id="8NLTAR87uYJ-" colab={}
import tensorflow as tf
import tensorflow.keras as k
from tensorflow.keras.datasets import boston_housing  # tf.keras import, to match the tf.keras API used below
import numpy as np
import pandas as pd
# + id="WYw1p_YJaOxx" colab_type="code" colab={}
SEED = np.random.seed(42)
# + id="yBpK0c5YaWTu" colab_type="code" colab={}
(X_train, y_train), (X_test, y_test) = boston_housing.load_data()
X_train = k.utils.normalize(X_train)
X_test = k.utils.normalize(X_test)
# + id="1lOrewRNcLmj" colab_type="code" outputId="7b36a183-9b5a-4d89-d11d-fd4864e21dd6" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_train[:10]
# + id="-NjM_7_Gb9Kd" colab_type="code" outputId="66690666-108d-4019-aeb7-6afaec7e6c57" colab={"base_uri": "https://localhost:8080/", "height": 419}
pd.DataFrame(X_train)
# + [markdown] id="LXhTHW7F65dn" colab_type="text"
# ### Baseline
# + id="Vipj0Awv0YiQ" colab_type="code" colab={}
housing_model = k.Sequential()
# + id="bCzGLmAJ0dNe" colab_type="code" outputId="a25aa5d9-bdc2-47da-9e87-37810956a8e7" colab={"base_uri": "https://localhost:8080/", "height": 221}
# Input => Hidden
housing_model.add(k.layers.Dense(12, input_dim=13, activation='relu'))
# Output
housing_model.add(k.layers.Dense(1))
#Compile
housing_model.compile(loss='mean_squared_error',
optimizer='adam',
metrics=['mse'])
housing_model.summary()
# + id="uM5BPYlA1cWq" colab_type="code" colab={}
history = housing_model.fit(X_train,y_train, epochs=100, verbose=False)
# + id="hZ4kfuF63cYQ" colab_type="code" outputId="3851a46f-ae12-4b0a-d387-a4c068dad096" colab={"base_uri": "https://localhost:8080/", "height": 51}
scores = housing_model.evaluate(X_test, y_test)
print(f'{housing_model.metrics_names[1]}: {scores[1]}')
# + id="T3G9eDqx3vMf" colab_type="code" outputId="ff926992-061b-4dd6-c84d-610a8a587316" colab={"base_uri": "https://localhost:8080/", "height": 281}
import matplotlib.pyplot as plt
fig, ((ax1,ax2)) = plt.subplots(1,2)
ax1.plot(history.history['loss'], color = 'r')
ax1.set_title("Loss")
ax2.plot(history.history['mean_squared_error'], color = 'b')
ax2.set_title("MSE");
# + [markdown] id="Dzp_74qh4F3o" colab_type="text"
# ## 2nd iter
# + id="iKAlGEix3_0n" colab_type="code" colab={}
housing_model = k.Sequential(name='round2...fight')
# + id="I-4Awth64S23" colab_type="code" outputId="cbca3bf7-7c28-4d5a-9936-5003226562f7" colab={"base_uri": "https://localhost:8080/", "height": 289}
# Input => Hidden
housing_model.add(k.layers.Dense(12, input_dim=13, activation='linear'))
# Hidden
housing_model.add(k.layers.Dense(2000, activation='relu'))
housing_model.add(k.layers.Dense(2000, activation='linear'))
# Output
housing_model.add(k.layers.Dense(1,activation='linear'))
#Compile
housing_model.compile(loss='mean_squared_error',
optimizer='adam',
metrics=['mse', 'mae'])
housing_model.summary()
# + id="wBBWRIe640Jz" colab_type="code" outputId="1185c8a6-11d5-41a3-fbc5-6d82012482fe" colab={"base_uri": "https://localhost:8080/", "height": 1000}
history = housing_model.fit(X_train,y_train, epochs=100, verbose=5)
# + id="Padvvek045eS" colab_type="code" outputId="7851641c-e4ab-4048-c597-45929b66eb7c" colab={"base_uri": "https://localhost:8080/", "height": 51}
scores = housing_model.evaluate(X_test, y_test)
print(f'{housing_model.metrics_names[1]}: {scores[1]}')
# + id="UZ193hTZ74ll" colab_type="code" outputId="ee5bcf30-d69b-44f1-f621-591c01c57518" colab={"base_uri": "https://localhost:8080/", "height": 281}
fig, ((ax1,ax2)) = plt.subplots(1,2)
ax1.plot(history.history['mean_absolute_error'], color = 'r')
ax1.set_title("MAE")
ax2.plot(history.history['mean_squared_error'], color = 'b')
ax2.set_title("MSE");
# + [markdown] id="94yu40kdWQ3-" colab_type="text"
# ### Linear Model
#
# + id="dpSU5oSxWQDI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1e16e643-f77a-4ff9-9259-85c4124b70be"
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
lr = LinearRegression()
lr.fit(X_train, y_train)
preds = lr.predict(X_test)
MSE = mean_squared_error(y_test, preds)
print(f'{MSE:.05f}')
# + [markdown] id="FFzOpUqYYbT2" colab_type="text"
# ## Feature Engineering
#
#
# 0. - CRIM - per capita crime rate by town
# 1. - ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
# 2. - INDUS - proportion of non-retail business acres per town.
# 3. - CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
# 4. - NOX - nitric oxides concentration (parts per 10 million)
# 5. - RM - average number of rooms per dwelling
# 6. - AGE - proportion of owner-occupied units built prior to 1940
# 7. - DIS - weighted distances to five Boston employment centres
# 8. - RAD - index of accessibility to radial highways
# 9. - TAX - full-value property-tax rate per $10,000
# 10. - PTRATIO - pupil-teacher ratio by town
# 11. - B - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
# 12. - LSTAT - % lower status of the population
#
# Target: MEDV - Median value of owner-occupied homes in $1000's
# + id="q4O4wlJVYeWL" colab_type="code" colab={}
(X_train, y_train), (X_test, y_test) = boston_housing.load_data()
df = pd.DataFrame(X_train)
df[13] = pd.Series(y_train)
df1 = pd.DataFrame(X_test)
df1[13] = pd.Series(y_test)
df = pd.concat([df, df1])
# + id="4dyNZ9h8ZfTd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="31dbc67c-dfae-43d1-b63d-baa720b9b83c"
print(df.shape)
df.head()
# + id="97qWz-V7e7uA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="600715ab-02af-41ed-d24b-bde377870b62"
df.describe()
# + [markdown] id="8Gu7J6_ae7aM" colab_type="text"
#
# + id="g3FV2s8FbwZw" colab_type="code" colab={}
for col in df.columns:
df[f'{col}_outlier'] = df[col].apply(lambda x: 1 if x > \
np.mean(df[col]) + (np.std(df[col])*2) # 2 standard deviations away from the mean
else 0)
# + id="IcIm4Z8Lfq4Z" colab_type="code" colab={}
from sklearn.model_selection import train_test_split as tts
target = 13
omit = [13, '13_outlier']
feats = [col for col in df.columns if col not in omit]
X = df[feats]
y = df[target]
X_train, X_test, y_train, y_test = tts(X,y, test_size=.1 ,random_state=SEED)
# + id="XAvIGBqhkONS" colab_type="code" colab={}
X_train = k.utils.normalize(X_train)
X_test = k.utils.normalize(X_test)
# + id="NT6KEX3fiEt4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6805c270-ccf7-4fc4-e2d5-461dd7f4d597"
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
# 18.16: hard-coded MSE of the earlier baseline linear model
print(f'{((18.16 - mean_squared_error(y_test, y_pred))/18.16)*100:.02f}% improvement')
# + id="F7kjR0S4jLOv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b8e7c725-47c9-4a39-dc14-9a6c00b19b4d"
X_train.shape
# + id="IfTrZZqGkNUD" colab_type="code" colab={}
# + id="dnA1ure5iZDE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="3a83d144-6463-43a6-b85c-35ca4502676b"
housing_model = k.Sequential(name='round3...fight')
# Input => Hidden
housing_model.add(k.layers.Dense(13, input_dim=26, activation='linear'))
# Hidden
housing_model.add(k.layers.Dense(500, activation='relu'))
housing_model.add(k.layers.Dense(500, activation='linear'))
# Output
housing_model.add(k.layers.Dense(1,activation='linear'))
#Compile
housing_model.compile(loss='mean_squared_error',
optimizer='adam',
metrics=['mse', 'mae'])
housing_model.summary()
# + id="fD6dDhi7jYI5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a8efeaa9-ba28-4319-8c87-145d8f14b2c4"
history = housing_model.fit(X_train,y_train, epochs=100, verbose=5)
# + id="Ik4FBbxSjeY9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="57d7e0c7-675f-4e91-a3fb-d2a0916cd2c3"
scores = housing_model.evaluate(X_test, y_test)
print(f'{housing_model.metrics_names[1]}: {scores[1]}')
# + id="0iNKrv7tktju" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="e733ccb7-99ed-4082-bc48-7e2cd2332a6a"
fig, ((ax1,ax2)) = plt.subplots(1,2)
ax1.plot(history.history['mean_absolute_error'], color = 'r')
ax1.set_title("MAE")
ax2.plot(history.history['mean_squared_error'], color = 'b')
ax2.set_title("MSE");
# + id="EH50MG7Tk1Y0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="37467d82-59f1-447d-c56e-cf2ddc92cf49"
print(f'{((33.2-17.9)/33.2)*100:.02f}% improvement')
# + [markdown] colab_type="text" id="SfcFnOONyuNm"
# ## Use the Keras Library to build an image recognition network using the Fashion-MNIST dataset (also comes with keras)
#
# - Load and preprocess the image data similar to how we preprocessed the MNIST data in class.
# - Make sure to one-hot encode your category labels
# - Make sure to have your final layer have as many nodes as the number of classes that you want to predict.
# - Try different hyperparameters. What is the highest accuracy that you are able to achieve.
# - Use the history object that is returned from model.fit to make graphs of the model's loss or train/validation accuracies by epoch.
# - Remember that neural networks fall prey to randomness so you may need to run your model multiple times (or use Cross Validation) in order to tell if a change to a hyperparameter is truly producing better results.
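# For the one-hot encoding step in the checklist above, Keras offers `keras.utils.to_categorical`; as a dependency-free sketch of what it produces (the label values below are illustrative):

```python
def one_hot(labels, num_classes):
    """Turn integer class labels into one-hot row vectors."""
    return [[1 if i == label else 0 for i in range(num_classes)]
            for label in labels]

# Fashion-MNIST has 10 classes (0 = T-shirt/top, ..., 9 = ankle boot)
encoded = one_hot([0, 2, 9], 10)
```

# The final Dense layer of the classifier then needs one output node per class, matching the width of these vectors.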
# + id="gx-DpgWU4zqK" colab_type="code" colab={}
# + id="kNWRkt813-lH" colab_type="code" colab={}
# + colab_type="code" id="szi6-IpuzaH1" colab={}
##### Your Code Here #####
# + [markdown] colab_type="text" id="zv_3xNMjzdLI"
# ## Stretch Goals:
#
# - Use Hyperparameter Tuning to make the accuracy of your models as high as possible. (error as low as possible)
# - Use Cross Validation techniques to get more consistent results with your model.
# - Use GridSearchCV to try different combinations of hyperparameters.
# - Start looking into other types of Keras layers for CNNs and RNNs maybe try and build a CNN model for fashion-MNIST to see how the results compare.
| curriculum/unit-4-machine-learning/sprint-2-intro-nn/module3-Intro-to-Keras/LS_DS_423_Keras_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Credits: Forked from [deep-learning-keras-tensorflow](https://github.com/leriomaggio/deep-learning-keras-tensorflow) by <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# # ConvNet HandsOn with Keras
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Problem Definition
#
# *Recognize handwritten digits*
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Data
#
# The MNIST database ([link](http://yann.lecun.com/exdb/mnist)) is a database of handwritten digits.
#
# The training set has $60,000$ samples.
# The test set has $10,000$ samples.
#
# The digits are size-normalized and centered in a fixed-size image.
#
# The data page describes how the data was collected and also reports benchmarks of various algorithms on the test dataset.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Load the data
#
# The data is available in the repo's `data` folder. Let's load that using the `keras` library.
#
# For now, let's load the data and see how it looks.
# + slideshow={"slide_type": "subslide"}
import numpy as np
import keras
from keras.datasets import mnist
# + slideshow={"slide_type": "skip"}
# !mkdir -p $HOME/.keras/datasets/euroscipy_2016_dl-keras/data/
# + slideshow={"slide_type": "subslide"}
# Set the full path to mnist.pkl.gz
path_to_dataset = "euroscipy_2016_dl-keras/data/mnist.pkl.gz"
# + code_folding=[] slideshow={"slide_type": "fragment"}
# Load the datasets
(X_train, y_train), (X_test, y_test) = mnist.load_data(path_to_dataset)
# + [markdown] slideshow={"slide_type": "slide"}
# # Basic data analysis on the dataset
# + slideshow={"slide_type": "subslide"}
# What is the type of X_train?
# + slideshow={"slide_type": "subslide"}
# What is the type of y_train?
# + slideshow={"slide_type": "subslide"}
# Find number of observations in training data
# + slideshow={"slide_type": "subslide"}
# Find number of observations in test data
# + slideshow={"slide_type": "subslide"}
# Display first 2 records of X_train
# + slideshow={"slide_type": "subslide"}
# Display the first 10 records of y_train
# + slideshow={"slide_type": "subslide"}
# Find the number of observations for each digit in the y_train dataset
# + slideshow={"slide_type": "subslide"}
# Find the number of observations for each digit in the y_test dataset
# + slideshow={"slide_type": "subslide"}
# What is the dimension of X_train? What does that mean?
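# A minimal sketch of one way to answer the per-digit count exercises above, using `collections.Counter` on a small stand-in list (the real `y_train` is a NumPy array of 60,000 labels):

```python
from collections import Counter

y_train_sample = [5, 0, 4, 1, 9, 2, 1, 3, 1, 4]  # stand-in labels
counts = Counter(y_train_sample)
# number of observations per digit, sorted by digit
per_digit = sorted(counts.items())
```

# The same `Counter(...)` call works directly on `y_train` and `y_test` once they are loaded.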
# + [markdown] slideshow={"slide_type": "slide"}
# ### Display Images
#
# Let's now display some of the images and see how they look
#
# We will be using `matplotlib` library for displaying the image
# + slideshow={"slide_type": "subslide"}
from matplotlib import pyplot
import matplotlib as mpl
# %matplotlib inline
# + slideshow={"slide_type": "subslide"}
# Displaying one of the training images (index 1)
# -
fig = pyplot.figure()
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(X_train[1], cmap=mpl.cm.Greys)
imgplot.set_interpolation('nearest')
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_ticks_position('left')
pyplot.show()
# + slideshow={"slide_type": "subslide"}
# Let's now display the 11th record
# -
| deep-learning/keras-tutorial/2.2.1 Supervised Learning - ConvNet HandsOn Part I.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import azureml.core
from azureml.core import Workspace
# Load the workspace
ws = Workspace.from_config()
# +
# Get the default datastore
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/Customer.csv','./data/residents_source1.csv','./data/residents_source2.csv',
'./data/payments.csv','./data/surveys.csv','./data/leases.csv','./data/workorders.csv'], # Upload the csv files in /data
target_path='propertymgmt-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
# -
from azureml.data import DataType
Customer_data_types = {
'CustomerId': DataType.to_string(),
'pid': DataType.to_string(),
'surveytype': DataType.to_string(),
'surverydate': DataType.to_datetime(),
'question': DataType.to_string(),
'answer': DataType.to_float(),
'FirstName': DataType.to_string(),
'LastName': DataType.to_string(),
'Name': DataType.to_string(),
'Gender': DataType.to_string(),
'Email': DataType.to_string(),
'Telephone': DataType.to_string(),
'Country': DataType.to_string(),
'City': DataType.to_string(),
'State': DataType.to_string(),
'PostCode': DataType.to_string(),
'StreetAddress': DataType.to_string(),
'DateOfBirth': DataType.to_datetime(),
'CreatedDate': DataType.to_string(),
'Source': DataType.to_string(),
'SurveyEmail': DataType.to_string(),
'sourcedata_residents_source1_cid': DataType.to_string(),
'sourcedata_residents_source1_cid_Alternate': DataType.to_string(),
'sourcedata_residents_source2_cid': DataType.to_string(),
'sourcedata_residents_source2_cid_Alternate': DataType.to_string(),
'sourcedata_surveys_sid': DataType.to_string(),
'sourcedata_surveys_sid_Alternate': DataType.to_string()
}
# +
#Create a Tabular dataset from the path on the datastore
from azureml.core import Dataset
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/Customer.csv'),
set_column_types=Customer_data_types)
tab_data_set = tab_data_set.register(workspace=ws,
name='CustomerData',
description='Customer Data',
tags = {'format':'CSV'},
create_new_version=True)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/residents_source1.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='Residents1Data',
description='Resident Data',
tags = {'format':'CSV'},
create_new_version=True)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/residents_source2.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='Residents2Data',
description='Resident Data',
tags = {'format':'CSV'},
create_new_version=True)
#Create a Tabular dataset from the path on the datastore
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/leases.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='LeasesData',
description='Leases Data',
tags = {'format':'CSV'},
create_new_version=True)
#Create a Tabular dataset from the path on the datastore
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/payments.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='PaymentsData',
description='Payments Data',
tags = {'format':'CSV'},
create_new_version=True)
#Create a Tabular dataset from the path on the datastore
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/surveys.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='SurveysData',
description='Survey Data',
tags = {'format':'CSV'},
create_new_version=True)
#Create a Tabular dataset from the path on the datastore
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'propertymgmt-data/workorders.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='WorkOrdersData',
description='Work Orders Data',
tags = {'format':'CSV'},
create_new_version=True)
# +
from azureml.core import Workspace, Dataset, Datastore, ScriptRunConfig, Experiment
from azureml.data.data_reference import DataReference
import os
import azureml.dataprep as dprep
import pandas as pd
import numpy as np
import scripts.pipeline_library as pl
import azureml.core
from azureml.core import Workspace
ws = Workspace.from_config()
customerData = Dataset.get_by_name(ws, name='CustomerData')
resident1Data = Dataset.get_by_name(ws, name='Residents1Data')
resident2Data = Dataset.get_by_name(ws, name='Residents2Data')
leaseData = Dataset.get_by_name(ws, name='LeasesData')
paymentData = Dataset.get_by_name(ws, name='PaymentsData')
surveyData = Dataset.get_by_name(ws, name='SurveysData')
workorderData = Dataset.get_by_name(ws, name='WorkOrdersData')
config = {
"output_datastore" : None,
"output_path" : None,
"model" : None,
"run" : None,
"workspace": ws,
"step_type" : "train",
"model_folder" : "models",
"model_name" : 'model',
"description" : "Lease Renewal Prediction Model"
}
pl.pipeline_steps(customerData,resident1Data, resident2Data, leaseData,paymentData,surveyData,workorderData,config)
# +
# from azureml.core import Workspace, Dataset, Datastore, ScriptRunConfig, Experiment
# from azureml.data.data_reference import DataReference
# import os
# import azureml.dataprep as dprep
# import pandas as pd
# import numpy as np
# import scripts.pipeline_library as pl
# import azureml.core
# from azureml.core import Workspace
# from azureml.core.model import Model
# ws = Workspace.from_config()
# customerData = Dataset.get_by_name(ws, name='CustomerData')
# resident1Data = Dataset.get_by_name(ws, name='Residents1Data')
# resident2Data = Dataset.get_by_name(ws, name='Residents2Data')
# leaseData = Dataset.get_by_name(ws, name='LeasesData')
# paymentData = Dataset.get_by_name(ws, name='PaymentsData')
# surveyData = Dataset.get_by_name(ws, name='SurveysData')
# workorderData = Dataset.get_by_name(ws, name='WorkOrdersData')
# import joblib
# model_name='model'
# model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
# loaded_model = joblib.load(model_path)
# config = {
# "output_datastore" : None,
# "output_path" : None,
# "model" : loaded_model,
# "run" : None,
# "workspace": ws,
# "step_type" : "test",
# "model_folder" : "models",
# "model_name" : 'model',
# "description" : "Lease Renewal Prediction Model"
# }
# pl.pipeline_steps(customerData,resident1Data, resident2Data, leaseData,paymentData,surveyData,workorderData,config)
| Code/AMLNotebooks/CreateTrainingPipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer, ChatterBotCorpusTrainer
df = pd.read_json("./kidswritejokes_tweets.json")
df.shape
df.sample(10)
joke_list = list(df['text'])
# note: use 'and' rather than '&' here — bitwise '&' binds tighter than the comparisons
jl = list(filter(lambda x: x.count('\n') > 1 and x.count('@') == 0 and x.count('//') == 0, joke_list))
joke_list_list = list(map(lambda jk: list(filter(lambda x: len(x) >0, jk.split('\n')))[-2:], jl))
joke_list_list = list(filter(lambda jk:
                     jk[0].find('http') == -1 and
                     jk[0].find('.com') == -1 and
                     jk[0].find('@') == -1 and
                     jk[0].find('\\') == -1 and
                     jk[1].find('http') == -1 and
                     jk[1].find('.com') == -1 and
                     jk[1].find('@') == -1 and
                     jk[1].find('\\') == -1, joke_list_list))
joke_list_list
| jupyter/kidswritejokes_blog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from __future__ import print_function, division
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local") \
.appName("test") \
.enableHiveSupport() \
.getOrCreate()
sc = spark.sparkContext
# -
# # Shared Variables
#
# ### Variable updates on workers are not propagated back to the driver
# Normally, when we pass an operation function (e.g. map or reduce) to Spark, Spark actually operates on copies of the variables used by that function. These variables are copied to every machine, and updates made to them on remote machines are never sent back to the driver program. In general, reading and writing variables across tasks is inefficient. The driver is the node from which we submit the Spark program, and all reduce-type operations are aggregated back on the driver node. Each node receives its own copy of the map/reduce functions along with copies of their variables; computation on each node is independent, and variable updates are not propagated back to the driver.
#
# ### Shared variables let nodes share a single value
# So what if we want to share one variable across nodes, for example a common configuration? Spark provides two specific kinds of shared variables for this: broadcast variables and accumulators.
#
# ## Broadcast variables
# Broadcast variables allow the program to keep a read-only variable cached in memory on every machine, instead of shipping a copy of it with every task. For example, a broadcast variable can be used to distribute a copy of a large input collection to every node.
#
# Accumulators are simple and intuitive when we need global statistics in Spark, but sometimes an accumulator alone is not enough; for instance, a shared configuration table from a database may need to be made available to every node for lookups. Spark also tries to distribute broadcast variables using efficient broadcast algorithms to reduce communication cost.
# ### Create an RDD
rdd = sc.parallelize(['dog', 'cat', 'dog', 'cat', 'cat'],4)
mapper = {'dog': 1 ,'cat': 2}
mapper['dog']
broadcastVar = sc.broadcast(mapper)
broadcastVar.value
broadcastVar.value.get('cat')
rdd.map(lambda x: broadcastVar.value.get(x)).collect()
# ## Updating a broadcast variable
# After a broadcast variable is created, we can use it in any function on the cluster in place of the variable v, so we do not need to ship v to every node again. To guarantee that every node sees the same value, the object v must not be modified after it has been broadcast.
broadcastVar['pig'] = 3  # raises TypeError: a Broadcast object is read-only
broadcastVar.unpersist()
mapper['pig'] = 3
mapper
broadcastVar = sc.broadcast(mapper)
broadcastVar.value
# ### Destroying a broadcast variable
broadcastVar.destroy()
broadcastVar.value  # a destroyed broadcast must not be used again
rdd.map(lambda x: broadcastVar.value.get(x)).collect()  # fails for the same reason
broadcastVar.destroy()
# # Accumulators
# As the name suggests, an accumulator is a variable that can only be "added" to through an associative operation, which lets it be supported efficiently in parallel. Accumulators can implement counters and sums. Spark natively supports numeric accumulators, and developers can add support for new types.
# If a named accumulator is created, it is shown in the Spark UI, which is helpful for understanding the progress of running stages.
# An accumulator is created from an initial value v by calling SparkContext.accumulator(v). Tasks running on the cluster can then add to it with the add method or the += operator, but they cannot read its value. In contrast to broadcast variables, an accumulator is an "add only" variable.
# ### Using an accumulator to count values that match a condition
rdd = sc.parallelize(range(100))
counter = sc.accumulator(0)
def conditional_counter(x):
global counter
if x % 2 == 0:
counter += 1
return x
rdd_count = rdd.map(lambda x: conditional_counter(x))
counter.value
# ### Note: accumulators are only updated when an action is triggered
rdd_count.count()
counter.value
# ### Accumulator pitfalls
rdd_count.count()
counter.value
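# The pitfall above can be mimicked in plain Python: when a lazy pipeline is re-evaluated on every action, a side-effecting counter is incremented again on each pass. A minimal stand-in sketch (no Spark required; the names `plain_counter` and `count_evens` are illustrative):

```python
plain_counter = 0

def count_evens(x):
    # side effect: count even numbers, mirroring the Spark accumulator above
    global plain_counter
    if x % 2 == 0:
        plain_counter += 1
    return x

data = range(10)
# each "action" re-runs the map over the data, so the counter double-counts
list(map(count_evens, data))
list(map(count_evens, data))
```

# After two passes over 0..9 the counter holds 10, not 5 — the same double-counting the Spark cells above demonstrate.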
# ### Accumulators are not reset between actions
# ![trap](img/trap.jpg)
# ### Breaking the lineage with cache
# ![cache](img/cache.jpg)
counter = sc.accumulator(0)
rdd = sc.parallelize(range(100))
rdd_count = rdd.map(lambda x: conditional_counter(x))
rdd_count.persist()
rdd_count.count()
counter.value
rdd_count.count()
counter.value
rdd_count.count()
counter.value
# ### Re-implementing basic operations with accumulators
# ### count()
rdd.count()
count_accu = sc.accumulator(0)
rdd.foreach(lambda x: count_accu.add(1))
count_accu.value
# ### sum
rdd.reduce(lambda x, y: x + y)
sum_accu = sc.accumulator(0)
rdd.foreach(lambda x: sum_accu.add(x))
sum_accu.value
# ### Matrix Product
rdd1 = sc.parallelize([1,2,3,4])
rdd2 = sc.parallelize([2,4,6,8])
# ### dot(rdd1, rdd2) = 1*2 + 2*4 + 3*6 + 4*8 = 60
prod_accu = sc.accumulator(0)
rdd1.zip(rdd2).foreach(lambda (x,y): prod_accu.add(x * y))
prod_accu.value
| pyspark/example/spark_core/4.5_shared_variable.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# File contains a number of useful pandas code snippets
import pandas as pd
import numpy as np
# http://pandas.pydata.org/pandas-docs/stable/10min.html#selection-by-label
s = pd.Series([1,3,5,np.nan,6,8])
s
dates = pd.date_range('20130101', periods=6)
dates
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
df.index
df.columns
df.values
df.describe()
df.sort_index(axis=1, ascending=False)
df.sort_values(by='B')
df['B']
| learning/pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using PyTorch DALI plugin: using various readers
#
# ### Overview
#
# This example shows how different readers could be used to interact with PyTorch. It shows how flexible DALI is.
#
# The following readers are used in this example:
#
# - MXNetReader
# - CaffeReader
# - FileReader
# - TFRecordReader
#
# For details on how to use them please see other [examples](..).
# Let us start by defining some global constants
# +
# MXNet RecordIO
db_folder = "/data/imagenet/train-480-val-256-recordio/"
# Caffe LMDB
lmdb_folder = "/data/imagenet/train-lmdb-256x256"
# image dir with plain jpeg files
image_dir = "../images"
# TFRecord
tfrecord = "/data/imagenet/train-val-tfrecord-480/train-00001-of-01024"
tfrecord_idx = "idx_files/train-00001-of-01024.idx"
tfrecord2idx_script = "tfrecord2idx"
N = 8 # number of GPUs
BATCH_SIZE = 128 # batch size per GPU
ITERATIONS = 32
IMAGE_SIZE = 3
# -
# Create idx file by calling `tfrecord2idx` script
# +
from subprocess import call
import os.path
if not os.path.exists("idx_files"):
os.mkdir("idx_files")
if not os.path.isfile(tfrecord_idx):
call([tfrecord2idx_script, tfrecord, tfrecord_idx])
# -
# Let us define:
# - the common part of the pipeline; the other pipelines will inherit from it
# +
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
class CommonPipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(CommonPipeline, self).__init__(batch_size, num_threads, device_id)
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
self.resize = ops.Resize(device = "gpu",
image_type = types.RGB,
interp_type = types.INTERP_LINEAR)
self.cmn = ops.CropMirrorNormalize(device = "gpu",
output_dtype = types.FLOAT,
crop = (227, 227),
image_type = types.RGB,
mean = [128., 128., 128.],
std = [1., 1., 1.])
self.uniform = ops.Uniform(range = (0.0, 1.0))
self.resize_rng = ops.Uniform(range = (256, 480))
def base_define_graph(self, inputs, labels):
images = self.decode(inputs)
images = self.resize(images, resize_shorter = self.resize_rng())
output = self.cmn(images, crop_pos_x = self.uniform(),
crop_pos_y = self.uniform())
return (output, labels)
# -
# - MXNetReaderPipeline
# +
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
class MXNetReaderPipeline(CommonPipeline):
def __init__(self, batch_size, num_threads, device_id, num_gpus):
super(MXNetReaderPipeline, self).__init__(batch_size, num_threads, device_id)
self.input = ops.MXNetReader(path = [db_folder+"train.rec"], index_path=[db_folder+"train.idx"],
random_shuffle = True, shard_id = device_id, num_shards = num_gpus)
def define_graph(self):
images, labels = self.input(name="Reader")
return self.base_define_graph(images, labels)
# -
# - CaffeReadPipeline
class CaffeReadPipeline(CommonPipeline):
def __init__(self, batch_size, num_threads, device_id, num_gpus):
super(CaffeReadPipeline, self).__init__(batch_size, num_threads, device_id)
self.input = ops.CaffeReader(path = lmdb_folder,
random_shuffle = True, shard_id = device_id, num_shards = num_gpus)
def define_graph(self):
images, labels = self.input(name="Reader")
return self.base_define_graph(images, labels)
# - FileReadPipeline
class FileReadPipeline(CommonPipeline):
def __init__(self, batch_size, num_threads, device_id, num_gpus):
super(FileReadPipeline, self).__init__(batch_size, num_threads, device_id)
self.input = ops.FileReader(file_root = image_dir)
def define_graph(self):
images, labels = self.input(name="Reader")
return self.base_define_graph(images, labels)
# - TFRecordPipeline
# +
import nvidia.dali.tfrecord as tfrec
class TFRecordPipeline(CommonPipeline):
def __init__(self, batch_size, num_threads, device_id, num_gpus):
super(TFRecordPipeline, self).__init__(batch_size, num_threads, device_id)
self.input = ops.TFRecordReader(path = tfrecord,
index_path = tfrecord_idx,
features = {"image/encoded" : tfrec.FixedLenFeature((), tfrec.string, ""),
"image/class/label": tfrec.FixedLenFeature([1], tfrec.int64, -1)
})
def define_graph(self):
inputs = self.input(name="Reader")
images = inputs["image/encoded"]
labels = inputs["image/class/label"]
return self.base_define_graph(images, labels)
# -
# Let us create pipelines and pass them to PyTorch generic iterator
# +
from __future__ import print_function
import numpy as np
from nvidia.dali.plugin.pytorch import DALIGenericIterator
pipe_types = [[MXNetReaderPipeline, (0, 999)],
[CaffeReadPipeline, (0, 999)],
[FileReadPipeline, (0, 1)],
[TFRecordPipeline, (1, 1000)]]
for pipe_t in pipe_types:
pipe_name, label_range = pipe_t
print ("RUN: " + pipe_name.__name__)
pipes = [pipe_name(batch_size=BATCH_SIZE, num_threads=2, device_id = device_id, num_gpus = N) for device_id in range(N)]
pipes[0].build()
dali_iter = DALIGenericIterator(pipes, ['data', 'label'], pipes[0].epoch_size("Reader"))
for i, data in enumerate(dali_iter):
if i >= ITERATIONS:
break
# Testing correctness of labels
for d in data:
label = d["label"]
image = d["data"]
## labels need to be integers
assert(np.equal(np.mod(label, 1), 0).all())
            ## labels need to be within label_range
assert((label >= label_range[0]).all())
assert((label <= label_range[1]).all())
print("OK : " + pipe_name.__name__)
| docs/examples/pytorch/pytorch-various-readers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pattern Analysis in Text
# We often work with character strings on which we want to perform some processing for a variety of purposes. In this section we introduce some special functions of the **string** data type and **regular expressions**.
# Objectives
# -------------
#
# - String manipulation
# - Regular expressions
#
# ## Searching for Patterns in Strings
# -----------------------------------
# In this section we review some basics about string functions and more advanced pattern searching.
# **String**
# As we have already seen in the course, a single-line string is delimited by <code>'text'</code> or <code>"text"</code>. For multi-line strings use <code>"""text"""</code>.
my_string = "This is a string"
my_string2 = 'This is also a string'
my_string = 'And this? It's the wrong string'  # SyntaxError: the apostrophe ends the string early
my_string = "And this? It's the correct string"
print(my_string)
# **Review of basic functions**
# len -> returns the length of the string
len(my_string)
# str() -> converts a value to string
str(123)
# +
# Concatenation
string1= 'Awesome day'
string2 = 'for biking'
print(string1 +" "+ string2)
# -
"hola " + 2  # raises TypeError: cannot concatenate str and int
nombre = 'Gonzalo'
f"hola {nombre}"
# +
# indexing
print(string1[0]) # first character of the string
print(string1[-1]) # last character of the string
print(string1[len(string1)-1]) # last character of the string again
# +
# Slicing
print(string1[0:3]) # first 3 characters
print(string1[:5]) # first 5 characters
print(string1[5:]) # from position 5 onward
# +
# Stride
print(string1[0:6:2])  # characters from position 0 up to 6, in steps of 2
print(string1[::-1])  # reverse the string
# -
# **Basic operations**
# +
# lower -> converts to lowercase
print(string1.lower())
# upper -> converts to uppercase
print(string1.upper())
# capitalize -> uppercases the first letter of the text
print(string1.capitalize())
print(string1.title())
# +
# split -> divides a text according to a separator
my_string = "This string will be split"
print(my_string.split(sep=" "))
print(my_string.split(sep=" ", maxsplit=2)) #maxsplit -> delimita la cantidad de palabras a ser separadas de la cadena
# +
# \n -> inserts a line break in text
# \t -> inserts a tab
my_string = "This string will be split\nin two"
print(my_string)
# splitlines -> splits text on line breaks
print(my_string.splitlines())
print(my_string.split('\n'))
# +
# join -> concatenates the strings of a list
my_list = ["this", "would", "be", "a", "string"]
print(" ".join(my_list))
# +
# strip -> cleans up text by removing whitespace and line breaks
# from both ends of a string
my_string = " This string will be stripped\n"
print(my_string)
print(my_string.strip())
# -
# **Pattern searching**
# +
# find -> searches for a substring and returns the index where it starts
my_string = "Where's Waldo?"
print(my_string.find("Waldo"))
print(my_string.find("Wenda")) # No se encotro palabra buscada
# -
my_string[8]
# +
# index -> like find, but raises ValueError when the substring is not found
my_string = "Where's Waldo?"
my_string.index("Waldo")
# -
print(my_string.index("Wenda"))
# count -> returns the number of times a word appears in the text
my_string = "How many fruits do you have in your fruit basket?"
my_string.count("fruit")
# +
# replace -> replaces one text with another
my_string = "The red house is between the blue house and the old house"
print(my_string.replace("house", "car"))
print(my_string.replace("house", "car", 2)) # reemplza la palabra 'house' 2 veces
# -
# # Exercises
# 1. Write a function that, given a string, returns the length of its last word. Words are separated by one or more spaces, and there may also be spaces at the beginning or end of the string passed as a parameter.
#
# **Considerations:**
#
# - The input strings contain only words [abc..] and spaces
#
# **Example input and output:**
#
# - Input: "<NAME>" -> Expected output: 5
# - Input: "  Bienvenido al curso  " -> Expected output: 5
#
cadena = "<NAME>"
cadena2 = " Bienvenido al curso "
lista_palabra = cadena2.strip().split()  # split() with no argument handles one or more spaces
lista_palabra
lista_palabra[-1]
len(lista_palabra[-1])
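# A sketch of the requested function (the helper name `last_word_length` is my
# own choice, not from the course): strip the outer spaces, split on runs of
# whitespace, and measure the last piece.

```python
def last_word_length(text):
    """Return the length of the last word in `text` (0 if there are no words)."""
    words = text.strip().split()  # split() collapses runs of spaces
    return len(words[-1]) if words else 0

print(last_word_length("  Bienvenido al curso  "))  # 5
```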
| Modulo2/3. Funciones Strings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pandas as pd
import numpy as np
raw_data_path = os.path.join(os.path.pardir,'data','raw')
train_file_path = os.path.join(raw_data_path, 'train.csv')
test_file_path = os.path.join(raw_data_path, 'test.csv')
train_df = pd.read_csv(train_file_path,index_col='Complaint-ID')
test_df = pd.read_csv(test_file_path,index_col='Complaint-ID')
test_df['Complaint-Status'] = -1
complaints = pd.concat((train_df, test_df), axis=0)
complaints.info()
print(complaints['Complaint-reason'].unique())
print(complaints.shape)
print(complaints.groupby('Complaint-reason').size())
carray = complaints['Complaint-reason'].unique()
def getMap(carray):
    # map each unique value to its position in the array
    return {value: i for i, value in enumerate(carray)}
carray = complaints['Complaint-Status'].unique()
getMap(carray)
def getComplaintReasonId(complaintReason) :
map = {'Loan servicing, payments, escrow account': 0,
'Incorrect information on credit report': 1,
'Using a debit or ATM card': 2,
"Cont'd attempts collect debt not owed": 3,
'Payoff process': 4,
'Loan modification,collection,foreclosure': 5,
'Problems caused by my funds being low': 6,
'Credit card protection / Debt protection': 7,
'Managing the loan or lease': 8,
'Problem when making payments': 9,
'Incorrect information on your report': 10,
'False statements or representation': 11,
'Disclosure verification of debt': 12,
'Customer service / Customer relations': 13,
'Improper use of your report': 14,
'Deposits and withdrawals': 15,
'Communication tactics': 16,
'Settlement process and costs': 17,
'Dealing with my lender or servicer': 18,
'Closing/Cancelling account': 19,
'Applying for a mortgage or refinancing an existing mortgage': 20,
'Problems when you are unable to pay': 21,
'Taking out the loan or lease': 22,
"Charged fees or interest I didn't expect": 23,
"Problem with a credit reporting company's investigation into an existing problem": 24,
'Account opening, closing, or management': 25,
'Struggling to pay mortgage': 26,
'Credit decision / Underwriting': 27,
'Improper contact or sharing of info': 28,
'Unable to get credit report/credit score': 29,
'Attempts to collect debt not owed': 30,
'Struggling to pay your loan': 31,
'Problem with a purchase shown on your statement': 32,
'Other': 33,
"Can't repay my loan": 34,
'Billing disputes': 35,
'Making/receiving payments, sending money': 36,
'Identity theft / Fraud / Embezzlement': 37,
"Credit reporting company's investigation": 38,
'Took or threatened to take negative or legal action': 39,
'Application, originator, mortgage broker': 40,
'Trouble using your card': 41,
'Credit monitoring or identity protection': 42,
'Getting a credit card': 43,
'Managing, opening, or closing account': 44,
'Unsolicited issuance of credit card': 45,
'Written notification about debt': 46,
'Problem with fraud alerts or security freezes': 47,
'Money was taken from your bank account on the wrong day or for the wrong amount': 48,
'Trouble during payment process': 49,
'Getting a line of credit': 50,
'Other features, terms, or problems': 51,
'Problems at the end of the loan or lease': 52,
'Unable to get your credit report or credit score': 53,
'Struggling to repay your loan': 54,
'Improper use of my credit report': 55,
'Dealing with your lender or servicer': 56,
'Advertising and marketing': 57,
'Money was not available when promised': 58,
'Credit monitoring or identity theft protection services': 59,
'Fraud or scam': 60,
'Threatened to contact someone or share information improperly': 61,
'Charged bank acct wrong day or amt': 62,
'Credit line increase/decrease': 63,
'Billing statement': 64,
'Fees': 65,
'Delinquent account': 66,
'Closing an account': 67,
'Taking/threatening an illegal action': 68,
'Other fee': 69,
'Problem with a lender or other company charging your account': 70,
'Managing an account': 71,
'Balance transfer': 72,
'Adding money': 73,
'Disclosures': 74,
'Unauthorized transactions/trans. issues': 75,
'APR or interest rate': 76,
'Other transaction problem': 77,
'Late fee': 78,
'Rewards': 79,
'Transaction issue': 80,
'Unexpected or other fees': 81,
'Other service problem': 82,
'Closing on a mortgage': 83,
'Other transaction issues': 84,
"Can't contact lender or servicer": 85,
'Problem with additional add-on products or services': 86,
'Sale of account': 87,
"Can't stop charges to bank account": 88,
'Advertising and marketing, including promotional offers': 89,
'Fees or interest': 90,
'Bankruptcy': 91,
'Applying for a mortgage': 92,
'Credit determination': 93,
'Getting a loan': 94,
"Received a loan you didn't apply for": 95,
"Can't contact lender": 96,
'Shopping for a loan or lease': 97,
'Account terms and changes': 98,
'Unexpected/Other fees': 99,
"Charged fees or interest you didn't expect": 100,
'Problem caused by your funds being low': 101,
'Lost or stolen check': 102,
"Received a loan I didn't apply for": 103,
'Privacy': 104,
'Cash advance fee': 105,
'Problem with a purchase or transfer': 106,
'Other service issues': 107,
'Closing your account': 108,
'Opening an account': 109,
'Payment to acct not credited': 110,
'Getting a loan or lease': 111,
'Application processing delay': 112,
'Convenience checks': 113,
'Incorrect/missing disclosures or info': 114,
'Cash advance': 115,
'Balance transfer fee': 116,
'Advertising, marketing or disclosures': 117,
'Customer service/Customer relations': 118,
'Trouble using the card': 119,
'Applied for loan/did not receive money': 120,
'Wrong amount charged or received': 121,
'Arbitration': 122,
'Forbearance / Workout plans': 123,
'Problem adding money': 124,
'Getting the loan': 125,
'Struggling to pay your bill': 126,
'Problem with customer service': 127,
'Overdraft, savings or rewards features': 128,
'Managing the line of credit': 129,
'Problem getting a card or closing an account': 130,
'Problem with the payoff process at the end of the loan': 131,
'Unauthorized transactions or other transaction problem': 132,
"Loan payment wasn't credited to your account": 133,
'Excessive fees': 134,
'Confusing or missing disclosures': 135,
'Lender repossessed or sold the vehicle': 136,
'Confusing or misleading advertising or marketing': 137,
'Managing, opening, or closing your mobile wallet account': 138,
"Problem with a company's investigation into an existing issue": 139,
'Vehicle was repossessed or sold the vehicle': 140,
'Credit limit changed': 141,
'Advertising': 142,
'Shopping for a line of credit': 143,
'Incorrect exchange rate': 144,
'Overlimit fee': 145,
'Lost or stolen money order': 146,
"Was approved for a loan, but didn't receive the money": 147,
'Problem with an overdraft': 148,
'Identity theft protection or other monitoring services': 149,
'Problem with cash advance': 150,
"Can't stop withdrawals from your bank account": 151}
return map[complaintReason]
getComplaintReasonId('Loan servicing, payments, escrow account')
complaints['ComplaintReasonId'] = complaints['Complaint-reason'].map(lambda x : getComplaintReasonId(x))
complaints.ComplaintReasonId
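# The long hand-written dictionary above can also be derived from the data
# itself. A sketch using `pandas.factorize` (note: it numbers categories in
# order of first appearance, so the ids will not match the manual map):

```python
import pandas as pd

reasons = pd.Series(['Payoff process', 'Fees', 'Payoff process', 'Bankruptcy'])
codes, uniques = pd.factorize(reasons)  # consecutive ints, order of appearance
print(list(codes))    # [0, 1, 0, 2]
print(list(uniques))  # ['Payoff process', 'Fees', 'Bankruptcy']
```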
def getComplaintStatusId(status) :
map= {'Closed with explanation': 0,
'Closed with non-monetary relief': 1,
'Closed': 2,
'Closed with monetary relief': 3,
'Untimely response': 4,
-1: 5}
return map[status]
complaints['ComplaintStatusId'] = complaints['Complaint-Status'].map(lambda x : getComplaintStatusId(x))
complaints.ComplaintStatusId
complaints = complaints[['ComplaintStatusId','ComplaintReasonId']]
complaints
#
# %matplotlib inline
import matplotlib.pyplot as plt
complaints.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False, figsize=(9,9),
title='Box Plot for each input variable')
plt.show()
import pylab as pl
complaints.hist(figsize=(9, 9))  # draw the histograms the title below refers to
pl.suptitle("Histogram for each numeric input variable")
plt.savefig('complaints_hist')
plt.show()
processed_data_path = os.path.join(os.path.pardir,'data','processed')
write_train_path = os.path.join(processed_data_path,'train.csv')
write_test_path = os.path.join(processed_data_path,'test.csv')
complaints.loc[complaints.ComplaintStatusId != 5].to_csv(write_train_path)  # 5 is the code mapped from the -1 test sentinel
columns = [column for column in complaints.columns if column != 'ComplaintStatusId']
complaints.loc[complaints.ComplaintStatusId == 5, columns].to_csv(write_test_path)
processed_data_path = os.path.join(os.path.pardir,'data','processed')
train_file_path = os.path.join(processed_data_path,'train.csv')
test_file_path = os.path.join(processed_data_path,'test.csv')
train_df = pd.read_csv(train_file_path,index_col='Complaint-ID')
test_df = pd.read_csv(test_file_path,index_col='Complaint-ID')
train_df.info()
X = train_df[['ComplaintReasonId']]  # features only; the target column must not leak into X
y = train_df['ComplaintStatusId']
print (X.shape,y.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state = 0)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
print('mean ComplaintStatusId in train {0:.3f}'.format(np.mean(y_train)))
print('mean ComplaintStatusId in test {0:.3f}'.format(np.mean(y_test)))
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
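# `MinMaxScaler` rescales each column to [0, 1] using the minimum and maximum
# seen during `fit`; a minimal sketch with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_demo = np.array([[1.0], [3.0], [5.0]])
scaler_demo = MinMaxScaler().fit(X_demo)      # learns min=1, max=5
print(scaler_demo.transform(X_demo).ravel())  # [0.  0.5 1. ]
```

# Values outside the training range map outside [0, 1]; that is why the test
# set above reuses the scaler fitted on the training set instead of refitting.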
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
.format(logreg.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
.format(logreg.score(X_test, y_test)))
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(X_train, y_train)
print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
print('Accuracy of K-NN classifier on training set: {:.2f}'
.format(knn.score(X_train, y_train)))
print('Accuracy of K-NN classifier on test set: {:.2f}'
.format(knn.score(X_test, y_test)))
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print('Accuracy of LDA classifier on training set: {:.2f}'
.format(lda.score(X_train, y_train)))
print('Accuracy of LDA classifier on test set: {:.2f}'
.format(lda.score(X_test, y_test)))
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
print('Accuracy of GNB classifier on training set: {:.2f}'
.format(gnb.score(X_train, y_train)))
print('Accuracy of GNB classifier on test set: {:.2f}'
.format(gnb.score(X_test, y_test)))
from sklearn.svm import SVC
svm = SVC()
svm.fit(X_train, y_train)
print('Accuracy of SVM classifier on training set: {:.2f}'
.format(svm.score(X_train, y_train)))
print('Accuracy of SVM classifier on test set: {:.2f}'
.format(svm.score(X_test, y_test)))
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
pred = knn.predict(X_test)
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
| notebooks/1_Complaint_Status_Tracking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
import panel as pn
pn.extension()
# The ``Select`` widget allows selecting a ``value`` from a list or dictionary of ``options`` by selecting it from a dropdown menu. It falls into the broad category of single-value, option-selection widgets that provide a compatible API and include the [``RadioBoxGroup``](RadioBoxGroup.ipynb), [``AutocompleteInput``](AutocompleteInput.ipynb) and [``DiscreteSlider``](DiscreteSlider.ipynb) widgets.
#
# For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Links.ipynb).
#
# #### Parameters:
#
# For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
#
# ##### Core
#
# * **``options``** (list or dict): A list or dictionary of options to select from
# * **``value``** (object): The current value; must be one of the option values
#
# ##### Display
#
# * **``disabled``** (boolean): Whether the widget is editable
# * **``name``** (str): The title of the widget
#
# ___
# +
select = pn.widgets.Select(name='Select', options=['Biology', 'Chemistry', 'Physics'])
select
# -
# Like most other widgets, ``Select`` has a value parameter that can be accessed or set:
select.value
# ### Controls
#
# The `Select` widget exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
pn.Row(select.controls(jslink=True), select)
| examples/reference/widgets/Select.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This notebook can be used as a quick check for variables in model list
# +
import warnings
warnings.filterwarnings('ignore')
import pyaerocom as pya
pya.change_verbosity('critical')
import traceback
from models import all_model_ids
TEST = 0
models = all_model_ids()
if TEST:
models = models[:2]
models
# -
# ## List of variables of interest
VARS = ['ang4487aer', 'od550aer', 'od550lt1aer', 'ec*', 'scatc*', 'absc*', 'ssa*']
def highlight_row(x):
    # copy df to a new frame - the original data are not changed
    df = x.copy()
    # flag 4-dimensional variables whose dimension names do not include altitude
    m1 = df['Dim'] == 4
    m2 = ~df['Dim names'].astype(str).str.contains('altitude')
    mask = m1 & m2
    df.loc[mask, :] = 'background-color: #ffcccc'
    df.loc[~mask, :] = ''
    return df
df = pya.utils.create_varinfo_table(model_ids=models,
vars_or_var_patterns=VARS,
read_data=True,
sort_by_cols=['Var', 'Model'])
df.to_csv('output/model_var_overview.csv')
df.style.apply(highlight_row, axis=None)
| model_var_overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Rivaldop/metodologidatascience/blob/main/ANN_CNN_iris.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="yKDc3KHbKtYQ"
from sklearn import datasets
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X = iris.data
y = iris.target
# + colab={"base_uri": "https://localhost:8080/"} id="KfBa5c8VMk0s" outputId="d01520aa-bc6b-4933-e40c-ffd57d31c9a6"
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=.10)
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=.15)
print('X_train', X_train.shape)
print('X_val', X_val.shape)
print('X_test', X_test.shape)
# + id="b7s18mwFMkyh"
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(hidden_layer_sizes=(64, ), activation='relu',max_iter=1000, epsilon=1e-08)
# + colab={"base_uri": "https://localhost:8080/"} id="NVWn7C1JMkv8" outputId="6178bde3-784e-49ab-9ff9-ed6f51d6b0ee"
from sklearn.metrics import accuracy_score
mlp.fit(X_train, Y_train)
prediksi_val = mlp.predict(X_val)
acc_val = accuracy_score(Y_val, prediksi_val)
print('ANN training validation accuracy:', acc_val)
# + colab={"base_uri": "https://localhost:8080/"} id="JkG56qIbMktl" outputId="dfed4d40-d7c5-4c3d-c390-a58f5b7afbb5"
prediksi_test = mlp.predict(X_test)
acc_test = accuracy_score(Y_test, prediksi_test)
print('ANN test accuracy:', acc_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 355} id="NlU46EMIMkrE" outputId="0b2ae854-36c9-47c4-c72f-60bc38c2cee4"
from sklearn.metrics import accuracy_score, plot_confusion_matrix
prediksi = mlp.predict(X_test)
plot_confusion_matrix(mlp, X_test, Y_test)
accuracy = accuracy_score(Y_test, prediksi)
print('ANN test accuracy:', accuracy)
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="QabcSfyLMkor" outputId="3514f0bd-aeed-4bd4-c024-f207fbd38ded"
from keras.utils import to_categorical
Y_train = to_categorical(Y_train,3)
Y_val = to_categorical(Y_val,3)
Y_test = to_categorical(Y_test,3)
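# `to_categorical` one-hot encodes the integer labels. The same idea in plain
# NumPy (a sketch; the `one_hot` helper is my own, not part of Keras):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) matrix with a single 1 per row."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2, 1], 3))  # rows encode class 0, class 2, class 1
```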
# + id="whEkiV7SMkmT"
from keras.models import Sequential
from keras.layers import Flatten, Dense
model = Sequential()
model.add(Flatten())
model.add(Dense(64,activation='relu'))
model.add(Dense(3,activation='softmax'))
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['acc'])
# + id="s5ORho0pY7O6"
# + colab={"base_uri": "https://localhost:8080/", "height": 329} id="-AMypbPoMkj0" outputId="f6964887-2c5a-423f-bae1-f9422e10f4f4"
model.fit(X_train,Y_train,epochs=100,batch_size=5,validation_data=(X_val,Y_val))
# + colab={"base_uri": "https://localhost:8080/"} id="5qMEs6fKMkhb" outputId="47c6249c-b54d-4f44-9154-ab586ec8bfc9"
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 763} id="Y7sE4zhhMkfD" outputId="b2688f2e-0c14-46a8-9d85-74c47d08ed6b"
from sklearn.metrics import confusion_matrix
loss, accuracy = model.evaluate(X_test, Y_test)
print('ANN test accuracy:', accuracy)
# + id="Js9jRm-uMkcm"
| ANN_CNN_iris.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + language="html"
#
# <!DOCTYPE html>
# <html>
# <body style="background-color: rgb(255, 255, 255);">
# <h1 style="color: cyan; font-size: 50px; font-family: 'Courier New'; background-color: rgb(0, 0, 0); border: 0px; padding: 0px; margin: 0px"><b><center>Stress Error Analysis</center></b></h1>
# <h2 style="color: cyan; font-size: 20px; font-family: 'Courier New'; background-color: rgb(0, 0, 0); border: 0px; padding: 0px"><b><center>Part of the thesis project of: <NAME></center></b></h2>
# </body>
# </html>
import numpy as np
import bqplot as bq
import ipywidgets as ipw
from bqplot import pyplot as bplt
# +
HTMLs_names = ['Analisis de:', 'Formula 1', 'Formula 2', 'Formula 3', 'Formula 4', 'I:', 'Relacion:', 'R:', 'Data:', 'M1', 'M2', '% Error', 'r:']
HTMLs = [ipw.HTML(value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{name}</b></font>", layout=ipw.Layout(width='auto', height='auto', margin='0px', padding='0px 0px 2px 0px', border='0px')) for name in HTMLs_names]
HTMLMath1 = ipw.HTMLMath(
value = r"<font size = '3'; font color = 'cyan'>$$\sigma_1=\frac{M_{xz}R}{I}$$",
)
HTMLMath2 = ipw.HTMLMath(
value = r"<font size = '3'; font color = 'cyan'>$$\sigma_2=\frac{M_{xy}R}{I}$$",
)
HTMLMath3 = ipw.HTMLMath(
value = r"<font size = '3'; font color = 'cyan'>$$\sigma_3=\frac{M_{xz}R}{I\sqrt{2}}+\frac{M_{xy}R}{I\sqrt{2}}$$",
)
HTMLMath4 = ipw.HTMLMath(
value = r"<font size = '3'; font color = 'cyan'>$$\sigma_4=\frac{R}{I}\sqrt{M_{xz}^2+M_{xy}^2}$$",
)
HTML1 = ipw.HTML(
value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>1</b></font>",
layout = ipw.Layout(width='auto', height='auto')
)
HTML2 = ipw.HTML(
value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>1</b></font>",
layout = ipw.Layout(width='auto', height='auto')
)
HTML3 = ipw.HTML(
value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>1</b></font>",
layout = ipw.Layout(width='auto', height='auto')
)
HTML4 = ipw.HTML(
value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>1</b></font>",
layout = ipw.Layout(width='auto', height='auto')
)
HTML5 = ipw.HTML(
value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>1</b></font>",
layout = ipw.Layout(width='auto', height='auto')
)
HTML6 = ipw.HTML(
value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>R/r = 1</b></font>",
layout = ipw.Layout(width='auto', height='auto')
)
Dropdown1 = ipw.Dropdown(
options = ['Esfuerzos flexionantes', 'Esfuerzos cortantes'],
value ='Esfuerzos flexionantes',
layout = ipw.Layout(width='auto', margin='15px 0px 15px 0px')
)
def tipo(esfuerzo):
if esfuerzo == 'Esfuerzos flexionantes':
HTMLMath1.value = r"<font size = '3'; font color = 'cyan'>$$\sigma_1=\frac{M_{xz}R}{I}$$"
HTMLMath2.value = r"<font size = '3'; font color = 'cyan'>$$\sigma_2=\frac{M_{xy}R}{I}$$"
HTMLMath3.value = r"<font size = '3'; font color = 'cyan'>$$\sigma_3=\frac{M_{xz}R}{I\sqrt{2}}+\frac{M_{xy}R}{I\sqrt{2}}$$"
HTMLMath4.value = r"<font size = '3'; font color = 'cyan'>$$\sigma_4=\frac{R}{I}\sqrt{M_{xz}^2+M_{xy}^2}$$"
HTMLs[9].value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>M1</b></font>"
HTMLs[10].value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>M2</b></font>"
elif esfuerzo == 'Esfuerzos cortantes':
HTMLMath1.value = r"<font size = '3'; font color = 'cyan'>$$\tau_1=\frac{4\left(R^2-Rr-r^2\right)V_y}{3\pi\left(R^4-r^4\right)}$$"
HTMLMath2.value = r"<font size = '3'; font color = 'cyan'>$$\tau_2=\frac{4\left(R^2-Rr-r^2\right)V_z}{3\pi\left(R^4-r^4\right)}$$"
HTMLMath3.value = r"<font size = '3'; font color = 'cyan'>$$\tau_3=\frac{2R^2\left(V_y+V_z\right)}{3\pi \left(R^4-r^4\right)}$$"
HTMLMath4.value = r"<font size = '3'; font color = 'cyan'>$$\tau_4=\frac{4\left(R^2-Rr-r^2\right)\sqrt{V_y^2+V_z^2}}{3\pi\left(R^4-r^4\right)}$$"
HTMLs[9].value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>V1</b></font>"
HTMLs[10].value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>V2</b></font>"
f_tipo = ipw.interactive(tipo, esfuerzo = Dropdown1)
Dropdown1.description='';
FloatSlider1 = ipw.FloatSlider(
description = '',
value = 1,
min = 1,
max = 100,
step = 1,
readout = False,
style = {'handle_color': 'cyan'}
)
HBox1 = ipw.HBox([FloatSlider1, HTML2])
def upd_html2(valor):
texto2 = str(valor)
HTML2.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto2}</b></font>"
f_upd_html2 = ipw.interactive(upd_html2, valor = FloatSlider1)
FloatSlider1.description = ''
FloatSlider2 = ipw.FloatSlider(
description = '',
value = 1,
min = 1,
max = 100,
step = 1,
readout = False,
style = {'handle_color': 'cyan'}
)
HBox2 = ipw.HBox([FloatSlider2, HTML3])
def upd_html3(valor):
texto3 = str(valor)
HTML3.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto3}</b></font>"
f_upd_html3 = ipw.interactive(upd_html3, valor = FloatSlider2)
FloatSlider2.description = ''
FloatSlider3 = ipw.FloatSlider(
description = '',
value = 1,
min = 1,
max = 100,
step = 1,
readout = False,
style = {'handle_color': 'cyan'}
)
HBox3 = ipw.HBox([FloatSlider3, HTML4])
def upd_html4(valor):
texto4 = str(valor)
HTML4.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto4}</b></font>"
f_upd_html4 = ipw.interactive(upd_html4, valor = FloatSlider3)
FloatSlider3.description = ''
FloatSlider4 = ipw.FloatSlider(
description = '',
value = 1,
min = 1,
max = 100,
step = 1,
readout = False,
style = {'handle_color': 'cyan'},
layout = ipw.Layout(width='20%')
)
# arrays holding the computed data for each trial; start filled with zeros
x = np.zeros(100)
y = np.zeros(100)
er = np.zeros(100)
def upd_html5(valor):
texto5 = str(valor)
HTML5.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto5}</b></font>"
FloatText1.value = x[int(valor - 1)]
FloatText2.value = y[int(valor - 1)]
FloatText3.value = er[int(valor - 1)]
f_upd_html5 = ipw.interactive(upd_html5, valor = FloatSlider4)
FloatSlider4.description = ''
FloatSlider5 = ipw.FloatSlider(
description = '',
value = 0,
min = 0,
max = 99,
step = 1,
readout = False,
style = {'handle_color': 'cyan'}
)
HBox7 = ipw.HBox([FloatSlider5, HTML1])
def upd_html1(valor):
texto6 = str(valor)
HTML1.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto6}</b></font>"
f_upd_html1 = ipw.interactive(upd_html1, valor = FloatSlider5)
FloatSlider5.description = ''
FloatText1 = ipw.FloatText(
description = '',
value = 1,
disabled = False,
layout = ipw.Layout(width='10%')
)
FloatText2 = ipw.FloatText(
description = '',
value = 1,
disabled = False,
layout = ipw.Layout(width='10%')
)
FloatText3 = ipw.FloatText(
description = '',
value = 1,
disabled = True,
layout = ipw.Layout(width='10%')
)
HBox6 = ipw.HBox([HTMLs[8], FloatSlider4, HTML5, HTML6, HTMLs[9], FloatText1, HTMLs[10], FloatText2, HTMLs[11], FloatText3],
layout = ipw.Layout(justify_content = 'space-between')
)
ToggleButton1 = ipw.ToggleButton(
description = 'R/r >= √2',
value = 0,
layout = ipw.Layout(width='180px', height='28px', margin='15px 0px 0px 0px', border='0px', padding='0px'),
button_style= 'warning'
)
def upd_tgbt(valor):
if Dropdown1.value == 'Esfuerzos cortantes':
if valor == 1:
ToggleButton1.description = 'R/r < √2'
ToggleButton1.button_style = 'primary'
HTMLMath3.value = r"<font size = '3'; font color = 'cyan'>$$\tau_3=\frac{\left(R+r\right)\left(V_y+V_z\right)}{\pi\left(R^2+r^2\right)\left(R-\sqrt{2r^2-R^2}\right)}$$"
else:
ToggleButton1.description = 'R/r >= √2'
ToggleButton1.button_style = 'warning'
HTMLMath3.value = r"<font size = '3'; font color = 'cyan'>$$\tau_3=\frac{2R^2\left(V_y+V_z\right)}{3\pi \left(R^4-r^4\right)}$$"
f_upd_tgbt = ipw.interactive(upd_tgbt, valor = ToggleButton1)
HBox5 = ipw.HBox([Dropdown1, ToggleButton1])
VBox1 = ipw.VBox([HTMLs[0], HBox5, HTMLs[1], HTMLMath1, HTMLs[2], HTMLMath2, HTMLs[3], HTMLMath3, HTMLs[4], HTMLMath4, HTMLs[6], HBox2, HTMLs[5], HBox1, HTMLs[7], HBox3, HTMLs[12], HBox7],
layout = ipw.Layout(border='0px', padding='0px', width='25%', height='auto')
)
scale_x1 = bq.LinearScale(min = 0, max = 100)
scale_y1 = bq.LinearScale(min = 0, max = 100)
Lines1 = bq.Lines(x = np.arange(1,101), y = np.random.rand(1, 100),
scales = {'x': scale_x1, 'y': scale_y1,},
labels=['Formula 1'],
display_legend=True,
colors = ['red'],
opacities = [1],
stroke_width = 1.5
)
Lines2 = bq.Lines(x = np.arange(1,101), y = np.random.rand(1, 100),
scales = {'x': scale_x1, 'y': scale_y1,},
labels=['Formula 2'],
display_legend=True,
colors = ['green'],
opacities = [1],
stroke_width = 1.5
)
Lines3 = bq.Lines(x = np.arange(1,101), y = np.random.rand(1, 100),
scales = {'x': scale_x1, 'y': scale_y1,},
labels=['Formula 3'],
display_legend=True,
colors = ['white'],
opacities = [1],
stroke_width = 1.5
)
Lines4 = bq.Lines(x = np.arange(1,101), y = np.random.rand(1, 100),
scales = {'x': scale_x1, 'y': scale_y1,},
labels=['Formula 4'],
display_legend=True,
colors = ['blue'],
opacities = [0.5],
stroke_width = 1.5
)
scale_y11 = bq.LinearScale(min = 0, max = 60)
Lines5 = bq.Lines(x = np.arange(1,101), y = np.random.rand(1, 100),
scales = {'x': scale_x1, 'y': scale_y11,},
labels=['% Error'],
display_legend=True,
colors = ['yellow'],
opacities = [0.5],
stroke_width = 1.5,
interpolation = 'basis'
)
scale_x2 = bq.LinearScale(min = 0, max = 100)
scale_y2 = bq.LinearScale(min = 0, max = 10)
Lines6 = bq.Lines(x = np.arange(1,101), y = np.zeros((1,100)),
scales = {'x': scale_x2, 'y': scale_y2,},
labels=['Esfuerzos Flexionantes'],
display_legend=True,
opacities = [1],
colors = ['magenta'],
interpolation = 'basis'
)
scale_y22 = bq.LinearScale(min = 0, max = 60)
Lines7 = bq.Lines(x = np.arange(1,101), y = np.zeros((1,100)),
scales = {'x': scale_x2, 'y': scale_y22,},
labels=['Esfuerzos Cortantes'],
display_legend=True,
opacities = [0.8],
colors = ['orange'],
interpolation = 'basis'
)
scale_x3 = bq.LinearScale(min = -100, max = 100)
scale_y3 = bq.LinearScale(min = -100, max = 100)
tetha = np.linspace(0,2*np.pi,50)
Lines8 = bq.Lines(x = FloatSlider5.value * np.cos(tetha), y = FloatSlider5.value * np.sin(tetha),
scales = {'x': scale_x3, 'y': scale_y3,},
# labels=['Circulo menor'],
# display_legend=True,
opacities = [1],
colors = ['black'],
interpolation = 'basis'
)
Lines9 = bq.Lines(x = FloatSlider3.value * np.cos(tetha), y = FloatSlider3.value * np.sin(tetha),
scales = {'x': scale_x3, 'y': scale_y3,},
# labels=['Circulo mayor'],
# display_legend=True,
opacities = [1],
colors = ['black'],
interpolation = 'basis'
)
scale_x4 = bq.LinearScale(min = 0, max = 100)
scale_y4 = bq.LinearScale(min = 0, max = 100)
Lines10 = bq.Lines(x = np.arange(1,101), y = np.zeros((1,100)),
scales = {'x': scale_x4, 'y': scale_y4,},
labels=['<NAME>'],
display_legend=True,
opacities = [1],
colors = ['blue'],
interpolation = 'basis'
)
Lines11 = bq.Lines(x = np.arange(1,101), y = np.zeros((1,100)),
scales = {'x': scale_x4, 'y': scale_y4,},
labels=['<NAME>'],
display_legend=True,
opacities = [1],
colors = ['red'],
interpolation = 'basis'
)
ax_x1 = bq.Axis(scale = scale_x1, label = 'Ensayo', label_color = 'cyan', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'cyan'}, color='cyan')
ax_y1 = bq.Axis(scale = scale_y1, label = 'Magnitud del Esfuerzo', label_color = 'cyan', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'cyan'}, color='cyan', orientation = 'vertical')
ax_y11 = bq.Axis(scale = scale_y11, label = '% Error', label_color = 'cyan', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'cyan'}, color='cyan', orientation = 'vertical', side='right')
m_fig = dict(left=50, top=20, bottom=30, right=30)
panzoom = bq.interacts.PanZoom(scales={'x': [scale_x1], 'y': [scale_y1]})
fig1 = bplt.figure(title='Comparacion de formulas: Error disperso', title_style={'font-size': '20px','fill': 'cyan'}, marks = [Lines1, Lines2, Lines3, Lines4, Lines5], axes = [ax_x1, ax_y1, ax_y11], fig_margin = m_fig, layout=ipw.Layout(width='auto', height='50%'))
fig1.background_style = {'fill': 'Black'}
fig1.interaction = panzoom
ax_x2 = bq.Axis(scale = scale_x2, label = 'Relacion', label_color = 'cyan', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'cyan'}, color='cyan')
ax_y2 = bq.Axis(scale = scale_y2, label = '% de Error: Flexionantes', label_color = 'cyan', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'cyan'}, color='cyan', orientation = 'vertical')
ax_y22 = bq.Axis(scale = scale_y22, label = '% de Error: Cortantes', label_color = 'cyan', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'cyan'}, color='cyan', orientation = 'vertical', side='right')
m_fig = dict(left=50, top=20, bottom=30, right=30)
panzoom = bq.interacts.PanZoom(scales={'x': [scale_x2], 'y': [scale_y22]})
fig2 = bplt.figure(title='Comparacion de formulas: Error maximo', title_style={'font-size': '20px','fill': 'cyan'}, marks = [Lines6, Lines7], axes = [ax_x2, ax_y2, ax_y22], fig_margin = m_fig, layout=ipw.Layout(width='auto', height='50%'))
fig2.background_style = {'fill': 'Black'}
fig2.interaction = panzoom
ax_x3 = bq.Axis(scale = scale_x3, label = 'X', label_color = 'black', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'black'}, color='black')
ax_y3 = bq.Axis(scale = scale_y3, label = 'Y', label_color = 'black', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'black'}, color='black', orientation = 'vertical')
m_fig = dict(left=50, top=20, bottom=30, right=30)
panzoom = bq.interacts.PanZoom(scales={'x': [scale_x3], 'y': [scale_y3]})
fig3 = bplt.figure(title='Seccion del eje', title_style={'font-size': '20px','fill': 'black'}, marks = [Lines8, Lines9], axes = [ax_x3, ax_y3], fig_margin = m_fig, layout=ipw.Layout(width='25%', height='auto'))
fig3.interaction = panzoom
fig3.max_aspect_ratio = 1
fig3.min_aspect_ratio = 1
ax_x4 = bq.Axis(scale = scale_x4, label = 'Diametro interior del eje hueco', label_color = 'black', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'black'}, color='black')
ax_y4 = bq.Axis(scale = scale_y4, label = 'Esfuerzo cortante', label_color = 'black', label_offset = '-2em', grid_lines = 'none', tick_format = '0.0f', tick_style = {'stroke': 'black'}, color='black', orientation = 'vertical')
m_fig = dict(left=50, top=20, bottom=30, right=30)
panzoom = bq.interacts.PanZoom(scales={'x': [scale_x4], 'y': [scale_y4]})
fig4 = bplt.figure(title='Variacion del esfuerzo cortante en funcion al radio interno', title_style={'font-size': '20px','fill': 'black'}, marks = [Lines10, Lines11], axes = [ax_x4, ax_y4], fig_margin = m_fig, layout=ipw.Layout(width='75%', height='auto'))
fig4.interaction = panzoom
HBox7 = ipw.HBox([fig3, fig4],
layout = ipw.Layout(height='300px', justify_content = 'space-between')
)
VBox2 = ipw.VBox([fig1, fig2],
layout = ipw.Layout(margin = '0px', border='0px', padding='0px', width='75%', height='auto', justify_content = 'flex-end', align_items = 'stretch')
)
HBox4 = ipw.HBox([VBox1, VBox2])
Lines6_d = Lines6.y
def flex_1(valor):
global x
global y
global er
z = np.random.randint(1000, 10000, size=(100))
x = z * FloatSlider2.value
y = z
if Dropdown1.value == 'Esfuerzos flexionantes':
f1 = (FloatSlider3.value / valor) * x
f2 = (FloatSlider3.value / valor) * y
f3 = (FloatSlider3.value / valor) * ((x + y) / np.sqrt(2))
f4 = (FloatSlider3.value / valor) * np.sqrt(x ** 2 + y ** 2)
elif Dropdown1.value == 'Esfuerzos cortantes':
if FloatSlider5.value < FloatSlider3.value:
f1 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * FloatSlider5.value + FloatSlider5.value ** 2) * x)
f2 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * FloatSlider5.value + FloatSlider5.value ** 2) * y)
if FloatSlider5.value <= FloatSlider3.value / np.sqrt(2):
f3 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (FloatSlider3.value ** 2 * (x + y))
else:
f3 = ((2 * np.sqrt(2) * 0.3535 * (FloatSlider3.value + FloatSlider5.value)) / (np.pi * (FloatSlider3.value ** 2 + FloatSlider5.value ** 2) * (FloatSlider3.value - np.sqrt(abs(2 * FloatSlider5.value ** 2 - FloatSlider3.value ** 2))))) * (x + y)
f4 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * FloatSlider5.value + FloatSlider5.value ** 2)) * np.sqrt((x ** 2 + y ** 2))
else:
f1 = np.random.randint(1, 100, size=(100)) * 0 + 1
f2 = np.random.randint(1, 100, size=(100)) * 0 + 1
f3 = np.random.randint(1, 100, size=(100)) * 0 + 1
f4 = np.random.randint(1, 100, size=(100)) * 0 + 1
mf = np.amax([f1, f2, f3], axis=0)
    er = abs((f4 - mf) / mf) * 100
ep = np.amax(er)
Lines1.y = f1
Lines2.y = f2
Lines3.y = f3
Lines4.y = f4
Lines5.y = er
if Dropdown1.value == 'Esfuerzos flexionantes':
Lines6_d[int(FloatSlider2.value - 1)] = ep
Lines6.y = Lines6_d + 0
elif Dropdown1.value == 'Esfuerzos cortantes':
Lines7_d[int(FloatSlider2.value - 1)] = ep
Lines7.y = Lines7_d + 0
Lines4.scales['y'].max = np.amax([f1, f2, f3, f4]) * 1.1
f_flex_1 = ipw.interactive(flex_1, valor = FloatSlider1)
FloatSlider1.description = ''
Lines7_d = Lines7.y
def flex_2(valor):
global x
global y
global er
z = np.random.randint(1000, 10000, size=(100))
x = z * valor
y = z
if Dropdown1.value == 'Esfuerzos flexionantes':
f1 = (FloatSlider3.value / FloatSlider1.value) * x
f2 = (FloatSlider3.value / FloatSlider1.value) * y
f3 = (FloatSlider3.value / FloatSlider1.value) * (x + y) / np.sqrt(2)
f4 = (FloatSlider3.value / FloatSlider1.value) * np.sqrt(x ** 2 + y ** 2)
elif Dropdown1.value == 'Esfuerzos cortantes':
if FloatSlider5.value < FloatSlider3.value:
f1 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * FloatSlider5.value + FloatSlider5.value ** 2) * x)
f2 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * FloatSlider5.value + FloatSlider5.value ** 2) * y)
if FloatSlider5.value <= FloatSlider3.value / np.sqrt(2):
f3 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (FloatSlider3.value ** 2 * (x + y))
else:
f3 = ((2 * np.sqrt(2) * 0.3535 * (FloatSlider3.value + FloatSlider5.value)) / (np.pi * (FloatSlider3.value ** 2 + FloatSlider5.value ** 2) * (FloatSlider3.value - np.sqrt(abs(2 * FloatSlider5.value ** 2 - FloatSlider3.value ** 2))))) * (x + y)
f4 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - FloatSlider5.value ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * FloatSlider5.value + FloatSlider5.value ** 2)) * np.sqrt((x ** 2 + y ** 2))
else:
f1 = np.random.randint(1, 100, size=(100)) * 0 + 1
f2 = np.random.randint(1, 100, size=(100)) * 0 + 1
f3 = np.random.randint(1, 100, size=(100)) * 0 + 1
f4 = np.random.randint(1, 100, size=(100)) * 0 + 1
mf = np.amax([f1, f2, f3], axis=0)
    er = abs((f4 - mf) / mf) * 100
ep = np.amax(er)
Lines1.y = f1
Lines2.y = f2
Lines3.y = f3
Lines4.y = f4
Lines5.y = er
if Dropdown1.value == 'Esfuerzos flexionantes':
Lines6_d[int(valor - 1)] = ep
Lines6.y = Lines6_d + 0
elif Dropdown1.value == 'Esfuerzos cortantes':
Lines7_d[int(valor - 1)] = ep
Lines7.y = Lines7_d + 0
Lines4.scales['y'].max = np.amax([f1, f2, f3, f4]) * 1.1
f_flex_2 = ipw.interactive(flex_2, valor = FloatSlider2)
FloatSlider2.description = ''
Lines11_d = Lines11.y
def flex_3(valor):
global x
global y
global er
z = np.random.randint(1000, 10000, size=(100))
x = z * FloatSlider2.value
y = z
if FloatSlider5.value != 0:
texto7 = 'R/r = ' + str(round(valor / FloatSlider5.value, 2))
HTML6.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto7}</b></font>"
if valor / FloatSlider5.value >= np.sqrt(2):
ToggleButton1.value = 0
elif valor / FloatSlider5.value < np.sqrt(2):
ToggleButton1.value = 1
else:
HTML6.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>R/r = ∞</b></font>"
ToggleButton1.value = 0
if Dropdown1.value == 'Esfuerzos flexionantes':
f1 = (FloatSlider3.value / FloatSlider1.value) * x
f2 = (FloatSlider3.value / FloatSlider1.value) * y
f3 = (FloatSlider3.value / FloatSlider1.value) * ((x + y) / np.sqrt(2))
f4 = (FloatSlider3.value / FloatSlider1.value) * np.sqrt(x ** 2 + y ** 2)
elif Dropdown1.value == 'Esfuerzos cortantes':
if FloatSlider5.value < valor:
f1 = (2 / (3 * np.pi * (valor ** 4 - FloatSlider5.value ** 4))) * (2 * (valor ** 2 - valor * FloatSlider5.value + FloatSlider5.value ** 2) * x)
f2 = (2 / (3 * np.pi * (valor ** 4 - FloatSlider5.value ** 4))) * (2 * (valor ** 2 - valor * FloatSlider5.value + FloatSlider5.value ** 2) * y)
if FloatSlider5.value <= valor / np.sqrt(2):
f3 = (2 / (3 * np.pi * (valor ** 4 - FloatSlider5.value ** 4))) * (valor ** 2 * (x + y))
else:
f3 = ((2 * np.sqrt(2) * 0.3535 * (valor + FloatSlider5.value)) / (np.pi * (valor ** 2 + FloatSlider5.value ** 2) * (valor - np.sqrt(abs(2 * FloatSlider5.value ** 2 - valor ** 2))))) * (x + y)
f4 = (2 / (3 * np.pi * (valor ** 4 - FloatSlider5.value ** 4))) * (2 * (valor ** 2 - valor * FloatSlider5.value + FloatSlider5.value ** 2)) * np.sqrt((x ** 2 + y ** 2))
else:
f1 = np.random.randint(1, 100, size=(100)) * 0 + 1
f2 = np.random.randint(1, 100, size=(100)) * 0 + 1
f3 = np.random.randint(1, 100, size=(100)) * 0 + 1
f4 = np.random.randint(1, 100, size=(100)) * 0 + 1
mf = np.amax([f1, f2, f3], axis=0)
    er = abs((f4 - mf) / mf) * 100
ep = np.amax(er)
Lines1.y = f1
Lines2.y = f2
Lines3.y = f3
Lines4.y = f4
Lines5.y = er
if Dropdown1.value == 'Esfuerzos flexionantes':
Lines6_d[int(FloatSlider2.value - 1)] = ep
Lines6.y = Lines6_d + 0
elif Dropdown1.value == 'Esfuerzos cortantes':
Lines7_d[int(FloatSlider2.value - 1)] = ep
Lines7.y = Lines7_d + 0
Lines4.scales['y'].max = np.amax([f1, f2, f3, f4]) * 1.1
Lines9.x = valor * np.cos(tetha)
Lines9.y = valor * np.sin(tetha)
VM = (2 / (3 * np.pi * (valor ** 4 - 0))) * (2 * (valor ** 2 - valor * 0 + 0 ** 2)) * np.sqrt((FloatText1.value ** 2 + FloatText2.value ** 2))
Lines10.y = np.zeros((1,100)) + VM
VH = (2 / (3 * np.pi * (valor ** 4 - FloatSlider5.value ** 4))) * (2 * (valor ** 2 - valor * FloatSlider5.value + FloatSlider5.value ** 2)) * np.sqrt((FloatText1.value ** 2 + FloatText2.value ** 2))
Lines11_d[int(FloatSlider5.value)] = VH
Lines11.y = Lines11_d + 0
f_flex_3 = ipw.interactive(flex_3, valor = FloatSlider3)
FloatSlider3.description = ''
def flex_5(valor):
global x
global y
global er
z = np.random.randint(1000, 10000, size=(100))
x = z * FloatSlider2.value
y = z
if valor != 0:
texto7 = 'R/r = ' + str(round(FloatSlider3.value / valor, 2))
HTML6.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>{texto7}</b></font>"
if FloatSlider3.value / valor >= np.sqrt(2):
ToggleButton1.value = 0
elif FloatSlider3.value / valor < np.sqrt(2):
ToggleButton1.value = 1
else:
HTML6.value = f"<font size = '3'; font color = 'cyan'; font face = 'Courier New'><b>R/r = ∞</b></font>"
ToggleButton1.value = 0
if Dropdown1.value == 'Esfuerzos flexionantes':
f1 = (FloatSlider3.value / FloatSlider1.value) * x
f2 = (FloatSlider3.value / FloatSlider1.value) * y
f3 = (FloatSlider3.value / FloatSlider1.value) * ((x + y) / np.sqrt(2))
f4 = (FloatSlider3.value / FloatSlider1.value) * np.sqrt(x ** 2 + y ** 2)
elif Dropdown1.value == 'Esfuerzos cortantes':
if valor < FloatSlider3.value:
f1 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - valor ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * valor + valor ** 2) * x)
f2 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - valor ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * valor + valor ** 2) * y)
if valor <= FloatSlider3.value / np.sqrt(2):
f3 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - valor ** 4))) * (FloatSlider3.value ** 2 * (x + y))
else:
f3 = ((2 * np.sqrt(2) * 0.3535 * (FloatSlider3.value + valor)) / (np.pi * (FloatSlider3.value ** 2 + valor ** 2) * (FloatSlider3.value - np.sqrt(abs(2 * valor ** 2 - FloatSlider3.value ** 2))))) * (x + y)
f4 = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - valor ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * valor + valor ** 2)) * np.sqrt((x ** 2 + y ** 2))
else:
f1 = np.random.randint(1, 100, size=(100)) * 0 + 1
f2 = np.random.randint(1, 100, size=(100)) * 0 + 1
f3 = np.random.randint(1, 100, size=(100)) * 0 + 1
f4 = np.random.randint(1, 100, size=(100)) * 0 + 1
mf = np.amax([f1, f2, f3], axis=0)
    er = abs((f4 - mf) / mf) * 100
ep = np.amax(er)
Lines1.y = f1
Lines2.y = f2
Lines3.y = f3
Lines4.y = f4
Lines5.y = er
if Dropdown1.value == 'Esfuerzos flexionantes':
Lines6_d[int(FloatSlider2.value - 1)] = ep
Lines6.y = Lines6_d + 0
elif Dropdown1.value == 'Esfuerzos cortantes':
Lines7_d[int(FloatSlider2.value - 1)] = ep
Lines7.y = Lines7_d + 0
Lines4.scales['y'].max = np.amax([f1, f2, f3, f4]) * 1.1
Lines8.x = valor * np.cos(tetha)
Lines8.y = valor * np.sin(tetha)
VM = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - 0))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * 0 + 0 ** 2)) * np.sqrt((FloatText1.value ** 2 + FloatText2.value ** 2))
Lines10.y = np.zeros((1,100)) + VM
VH = (2 / (3 * np.pi * (FloatSlider3.value ** 4 - valor ** 4))) * (2 * (FloatSlider3.value ** 2 - FloatSlider3.value * valor + valor ** 2)) * np.sqrt((FloatText1.value ** 2 + FloatText2.value ** 2))
Lines11_d[int(valor)] = VH
Lines11.y = Lines11_d + 0
f_flex_5 = ipw.interactive(flex_5, valor = FloatSlider5)
FloatSlider5.description = ''
display(HBox4)
display(HBox6)
display(HBox7)
| AnalisisdeerrorV6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# <h1 align = 'center'> Neural Networks Demystified </h1>
# <h2 align = 'center'> Part 2: Forward Propagation </h2>
#
#
# <h4 align = 'center' > @stephencwelch </h4>
from IPython.display import YouTubeVideo
YouTubeVideo('UJwK6jAStmg')
# <h3 align = 'center'> Variables </h3>
#
# |Code Symbol | Math Symbol | Definition | Dimensions
# | :-: | :-: | :-: | :-: |
# |X|$$X$$|Input Data, each row is an example| (numExamples, inputLayerSize)|
# |y |$$y$$|target data|(numExamples, outputLayerSize)|
# |W1 | $$W^{(1)}$$ | Layer 1 weights | (inputLayerSize, hiddenLayerSize) |
# |W2 | $$W^{(2)}$$ | Layer 2 weights | (hiddenLayerSize, outputLayerSize) |
# |z2 | $$z^{(2)}$$ | Layer 2 activation | (numExamples, hiddenLayerSize) |
# |a2 | $$a^{(2)}$$ | Layer 2 activity | (numExamples, hiddenLayerSize) |
# |z3 | $$z^{(3)}$$ | Layer 3 activation | (numExamples, outputLayerSize) |
# Last time, we set up our neural network on paper. This time, we'll implement it in python. We'll build our network as a python class, and our init method will take care of instantiating important constants and variables. We'll make these values accessible to the whole class by placing a self dot in front of each variable name.
# Our network has 2 inputs, 3 hidden units, and 1 output. These are examples of hyperparameters. Hyperparameters are constants that establish the structure and behavior of a neural network, but are not updated as we train the network. Our learning algorithm is not capable of, for example, deciding that it needs another hidden unit; this is something that WE must decide on before training. What a neural network does learn are parameters, specifically the weights on the synapses.
# We’ll take care of moving data through our network in a method called forward. Rather than pass inputs through the network one at a time, we’re going to use matrices to pass through multiple inputs at once. Doing this allows for big computational speedups, especially when using tools like MATLAB or Numpy. Our input data matrix, X, is of dimension 3 by 2, because we have 3, 2-dimensional examples. Our corresponding output data, y, is of dimension 3 by 1.
#Import code from last time
# %pylab inline
from partOne import *
print X.shape, y.shape
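Since `partOne` isn't reproduced in this file, here is a sketch of equivalent data — the values used throughout the series (hours of sleep, hours of study → test score), scaled the way part one does; treat the exact numbers as an assumption:

```python
import numpy as np

# Assumed to match partOne: (hours sleep, hours study) -> test score.
X = np.array(([3, 5], [5, 1], [10, 2]), dtype=float)
y = np.array(([75], [82], [93]), dtype=float)

# Normalize as in part one: inputs scaled by their column max, scores by 100.
X = X / np.amax(X, axis=0)
y = y / 100

print(X.shape, y.shape)  # (3, 2) (3, 1)
```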
class Neural_Network(object):
def __init__(self):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
def forward(self, X):
        #Propagate inputs through network
        pass  # body is filled in below, once the weights and sigmoid are defined
# Each input value, or element in matrix X, needs to be multiplied by a corresponding weight and then added together with all the other results for each neuron. This is a complex operation, but if we take the three outputs we're looking for as a single row of a matrix, and place all our individual weights into a matrix of weights, we can create the exact behavior we need by multiplying our input data matrix by our weight matrix. Using matrix multiplication allows us to pass multiple inputs through at once by simply adding rows to the matrix X. From here on out, we'll refer to these matrices as X, W one, and z two, where z two is the activity of our second layer. Notice that each entry in z is a sum of weighted inputs to each hidden neuron. z two is of size 3 by 3, one row for each example, and one column for each hidden unit.
# We now have our first official formula, $z^{(2)} = XW^{(1)}$. Matrix notation is really nice here, because it allows us to express the complex underlying process in a single line!
# $$
# z^{(2)} = XW^{(1)} \tag{1}\\
# $$
#
# Now that we have the activities for our second layer, z two, we need to apply the activation function. We'll independently apply the function to each entry in matrix z, using a python method called sigmoid, because we're using a sigmoid as our activation function. Numpy is really nice here, because we can pass in a scalar, vector, or matrix; Numpy will apply the activation function element-wise and return a result of the same dimension as it was given.
def sigmoid(z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
testInput = np.arange(-6,6,0.01)
plot(testInput, sigmoid(testInput), linewidth= 2)
grid(1)
sigmoid(1)
sigmoid(np.array([-1,0,1]))
sigmoid(np.random.randn(3,3))
# We now have our second formula for forward propagation, using f to denote our activation function, we can write that a two, our second layer activity, is equal to f of z two. a two will be a matrix of the same size as z two, 3 by 3.
# $$
# a^{(2)} = f(z^{(2)}) \tag{2}\\
# $$
# To finish forward propagation we need to propagate a two all the way to the output, yhat. We've already done the heavy lifting in the previous layer, so all we have to do now is multiply a two by our second layer weights W2 and apply one more activation function. W2 will be of size 3 by 1, one weight for each synapse. Multiplying a2, a 3 by 3 matrix, by W2, a 3 by 1 matrix, results in a 3 by 1 matrix z three, the activity of our third layer. z3 has three activity values, one for each example. Last but not least, we'll apply our activation function to z three, yielding our official estimate of your test score, yHat.
# $$
# z^{(3)} = a^{(2)}W^{(2)} \tag{3}\\
# $$
# $$
# \hat{y} = f(z^{(3)}) \tag{4}\\
# $$
# We need to implement our forward propagation formulas in python. First we'll initialize our weight matrices in our init method. For starting values, we'll use random numbers.
# We'll implement forward propagation in our forward method, using numpy's built in dot method for matrix multiplication and our own sigmoid method.
class Neural_Network(object):
def __init__(self):
#Define Hyperparameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
#Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
def forward(self, X):
        #Propagate inputs through network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
#Apply sigmoid activation function to scalar, vector, or matrix
return 1/(1+np.exp(-z))
# And there you have it, a python class capable of estimating your test score given how many hours you sleep and how many hours you study. We can pass in our input data and get real outputs. Now, you may be noticing that our estimates are quite terrible. That's because we have not yet trained our network, that's what we'll work on next time.
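As a final usage sketch (the class repeated verbatim so this cell runs on its own, with the same assumed example inputs from part one), instantiating the network and calling forward produces one untrained estimate per example:

```python
import numpy as np

class Neural_Network(object):
    def __init__(self):
        #Define Hyperparameters
        self.inputLayerSize = 2
        self.outputLayerSize = 1
        self.hiddenLayerSize = 3
        #Weights (parameters)
        self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
        self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)

    def forward(self, X):
        #Propagate inputs through network
        self.z2 = np.dot(X, self.W1)
        self.a2 = self.sigmoid(self.z2)
        self.z3 = np.dot(self.a2, self.W2)
        return self.sigmoid(self.z3)

    def sigmoid(self, z):
        #Apply sigmoid activation function to scalar, vector, or matrix
        return 1/(1+np.exp(-z))

# Assumed example data: (hours sleep, hours study), scaled by column max.
X = np.array(([3, 5], [5, 1], [10, 2]), dtype=float)
X = X / np.amax(X, axis=0)

NN = Neural_Network()
yHat = NN.forward(X)
print(yHat)  # one (untrained) score estimate per example, each in (0, 1)
```

Because the weights are random and untrained, the estimates are sigmoid outputs strictly between 0 and 1, but otherwise meaningless until training.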
| Python/ML_DL/DL/Neural-Networks-Demystified-master/Part 2 Forward Propagation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
class Ball:
def __init__(self, color):
        self.color = color  # a ball only holds a color attribute (intended to be a string)
bag = set()  # make one bag
for i in range(3):  # add three red balls
    bag.add(Ball("red"))
for i in range(4):  # add four white balls
    bag.add(Ball("white"))
print(bag)
# -
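One wrinkle in the output above: `print(bag)` shows opaque entries like `<__main__.Ball object at 0x...>` because `Ball` defines no `__repr__`. A small variation (my addition, not part of the original) that makes the set readable and draws one ball at random:

```python
import random

class Ball:
    def __init__(self, color):
        self.color = color

    def __repr__(self):
        # Readable display instead of <__main__.Ball object at 0x...>
        return "Ball({!r})".format(self.color)

bag = set()
for i in range(3):
    bag.add(Ball("red"))
for i in range(4):
    bag.add(Ball("white"))

print(bag)  # now shows Ball('red') / Ball('white') entries
drawn = random.choice(list(bag))  # draw one ball at random from the bag
print(drawn.color)
```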
| math/bag1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os, sys
import torch
from pathlib import Path
import numpy as np
import matplotlib
from matplotlib import cm
matplotlib.use("agg")
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
__file__ = os.path.dirname(os.path.realpath("__file__"))
root_dir = (Path(__file__).parent / "..").resolve()
lib_dir = (root_dir / "lib").resolve()
print("The root path: {:}".format(root_dir))
print("The library path: {:}".format(lib_dir))
assert lib_dir.exists(), "{:} does not exist".format(lib_dir)
if str(lib_dir) not in sys.path:
sys.path.insert(0, str(lib_dir))
from datasets import ConstantGenerator, SinGenerator, SyntheticDEnv
from datasets import DynamicQuadraticFunc
from datasets.synthetic_example import create_example_v1
# +
def draw_fig(save_dir, timestamp, xaxis, yaxis):
save_path = save_dir / '{:04d}'.format(timestamp)
# print('Plot the figure at timestamp-{:} into {:}'.format(timestamp, save_path))
dpi, width, height = 40, 1500, 1500
figsize = width / float(dpi), height / float(dpi)
LabelSize, LegendFontsize, font_gap = 80, 80, 5
fig = plt.figure(figsize=figsize)
cur_ax = fig.add_subplot(1, 1, 1)
cur_ax.scatter(xaxis, yaxis, color="k", s=10, alpha=0.9, label="Timestamp={:02d}".format(timestamp))
cur_ax.set_xlabel("X", fontsize=LabelSize)
cur_ax.set_ylabel("f(X)", rotation=0, fontsize=LabelSize)
cur_ax.set_xlim(-6, 6)
cur_ax.set_ylim(-40, 40)
for tick in cur_ax.xaxis.get_major_ticks():
tick.label.set_fontsize(LabelSize - font_gap)
tick.label.set_rotation(10)
for tick in cur_ax.yaxis.get_major_ticks():
tick.label.set_fontsize(LabelSize - font_gap)
plt.legend(loc=1, fontsize=LegendFontsize)
fig.savefig(str(save_path) + '.pdf', dpi=dpi, bbox_inches="tight", format="pdf")
fig.savefig(str(save_path) + '.png', dpi=dpi, bbox_inches="tight", format="png")
plt.close("all")
def visualize_env(save_dir):
save_dir.mkdir(parents=True, exist_ok=True)
dynamic_env, function = create_example_v1(100, num_per_task=500)
additional_xaxis = np.arange(-6, 6, 0.1)
for timestamp, dataset in dynamic_env:
num = dataset.shape[0]
# timeaxis = (torch.zeros(num) + timestamp).numpy()
xaxis = dataset[:,0].numpy()
xaxis = np.concatenate((additional_xaxis, xaxis))
# compute the ground truth
function.set_timestamp(timestamp)
yaxis = function(xaxis)
draw_fig(save_dir, timestamp, xaxis, yaxis)
home_dir = Path.home()
desktop_dir = home_dir / 'Desktop'
vis_save_dir = desktop_dir / 'vis-synthetic'
visualize_env(vis_save_dir)
# -
# Plot the data
cmd = 'ffmpeg -y -i {:}/%04d.png -pix_fmt yuv420p -vf fps=2 -vf scale=1000:1000 -vb 5000k {:}/vis.mp4'.format(vis_save_dir, vis_save_dir)
print(cmd)
os.system(cmd)
| notebooks/LFNA/synthetic-visualize-env.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classification modeling
# ---
#
# Working with interpolated data!!!
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import random
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, plot_confusion_matrix, classification_report, plot_roc_curve
from sklearn.tree import DecisionTreeClassifier, plot_tree, export_text
random.seed(42)
# -
# ---
# ### Load the data
df = pd.read_csv('../../coastal_upwelling_output/interpolated.csv')
df.rename({'Unnamed: 0':'time'},inplace=True, axis=1)
# df.set_index('time', inplace=True)
df
df.isna().sum()
# ---
# ### Checking feature correlation
#
# One of the big assumptions we make when building logistic regression models is that our independent features are not strongly correlated with each other (little multicollinearity). We can print out a heatmap to check whether our features are correlated to each other or not.
plt.figure(figsize=(12,12))
sns.heatmap(df.corr()[['CUTI']].sort_values(by='CUTI', ascending=False),
annot=True);
# Are the deeper depths correlated here because they change so little?
# ---
# ### Using PolynomialFeatures
#
# Now let's add those features back in and use feature interactions combined with regularization to try upping the accuracy while accounting for the multicollinearity.
# An easy way to get a variety of feature interactions is using sklearn's PolynomialFeatures function. There are four features in this model, so I'll set the degree to 4 so that there will be an engineered feature that includes all 4 of the original features.
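Note that the grid-searched pipeline in the next cell contains only a scaler and the logistic regression, with no PolynomialFeatures step. As a hedged sketch of what that step would generate, here is a minimal reimplementation of the expansion (the helper name `poly_features` is my own, not sklearn's):

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Minimal sketch of sklearn's PolynomialFeatures (bias column omitted):
    every product of input columns with total degree 1..degree."""
    cols = []
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

X_demo = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
Xp = poly_features(X_demo, degree=2)
print(Xp)  # columns: x1, x2, x1^2, x1*x2, x2^2
```

In the pipeline, it would slot in as an extra step before the scaler, e.g. `('poly', PolynomialFeatures(degree=4, include_bias=False))`.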
# +
X = df.drop(columns=['upwelling', 'time', 'CUTI'])
y = df['upwelling']
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
# +
pipe = Pipeline([
('sc', StandardScaler()),
('logreg', LogisticRegression(max_iter=1000, solver='liblinear'))
])
pipe_params = {
'logreg__penalty':['l1', 'l2'],
'logreg__C': np.linspace(0.001, 1, 10)
}
gs_lr = GridSearchCV(pipe, pipe_params, cv=5, verbose=1, return_train_score=True)
gs_lr.fit(X_train, y_train)
# -
print(f'Best parameters: {gs_lr.best_params_}')
print(f'Best score: {gs_lr.best_score_}')
# Now that we have the best parameters, we can inspect the fitted logistic regression model and see what the coefficients are for our features.
print(gs_lr.cv_results_['mean_train_score'].mean())
print(gs_lr.cv_results_['mean_test_score'].mean())
print(f'Train accuracy: {gs_lr.score(X_train, y_train)}')
print(f'Test accuracy: {gs_lr.score(X_test, y_test)}')
# Question to self: do the coefficients need to be exponentiated to get their actual values, since the logistic regression model uses the logit function to transform the data?
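On the question above: scikit-learn's `LogisticRegression` reports coefficients on the log-odds scale, so their sign and relative size are interpretable directly; exponentiating converts each one into an odds ratio (the multiplicative change in the odds per unit increase of that standardized feature). A sketch with hypothetical coefficient values, not the model's actual output:

```python
import numpy as np

coefs = np.array([0.7, -0.2, 1.1])  # hypothetical standardized coefficients
odds_ratios = np.exp(coefs)
# e.g. the first feature multiplies the odds of upwelling by ~2.01 per
# one-standard-deviation increase, the second by ~0.82 (a decrease).
print(odds_ratios)
```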
gs_lr.predict(X_train)
gs_lr.best_estimator_['logreg'].coef_
coefs = gs_lr.best_estimator_['logreg'].coef_[0]
sorted(list(zip(X.columns, coefs)), key=lambda x: x[1])[:10]
sorted(list(zip(X.columns, coefs)), key=lambda x: x[1])[-10:]
gs_lr_train_preds = gs_lr.predict(X_train)
gs_lr_test_preds = gs_lr.predict(X_test)
# +
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
cm = confusion_matrix(y_train, gs_lr_train_preds)
ConfusionMatrixDisplay(cm).plot(ax=ax1)
ax1.set_title('Confusion Matrix: Train Data')
cm = confusion_matrix(y_test, gs_lr_test_preds)
ConfusionMatrixDisplay(cm).plot(ax=ax2)
ax2.set_title('Confusion Matrix: Test Data');
# -
# Looks like our false negatives outnumber our false positives
print(classification_report(y_test, gs_lr_test_preds))
# +
# ROC curve
plot_roc_curve(gs_lr, X_test, y_test)
# add worst case scenario line
plt.plot([0,1],[0,1], label='baseline', linestyle='--')
# add a legend
plt.legend();
# want AUC (area under curve) to be as close to 1 as possible
# -
# #### Explore misclassified data
# Get indices of misclassified data (source: https://stackoverflow.com/questions/25551977/retrieve-misclassified-documents-using-scikitlearn)
misclass_ind_lr = np.where(y_test != gs_lr_test_preds)
misclass_ind_lr
X_test
df
df.iloc[X_test.index]['time']
X_test_times = df.iloc[X_test.index]['time']
X_test_times.iloc[misclass_ind_lr]
pd.DataFrame(X_test_times.iloc[misclass_ind_lr]).reset_index(drop=True)
frames_lr = [pd.DataFrame(X_test_times.iloc[misclass_ind_lr]), X_test.iloc[misclass_ind_lr], pd.DataFrame(y_test.iloc[misclass_ind_lr])]
misclass_df_lr = pd.concat(frames_lr, axis=1)
misclass_df_lr
# ---
# ### Decision tree classifiers
# Decision trees come in a lot of different shapes, so it'd be best to use GridSearchCV to find the best parameters for a tree for upwelling classification.
X = df.drop(columns=['time', 'upwelling', 'CUTI'])
y = df['upwelling']
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
# +
param_grid = {
'max_depth': [5, 7, 9],
'min_samples_split': [5, 10, 15, 20],
'min_samples_leaf': [2, 3, 4, 5, 6],
'ccp_alpha': [0, 0.01, 0.1, 1, 10]
}
gs_dt = GridSearchCV(estimator=DecisionTreeClassifier(),
param_grid=param_grid,
verbose=1,
cv=5)
# %time gs_dt.fit(X_train, y_train)
# -
gs_dt.best_estimator_
print(f'Score on training set: {gs_dt.score(X_train, y_train)}')
print(f'Score on testing set: {gs_dt.score(X_test, y_test)}')
gs_dt_train_preds = gs_dt.predict(X_train)
gs_dt_test_preds = gs_dt.predict(X_test)
# +
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
cm = confusion_matrix(y_train, gs_dt_train_preds)
ConfusionMatrixDisplay(cm).plot(ax=ax1)
ax1.set_title('Confusion Matrix: Train Data')
cm = confusion_matrix(y_test, gs_dt_test_preds)
ConfusionMatrixDisplay(cm).plot(ax=ax2)
ax2.set_title('Confusion Matrix: Test Data');
# -
print(gs_dt.best_estimator_.feature_importances_)
list(X_train.columns)
# We saw that `seawater_temperature` was the most strongly correlated feature to upwelling, but `sea_surface_temperature` ended up having the greatest feature importance. I want to trust the model on this, but I'm wondering why this happened.
# +
# Establish size of figure.
plt.figure(figsize = (50, 30))
# Plot our tree.
plot_tree(gs_dt.best_estimator_,
feature_names = X_train.columns,
class_names = ['Not upwelling', 'Upwelling'],
filled = True);
# + tags=[]
print(export_text(gs_dt.best_estimator_,
list(X_train.columns)));
# -
print(classification_report(y_test, gs_dt.predict(X_test)))
# +
# ROC curve
fig, ax = plt.subplots(figsize=(8,8))
plot_roc_curve(gs_dt, X_test, y_test, ax=ax, name='Decision Tree')
plot_roc_curve(gs_lr, X_test, y_test, ax=ax, name='Logistic Regression')
# add worst case scenario line
plt.plot([0,1],[0,1], label='baseline', linestyle='--')
# add a legend
plt.legend();
# want AUC (area under curve) to be as close to 1 as possible
# -
# Interpretation goes here
# #### Explore misclassified data
# Get indices of misclassified data
misclass_ind_dt = np.where(y_test != gs_dt_test_preds)
# misclass_ind_dt
X_test_times = df.iloc[X_test.index]['time']
X_test_times.iloc[misclass_ind_dt]
pd.DataFrame(X_test_times.iloc[misclass_ind_dt]).reset_index(drop=True)
frames_dt = [pd.DataFrame(X_test_times.iloc[misclass_ind_dt]), X_test.iloc[misclass_ind_dt], pd.DataFrame(y_test.iloc[misclass_ind_dt])]
misclass_df_dt = pd.concat(frames_dt, axis=1)
misclass_df_dt
# +
df['CUTI'] = df['CUTI']
df['CUTI']
| notebooks/.ipynb_checkpoints/09_interpolated_modeling-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="-eFju4_DDKeX"
# # Lambda School Data Science - Ridge Regression
#
# Regularize your way to a better tomorrow.
# + [markdown] colab_type="text" id="5v5cBm19JxOj"
# # Lecture
#
# Data science depends on math, and math is generally focused on situations where:
#
# 1. a solution exists,
# 2. the solution is unique,
# 3. the solution's behavior changes continuously with the initial conditions.
#
# These are known as [well-posed problems](https://en.wikipedia.org/wiki/Well-posed_problem), and are the sorts of assumptions so core in traditional techniques that it is easy to forget about them. But they do matter, as there can be exceptions:
#
# 1. no solution - e.g. no $x$ such that $Ax = b$
# 2. multiple solutions - e.g. several $x_1, x_2, ...$ such that $Ax = b$
# 3. "chaotic" systems - situations where small changes in initial conditions interact and reverberate in essentially unpredictable ways - for instance, the difficulty in longterm predictions of weather (N.B. not the same thing as longterm predictions of *climate*) - you can think of this as models that fail to generalize well, because they overfit on the training data (the initial conditions)
#
# Problems suffering from the above are called ill-posed problems. Relating to linear algebra and systems of equations, the only truly well-posed problems are those with a single unique solution.
#
# 
#
# Think for a moment - what would the above plot look like if there was no solution? If there were multiple solutions? And how would that generalize to higher dimensions?
#
# A lot of what you covered with linear regression was about getting matrices into the right shape for them to be solvable in this sense. But some matrices just won't submit to this, and other problems may technically "fit" linear regression but still be violating the above assumptions in subtle ways.
#
# [Overfitting](https://en.wikipedia.org/wiki/Overfitting) is in some ways a special case of this - an overfit model uses more features/parameters than is "justified" by the data (essentially by the *dimensionality* of the data, as measured by $n$ the number of observations). As the number of features approaches the number of observations, linear regression still "works", but it starts giving fairly perverse results. In particular, it results in a model that fails to *generalize* - and so the core goal of prediction and explanatory power is undermined.
#
# How is this related to well and ill-posed problems? It's not clearly a no solution or multiple solution case, but it does fall in the third category - overfitting results in fitting to the "noise" in the data, which means the particulars of one random sample or another (different initial conditions) will result in dramatically different models.
#
# ## Stop and think - what are ways to address these issues?
#
# Let's examine in the context of housing data.
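# Before the housing data, a tiny NumPy aside (an assumed illustration, not part of the lecture data) of the first two failure modes:

```python
import numpy as np

A = np.array([[1., 1.],
              [2., 2.]])              # rank-deficient: row 2 = 2 * row 1

# Case 1 - no solution: b is outside the column space of A
b_none = np.array([1., 0.])
x_ls, residual, rank, _ = np.linalg.lstsq(A, b_none, rcond=None)
print(rank)                           # rank 1, fewer than the 2 unknowns

# Case 2 - infinitely many solutions: any x with x1 + x2 = 1 works
b_many = np.array([1., 2.])
for cand in ([1., 0.], [0., 1.], [0.5, 0.5]):
    assert np.allclose(A @ np.array(cand), b_many)
```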
# + colab={"base_uri": "https://localhost:8080/", "height": 206} colab_type="code" id="TDh_Oz9HDHeR" outputId="f3e4d42e-57c0-432b-c369-95522bc37dd3"
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.preprocessing import scale
boston = load_boston()
boston.data = scale(boston.data) # Very helpful for regularization!
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df['Price'] = boston.target
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3u24Yr-SkIhb" outputId="3cc8f97f-96d0-4b08-ced9-e29d1c740a22"
df.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0vlZShpFkll2" outputId="aeeeee4c-8dfc-4b63-e73a-98863358bdbb"
# Let's try good old least squares!
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
X = df.drop('Price', axis='columns')
y = df.Price
lin_reg = LinearRegression().fit(X, y)
mean_squared_error(y, lin_reg.predict(X))
# + [markdown] colab_type="text" id="erOFuJKWlTad"
# That seems like a pretty good score, but...
#
# 
#
# Chances are this doesn't generalize very well. You can verify this by splitting the data to properly test model validity.
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="CG6DZ1UcqbEx" outputId="04af7cd1-5847-4531-b105-32a0cf449dd7"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=43)
lin_reg_split = LinearRegression().fit(X_train, y_train)
print(mean_squared_error(y, lin_reg_split.predict(X)))
print(mean_squared_error(y_test, lin_reg_split.predict(X_test)))
# + [markdown] colab_type="text" id="ILHGe53Iqehg"
# Oops! 💥
#
# ### What can we do?
#
# - Use fewer features - sure, but it can be a lot of work to figure out *which* features, and (in cases like this) there may not be any good reason to really favor some features over another.
# - Get more data! This is actually a pretty good approach in tech, since apps generate lots of data all the time (and we made this situation by artificially constraining our data). But for case studies, existing data, etc. it won't work.
# - **Regularize!**
#
# ## Regularization just means "add bias"
#
# OK, there's a bit more to it than that. But that's the core intuition - the problem is the model working "too well", so fix it by making it harder for the model!
#
# It may sound strange - a technique that is purposefully "worse" - but in certain situations, it can really get results.
#
# What's bias? In the context of statistics and machine learning, bias is when a predictive model fails to identify relationships between features and the output. In a word, bias is *underfitting*.
#
# We want to add bias to the model because of the [bias-variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff) - variance is the sensitivity of a model to the random noise in its training data (i.e. *overfitting*), and bias and variance are naturally (inversely) related. Increasing one will always decrease the other, with regards to the overall generalization error (predictive accuracy on unseen data).
#
# Visually, the result looks like this:
#
# 
#
# The blue line is overfit, using more dimensions than are needed to explain the data, so much of its movement tracks noise and won't generalize well. The green line still fits the data, but is less susceptible to the noise - depending on how exactly we parameterize "noise" we may throw out actual correlation, but if we balance it right we keep that signal and greatly improve generalizability.
#
# ### Look carefully at the above plot and think of ways you can quantify the difference between the blue and green lines...
#
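# One assumed way to quantify the difference: compare the size of the fitted coefficients. An overfit high-degree polynomial tends to carry a far larger coefficient norm than a restrained fit of the same noisy data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 12)
y = x + rng.normal(scale=0.1, size=x.size)   # truly linear signal + noise

wiggly = np.polyfit(x, y, deg=9)   # chases the noise
sober = np.polyfit(x, y, deg=1)    # matches the true signal

print(np.sum(wiggly**2))   # typically far larger
print(np.sum(sober**2))    # small - slope ~1, intercept ~0
```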
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="7aQlX9e9lQLr" outputId="d5cef801-efec-4c36-fe27-c4b3f02a6750"
# Now with regularization via ridge regression
from sklearn.linear_model import Ridge
ridge_reg = Ridge().fit(X, y)
mean_squared_error(y, ridge_reg.predict(X))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="qiMXYAWGomcB" outputId="5a583ecf-c93f-40f2-8502-41d9e561ba32"
# The score is a bit worse than OLS - but that's expected (we're adding bias)
# Let's try split
ridge_reg_split = Ridge().fit(X_train, y_train)
mean_squared_error(y_test, ridge_reg_split.predict(X_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 4674} colab_type="code" id="PJhjFFeF2uoA" outputId="6289580e-aed7-4839-c3e6-2c643574e2ea"
# A little better (on the same test split as OLS) - can we improve it further?
# We just went with defaults, but as always there's plenty of parameters
help(Ridge)
# + [markdown] colab_type="text" id="F4eY9TKw4S4F"
# How to tune alpha? For now, let's loop and try values.
#
# (For long-term/stretch/next week, check out [cross-validation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html#sklearn.linear_model.RidgeCV).)
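# As a preview, a minimal `RidgeCV` sketch on synthetic data (assumed usage: pass candidate alphas and it cross-validates internally):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.datasets import make_regression

X_syn, y_syn = make_regression(n_samples=100, n_features=20,
                               noise=10.0, random_state=0)
reg = RidgeCV(alphas=[0.1, 1.0, 10.0, 100.0]).fit(X_syn, y_syn)
print(reg.alpha_)   # the alpha chosen by internal cross-validation
```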
# + colab={"base_uri": "https://localhost:8080/", "height": 3490} colab_type="code" id="DISx148Z4Sqi" outputId="6df5438d-e168-4714-82c4-ae4688bfdd23"
alphas = []
mses = []
for alpha in range(0, 200, 1):
ridge_reg_split = Ridge(alpha=alpha).fit(X_train, y_train)
mse = mean_squared_error(y_test, ridge_reg_split.predict(X_test))
print(alpha, mse)
alphas.append(alpha)
mses.append(mse)
# + colab={"base_uri": "https://localhost:8080/", "height": 347} colab_type="code" id="iRB3KHyWiO4y" outputId="a98e6ff2-c184-4fe5-eb76-a64b2705c4b3"
from matplotlib.pyplot import scatter
scatter(alphas, mses);
# + [markdown] colab_type="text" id="WzgTBd-FcctM"
# ## What's the intuition? What are we doing?
#
# The `alpha` parameter corresponds to the weight being given to the extra penalty being calculated by [Tikhonov regularization](https://en.wikipedia.org/wiki/Tikhonov_regularization) (this parameter is sometimes referred to as $\lambda$ in the context of ridge regression).
#
# Normal linear regression (OLS) minimizes the **sum of square error of the residuals**.
#
# Ridge regression minimizes the **sum of square error of the residuals** *AND* **the squared slope of the fit model, times the alpha parameter**.
#
# This is why the MSE for the first model in the for loop (`alpha=0`) is the same as the MSE for linear regression - it's the same model!
#
# As `alpha` is increased, we give more and more penalty to a steep slope. In two or three dimensions this is fairly easy to visualize - beyond, think of it as penalizing coefficient size. Each coefficient represents the slope of an individual dimension (feature) of the model, so ridge regression is just squaring and summing those.
#
# So while `alpha=0` reduces to OLS, as `alpha` approaches infinity the penalty eventually gets so extreme that the model will output every coefficient as 0 (any non-zero coefficient incurs a penalty that outweighs whatever improvement in the residuals), and just fit a flat model with the intercept at the mean of the dependent variable.
#
# Of course, what we want is somewhere in-between these extremes. Intuitively, what we want to do is apply an appropriate "cost" or penalty to the model for fitting parameters, much like adjusted $R^2$ takes into account the cost of adding complexity to a model. What exactly is an appropriate penalty will vary, so you'll have to put on your model comparison hat and give it a go!
#
# PS - scaling the data helps, as that way this cost is consistent and can be added uniformly across features, and it is simpler to search for the `alpha` parameter.
#
# ### Bonus - magic! ✨
#
# Ridge regression doesn't just reduce overfitting and help with the third aspect of well-posed problems (poor generalizability). It can also fix the first two (no unique solution)!
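# The cost described above can be written out by hand — a hedged sketch (ignoring the intercept, which scikit-learn does not penalize) verifying that the fitted coefficients sit at a minimum:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.datasets import make_regression

def ridge_cost(coef, intercept, X, y, alpha):
    """Sum of squared residuals + alpha * sum of squared coefficients."""
    resid = y - (X @ coef + intercept)
    return np.sum(resid**2) + alpha * np.sum(coef**2)

X, y = make_regression(n_samples=60, n_features=5, noise=5.0, random_state=0)
alpha = 10.0
fit = Ridge(alpha=alpha).fit(X, y)
cost = ridge_cost(fit.coef_, fit.intercept_, X, y, alpha)

# Nudging any coefficient away from the fitted value should not lower the cost
bumped = fit.coef_.copy()
bumped[0] += 0.5
assert ridge_cost(bumped, fit.intercept_, X, y, alpha) > cost
```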
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="rdogs9EMX6Vd" outputId="eaf2492e-2a61-4e96-c2eb-a1baf6d2f4c6"
df_tiny = df.sample(10, random_state=27)
print(df_tiny.shape)
X = df_tiny.drop('Price', axis='columns')
y = df_tiny.Price
lin_reg = LinearRegression().fit(X, y)
lin_reg.score(X, y) # Perfect multi-collinearity!
# NOTE - True OLS would 💥 here
# scikit protects us from actual error, but still gives a poor model
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="zesVR59NhA7A" outputId="83b429ca-d564-4d0b-fe0c-0b8943b6275c"
ridge_reg = Ridge().fit(X, y)
ridge_reg.score(X, y) # More plausible (not "perfect")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="WP6zwLtshaVR" outputId="50f9033f-fbbc-4dcb-c96c-17e44bb3df81"
# Using our earlier test split
mean_squared_error(y_test, lin_reg.predict(X_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="QeL_O8vNhSqj" outputId="a3aefe55-881f-4667-869e-c04d8c17a95e"
# Ridge generalizes *way* better (and we've not even tuned alpha)
mean_squared_error(y_test, ridge_reg.predict(X_test))
# + [markdown] colab_type="text" id="x2N5WDV6nd3S"
# ## And a bit more math
#
# The regularization used by Ridge Regression is also known as **$L^2$ regularization**, due to the squaring of the slopes being summed. This corresponds to [$L^2$ space](https://en.wikipedia.org/wiki/Square-integrable_function), a metric space of square-integrable functions that generally measure what we intuitively think of as "distance" (at least, on a plane) - what is referred to as Euclidean distance.
#
# The other famous norm is $L^1$, also known as [taxicab geometry](https://en.wikipedia.org/wiki/Taxicab_geometry), because it follows the "grid" to measure distance like a car driving around city blocks (rather than going directly like $L^2$). When referred to as a distance this is called "Manhattan distance", and can be used for regularization (see [LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics%29), which [uses the $L^1$ norm](https://www.quora.com/What-is-the-difference-between-L1-and-L2-regularization-How-does-it-solve-the-problem-of-overfitting-Which-regularizer-to-use-and-when)).
#
# All this comes down to - regularization means increasing model bias by "watering down" coefficients with a penalty typically based on some sort of distance metric, and thus reducing variance (overfitting the model to the noise in the data). It gives us another lever to try and another tool for our toolchest!
#
# ## Putting it all together - one last example
#
# The official scikit-learn documentation has many excellent examples - [this one](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols_ridge_variance.html#sphx-glr-auto-examples-linear-model-plot-ols-ridge-variance-py) illustrates how ridge regression effectively reduces the variance, again by increasing the bias, penalizing coefficients to reduce the effectiveness of features (but also the impact of noise).
#
# ```
# Due to the few points in each dimension and the straight line that linear regression uses to follow these points as well as it can, noise on the observations will cause great variance as shown in the first plot. Every line’s slope can vary quite a bit for each prediction due to the noise induced in the observations.
#
# Ridge regression is basically minimizing a penalised version of the least-squared function. The penalising shrinks the value of the regression coefficients. Despite the few data points in each dimension, the slope of the prediction is much more stable and the variance in the line itself is greatly reduced, in comparison to that of the standard linear regression
# ```
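# The practical upshot of the $L^1$ vs $L^2$ difference can be sketched quickly (an assumed toy example): on the same data, Lasso ($L^1$) drives some coefficients exactly to zero, while Ridge ($L^2$) only shrinks them:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.datasets import make_regression

# Only 3 of 15 features actually matter
X, y = make_regression(n_samples=120, n_features=15, n_informative=3,
                       noise=5.0, random_state=0)
lasso = Lasso(alpha=5.0).fit(X, y)
ridge = Ridge(alpha=5.0).fit(X, y)
print(np.sum(lasso.coef_ == 0))   # several exact zeros
print(np.sum(ridge.coef_ == 0))   # typically none
```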
# + colab={"base_uri": "https://localhost:8080/", "height": 425} colab_type="code" id="LaOYdswIB6Bo" outputId="7081e218-bc17-478a-f6dd-37735fce78c3"
# Code source: <NAME>
# Modified for documentation by <NAME>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
X_train = np.c_[.5, 1].T
y_train = [.5, 1]
X_test = np.c_[0, 2].T
np.random.seed(0)
classifiers = dict(ols=linear_model.LinearRegression(),
ridge=linear_model.Ridge(alpha=.1))
for name, clf in classifiers.items():
fig, ax = plt.subplots(figsize=(4, 3))
for _ in range(6):
this_X = .1 * np.random.normal(size=(2, 1)) + X_train
clf.fit(this_X, y_train)
ax.plot(X_test, clf.predict(X_test), color='gray')
ax.scatter(this_X, y_train, s=3, c='gray', marker='o', zorder=10)
clf.fit(X_train, y_train)
ax.plot(X_test, clf.predict(X_test), linewidth=2, color='blue')
ax.scatter(X_train, y_train, s=30, c='red', marker='+', zorder=10)
ax.set_title(name)
ax.set_xlim(0, 2)
ax.set_ylim((0, 1.6))
ax.set_xlabel('X')
ax.set_ylabel('y')
fig.tight_layout()
plt.show()
# + [markdown] colab_type="text" id="Xb1MFgypBVQd"
# # Live Lecture - Ridge versus OLS
#
# First and foremost, we'll review/discuss and address any questions about the above. As time allows, we'll look at data and compare OLS to ridge regression - if there's particular data you'd like to volunteer (maybe something you've looked at in the past) please bring it to the lecture!
# + colab={} colab_type="code" id="uE89YmqtBqWX"
# TODO - live data exploration, Ridge versus OLS!
# + [markdown] colab_type="text" id="k0AhsAmuJzT9"
# # Assignment
#
# Following is data describing characteristics of blog posts, with a target feature of how many comments will be posted in the following 24 hours.
#
# https://archive.ics.uci.edu/ml/datasets/BlogFeedback
#
# Investigate - you can try both linear and ridge. You can also sample to smaller data size and see if that makes ridge more important. Don't forget to scale!
#
# Focus on the training data, but if you want to load and compare to any of the test data files you can also do that.
#
# Note - Ridge may not be that fundamentally superior in this case. That's OK! It's still good to practice both, and see if you can find parameters or sample sizes where ridge does generalize and perform better.
#
# When you've fit models to your satisfaction, answer the following question:
#
# ```
# Did you find cases where Ridge performed better? If so, describe (alpha parameter, sample size, any other relevant info/processing). If not, what do you think that tells you about the data?
# ```
#
# You can create whatever plots, tables, or other results support your argument. In this case, your target audience is a fellow data scientist, *not* a layperson, so feel free to dig in!
# -
# 1...50:
# Average, standard deviation, min, max and median of the
# Attributes 51...60 for the source of the current blog post
# With source we mean the blog on which the post appeared.
# For example, myblog.blog.org would be the source of
# the post myblog.blog.org/post_2010_09_10
# 51: Total number of comments before basetime
# 52: Number of comments in the last 24 hours before the
# basetime
# 53: Let T1 denote the datetime 48 hours before basetime,
# Let T2 denote the datetime 24 hours before basetime.
# This attribute is the number of comments in the time period
# between T1 and T2
# 54: Number of comments in the first 24 hours after the
# publication of the blog post, but before basetime
# 55: The difference of Attribute 52 and Attribute 53
# 56...60:
# The same features as the attributes 51...55, but
# features 56...60 refer to the number of links (trackbacks),
# while features 51...55 refer to the number of comments.
# 61: The length of time between the publication of the blog post
# and basetime
# 62: The length of the blog post
# 63...262:
# The 200 bag of words features for 200 frequent words of the
# text of the blog post
# 263...269: binary indicator features (0 or 1) for the weekday
# (Monday...Sunday) of the basetime
# 270...276: binary indicator features (0 or 1) for the weekday
# (Monday...Sunday) of the date of publication of the blog
# post
# 277: Number of parent pages: we consider a blog post P as a
# parent of blog post B, if B is a reply (trackback) to
# blog post P.
# 278...280:
# Minimum, maximum, average number of comments that the
# parents received
# 281: The target: the number of comments in the next 24 hours
# (relative to basetime)
# !wget 'https://archive.ics.uci.edu/ml/machine-learning-databases/00304/BlogFeedback.zip'
# !unzip BlogFeedback.zip
# + colab={} colab_type="code" id="HKKnNsttRpwI"
# TODO - write some code!
from sklearn.preprocessing import scale
data = pd.read_csv('../module4-ridge-regression/blogData_train.csv', header=None)
data.head()
# -
data['target'] = data[280]
data['length'] = data[61]
data['comments'] = data[50]
data['comments_24hr'] = data[51]
data['time_before_publication'] = data[60]
data['comments_in_T1_T2'] = data[52]
data['comments_first24hr'] = data[53]
data['comments_diff'] = data[54]
data.head()
cols = ['target', 'length', 'comments','comments_24hr','time_before_publication', 'comments_diff', 'comments_in_T1_T2','comments_first24hr']
df = data.drop(columns =cols)
df.describe()
target = df[280].values
features = df.drop(columns=280)
# +
X_train, X_test, y_train, y_test = train_test_split(features,target, test_size=.2, random_state = 42)
print('Size X_train', X_train.shape)
print('Size y_train', y_train.shape)
# -
ridge_reg = Ridge(alpha=100).fit(X_train,y_train)
z = mean_squared_error(y_test,ridge_reg.predict(X_test))
print(z)
# +
from math import sqrt  # needed for rmse below

ridge_reg = Ridge().fit(X_train, y_train)
preds = ridge_reg.predict(X_test)
mse = mean_squared_error(y_test, preds)
rmse = sqrt(mse)
score = ridge_reg.score(features, target)
print('Mean Squared Error: ', mse)
print('Root Mean Squared Error: ',rmse)
print('Train Score: ', ridge_reg.score(X_train,y_train))
print('Test Score: ', ridge_reg.score(X_test,y_test))
# +
# Sweep alpha with np.arange and record the test MSE
mse_l = []
alpha = []
for a in np.arange(0,300,1):
model = Ridge(alpha=a).fit(X_train,y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
alpha.append(a)
mse_l.append(mse)
plt.scatter(alpha,mse_l);
# -
print(mse_l,alpha)
df.head()
# +
from math import sqrt
features = df[[51,52]]
target = df[280]
# -
X_train, X_test, y_train, y_test = train_test_split(features,target,test_size=.2)
# +
lin_reg = LinearRegression()
model = lin_reg.fit(X_train,y_train)
beta_0 = model.intercept_
beta_1 = model.coef_[0]
preds = lin_reg.predict(X_test)
mse = mean_squared_error(y_test, preds)
rmse = sqrt(mse)
print('Slope Coefficient: ', beta_1)
print('Intercept Value: ', beta_0)
print('Mean Squared Error: ', mse)
print('Root Mean Squared Error: ',rmse)
print('Score: ', model.score(X_train,y_train))
# +
df_scale = scale(df)
blogs = pd.DataFrame(df_scale, columns = df.columns)
blogs = blogs.rename(columns={280: 'target'})
blogs.head()
# -
days = [day for day in range(271,277)]
days_features = df[days]
target = df[280]
# +
X_train, X_test, y_train, y_test = train_test_split(days_features, target)
ridge_reg = Ridge().fit(X_train, y_train)
preds = ridge_reg.predict(X_test)
mse = mean_squared_error(y_test, preds)
rmse = sqrt(mse)
score = ridge_reg.score(days_features, target)
print('Mean Squared Error: ', mse)
print('Root Mean Squared Error: ',rmse)
print('Train Score: ', ridge_reg.score(X_train,y_train))
print('Test Score: ', ridge_reg.score(X_test,y_test))
mse_l = []
alpha = []
for a in np.arange(0,500,1):
model = Ridge(alpha=a).fit(X_train,y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
alpha.append(a)
mse_l.append(mse)
plt.scatter(alpha, mse_l)
plt.show();
# -
blogs.head()
features = blogs.drop(columns='target')
target = blogs['target']
# +
X_train, X_test, y_train, y_test = train_test_split(features, target)
ridge_reg = Ridge().fit(X_train, y_train)
preds = ridge_reg.predict(X_test)
mse = mean_squared_error(y_test, preds)
rmse = sqrt(mse)
score = ridge_reg.score(features, target)
print('Mean Squared Error: ', mse)
print('Root Mean Squared Error: ',rmse)
print('Train Score: ', ridge_reg.score(X_train,y_train))
print('Test Score: ', ridge_reg.score(X_test,y_test))
mse_l = []
alpha = []
for a in np.arange(0,500,1):
model = Ridge(alpha=a).fit(X_train,y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
alpha.append(a)
mse_l.append(mse)
plt.scatter(alpha, mse_l)
plt.show();
# +
from sklearn.linear_model import RidgeCV
# ?RidgeCV
# -
features = df.drop(columns=[280])
target = df[280]
# +
ridgecv = RidgeCV(alphas=[1e-3,1e-2,1e-1,1,10,100,1000]).fit(features, target)
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=42)
preds = ridgecv.predict(X_test)
mse = mean_squared_error(y_test, preds)
rmse = sqrt(mse)
print('Mean Squared Error: ', mse)
print('Root Mean Squared Error: ',rmse)
print('Train Score: ', ridgecv.score(X_train,y_train))
print('Test Score: ', ridgecv.score(X_test,y_test))
print('RidgeCV Score: ',ridgecv.score(features, target))
# +
mse_l = []
alpha = []
for a in np.arange(0,500,1):
model = Ridge(alpha=a).fit(X_train,y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
alpha.append(a)
mse_l.append(mse)
print(mse_l,alpha)
# +
features = blogs.drop(columns='target')
target = blogs['target']
X_train, X_test, y_train, y_test = train_test_split(features, target)
alpha = []
mse_l = []
for a in np.arange(0,500,1):
model = Ridge(alpha=a).fit(X_train,y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
alpha.append(a)
mse_l.append(mse)
# -
def make_dataframe(x,y):
df = pd.DataFrame({'alpha': x, 'mse':y})
return df
make_dataframe(alpha,mse_l)
# ?Ridge
# + [markdown] colab_type="text" id="Onsn4B2tJ20X"
# # Resources and stretch goals
# + [markdown] colab_type="text" id="o_ZIP6O0J435"
# Resources:
# - https://www.quora.com/What-is-regularization-in-machine-learning
# - https://blogs.sas.com/content/subconsciousmusings/2017/07/06/how-to-use-regularization-to-prevent-model-overfitting/
# - https://machinelearningmastery.com/introduction-to-regularization-to-reduce-overfitting-and-improve-generalization-error/
# - https://towardsdatascience.com/ridge-and-lasso-regression-a-complete-guide-with-python-scikit-learn-e20e34bcbf0b
# - https://stats.stackexchange.com/questions/111017/question-about-standardizing-in-ridge-regression#111022
#
# Stretch goals:
# - Revisit past data you've fit OLS models to, and see if there's an `alpha` such that ridge regression results in a model with lower MSE on a train/test split
# - Yes, Ridge can be applied to classification! Check out [sklearn.linear_model.RidgeClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeClassifier.html#sklearn.linear_model.RidgeClassifier), and try it on a problem you previously approached with a different classifier (note - scikit LogisticRegression also automatically penalizes based on the $L^2$ norm, so the difference won't be as dramatic)
# - Implement your own function to calculate the full cost that ridge regression is optimizing (the sum of squared residuals + `alpha` times the sum of squared coefficients) - this alone won't fit a model, but you can use it to verify the cost of trained models, and to check that the coefficients from the equivalent OLS fit (without regularization) may have a higher cost
| module4-ridge-regression/LS_DS_234_Ridge_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# metadata:
# interpreter:
# hash: 31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6
# name: python3
# ---
import pandas as pd
photos = pd.read_csv('answers_photos.csv')
col1 = pd.DataFrame(photos['answer_id'])
col1 = col1.rename(columns={'answer_id': 'id'})
answers = pd.read_csv('answers_cleaned.csv')
col2 = pd.DataFrame(answers['id'])
col1
merged = col1.merge(col2, how='left', indicator=True)
merged2 = pd.DataFrame(photos).merge(pd.DataFrame(answers), left_on='answer_id', right_on='id', indicator=True)
merged2 = merged2.loc[merged2['_merge']=='both']
merged2
cleanedPhotos = merged2.drop(columns=['id_y', 'question_id', 'body', 'date_written', 'answerer_name', 'answerer_email', 'reported', 'helpful', '_merge'])
cleanedPhotos = cleanedPhotos.rename(columns={'id_x': 'id'})
cleanedPhotos.to_csv('answer_photos_cleaned.csv')
| ETL/cleanUp answer_photos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
#skip
! [ -e /content ] && pip install -Uqq self-supervised # upgrade self-supervised on colab
# +
#default_exp vision.byol
# -
# # BYOL
#
# > **BYOL**: [Bootstrap Your Own Latent A New Approach to Self-Supervised Learning](https://arxiv.org/pdf/2006.07733.pdf)
#export
from fastai.vision.all import *
from self_supervised.augmentations import *
from self_supervised.layers import *
# ## Algorithm
# #### BYOL
# 
# **Abstract**: We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image
# representation learning. BYOL relies on two neural networks, referred to as online and target
# networks, that interact and learn from each other. From an augmented view of an image, we train
# the online network to predict the target network representation of the same image under a different
# augmented view. At the same time, we update the target network with a slow-moving average
# of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a
# new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet
# using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We
# show that BYOL performs on par or better than the current state of the art on both transfer and
# semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
#export
class BYOLModel(Module):
"Compute predictions of v1 and v2"
def __init__(self,encoder,projector,predictor):
self.encoder,self.projector,self.predictor = encoder,projector,predictor
def forward(self,v1,v2):
"Symmetric predictions for symmetric loss calc"
q1 = self.predictor(self.projector(self.encoder(v1)))
q2 = self.predictor(self.projector(self.encoder(v2)))
return (q1,q2)
# You can either use the `BYOLModel` module to create a model by passing predefined `encoder`, `projector` and `predictor` models, or you can use `create_byol_model` and pass just a predefined encoder; the expected input channels are inferred from it.
#
# You may notice `projector/MLP` module defined here is different than the one defined in SimCLR, in the sense that it has a batchnorm layer. You can read this great [blog post](https://untitled-ai.github.io/understanding-self-supervised-contrastive-learning.html) for a better intuition on the effect of the batchnorm layer in `BYOL`.
#export
def create_byol_model(encoder, hidden_size=4096, projection_size=256, bn=True, nlayers=2):
"Create BYOL model"
n_in = in_channels(encoder)
with torch.no_grad(): representation = encoder(torch.randn((2,n_in,128,128)))
projector = create_mlp_module(representation.size(1), hidden_size, projection_size, bn=bn, nlayers=nlayers)
predictor = create_mlp_module(projection_size, hidden_size, projection_size, bn=bn, nlayers=nlayers)
apply_init(projector)
apply_init(predictor)
return BYOLModel(encoder, projector, predictor)
encoder = create_encoder("tf_efficientnet_b0_ns", n_in=3, pretrained=False, pool_type=PoolingType.CatAvgMax)
model = create_byol_model(encoder, hidden_size=2048, projection_size=128)
out = model(torch.randn((2,3,224,224)), torch.randn((2,3,224,224)))
out[0].shape, out[1].shape
# ## BYOL Callback
# The following parameters can be passed;
#
# - **aug_pipelines** list of augmentation pipelines List[Pipeline] created using functions from `self_supervised.augmentations` module. Each `Pipeline` should be set to `split_idx=0`. You can simply use `get_byol_aug_pipelines` utility to get aug_pipelines.
# - **m** is momentum for target encoder/model update, a similar idea to MoCo.
# BYOL algorithm uses 2 views of a given image, and `BYOL` callback expects a list of 2 augmentation pipelines in `aug_pipelines`.
#
# You can simply use the helper function `get_byol_aug_pipelines()`, which accepts augmentation-related arguments such as size, rotate, jitter... and returns a list of 2 pipelines that can be passed to the callback. This function uses `get_multi_aug_pipelines`, which in turn uses `get_batch_augs`. For more information you may refer to the `self_supervised.augmentations` module.
#
# Also, you may choose to pass your own list of aug_pipelines which needs to be List[Pipeline, Pipeline] where Pipeline(..., split_idx=0). Here, `split_idx=0` forces augmentations to be applied in training mode.
#export
@delegates(get_multi_aug_pipelines)
def get_byol_aug_pipelines(size, **kwargs): return get_multi_aug_pipelines(n=2, size=size, **kwargs)
# +
#export
from copy import deepcopy
class BYOL(Callback):
order,run_valid = 9,True
def __init__(self, aug_pipelines, m=0.999, print_augs=False):
assert_aug_pipelines(aug_pipelines)
self.aug1, self.aug2 = aug_pipelines
if print_augs: print(self.aug1), print(self.aug2)
store_attr("m")
def before_fit(self):
"Create target model"
self.target_model = deepcopy(self.learn.model).to(self.dls.device)
for param_k in self.target_model.parameters(): param_k.requires_grad = False
self.learn.loss_func = self.lf
def before_batch(self):
"Generate 2 views of the same image and calculate target projections for these views"
v1,v2 = self.aug1(self.x), self.aug2(self.x.clone())
self.learn.xb = (v1,v2)
with torch.no_grad():
z1 = self.target_model.projector(self.target_model.encoder(v1))
z2 = self.target_model.projector(self.target_model.encoder(v2))
self.learn.yb = (z1,z2)
def _mse_loss(self, x, y):
x,y = F.normalize(x), F.normalize(y)
return 2 - 2 * (x * y).sum(dim=-1)
def lf(self, pred, *yb):
(q1,q2),(z1,z2) = pred,yb
return (self._mse_loss(q1,z2) + self._mse_loss(q2,z1)).mean()
@torch.no_grad()
def _momentum_update_target_encoder(self):
for param_q, param_k in zip(self.learn.model.parameters(), self.target_model.parameters()):
param_k.data = param_k.data * self.m + param_q.data * (1. - self.m)
def after_step(self):
"Momentum update target model"
self._momentum_update_target_encoder()
@torch.no_grad()
def show(self, n=1):
x1,x2 = self.learn.xb
bs = x1.size(0)
idxs = np.random.choice(range(bs),n,False)
x1 = self.aug1.decode(x1[idxs].to('cpu').clone()).clamp(0,1)
x2 = self.aug2.decode(x2[idxs].to('cpu').clone()).clamp(0,1)
images = []
for i in range(n): images += [x1[i],x2[i]]
return show_batch(x1[0], None, images, max_n=len(images), ncols=None, nrows=n)
# -
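# The `_mse_loss` above is the MSE between L2-normalized vectors, which reduces to a shifted, scaled negative cosine similarity; `lf` then symmetrizes it over the two views. Writing $\hat{q} = q/\lVert q\rVert_2$ and $\hat{z} = z/\lVert z\rVert_2$:

```latex
\lVert \hat{q} - \hat{z} \rVert_2^2
  = \lVert \hat{q} \rVert_2^2 + \lVert \hat{z} \rVert_2^2 - 2\,\langle \hat{q}, \hat{z} \rangle
  = 2 - 2\,\frac{\langle q, z \rangle}{\lVert q \rVert_2 \, \lVert z \rVert_2},
\qquad
\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}
  \Big[ \big(2 - 2\,\hat{q}_1^{(i)} \cdot \hat{z}_2^{(i)}\big)
      + \big(2 - 2\,\hat{q}_2^{(i)} \cdot \hat{z}_1^{(i)}\big) \Big]
```

# Here $q_1, q_2$ are the online predictions and $z_1, z_2$ the (stop-gradient) target projections of the two views.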
# ### Example Usage
path = untar_data(URLs.MNIST_TINY)
items = get_image_files(path)
tds = Datasets(items, [PILImageBW.create, [parent_label, Categorize()]], splits=GrandparentSplitter()(items))
dls = tds.dataloaders(bs=5, after_item=[ToTensor(), IntToFloatTensor()], device='cpu')
fastai_encoder = create_encoder('xresnet18', n_in=1, pretrained=False)
model = create_byol_model(fastai_encoder, hidden_size=4096, projection_size=256)
aug_pipelines = get_byol_aug_pipelines(size=28, rotate=False, jitter=False, bw=False, blur=False, stats=None, cuda=False)
learn = Learner(dls, model, cbs=[BYOL(aug_pipelines=aug_pipelines, print_augs=True), ShortEpochCallback(0.001)])
b = dls.one_batch()
learn._split(b)
learn('before_fit')
learn('before_batch')
axes = learn.byol.show(n=5)
learn.fit(1)
learn.recorder.losses
# ## Export
#hide
from nbdev.export import notebook2script
notebook2script()
| nbs/12 - byol.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reinforcement Learning
# Here we'll look at what reinforcement learning is good at, then present two of the most important techniques in deep reinforcement learning: policy gradients and deep Q-networks (DQNs), including a discussion of Markov decision processes (MDPs).
# ## 1. Learning to Optimize Rewards
# In reinforcement learning (RL), a software *agent* makes *observations* and takes *actions* within an *environment*, and in return receives *rewards*. Its objective is to learn to act in a way that maximizes its expected long-term rewards, i.e., the agent acts in the environment and learns by trial and error to maximize its pleasure and minimize its pain.
#
# Note that there may not be any positive rewards at all; for example, the agent may move around in a maze, getting a negative reward at every time step, so it had better find the exit as quickly as possible. There are many other tasks where RL is well suited, such as self-driving cars or controlling where an image classification system should focus its attention.
# ## 2. Policy Search
# Generally, the algorithm a software agent uses to determine its actions is called its **policy**. For example, the policy could be a neural network that takes observations as inputs and outputs the action to take, as shown below:
#
# 
#
# The policy can be any algorithm you can think of, and it doesn't even have to be deterministic. For example, consider a robotic vacuum cleaner whose reward is the amount of dust it picks up in 30 minutes. Its policy could be to move forward with some probability $p$ every second, or randomly rotate left/right with probability $1-p$. The rotation angle would be a random angle between $-r$ and $+r$, and since this policy involves randomness, it is called a **stochastic policy**. The robot will have an erratic trajectory, which guarantees that it will eventually get to any place it can reach and pick up all the dust.
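# This stochastic policy is easy to sketch in plain Python (a hypothetical toy, not from any library):

```python
import random

def vacuum_policy(p=0.7, r=30.0, rng=random):
    # Move forward with probability p; otherwise rotate by a random
    # angle drawn uniformly from [-r, +r] degrees.
    if rng.random() < p:
        return ("forward", 0.0)
    return ("rotate", rng.uniform(-r, r))

random.seed(0)
actions = [vacuum_policy() for _ in range(5)]
```

Calling the policy repeatedly produces the erratic trajectory described above: mostly forward moves, with occasional random turns.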
#
# How would you train such a robot? There are just two *policy parameters* you can tweak: the probability *p* and the angle range *r*. One possible learning algorithm could be to try out many different values for these parameters and pick the combination that performs best, which is an example of *policy search* (in this case, a brute-force approach). However, when the *policy space* is too large (which is generally the case), finding a good set of parameters this way is like searching for a needle in a huge haystack.
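# The brute-force variant can be sketched as a grid search over (*p*, *r*). The reward function here is a made-up stand-in for "dust picked up in 30 minutes", chosen purely for illustration:

```python
import itertools

def fake_reward(p, r):
    # Hypothetical stand-in for the dust picked up in 30 minutes;
    # it peaks at p = 0.6, r = 20.0 by construction.
    return -(p - 0.6) ** 2 - (r - 20.0) ** 2 / 1000.0

grid_p = [0.2, 0.4, 0.6, 0.8]
grid_r = [10.0, 20.0, 30.0]
best_p, best_r = max(itertools.product(grid_p, grid_r),
                     key=lambda pr: fake_reward(*pr))
# best_p, best_r == 0.6, 20.0 on this toy reward
```

With only two parameters a 4x3 grid is trivial, but the number of combinations grows exponentially with the number of parameters, which is the haystack problem mentioned above.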
#
# Another way to explore the policy space is to use *genetic algorithms*. For example, you could randomly create a first generation of 100 policies and try them out, then "kill" the 80 worst policies and make the 20 survivors produce 4 offspring each. An offspring is just a copy of its parent plus some random variation, and the surviving policies plus their offspring together constitute the second generation. You can continue to iterate through generations this way until you find a good policy.
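# The generation loop just described can be sketched directly; the fitness function is again a made-up toy reward, and the mutation noise levels are arbitrary illustrative choices:

```python
import random

random.seed(42)

def fitness(policy):
    # Made-up stand-in for "dust picked up in 30 minutes".
    p, r = policy
    return -(p - 0.6) ** 2 - (r - 20.0) ** 2 / 1000.0

def mutate(policy):
    # Offspring = copy of the parent plus some random variation, clipped to range.
    p, r = policy
    return (min(max(p + random.gauss(0.0, 0.05), 0.0), 1.0),
            min(max(r + random.gauss(0.0, 2.0), 0.0), 180.0))

# Generation 0: 100 random policies.
population = [(random.random(), random.uniform(0.0, 180.0)) for _ in range(100)]
for _ in range(20):
    survivors = sorted(population, key=fitness, reverse=True)[:20]  # kill the 80 worst
    population = survivors + [mutate(s) for s in survivors for _ in range(4)]
best = max(population, key=fitness)
```

Keeping the survivors in the next generation (an elitist variant) guarantees the best fitness never decreases; the tip about giving poor performers a survival chance would replace the hard top-20 cut with a probabilistic one.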
#
# > It is often better to give the poor performers a slight chance of survival, to preserve some diversity in the "gene pool".
#
# Yet another approach is to use optimization techniques: evaluate the gradients of the rewards with regard to the policy parameters, then tweak these parameters by following the gradients toward higher rewards (*gradient ascent*). This approach is called *policy gradients* (PG), which we'll discuss later. For example, going back to the vacuum robot, you could slightly increase *p* and evaluate whether this increases the amount of dust picked up in 30 minutes; if it does, then increase *p* some more, or else reduce *p*. We'll implement a popular PG algorithm using TensorFlow, but first let's introduce OpenAI Gym.
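# The "increase *p* and check the reward" procedure is essentially finite-difference gradient ascent. This toy sketch (not a true policy-gradient algorithm, which estimates gradients from sampled episodes) shows the mechanic on a single parameter with a made-up reward:

```python
def reward(p):
    # Toy reward with a single peak at p = 0.6 (made up for illustration).
    return -(p - 0.6) ** 2

p, lr, eps = 0.1, 0.05, 1e-4
for _ in range(200):
    grad = (reward(p + eps) - reward(p - eps)) / (2 * eps)  # finite difference
    p += lr * grad  # gradient ascent: step toward higher reward
# p converges toward the optimum 0.6
```

Real PG methods differ in that the reward is a noisy estimate from rollouts and the gradient is computed analytically through the policy network, but the update direction follows the same "climb the reward" logic.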
# ## 3. Introduction to OpenAI Gym
#
| _archived/tensorflow/tensorflow_5.ipynb |