# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# ## Assignment template 1) Simulating the Gosper Glider Gun
#
# This assignment follows on from 'Practical 4. Modules and Functions - Building Conway's Game of Life'. The main objective is:
#
# <div class="alert alert-block alert-success">
# <b> Simulate the Gosper Glider Gun from Conway's Game of Life </b>
#
# Overview: You are tasked with:
#
# - 1. Simulating the Gosper Glider Gun from Conway's Game of Life
# - 2. Introducing a 'spaceship' species into the simulation and 'assessing' the disruption.
#
# The first task requires you to initialise the 2D Universe of Conway's Game of Life and run the simulation. The second task is a little subjective but requires you to place a spaceship into the 2D Universe and draw some personal observations about the change in the repetitive nature of the Gosper Glider Gun scenario. Based on the example given in class, we can break this exercise down into a number of steps:
#
# - Initialise a 2D 'Universe' which we will run our simulation over.
# - Define the shape and location of species for the Gosper Glider Gun simulation.
# - Introduce a new species into the simulation.
# - Consider a simulation time.
#
# <div class="alert alert-block alert-warning">
# <b>Please note:</b>
#
# We will discuss this in class, but aside from a working notebook we are also looking for the following:
#
# - An associated narrative with each operation. This includes the following sections:
#
# > Abstract
# - Summarise the project and main results
#
# > Introduction and methodology
# - What is the challenge and how are you solving it?
# - What modules/functions are you using?
#
# > Results
# - What is happening in each figure and/or your simulation?
#
# We also want to see adequate referencing around:
# - What is the original source of the theory and/or data?
# - Comments in the code boxes using the # symbol. Remember that someone might not know what each line of code does.
#
# You may also want to consider a broader discussion around this challenge. For example:
# - How could your software be improved?
# - How do you know your results are correct?
# - What if someone wanted to get in touch with you and re-use this code? Any restrictions on data?
#
# To start, we recommend you first get the code implementation working and then construct the narrative around it. Also, please note that to add another code or markdown box, you can simply use the 'Insert' option on the main menu.
#
# Your Gosper Glider Gun simulation should resemble the following figure, before you add a 'spaceship':
#
# <tr>
# <td> <img src="images/Assessment1_output.png" alt="Drawing" style="width: 400px;"/> </td>
# </tr>
#
# </div>
#
# </div>
# ## Abstract
# ## Introduction
#
#
# ## Methodology
# +
#### -------- INSERT CODE HERE ----------
#### ------------------------------------
# -
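# As a hedged starting point (not the full assignment solution), a single update step of the Game of Life can be written with plain NumPy. The function name `life_step` and the small test grid below are our own illustrative choices, not part of the assignment brief:

```python
import numpy as np

def life_step(universe):
    """One update of Conway's Game of Life on a 2D array of 0s and 1s."""
    # Count live neighbours by summing the eight shifted views of the zero-padded grid.
    padded = np.pad(universe, 1)
    h, w = universe.shape
    neighbours = sum(
        padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # A dead cell with exactly 3 neighbours is born; a live cell with 2 or 3 survives.
    return ((neighbours == 3) | ((universe == 1) & (neighbours == 2))).astype(int)

# Quick check with a 'blinker': a horizontal bar of three cells becomes vertical.
u = np.zeros((5, 5), dtype=int)
u[2, 1:4] = 1
v = life_step(u)
```

# The Gosper Glider Gun itself can then be placed by setting the appropriate cells of the universe to 1 before repeatedly calling `life_step`.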
# ## Results
#
#
# #### References
# File: assessments/Option_1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
import time
import math
import numpy as np
import tensorly as tl
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
from brokenaxes import brokenaxes
from online_tensor_decomposition import *
# +
# for sample video
from cv2 import VideoWriter, VideoWriter_fourcc, imshow, imwrite
def make_video(tensor, filename, isColor=True):
    start = time.time()
    height = tensor.shape[1]
    width = tensor.shape[2]
    FPS = 24
    fourcc = VideoWriter_fourcc(*'MP42')
    video = VideoWriter(filename, fourcc, float(FPS), (width, height), isColor)
    for frame in tensor:
        video.write(np.uint8(frame))
    video.release()
    print('created', filename, time.time() - start)
# -
try:
    import cPickle as pickle
except ImportError:  # Python 3.x
    import pickle
results = {}
with open('results_0117.p', 'rb') as fp:
    results = pickle.load(fp)
# E5
results = {}
with open('results_0127.p', 'rb') as fp:
    results = pickle.load(fp)
results = {}
with open('results_0128.p', 'rb') as fp:
    results = pickle.load(fp)
# +
def plot_acc(datasets, name):
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    patterns = ("", "\\\\\\\\\\", "////", "xxxx")
    markers = ("o", "x", "s", "^", "4")
    ticks = [e.split('-')[-1] for e in datasets]
    index = np.arange(3)
    # create plot
    fig, axes = plt.subplots(2, 1, figsize=(2.5, 5), dpi=150)
    ax1, ax2 = axes
    ax1.tick_params(axis='y')
    ax1.set_xlabel('Rank', size=12)
    ax1.set_xticks(index)
    ax1.set_xticklabels(ticks)
    ax1.set_ylabel('Global Fitness', size=12)
    for i, (color, lib) in enumerate(zip(colors, libs)):
        acc_list = [results[dataset][lib][0] for dataset in datasets]
        ax1.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5)
    ax2.tick_params(axis='y')
    ax2.set_xlabel('Rank', size=12)
    ax2.set_xticks(index)
    ax2.set_xticklabels(ticks)
    ax2.set_ylabel('Average of Local Fitness', size=12)
    for i, (color, lib) in enumerate(zip(colors, libs)):
        acc_list = [results[dataset][lib][1] for dataset in datasets]
        ax2.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5)
    fig.tight_layout()  # otherwise the right y-label is slightly clipped
    plt.savefig(f'./plots/{name}.pdf', bbox_inches='tight', pad_inches=0)
    # plt.show()
plot_acc(('synthetic-20', 'synthetic-30', 'synthetic-40'), 'E1_synthetic')
plot_acc(('video-20', 'video-30', 'video-40'), 'E1_video')
plot_acc(('stock-15', 'stock-25', 'stock-35'), 'E1_stock')
plot_acc(('hall-30', 'hall-35', 'hall-40'), 'E1_hall')
plot_acc(('korea-30', 'korea-40', 'korea-50'), 'E1_korea')
# +
def plot_rt(datasets, name):
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    patterns = ("", "\\\\\\\\\\", "////", "xxxx")
    markers = ("o", "x", "s", "^", "4")
    ticks = [e.split('-')[-1] for e in datasets]
    index = np.arange(3)
    # create plot
    fig, ax = plt.subplots(1, 1, figsize=(2.5, 2.5), dpi=150)
    plt.yscale('log')
    ax.tick_params(axis='y')
    ax.set_xlabel('Rank', size=12)
    ax.set_xticks(index)
    ax.set_xticklabels(ticks)
    ax.set_ylabel('Local Running Time (s)', size=12)
    for i, (color, lib) in enumerate(zip(colors, libs)):
        acc_list = [results[dataset][lib][3] for dataset in datasets]
        ax.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5)
    fig.tight_layout()  # otherwise the right y-label is slightly clipped
    plt.savefig(f'./plots/{name}.pdf', bbox_inches='tight', pad_inches=0)
    # plt.show()
plot_rt(('synthetic-20', 'synthetic-30', 'synthetic-40'), 'E2_synthetic')
plot_rt(('video-20', 'video-30', 'video-40'), 'E2_video')
plot_rt(('stock-20', 'stock-22', 'stock-24'), 'E2_stock')
plot_rt(('hall-30', 'hall-35', 'hall-40'), 'E2_hall')
plot_rt(('korea-30', 'korea-40', 'korea-50'), 'E2_korea')
# +
def plot_E5_error(dataset):
    markers = ("+", "x", "1", "2")
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    fig = plt.figure(figsize=(7, 3), dpi=150)
    plt.ylabel('Local Error Norm', fontsize=12)
    plt.xlabel('# of Stacked Slices', fontsize=12)
    # ax1.xaxis.set_label_position('top')
    split_points, refine_points = results[dataset]['dao'][6]
    for p in refine_points:
        plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--')
    for p in split_points:
        plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-')
    for color, marker, lib in zip(colors, markers, libs):
        verbose_list = results[dataset][lib][5]
        plt.plot(verbose_list[:, 0], verbose_list[:, 2], linewidth=1, marker=marker, color=color)
    plt.savefig('plots/E5_{}_error.pdf'.format(dataset), bbox_inches='tight', pad_inches=0)
plot_E5_error('video')
def plot_E5_rt(dataset):
    markers = ("+", "x", "1", "2")
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    plt.figure(figsize=(7, 3), dpi=150)
    plt.yscale('log')
    plt.ylabel('Local Running Time (s)', fontsize=12)
    plt.xlabel('# of Stacked Slices', fontsize=12)
    # ax1.xaxis.set_label_position('top')
    split_points, refine_points = results[dataset]['dao'][6]
    for p in refine_points:
        plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--')
    for p in split_points:
        plt.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-')
    for color, marker, lib in zip(colors, markers, libs):
        verbose_list = results[dataset][lib][5]
        plt.plot(verbose_list[:, 0], verbose_list[:, 1], linewidth=1, marker=marker, color=color)
    plt.savefig('plots/E5_{}_rt.pdf'.format(dataset), bbox_inches='tight', pad_inches=0)
plot_E5_rt('video')
# +
from matplotlib import gridspec
def plot_E5(dataset):
    markers = ("x", "1", "2", "+")
    colors = ('mediumseagreen', 'hotpink', '#fba84a', 'dodgerblue')
    libs = ("dtd", "ocp", "fcp", "dao")
    fig = plt.figure(figsize=(9, 6), dpi=150)
    gs = gridspec.GridSpec(2, 1, height_ratios=[1.5, 1])
    ax1 = plt.subplot(gs[0])
    ax1.set_ylabel('Local Error Norm', fontsize=12)
    # ax1.set_xlabel('# of Stacked Slices', fontsize=12)
    # ax1.xaxis.set_label_position('top')
    split_points, refine_points = results[dataset]['dao'][6]
    for p in refine_points:
        ax1.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--')
    for p in split_points:
        ax1.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-')
    for color, marker, lib in zip(colors, markers, libs):
        verbose_list = results[dataset][lib][5]
        ax1.plot(verbose_list[:, 0], verbose_list[:, 2], linewidth=1, marker=marker, color=color)
    ax2 = plt.subplot(gs[1], sharex=ax1)
    ax2.set_yscale('log')
    ax2.set_ylabel('Local Running\nTime (s)', fontsize=12)
    ax2.set_xlabel('# of Stacked Slices', fontsize=12)
    split_points, refine_points = results[dataset]['dao'][6]
    for p in refine_points:
        ax2.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='--')
    for p in split_points:
        ax2.axvline(p, label='line: {}'.format(p), c='lightgray', linewidth=2, linestyle='-')
    for color, marker, lib in zip(colors, markers, libs):
        verbose_list = results[dataset][lib][5]
        ax2.plot(verbose_list[:, 0], verbose_list[:, 1], linewidth=1, marker=marker, color=color)
    plt.setp(ax1.get_xticklabels(), visible=False)
    plt.subplots_adjust(hspace=.0)
    # fig.tight_layout()  # otherwise the right y-label is slightly clipped
    plt.savefig('plots/E5_{}.svg'.format(dataset), bbox_inches='tight', pad_inches=0)
    plt.show()
plot_E5('video')
# +
def plot_rt(datasets, name):
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    patterns = ("", "\\\\\\\\\\", "////", "xxxx")
    markers = ("o", "x", "s", "^", "4")
    ticks = [e.split('-')[-1] for e in datasets]
    index = np.arange(3)
    # create plot
    fig, axes = plt.subplots(1, 2, figsize=(6, 3), dpi=150)
    ax1, ax2 = axes
    ax1.tick_params(axis='y')
    ax1.set_xlabel('Rank', size=12)
    ax1.set_xticks(index)
    ax1.set_xticklabels(ticks)
    ax1.set_ylabel('Global Running Time (s)', size=12)
    for i, (color, lib) in enumerate(zip(colors, libs)):
        acc_list = [results[dataset][lib][2] for dataset in datasets]
        ax1.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5)
    ax2.tick_params(axis='y')
    ax2.set_xlabel('Rank', size=12)
    ax2.set_xticks(index)
    ax2.set_xticklabels(ticks)
    ax2.set_ylabel('Average of \nLocal Running Time (s)', size=12)
    for i, (color, lib) in enumerate(zip(colors, libs[:-1])):
        acc_list = [results[dataset][lib][3] for dataset in datasets]
        ax2.plot(index, acc_list, color=colors[i], marker=markers[i], linewidth=1, markersize=8, markerfacecolor="None", markeredgewidth=1.5)
    fig.tight_layout()  # otherwise the right y-label is slightly clipped
    plt.savefig(f'./plots/{name}.svg', bbox_inches='tight', pad_inches=0)
    # plt.show()
# plot_rt(('synthetic-20', 'synthetic-30', 'synthetic-40'), 'rt_synthetic')
plot_rt(('video-20', 'video-30', 'video-40'), 'rt_video')
# plot_rt(('stock-20', 'stock-22', 'stock-24'), 'rt_stock')
# plot_rt(('hall-30', 'hall-35', 'hall-40'), 'rt_hall')
# plot_rt(('korea-30', 'korea-40', 'korea-50'), 'rt_korea')
# -
# ---
# # Experiment #2
# +
from matplotlib import colors
def make_rgb_transparent(color, alpha=0.6, bg_rgb=(1, 1, 1)):
    rgb = colors.colorConverter.to_rgb(color)
    return [alpha * c1 + (1 - alpha) * c2
            for (c1, c2) in zip(rgb, bg_rgb)]
def plot_mem(datasets, name):
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    patterns = ("", "\\\\\\\\\\", "////", "xxxx")
    markers = ("o", "x", "s", "^", "4")
    index = np.arange(5)
    bar_width = 0.2
    # create plot
    fig, ax1 = plt.subplots(figsize=(6, 4), dpi=150)
    plt.xticks(index + bar_width*1.5, ('Synthetic', 'Video', 'Stock', 'Hall', 'Korea'))
    plt.rcParams['hatch.linewidth'] = 0.2
    for i, (color, lib) in enumerate(zip(colors, libs)):
        mem_list = [results[dataset][lib][4] for dataset in datasets]
        rects1 = ax1.bar(index + bar_width*i, mem_list, bar_width, color=make_rgb_transparent(color, alpha=0.0), label=lib, edgecolor='black', hatch=patterns[i], linewidth=0.5)
    ax1.set_xlabel('Datasets')
    ax1.set_ylabel('Memory Usage (byte)')
    ax1.set_yscale('log')
    ax2 = ax1.twinx()  # instantiate a second axes that shares the same x-axis
    for i, (color, lib) in enumerate(zip(colors, libs)):
        acc_list = [results[dataset][lib][0] for dataset in datasets]
        for j, acc in enumerate(acc_list):
            if j == 4:
                ax2.scatter(index[j] + bar_width*i, acc, 70, color=colors[i], marker=markers[j], linewidth=2)
            elif j == 1:
                ax2.scatter(index[j] + bar_width*i, acc, 50, color=colors[i], marker=markers[j], linewidth=2)
            else:
                ax2.scatter(index[j] + bar_width*i, acc, 50, color=colors[i], marker=markers[j], facecolors='none', linewidth=2)
    ax2.tick_params(axis='y')
    ax2.set_ylabel('Global Fitness', rotation=270, labelpad=15)
    fig.tight_layout()  # otherwise the right y-label is slightly clipped
    plt.show()
    # plt.savefig(f'./plots/{name}_mem.pdf', bbox_inches='tight', pad_inches=0)
plot_mem(('synthetic-30', 'video-30', 'stock-20', 'hall-30', 'korea-40'), 'E2')
# +
from matplotlib import colors
def make_rgb_transparent(color, alpha=0.6, bg_rgb=(1, 1, 1)):
    rgb = colors.colorConverter.to_rgb(color)
    return [alpha * c1 + (1 - alpha) * c2
            for (c1, c2) in zip(rgb, bg_rgb)]
def plot_mem(datasets, name):
    colors = ('dodgerblue', 'mediumseagreen', 'hotpink', '#fba84a')
    libs = ("dao", "dtd", "ocp", "fcp")
    patterns = ("", "\\\\\\\\\\", "////", "xxxx")
    markers = ("o", "x", "s", "^", "4")
    index = np.arange(5)
    bar_width = 0.2
    # create plot
    fig, ax1 = plt.subplots(figsize=(6, 4), dpi=150)
    plt.xticks(index + bar_width*1.5, ('Synthetic', 'Video', 'Stock', 'Hall', 'Korea'))
    plt.rcParams['hatch.linewidth'] = 0.2
    for i, (color, lib) in enumerate(zip(colors, libs)):
        mem_list = [results[dataset][lib][4] for dataset in datasets]
        print(mem_list)
        rects1 = ax1.bar(index + bar_width*i, mem_list, bar_width, color=make_rgb_transparent(color, alpha=0.7), label=lib, edgecolor='black', hatch=patterns[i], linewidth=0.5)
    ax1.set_xlabel('Datasets')
    ax1.set_ylabel('Memory Usage (byte)')
    ax1.set_yscale('log')
    ax2 = ax1.twinx()  # instantiate a second axes that shares the same x-axis
    for i, dataset in enumerate(datasets):
        acc_list = [results[dataset][lib][0] for lib in libs]
        ax2.plot(i + bar_width*index[:4], acc_list, marker="o", color='black', zorder=1)
    for i, (color, lib) in enumerate(zip(colors, libs)):
        acc_list = [results[dataset][lib][0] for dataset in datasets]
        ax2.scatter(index + bar_width*i, acc_list, 30, color='black', marker="o", facecolor=colors[i], linewidth=1.3, zorder=2)
    ax2.tick_params(axis='y')
    ax2.set_ylabel('Global Fitness', rotation=270, labelpad=13)
    fig.tight_layout()  # otherwise the right y-label is slightly clipped
    # plt.show()
    plt.savefig(f'./plots/{name}_mem.svg', bbox_inches='tight', pad_inches=0)
    print([results[dataset]['fcp'][4] / results[dataset]['dao'][4] for dataset in datasets])
plot_mem(('synthetic-30', 'video-30', 'stock-20', 'hall-30', 'korea-40'), 'E2')
# File: Proposed Method/.ipynb_checkpoints/7_speed_accuracy-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# # Import package
import sys
import numpy as np
import pandas as pd
import csv
# # Read in training set
# +
raw_data = np.genfromtxt('train.csv', delimiter=',', encoding='utf8')  ## train.csv
# raw_data = np.loadtxt('train.csv', delimiter=',')
data = raw_data[1:, 3:]
where_are_NaNs = np.isnan(data)
data[where_are_NaNs] = 0
month_to_data = {}  ## Dictionary (key: month, value: data)
# generate month_to_data (20 days of data per month)
for month in range(12):
    sample = np.empty(shape=(18, 480))
    for day in range(20):
        for hour in range(24):
            sample[:, day * 24 + hour] = data[18 * (month * 20 + day): 18 * (month * 20 + day + 1), hour]
    month_to_data[month] = sample
# -
# # Preprocess
# +
x = np.empty(shape=(12 * 471, 18 * 9), dtype=float)
y = np.empty(shape=(12 * 471, 1), dtype=float)
for month in range(12):
    for day in range(20):
        for hour in range(24):
            if day == 19 and hour > 14:
                continue
            x[month * 471 + day * 24 + hour, :] = month_to_data[month][:, day * 24 + hour: day * 24 + hour + 9].reshape(1, -1)
            y[month * 471 + day * 24 + hour, 0] = month_to_data[month][9, day * 24 + hour + 9]
print(x.shape)
print(y.shape)
# -
# # Normalization
mean = np.mean(x, axis=0)
std = np.std(x, axis=0)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        if not std[j] == 0:
            x[i][j] = (x[i][j] - mean[j]) / std[j]
# # Training
# +
dim = x.shape[1] + 1
w = np.zeros(shape=(dim, 1))
x = np.concatenate((np.ones((x.shape[0], 1)), x), axis=1).astype(float)
learning_rate = np.array([[200]] * dim)
adagrad_sum = np.zeros(shape=(dim, 1))
for T in range(10000):
    if T % 500 == 0:
        print("T=", T)
        print("Loss:", np.power(np.sum(np.power(x.dot(w) - y, 2)) / x.shape[0], 0.5))
    gradient = (-2) * np.transpose(x).dot(y - x.dot(w))
    adagrad_sum += gradient ** 2
    w = w - learning_rate * gradient / (np.sqrt(adagrad_sum) + 0.0005)
np.save('weight.npy', w)  ## save weight
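# The loop above implements Adagrad-scaled gradient descent on the RMSE loss. In matrix form, with $X$ the bias-augmented design matrix and $G$ the running sum of squared gradients, each iteration computes:

```latex
g = -2\,X^{\top}(y - Xw), \qquad
G \leftarrow G + g \odot g, \qquad
w \leftarrow w - \eta\,\frac{g}{\sqrt{G} + \epsilon}
```

# with $\eta = 200$ and $\epsilon = 0.0005$, matching `learning_rate` and the constant in the update line of the code.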
# +
# Read in testing set
# -
w = np.load('weight.npy') ## load weight
test_raw_data = np.genfromtxt('test.csv', delimiter=',') ## test.csv
test_data = test_raw_data[:, 2: ]
where_are_NaNs = np.isnan(test_data)
test_data[where_are_NaNs] = 0
# +
# Predict
# +
test_x = np.empty(shape=(240, 18 * 9), dtype=float)
for i in range(240):
    test_x[i, :] = test_data[18 * i: 18 * (i + 1), :].reshape(1, -1)
for i in range(test_x.shape[0]):  ## normalization
    for j in range(test_x.shape[1]):
        if not std[j] == 0:
            test_x[i][j] = (test_x[i][j] - mean[j]) / std[j]
test_x = np.concatenate((np.ones(shape=(test_x.shape[0], 1)), test_x), axis=1).astype(float)
answer = test_x.dot(w)
# +
# Write results to file
# -
f = open('result.csv',"w")
w = csv.writer(f)
title = ['id','value']
w.writerow(title)
for i in range(240):
content = ['id_'+str(i),answer[i][0]]
w.writerow(content)
# File: Homework/week3/PM2.5 Prediction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# # Question
#
# ## [[link](https://stackoverflow.com/questions/65820541/how-to-download-g-suite-docs-sheets-to-pdf-xls-programatically)] How to Download G Suite docs/sheets to pdf/xls programmatically?
#
# I'm trying to download a Google doc to PDF or Sheet to XLS given an ID programmatically from the CLI.
#
# Steps I've tried so far:
#
# 1. Contact support, but can't see a (?) help icon
# 1. Google for 10 minutes... I think Google Drive API does this (not sure)
# 1. Enable the [Google Drive API](https://developers.google.com/drive/api/v3/enable-drive-api)
# 1. Signed up for a GCP project
# 1. Navigated through the UI to [enable the API](https://console.cloud.google.com/apis/library/drive.googleapis.com)
# 1. Trying the [GET API](https://developers.google.com/drive/api/v3/reference/about/get) results in 400 Invalid field selection using the fields for the ID of the document
#
# I'm a bit stuck now and I am not sure how to proceed. Any suggestions?
#
# ## Answer
# +
# Install Python bindings for the Google API as per https://developers.google.com/drive/api/v3/quickstart/python
# !pip3 install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
# -
from googleapiclient.discovery import build
from google.oauth2 import service_account
#
#
# since we're going to be exporting files via the google drive api, we need credentials for that scope as
# detailed in https://developers.google.com/drive/api/v3/reference/files/export#auth
#
# Choose an authentication method as detailed in https://developers.google.com/identity/protocols/oauth2#scenarios.
#
# Since you mention creating a GCP project, I assume you're interested in using a GCP service account
# as detailed in https://developers.google.com/identity/protocols/oauth2#serviceaccount
#
# You can create a service account at https://console.developers.google.com/apis/credentials
# or as explained in https://developers.google.com/identity/protocols/oauth2/service-account#creatinganaccount
#
# Make sure to enable domain-wide-delegation for that service account while creating it and grant it `https://www.googleapis.com/auth/drive` scope under https://admin.google.com/ac/owl/domainwidedelegation since you otherwise won't be able to impersonate other users, including yourself, and download their files.
#
#
# +
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'credentials.json'
# We use the SERVICE_ACCOUNT_FILE we just downloaded and the SCOPES we defined to create a Credentials object.
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
# Remember, you must have created credentials.json with domain-wide delegation!
credentials = credentials.with_subject('<EMAIL>')
# We build a drive_v3 service using the credentials we just created
service = build('drive', 'v3', credentials=credentials)
# -
# We access the files resource as shown in https://developers.google.com/drive/api/v3/reference/files/get
# and request the metadata of a file to which <EMAIL> has access:
# https://docs.google.com/document/d/fileId/edit
files = service.files()
service.files().get(fileId='1U3eMevKxTwDxzvOBUsqa36zvwBzKPVYOFgy3k_9vxb8').execute()
# We access the files resource again but this time to export the file as detailed in
# https://developers.google.com/resources/api-libraries/documentation/drive/v3/python/latest/drive_v3.files.html#export
# This could also be achieved using https://developers.google.com/drive/api/v3/manage-downloads.
#
# Valid MIME types are listed in https://developers.google.com/drive/api/v3/ref-export-formats.
# +
fconr = files.export(fileId='1U3eMevKxTwDxzvOBUsqa36zvwBzKPVYOFgy3k_9vxb8',
mimeType='application/vnd.openxmlformats-officedocument.wordprocessingml.document')
fcont = fconr.execute()
print('{}...'.format(fcont[:10]))
file = open("/tmp/sample.doc", "wb")
file.write(fcont)
file.close()
# -
# As can be seen, `fcont` contains a binary blob that corresponds to the document and of which I'm showing the first 10 bytes. Finally, the blob is saved to `sample.doc`.
# !ls -alh1 /tmp/sample.doc
#
# # By <NAME>
#
# ||[@jdsalaro](https://twitter.com/jdsalaro)|
# |-|:-|
# ||[https://linkedin.com/in/jdsalaro](http://linkedin.com/in/jdsalaro)|
# ||[https://jdsalaro.com](https://jdsalaro.com)|
# File: google_drive_api/stackoverflow-65820541-how-to-download-g-suite-docs-sheets-to-pdf-xls-programatically.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: drlnd
#     language: python
#     name: drlnd
# ---
# +
import Game_Utils.game
import Agents.multiple_agents
my_game = Game_Utils.game.Game(name="<NAME>", solve_score=1.5, state_dim=24, action_dim=2,
                               num_agents=2, num_steps_per_epoch=1000)
ma = Agents.multiple_agents.Multiple_Agents(game=my_game, replay_buffer_size=50000, batch_size=256,
                                            load_mode=True, save_mode=True, episods_before_update=10)
ma.training(num_epochs=3000, training_mode=True)
# -
# File: MASAC_tennis/src/launch_tennis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# # Introduction
# The task consists of classifying the intensity of emotions in tweets for four different emotions: anger, fear, joy and sadness.
#
# The dataset consists of four tables, one per emotion, each containing tweets that express the corresponding emotion together with a label for the intensity (high, medium or low) of that emotion.
#
# For this assignment we explore different ways of representing the tweets (BoW, TF-IDF, embeddings, etc.) and combine them with different classifiers (Naive Bayes and SVM) to make the predictions.
# # Related work
#
# This work is loosely based on the publication "Emotion Intensities in Tweets" by Mohammad et al.
#
# Classifying emotions in tweets has been studied extensively, but determining the intensity of those emotions has been explored far less. This is because, before the work reported in that paper, no dataset of tweets labelled by emotion intensity existed.
#
# The publication assigns each tweet an emotion-intensity score on a continuous scale between 0 and 1, i.e. a regression task; the task addressed in this work is a classification task, a simplified version of the original.
# # Algorithms and representations
# The experiments use several techniques to represent the data and to perform the classification.
#
# Our experiments are built on the scikit-learn Pipeline concept, in which a list of operations to apply to the data is defined and then all the corresponding transformations and reductions are executed.
#
# In this context, we describe the stages of the pipeline and the methods used in each stage. Finally, pipelines were generated combining the different techniques mentioned for each stage.
#
# ### Preprocessing
# For preprocessing we used several techniques, some more standard than others. We tried:
# * Removing *stop words*
# * Removing special characters, such as unwanted punctuation
# * Removing line breaks
# * Removing HTML escape sequences
#
# None of these techniques showed a substantial improvement on its own.
#
# ### Tokenisation
# For tokenisation we used the *TweetTokenizer* class from nltk and tried different combinations of arguments to filter the resulting tokens. Here we tried:
# * Removing handles
# * Lower-casing the tokens
# * Collapsing repeated consecutive characters: whenever 3 or more identical characters appear in a row, they are replaced by exactly 3 consecutive characters.
# * Removing isolated letters.
#
# In general these techniques improved performance, since they effectively reduced the dimensionality of the tweets; the improvements were, however, not statistically significant.
#
# ### Vectorisation
# For this pipeline step we used some of the techniques seen in class that we expected to have a positive impact on the evaluation metrics:
# * Bag of Words
# * TF-IDF
# * Word n-grams
# * Character n-grams
# * Minimum-frequency filtering
#
# Most of these methods did not change the results substantially, with the exception of the last one. It corresponds to a parameter of the scikit-learn vectorisers whereby only words that appear in a minimum number of documents are kept in the vectorisation. Combined with *stop word* removal, this leaves a vectorisation of around 200 features and slightly improved model performance.
#
# ### Classification
# For classification we used the Multinomial Naive Bayes model as a baseline and tried Support Vector Machines with different kernels and different penalty values. This decision was made because the literature reports that SVMs perform well on NLP tasks and are sensitive to changes in the model's hyperparameters.
#
# For the SVM we explored the following hyperparameters:
# * Penalty
# * Kernel type
# * Gamma value
#
# By exploring different penalty values we determined that this setting was significant for classifier performance. Likewise, the strategy used to compute the gamma value also influenced the results.
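# As a concrete illustration, the stages above can be sketched as a scikit-learn Pipeline. The `min_df`, n-gram range and SVM values below are hypothetical placeholders, not the tuned values from our experiments, and the toy tweets exist only to make the sketch runnable:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# TF-IDF vectorisation with a minimum-document-frequency filter (min_df),
# followed by an SVM classifier; hyperparameter values are placeholders.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, min_df=2, ngram_range=(1, 2))),
    ("clf", SVC(kernel="rbf", C=1.0, gamma="scale")),
])

# Toy data standing in for the real tweet/intensity tables.
tweets = ["so happy today", "really happy and glad",
          "so sad today", "feeling sad and down"]
labels = ["high", "high", "low", "low"]
pipeline.fit(tweets, labels)
preds = pipeline.predict(["very happy and glad today"])
```

# In the real experiments the tokenisation stage (nltk's TweetTokenizer) can be plugged in through the vectoriser's `tokenizer` argument, and the hyperparameters searched with a grid over C, kernel and gamma.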
# # Experiments
# # Conclusions
# File: ROBERTA_ES_TRAMPA.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15,10)
# %matplotlib inline
# +
#xt_timeseries = ds.sla.sel(latitude=35,longitude=210, method='nearest').load()
#xp_timeseries = ds.sla.sel(latitude=55,longitude=210, method='nearest').load()
#xt_timeseries_box = ds.sla.sel(latitude=slice(32.5,37.5),longitude=slice(207.5,212.5)).mean(dim=('latitude', 'longitude')).load()
#xp_timeseries_box = ds.sla.sel(latitude=slice(52.5,57.5),longitude=slice(207.5,212.5)).mean(dim=('latitude', 'longitude')).load()
# +
adir = 'F:/data/NASA_biophysical/aviso/'
xt_timeseries=xr.open_dataset(adir+'xt_ts2.nc')
xp_timeseries=xr.open_dataset(adir+'xp_ts2.nc')
xt_timeseries_box=xr.open_dataset(adir+'xt_ts_box2.nc')
xp_timeseries_box=xr.open_dataset(adir+'xp_ts_box2.nc')
xt_month_ave = xt_timeseries.resample(time='1M').mean('time')
xp_month_ave = xp_timeseries.resample(time='1M').mean('time')
xt_month_box_ave = xt_timeseries_box.resample(time='1M').mean('time')
xp_month_box_ave = xp_timeseries_box.resample(time='1M').mean('time')
# -
xt_month_ave.sla.plot()
xt_timeseries.sla.plot()
# ## Group by month and subtract the climatology to create the anomaly, then apply a 3-month boxcar
# +
#monthly
N=3
climatology = xt_month_ave.sla.groupby('time.month').mean('time')
xt_anomalies = xt_month_ave.sla.groupby('time.month') - climatology
xt_smoothed_anom = np.convolve(xt_anomalies, np.ones((N,))/N, mode='valid')
xt_smoothed_anom2 = np.convolve(xt_smoothed_anom, np.ones((N,))/N, mode='valid')
#show that smoothed timeseries aligned with unsmoothed timeseries
xt_anomalies.plot()
plt.plot(xt_anomalies.time[1:-1],xt_smoothed_anom,'r')
plt.plot(xt_anomalies.time[2:-2],xt_smoothed_anom2,'g')
plt.savefig(adir+'st_anomaly.png', transparent=False, format='png')
# +
climatology = xp_month_ave.sla.groupby('time.month').mean('time')
xp_anomalies = xp_month_ave.sla.groupby('time.month') - climatology
xp_smoothed_anom = np.convolve(xp_anomalies, np.ones((N,))/N, mode='valid')
xp_smoothed_anom2 = np.convolve(xp_smoothed_anom, np.ones((N,))/N, mode='valid')
#show that smoothed timeseries aligned with unsmoothed timeseries
xp_anomalies.plot()
plt.plot(xp_anomalies.time[1:-1],xp_smoothed_anom,'r')
plt.plot(xp_anomalies.time[2:-2],xp_smoothed_anom2,'g')
plt.savefig(adir+'sp_anomaly.png', transparent=False, format='png')
# -
# ## smoothed 5deg box
# <NAME> Cummings says that they use a 25-point spatial average of a 1-deg AVISO SSH dataset. I'm guessing here that they mean a 5deg x 5deg box average centered at points xt and xp
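# A sketch of that guess: on a 1-degree grid, a 5x5-degree box centered on a point contains exactly 25 grid points. The SSH values and grid below are synthetic placeholders, not the AVISO data:

```python
import numpy as np

# Synthetic 1-degree grid; random values stand in for SSH.
lat = np.arange(30.0, 60.0)    # 30N..59N
lon = np.arange(200.0, 220.0)  # 200E..219E
ssh = np.random.default_rng(0).random((lat.size, lon.size))

# 5x5-degree box centered on the xt point (35N, 210E).
in_lat = (lat >= 32.5) & (lat <= 37.5)
in_lon = (lon >= 207.5) & (lon <= 212.5)
box = ssh[np.ix_(in_lat, in_lon)]
print(box.shape)  # (5, 5) -> the 25-point spatial average
xt_box_mean = box.mean()
```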
xt_month_box_ave
# +
#monthly BOX, smooth with a 3-month boxcar twice
N=3
climatology = xt_month_box_ave.sla.groupby('time.month').mean('time')
xt_anomalies = xt_month_box_ave.sla.groupby('time.month') - climatology
xt_smoothed_box_anom = np.convolve(xt_anomalies, np.ones((N,))/N, mode='valid')
xt_smoothed_box_anom2 = np.convolve(xt_smoothed_box_anom, np.ones((N,))/N, mode='valid')
climatology = xp_month_box_ave.sla.groupby('time.month').mean('time')
xp_anomalies = xp_month_box_ave.sla.groupby('time.month') - climatology
xp_smoothed_box_anom = np.convolve(xp_anomalies, np.ones((N,))/N, mode='valid')
xp_smoothed_box_anom2 = np.convolve(xp_smoothed_box_anom, np.ones((N,))/N, mode='valid')
#N=3
climatology = xt_month_box_ave.ugosa.groupby('time.month').mean('time')
xt_anomalies_u = xt_month_box_ave.ugosa.groupby('time.month') - climatology
xt_smoothed_box_anom_u = np.convolve(xt_anomalies_u, np.ones((N,))/N, mode='valid')
xt_smoothed_box_anom2_u = np.convolve(xt_smoothed_box_anom_u, np.ones((N,))/N, mode='valid')
climatology = xp_month_box_ave.ugosa.groupby('time.month').mean('time')
xp_anomalies_u = xp_month_box_ave.ugosa.groupby('time.month') - climatology
xp_smoothed_box_anom_u = np.convolve(xp_anomalies_u, np.ones((N,))/N, mode='valid')
xp_smoothed_box_anom2_u = np.convolve(xp_smoothed_box_anom_u, np.ones((N,))/N, mode='valid')
# -
#show that smoothed timeseries aligned with unsmoothed timeseries
xt_anomalies.plot()
plt.plot(xt_anomalies.time[1:-1],xt_smoothed_box_anom,'r')
# +
#plt.plot(xp_anomalies.time[2:-2],xt_smoothed_anom2-xp_smoothed_anom2)
fig, ax1 = plt.subplots()
ax1.plot(xp_anomalies.time[2:-2],xt_smoothed_box_anom2-xp_smoothed_box_anom2,'r')
xmin, xmax = ax1.get_xlim()
ax1.set_xlim(xmin,xmax-4500)
ax1.set_ylim(-.15,.15)
ax1.set_ylabel(r'$\Delta$ SSH (m)')
ax1.set_xlabel('Time')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
ax2.plot(xp_anomalies_u.time[2:-2],xt_smoothed_box_anom2_u-xp_smoothed_box_anom2_u,'b')
ax2.plot(xp_anomalies.time[2:-2],(xp_smoothed_anom2-xt_smoothed_anom2)*0)
ax2.set_xlim(xmin,xmax-4500)
ax2.set_ylim(-0.03,0.03)
ax2.set_ylabel(r'$\Delta$ horiz vel (m/s)')
ax2.set_xlabel('Time')
#plt.legend(['point','5 deg box'])
plt.savefig(adir+'ssh_cur_2006.png', transparent=False, format='png')
# +
#plt.plot(xp_anomalies.time[2:-2],xt_smoothed_anom2-xp_smoothed_anom2)
fig, ax1 = plt.subplots()
ax1.plot(xp_anomalies.time[2:-2],xt_smoothed_box_anom2-xp_smoothed_box_anom2,'r')
xmin, xmax = ax1.get_xlim()
#ax1.set_xlim(xmin,xmax-4500)
ax1.set_ylim(-.15,.15)
ax1.set_ylabel(r'$\Delta$ SSH (m)')
ax1.set_xlabel('Time')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
ax2.plot(xp_anomalies_u.time[2:-2],xt_smoothed_box_anom2_u-xp_smoothed_box_anom2_u,'b')
ax2.plot(xp_anomalies.time[2:-2],(xp_smoothed_anom2-xt_smoothed_anom2)*0)
#ax2.set_xlim(xmin,xmax-4500)
ax2.set_ylim(-0.03,0.03)
ax2.set_ylabel(r'$\Delta$ horiz vel (m/s)')
ax2.set_xlabel('Time')
#plt.legend(['point','5 deg box'])
plt.savefig(adir+'ssh_cur_2017.png', transparent=False, format='png')
# -
xp_anomalies.time[30]
# +
#T = C * [a,b]
#to 2006
#timeseries with zero mean
T1 = xt_smoothed_box_anom2[:13*12]-np.mean(xt_smoothed_box_anom2[:13*12])
T2 = xp_smoothed_box_anom2[:13*12]-np.mean(xp_smoothed_box_anom2[:13*12])
#T1 = xp_smoothed_box_anom2[:13*12]-np.mean(xp_smoothed_box_anom2[:13*12])
#T2 = xt_smoothed_box_anom2[:13*12]-np.mean(xt_smoothed_box_anom2[:13*12])
#all
#T1 = xt_smoothed_box_anom2
#T2 = xp_smoothed_box_anom2
a = (np.sqrt(2)/2)*(T1+T2)
b = (np.sqrt(2)/2)*(T1-T2)
#check that total variance is conserved
var = np.mean(T1**2)+np.mean(T2**2)
var2 = np.mean(a**2)+np.mean(b**2)
print('var',var,'=',var2)
#calculate the fraction of variance in the breathing
#and bifurcation modes
R1 = np.mean(a**2) / var2 #breathing
R2 = np.mean(b**2) / var2 #bifurcation
print('percent variance')
print('breathing:',R1)
print('bifurcation:',R2)
plt.plot(xp_anomalies.time[2:13*12+2],a)
plt.plot(xp_anomalies.time[2:13*12+2],b)
plt.savefig(adir+'ab_2006.png', transparent=False, format='png')
# -
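# The a/b transform in the cell above is an orthogonal rotation of the two anomaly timeseries, which is why the total-variance check passes:

```latex
\begin{pmatrix} a \\ b \end{pmatrix}
= \frac{1}{\sqrt{2}}
  \begin{pmatrix} 1 & \phantom{-}1 \\ 1 & -1 \end{pmatrix}
  \begin{pmatrix} T_1 \\ T_2 \end{pmatrix},
\qquad
\overline{a^{2}} + \overline{b^{2}}
  = \tfrac{1}{2}\,\overline{(T_1 + T_2)^{2}}
  + \tfrac{1}{2}\,\overline{(T_1 - T_2)^{2}}
  = \overline{T_1^{2}} + \overline{T_2^{2}} .
```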
#T = C * [a,b]
#to 2006
#T1 = xt_smoothed_box_anom2[:13*12]
#T2 = xp_smoothed_box_anom2[13*12]
#all
T1 = xp_smoothed_box_anom2-np.mean(xp_smoothed_box_anom2)
T2 = xt_smoothed_box_anom2-np.mean(xt_smoothed_box_anom2)
a = (np.sqrt(2)/2)*(T1+T2)
b = (np.sqrt(2)/2)*(T1-T2)
#check that total variance is conserved
var = np.mean(T1**2)+np.mean(T2**2)
var2 = np.mean(a**2)+np.mean(b**2)
print('var',var,'=',var2)
#calculate the fraction of variance in the breathing
#and bifurcation modes
R1 = np.mean(a**2) / var2 #breathing
R2 = np.mean(b**2) / var2 #bifurcation
print('percent variance')
print('breathing:',R1)
print('bifurcation:',R2)
plt.plot(xp_anomalies.time[2:-2],a)
plt.plot(xp_anomalies.time[2:-2],b)
plt.savefig(adir+'ab_2017.png', transparent=False, format='png')
#correlation between the two anomaly timeseries
r = np.mean(T1*T2)/(np.sqrt(np.mean(T1**2))*np.sqrt(np.mean(T2**2)))
print('corr coeff:',r)
#beta
beta = np.mean(T1**2)/np.mean(T2**2)
print('ratio of variance of SSH', beta)
#streamfunction
# MATLAB usage from the original flowfun reference:
# [x,y] = meshgrid(0:20,0:15); % This makes regular grid
# [phi,psi] = flowfun(u,v);    % Here comes the potential and streamfun.
import numpy

def flowfun(u, v=None, flag=''):
    """Calculates the potential phi and the stream function psi of a
    two-dimensional flow defined by the velocity components u and v, so
    that

            d(phi)   d(psi)        d(phi)   d(psi)
        u = ------ - ------;   v = ------ + ------
              dx       dy            dy       dx

    PARAMETERS
        u, v (array like) :
            Zonal and meridional velocity field vectors. 'v' can be
            omitted if the velocity vector field U is given in complex
            form, such that U = u + i*v.
        flag (string, optional) :
            If only the stream function is needed, the '-', 'psi' or
            'streamfunction' flag should be used. For the velocity
            potential, use '+', 'phi' or 'potential'.

    RETURNS
        phi and/or psi, depending on 'flag'.

    EXAMPLES
        phi, psi = flowfun(u, v)
        psi = flowfun(u + 1j*v, '-')

    REFERENCES
        Based upon http://www-pord.ucsd.edu/~matlab/stream.htm
    """
    # Checks input arguments. Note: 'simpson' is assumed to be a
    # cumulative Simpson-rule integration helper defined elsewhere in
    # this project.
    u = numpy.asarray(u)
    if v is None:
        v = u.imag
        u = u.real
    if u.shape != v.shape:
        raise ValueError('matrices U and V must be of equal size')
    isphi, ispsi = True, True
    if flag in ['-', 'psi', 'streamfunction']:
        isphi = False
    if flag in ['+', 'phi', 'potential']:
        ispsi = False
    a, b = u.shape

    # Now, the main computations. Integrates the velocity fields to get
    # the velocity potential and stream function using Simpson rule
    # summation.

    # The velocity potential (phi), non-rotating part
    if isphi:
        cx = simpson(u[0, :])  # Computes the x-integration constant
        cy = simpson(v[:, 0])  # Computes the y-integration constant
        phi = simpson(v) + cx * numpy.ones((a, 1))
        phi = (phi + simpson(u.transpose()).transpose() +
               (cy * numpy.ones((b, 1))).transpose()) / 2

    # Compute streamfunction (psi), solenoidal part
    if ispsi:
        cx = simpson(v[0, :])  # Computes the x-integration constant
        cy = simpson(u[:, 0])  # Computes the y-integration constant
        psi = -simpson(u) + cx * numpy.ones((a, 1))
        psi = (psi + simpson(v.transpose()).transpose() -
               (cy * numpy.ones((b, 1))).transpose()) / 2

    if isphi and ispsi:
        return (phi, psi)
    elif isphi:
        return phi
    elif ispsi:
        return psi
    else:
        return None
# source notebook: .ipynb_checkpoints/freeland_reproduce-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python for Psychologists - Session 9
#
# ## session8 recap & plotting
# Python offers multiple "plotting" libraries, each of them with different features.
#
# Today we want to cover two (probably the most common) libraries
# - matplotlib
# - seaborn
#
# A plot usually contains two main components: a figure and axes. Imagine the figure as a page on which you can draw whatever you like; accordingly, a figure can contain multiple independent plots, a legend, a color bar, etc. The axes is the area where we plot our data and where any labels are attached. Each axes has an x- and a y-axis.
# 
# ### matplotlib.
#
# - We can use basic matplotlib commands to easily create plots.
import matplotlib.pyplot as plt
# %matplotlib inline
# `%matplotlib inline`
#
# or
#
# `plt.show()` will show your plot instantly. The latter is mainly used outside Jupyter notebooks
# +
import numpy as np
x = np.linspace(0,10,20) # generates 20 numbers between 0 and 10
y = x**2 # x squared
# -
# Now that we have our first plot, let's give it a title and label the x and y axes
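# One possible fill-in for this step, redefining the x and y arrays from above so the snippet stands alone (title and label strings are just examples):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 20)
y = x ** 2

plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("y = x**2")
plt.title("our first plot")
```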
# Now imagine you need more than one plot on your page. We can easily do this with `plt.subplot()`
#nrows #ncols #plot_number
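# A sketch of the subplot call those three arguments describe (the plotted data is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 20)
y = x ** 2

plt.subplot(1, 2, 1)   # nrows=1, ncols=2, plot_number=1
plt.plot(x, y)
plt.subplot(1, 2, 2)   # second plot on the same figure
plt.plot(y, x)
```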
# - we could also plot by creating Figure objects in matplotlib
# Let's create an empty figure object with `.figure()`, i.e. an object-oriented approach. By setting `figsize=(a,b)` one can increase or decrease one's "canvas"
fig = plt.figure(figsize=(8, 6))  # one possible fill-in: an empty figure ("canvas")
# Let's add a blank set of axes using ``fig.add_axes([left, bottom, width, height])``
ax1 = fig.add_axes([0.1, 0.1, 0.8, 0.8])  # left, bottom, width, height (example values)
fig
# Remember that a figure can contain more than just one plot. Let's try to insert a second set of axes on our canvas. This will help us understand the input `.add_axes([])` takes
ax2 = fig.add_axes([0.2, 0.5, 0.3, 0.3])  # one possible fill-in: a smaller inset axes
fig
# Let's plot our x and y arrays on our new blank axes and add x and y labels as well as a title. Note that here we need to use e.g. `.set_xlabel` instead of just `.xlabel`
# +
fig
# -
# As for the first approach, we could also create multiple plots in the object-oriented approach using `.subplots(nrows=, ncols=)` and **not** `.subplot()` as we did before!
#
# As we can see, we did create some overlap between our plots; no worries, we can use `plt.tight_layout()` to solve this issue. Very conveniently, `plt.subplots()` will automatically add axes based on the nrows and ncols input, so you don't have to specify them as we had to using `plt.figure()`
# Now we can plot our x & y arrays on specific subplots by indexing `ax`. In some ways, the subplot grid behaves like a dataframe: we can index it by choosing [row, column]
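# For example (a sketch; the styling values are arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 20)
y = x ** 2

fig, ax = plt.subplots(nrows=2, ncols=2)
ax[0, 0].plot(x, y, color="purple", linestyle="--")  # changes color and linestyle
ax[1, 1].plot(x, y, linewidth=3)                     # changes the linewidth
ax[1, 1].set_xlim(0, 5)                              # lower/upper bound of x axis
plt.tight_layout()
```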
# +
# changes color and linestyle
# changes the linewidth
#changes lower and upper bound of x axis
fig
# -
# ### seaborn
#
# seaborn is based on matplotlib, but usually needs fewer lines of code and therefore provides an easy-to-use visualization interface.
#
# For further information, see https://seaborn.pydata.org/
import pandas as pd
iris = pd.read_csv("iris.csv", sep=",")
iris.head()
# Let's try to create a scatter plot for sepal.length & sepal.width with
# - matplotlib
# - seaborn
# +
# create a figure and axis
fig, ax = plt.subplots()
# scatter the sepal_length against the sepal_width
# (column names 'sepal.length'/'sepal.width' assumed from the csv header)
ax.scatter(iris["sepal.length"], iris["sepal.width"])
# set a title and labels
ax.set_title("iris sepals")
ax.set_xlabel("sepal length")
ax.set_ylabel("sepal width")
# +
import seaborn as sns
# -
# We could also group our scatterplot by variety using the ``hue`` argument, i.e. different groups will be drawn in different colors.
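# For example, with a small stand-in dataframe (the column names 'sepal.length', 'sepal.width' and 'variety' are assumed to match iris.csv):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import pandas as pd
import seaborn as sns

iris_demo = pd.DataFrame({
    "sepal.length": [5.1, 4.9, 6.3, 5.8, 7.1, 6.5],
    "sepal.width":  [3.5, 3.0, 3.3, 2.7, 3.0, 3.2],
    "variety": ["Setosa", "Setosa", "Versicolor",
                "Versicolor", "Virginica", "Virginica"],
})
# hue colors each variety differently and adds a legend
ax = sns.scatterplot(x="sepal.length", y="sepal.width",
                     hue="variety", data=iris_demo)
```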
# We could easily plot a line chart using `sns.lineplot()`. The only arguments that we need are the four numeric columns in our case.
# We could also use ``sns.boxplot(x=,y=,data=)`` or ``sns.barplot(x=,y=,data=)`` to plot some characteristic of our three categories. The standard setting comes with a 95% confidence interval around your point estimate.
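# A sketch with the same assumed column names as above:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import pandas as pd
import seaborn as sns

iris_demo = pd.DataFrame({
    "sepal.length": [5.1, 4.9, 6.3, 5.8, 7.1, 6.5],
    "variety": ["Setosa", "Setosa", "Versicolor",
                "Versicolor", "Virginica", "Virginica"],
})
# one box per category on the x axis
ax = sns.boxplot(x="variety", y="sepal.length", data=iris_demo)
```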
# A nice way to get a first idea about your data (from a plotting perspective) is `sns.pairplot()`
# or `sns.heatmap()`
# As we can see, the output does not look that polished; here we can combine matplotlib and seaborn to customize our plot!
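# For example, drawing a seaborn heatmap onto a matplotlib axes and then customizing it with matplotlib calls (the numeric columns are a small assumed stand-in for iris.csv):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.DataFrame({
    "sepal.length": [5.1, 4.9, 6.3, 5.8, 7.1, 6.5],
    "sepal.width":  [3.5, 3.0, 3.3, 2.7, 3.0, 3.2],
})
fig, ax = plt.subplots(figsize=(5, 4))        # matplotlib controls the canvas
sns.heatmap(df.corr(), annot=True, cmap="coolwarm", ax=ax)  # seaborn draws on it
ax.set_title("correlation matrix")            # matplotlib customizes the result
```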
# We could also break our data up across multiple subplots (i.e. *faceting*) and combine it into one single figure. First we can create a multiplot grid (i.e. ``sns.FacetGrid``) which takes our column variety into account and hence creates three empty grids. Afterwards we can use the ``.map()`` function, that calls the specified function for each object of an iterable (i.e., the empty grids in our case)
#plot a univariate distribution of observations
# ## Controlling figure aesthetics
# +
def sinplot(flip=1):
    x = np.linspace(0, 14, 100)
    for i in range(1, 7):
        plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)

sinplot()
# -
# ```sns.set_style()``` changes the figure theme; go check it out by using "darkgrid", "whitegrid", "white", "ticks" or "dark" as an argument
sns.set_style("ticks")
sinplot()
# We could also remove the top and right axis spines (only the white or ticks themes benefit from it) by using `sns.despine()`
sinplot()
sns.despine()
# We could also scale our plots for different context by using `sns.set_context()`. Go and try it for "paper", "notebook", "talk", and "poster".
sns.set_context("paper")
sinplot()
sns.despine()
# to switch back to the default seaborn settings, simply use `sns.set()`
sns.set()
# source notebook: session9/session9_recap_and_vizualisation-empty.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import joblib
import numpy as np
import matplotlib.pyplot as plt
import math
spectra_train = joblib.load('cache/r20200406_234541_50.0sc_50.0sp_1_CPU/spectral/y_new_train.joblib')
spectra_test = joblib.load('cache/r20200406_234541_50.0sc_50.0sp_1_CPU/spectral/y_test.joblib')
labels_train = joblib.load('cache/r20200406_234541_50.0sc_50.0sp_1_CPU/spectral/x_new_train.joblib').reset_index()
labels_test = joblib.load('cache/r20200406_234541_50.0sc_50.0sp_1_CPU/spectral/x_test.joblib').reset_index().drop(columns = ["index"])
labels_train_smaller = joblib.load('cache/r20200406_234541_50.0sc_50.0sp_1_CPU/spectral/x_train.joblib').reset_index()
spectra_train_smaller = joblib.load('cache/r20200406_234541_50.0sc_50.0sp_1_CPU/spectral/y_train.joblib')
size_regression_rf = joblib.load("RF Size Regression.joblib")[1]
labels_test.iloc[7809]
plt.plot(spectra_test[7809])
inference_rf = joblib.load("inference_rf.joblib")
[np.asarray(labels_test.iloc[7809])]
log_emissivity = inference_rf.predict([np.asarray(labels_test.iloc[7809])])[0]
emissivity = []
for i in log_emissivity:
    emissivity.append(math.exp(i))
plt.plot(emissivity)
au_ns = np.asarray(labels_test.iloc[7809])
au_ns
def predict_spectrum_from_size_Au_NS(diameter):
    input_vector = [0, 0, 1, 0, 1, 0, 0]
    area = 4 * np.pi * (diameter / 2)**2
    volume = (4 / 3) * np.pi * (diameter / 2)**3
    input_vector.append(np.log(area / volume))
    for i in range(0, 3):
        input_vector.append(diameter)
    input_vector = [np.asarray(input_vector)]
    log_emissivity = inference_rf.predict(input_vector)[0]
    emissivity = []
    for i in log_emissivity:
        emissivity.append(math.exp(i))
    return emissivity
def predict_size_from_spectrum(spectrum, original_prediction=None):
    size_list = size_regression_rf.predict([spectrum])
    shortest_dim = size_list[0][1]
    if original_prediction is not None:
        prediction_change = np.abs(shortest_dim - original_prediction)
        return (shortest_dim, prediction_change)
    return shortest_dim
def generate_spheres(number_of_spheres, starting_diameter, increase_amount):
    # returns number_of_spheres + 1 diameters, including the starting one
    spheres = [starting_diameter]
    for i in range(0, number_of_spheres):
        sphere = spheres[i] + increase_amount
        spheres.append(sphere)
    return spheres
# +
# size_regression_rf.predict([emissivity_60nm])[0][1]
# -
# note: `decreasing` is produced by a sensitivity_analysis call in a later cell;
# run that cell first before reversing here
decreasing.reverse()
def sensitivity_analysis(number_of_spheres, starting_diameter, increase_amount):
    sphere_list = generate_spheres(number_of_spheres, starting_diameter, increase_amount)
    prediction_change_list = []
    original_prediction_spectrum = predict_spectrum_from_size_Au_NS(starting_diameter)
    original_prediction_size = predict_size_from_spectrum(original_prediction_spectrum)
    for sphere in sphere_list:
        spectrum = predict_spectrum_from_size_Au_NS(sphere)
        predictions = predict_size_from_spectrum(spectrum, original_prediction_size)
        prediction_change_list.append([sphere, predictions[0], predictions[1]])
    return prediction_change_list
increasing = sensitivity_analysis(10, 0.08, 0.001)
increasing
increasing[0][0]
inputted_size = []
for i in range(0, len(decreasing) - 1):
    inputted_size.append(decreasing[10 - i][0] * 1000)
for entry in increasing:
    inputted_size.append(entry[0] * 1000)
inputted_size
change_from_original = []
for i in range(0, len(decreasing) - 1):
    change_from_original.append(decreasing[10 - i][2] * -1000)
for entry in increasing:
    change_from_original.append(entry[2] * 1000)
change_from_original
true_difference = list(np.arange(-10,11,1))
true_difference
plt.plot(inputted_size, change_from_original)
plt.plot(inputted_size, true_difference)
plt.scatter(80,0, color = 'k')
plt.xlabel("Size (nm)")
plt.ylabel("Change From Original Prediction")
plt.title("Sensitivity of Au NS Prediction")
decreasing = sensitivity_analysis(10, 0.08, -0.001)
decreasing
#sensitivity_decreasing = decreasing[0]
#prediction_size_decreasing = decreasing[1]
test_spectrum = predict_spectrum_from_size_Au_NS(0.079)
test_size = predict_size_from_spectrum(test_spectrum)
test_spectrum_2 = predict_spectrum_from_size_Au_NS(0.08)
test_spectrum_3 = predict_spectrum_from_size_Au_NS(0.078)
test_spectrum_4 = predict_spectrum_from_size_Au_NS(0.081)
test_spectrum_5 = predict_spectrum_from_size_Au_NS(0.082)
test_spectrum_6 = predict_spectrum_from_size_Au_NS(0.07)
test_spectrum_7 = predict_spectrum_from_size_Au_NS(0.09)
# +
plt.plot(test_spectrum_6, color = 'green', linestyle = '--')
plt.plot(test_spectrum_3, color = 'purple', linestyle = '--')
plt.plot(test_spectrum, color = 'blue', linestyle = '--')
plt.plot(test_spectrum_2, color = 'k')
plt.plot(test_spectrum_4, color = 'red')
plt.plot(test_spectrum_5, color = 'orange')
plt.plot(test_spectrum_7, color = 'pink')
plt.legend(labels = ["70 nm", "78 nm", "79 nm", "80 nm", "81 nm", "82 nm", "90 nm"]) # use list of legend names to create the legend
# -
test_spectrum_5 = predict_spectrum_from_size_Au_NS(0.6)
test_spectrum_6 = predict_spectrum_from_size_Au_NS(0.5)
test_spectrum_7 = predict_spectrum_from_size_Au_NS(0.4)
test_spectrum_70 = predict_spectrum_from_size_Au_NS(0.3)
test_spectrum_8 = predict_spectrum_from_size_Au_NS(0.7)
test_spectrum_9 = predict_spectrum_from_size_Au_NS(0.8)
test_spectrum_10 = predict_spectrum_from_size_Au_NS(0.9)
test_spectrum_11 = predict_spectrum_from_size_Au_NS(1)
plt.plot(test_spectrum_5, color = 'black')
plt.plot(test_spectrum_6, color = 'purple', linestyle = '--')
plt.plot(test_spectrum_7, color = 'green', linestyle = '--')
plt.plot(test_spectrum_70, color = 'cyan', linestyle = '--')
plt.plot(test_spectrum_8, color = 'blue')
plt.plot(test_spectrum_9, color = 'red')
plt.plot(test_spectrum_10, color = 'orange')
plt.legend(labels = ["600 nm", "500nm", "400 nm", "300 nm", "700 nm", "800 nm", "900 nm"]) # use list of legend names to create the legend
#diff = []
#for i in range(0, len(test_spectrum)):
#    diff.append(np.abs(test_spectrum[i] - test_spectrum_2[i]))
#print(diff)
decreasing_2 = sensitivity_analysis(100, 0.08, -0.0001)
increasing_2 = sensitivity_analysis(100, 0.08, 0.0001)
inputted_size_2 = []
for i in range(0, len(decreasing_2) - 1):
    inputted_size_2.append(decreasing_2[100 - i][0] * 1000)
for entry in increasing_2:
    inputted_size_2.append(entry[0] * 1000)
inputted_size_2
change_from_original_2 = []
for i in range(0, len(decreasing_2) - 1):
    change_from_original_2.append(decreasing_2[100 - i][2] * -1000)
for entry in increasing_2:
    change_from_original_2.append(entry[2] * 1000)
change_from_original_2
true_difference_2 = list(np.arange(-10,10.1,0.1))
plt.plot(inputted_size_2, change_from_original_2)
plt.plot(inputted_size_2, true_difference_2)
plt.scatter(80,0, color = 'k')
plt.xlabel("Size (nm)")
plt.ylabel("Change From Original Prediction")
plt.title("Sensitivity of Au NS Prediction")
decreasing_3 = sensitivity_analysis(100, 0.1, -0.0001)
increasing_3 = sensitivity_analysis(100, 0.1, 0.0001)
# +
inputted_size_3 = []
for i in range(0, len(decreasing_3) - 1):
    inputted_size_3.append(decreasing_3[100 - i][0] * 1000)
for entry in increasing_3:
    inputted_size_3.append(entry[0] * 1000)
inputted_size_3
change_from_original_3 = []
for i in range(0, len(decreasing_3) - 1):
    change_from_original_3.append(decreasing_3[100 - i][2] * -1000)
for entry in increasing_3:
    change_from_original_3.append(entry[2] * 1000)
change_from_original_3
# -
plt.plot(inputted_size_3, change_from_original_3)
plt.plot(inputted_size_3, true_difference_2)
plt.scatter(100,0, color = 'k')
plt.xlabel("Size (nm)")
plt.ylabel("Change From Original Prediction")
plt.title("Sensitivity of Au NS Prediction")
decreasing_4 = sensitivity_analysis(100, 0.6, -0.005)
increasing_4 = sensitivity_analysis(100, 0.6, 0.005)
# +
inputted_size_4 = []
for i in range(0, len(decreasing_4) - 1):
    inputted_size_4.append(decreasing_4[100 - i][0] * 1000)
for entry in increasing_4:
    inputted_size_4.append(entry[0] * 1000)
change_from_original_4 = []
for i in range(0, len(decreasing_4) - 1):
    change_from_original_4.append(decreasing_4[100 - i][2] * -1000)
for entry in increasing_4:
    change_from_original_4.append(entry[2] * 1000)
# -
true_difference_4 = np.arange(-500,505,5)
true_difference_4
plt.plot(inputted_size_4, change_from_original_4)
plt.plot(inputted_size_4, true_difference_4)
plt.scatter(600,0, color = 'k')
plt.xlabel("Size (nm)")
plt.ylabel("Change From Original Prediction")
plt.title("Sensitivity of Au NS Prediction")
# source notebook: RF_notebooks/Sensitivity Analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import stats
from sklearn.linear_model import Ridge, RidgeCV
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error, make_scorer
def calculate_pearson(df):
    correlations = {}
    numerical_features = df.select_dtypes(exclude=["object"]).columns
    numerical_features = numerical_features.drop("cod_municipio")
    for i in numerical_features:
        corr = stats.pearsonr(df[i], df['ideb'])[0]
        correlations[i] = corr
    df_corr = pd.DataFrame(list(correlations.items()),
                           columns=['feature', 'correlation_with_ideb'])
    df_corr = df_corr.dropna()
    return df_corr
def calculate_categorical_correlation(df):
    categorical_features = df.select_dtypes(include=["object"]).columns
    return categorical_features
# # Load the CSV data from each group member
# ### Alexandre's data
path = '../../data/bcggammachallenge/'
# +
# Initial-years data
alexandre_inicio_2015 = pd.read_csv(path + 'bases_ale/anos iniciais/ideb_municipios_2015_ai.csv')
alexandre_inicio_2017 = pd.read_csv(path + 'bases_ale/anos iniciais/ideb_municipios_2017_ai.csv')
# Final-years data
alexandre_final_2015 = pd.read_csv(path + 'base_ale/anos finais/ideb_municipios_2015_af.csv')
alexandre_final_2017 = pd.read_csv(path + 'base_ale/anos finais/ideb_municipios_2017_af.csv')
# -
# ### Lidia's data
# +
# Initial-years data
lidia_inicio_2007 = pd.read_csv(path + 'bases_lidia/Anos iniciais/ideb_escola_2007_ai.csv')
lidia_inicio_2009 = pd.read_csv(path + 'bases_lidia/Anos iniciais/ideb_escola_2009_ai.csv')
lidia_inicio_2011 = pd.read_csv(path + 'bases_lidia/Anos iniciais/ideb_escola_2011_ai.csv')
lidia_inicio_2013 = pd.read_csv(path + 'bases_lidia/Anos iniciais/ideb_escola_2013_ai.csv')
lidia_inicio_2015 = pd.read_csv(path + 'bases_lidia/Anos iniciais/ideb_escola_2015_ai.csv')
lidia_inicio_2017 = pd.read_csv(path + 'bases_lidia/Anos iniciais/ideb_escola_2017_ai.csv')
# Final-years data
lidia_final_2007 = pd.read_csv(path + 'bases_lidia/Anos finais/ideb_escola_2007_af.csv')
lidia_final_2009 = pd.read_csv(path + 'bases_lidia/Anos finais/ideb_escola_2009_af.csv')
lidia_final_2011 = pd.read_csv(path + 'bases_lidia/Anos finais/ideb_escola_2011_af.csv')
lidia_final_2013 = pd.read_csv(path + 'bases_lidia/Anos finais/ideb_escola_2013_af.csv')
lidia_final_2015 = pd.read_csv(path + 'bases_lidia/Anos finais/ideb_escola_2015_af.csv')
lidia_final_2017 = pd.read_csv(path + 'bases_lidia/Anos finais/ideb_escola_2017_af.csv')
# -
# ### William's data
# +
# Initial-years data
william_inicio_2005 = pd.read_csv(path + 'base_william/ano inicial/dados2005_inic.csv')
william_inicio_2007 = pd.read_csv(path + 'base_william/ano inicial/dados2007_inic.csv')
william_inicio_2009 = pd.read_csv(path + 'base_william/ano inicial/dados2009_inic.csv')
william_inicio_2011 = pd.read_csv(path + 'base_william/ano inicial/dados2011_inic.csv')
william_inicio_2013 = pd.read_csv(path + 'base_william/ano inicial/dados2013_inic.csv')
william_inicio_2015 = pd.read_csv(path + 'base_william/ano inicial/dados2015_inic.csv')
william_inicio_2017 = pd.read_csv(path + 'base_william/ano inicial/dados2017_inic.csv')
# Final-years data
william_final_2005 = pd.read_csv(path + 'base_william/ano final/dados2005_fim.csv')
william_final_2007 = pd.read_csv(path + 'base_william/ano final/dados2007_fim.csv')
william_final_2009 = pd.read_csv(path + 'base_william/ano final/dados2009_fim.csv')
william_final_2011 = pd.read_csv(path + 'base_william/ano final/dados2011_fim.csv')
william_final_2013 = pd.read_csv(path + 'base_william/ano final/dados2013_fim.csv')
william_final_2015 = pd.read_csv(path + 'base_william/ano final/dados2015_fim.csv')
william_final_2017 = pd.read_csv(path + 'base_william/ano final/dados2017_fim.csv')
# -
# # Remove NaN data
# ### Lidia
# +
# negative values in each count row give the number of NaNs per column
for df in [lidia_inicio_2007, lidia_inicio_2009, lidia_inicio_2011,
           lidia_inicio_2013, lidia_inicio_2015, lidia_inicio_2017,
           lidia_final_2007, lidia_final_2009, lidia_final_2011,
           lidia_final_2013, lidia_final_2015, lidia_final_2017]:
    print(df.shape)
    print(df.count() - df.shape[0])
# +
def drop_sparse_and_missing(df, thresh=15000):
    """Drop columns with fewer than `thresh` non-null values, then drop
    the rows that still contain NaNs, printing the shape at each step."""
    print('before', df.shape)
    df = df.dropna(axis='columns', thresh=thresh)
    print('after dropping sparse columns', df.shape)
    df = df.dropna()
    print('clean data', df.shape)
    return df

lidia_inicio_2007 = drop_sparse_and_missing(lidia_inicio_2007)
lidia_inicio_2009 = drop_sparse_and_missing(lidia_inicio_2009)
lidia_inicio_2011 = drop_sparse_and_missing(lidia_inicio_2011)
lidia_inicio_2013 = drop_sparse_and_missing(lidia_inicio_2013)
lidia_inicio_2015 = drop_sparse_and_missing(lidia_inicio_2015)
lidia_inicio_2017 = drop_sparse_and_missing(lidia_inicio_2017)
lidia_final_2007 = drop_sparse_and_missing(lidia_final_2007)
lidia_final_2009 = drop_sparse_and_missing(lidia_final_2009)
lidia_final_2011 = drop_sparse_and_missing(lidia_final_2011)
lidia_final_2013 = drop_sparse_and_missing(lidia_final_2013)
lidia_final_2015 = drop_sparse_and_missing(lidia_final_2015)
lidia_final_2017 = drop_sparse_and_missing(lidia_final_2017)
# -
# ### Alexandre
# +
# negative values in each count row give the number of NaNs per column
for df in [alexandre_inicio_2015, alexandre_inicio_2017,
           alexandre_final_2015, alexandre_final_2017]:
    print(df.shape)
    print(df.count() - df.shape[0])
# -
# ### William
# +
# negative values in each count row give the number of NaNs per column
for df in [william_inicio_2007, william_inicio_2009, william_inicio_2011,
           william_inicio_2013, william_inicio_2015, william_inicio_2017,
           william_final_2007, william_final_2009, william_final_2011,
           william_final_2013, william_final_2015, william_final_2017]:
    print(df.shape)
    print(df.count() - df.shape[0])
# -
# # Correlation
# ### Lidia
# +
def sorted_pearson(df):
    corr = calculate_pearson(df)
    return corr.sort_values(by=['correlation_with_ideb'], ascending=False)

lidia_corr__inicio_2007 = sorted_pearson(lidia_inicio_2007)
lidia_corr__inicio_2009 = sorted_pearson(lidia_inicio_2009)
lidia_corr__inicio_2011 = sorted_pearson(lidia_inicio_2011)
lidia_corr__inicio_2013 = sorted_pearson(lidia_inicio_2013)
lidia_corr__inicio_2015 = sorted_pearson(lidia_inicio_2015)
lidia_corr__inicio_2017 = sorted_pearson(lidia_inicio_2017)
lidia_corr__final_2007 = sorted_pearson(lidia_final_2007)
lidia_corr__final_2009 = sorted_pearson(lidia_final_2009)
lidia_corr__final_2011 = sorted_pearson(lidia_final_2011)
lidia_corr__final_2013 = sorted_pearson(lidia_final_2013)
lidia_corr__final_2015 = sorted_pearson(lidia_final_2015)
lidia_corr__final_2017 = sorted_pearson(lidia_final_2017)
# -
print(lidia_corr__inicio_2007)
print(lidia_corr__inicio_2009)
print(lidia_corr__inicio_2011)
print(lidia_corr__inicio_2013)
print(lidia_corr__inicio_2015)
print(lidia_corr__inicio_2017)
# print(lidia_corr__final_2007)
# print(lidia_corr__final_2009)
# print(lidia_corr__final_2011)
# print(lidia_corr__final_2013)
# print(lidia_corr__final_2015)
# print(lidia_corr__final_2017)
# ### Categorical variables
# +
def boxplot_ideb_by(df, idx):
    """Boxplot of ideb against the idx-th categorical variable of df."""
    var = calculate_categorical_correlation(df)[idx]
    data = pd.concat([df['ideb'], df[var]], axis=1)
    f, ax = plt.subplots(figsize=(10, 10))
    ax = sns.boxplot(x=var, y="ideb", data=data)
    ax.axis(ymin=0, ymax=10)

# Same nine plots as before, one per categorical-variable index
for idx in [0, 3, 7, 8, 9, 10, 11, 12, 6]:
    boxplot_ideb_by(lidia_inicio_2007, idx)
# -
print('Before getting dummies', lidia_inicio_2007.shape)
lidia_inicio_2007 = pd.get_dummies(lidia_inicio_2007)
print('After getting dummies', lidia_inicio_2007.shape)
y = lidia_inicio_2007['ideb']
x = lidia_inicio_2007.drop(columns=['ideb'])
# Partition the dataset in train + validation sets
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.3,random_state=1)
# +
# Define error measure for official scoring : RMSE
scorer = make_scorer(mean_squared_error, greater_is_better = False)
def rmse_cv_train(model):
rmse= np.sqrt(-cross_val_score(model, X_train, y_train, scoring = scorer, cv = 5))
return(rmse)
def rmse_cv_test(model):
rmse= np.sqrt(-cross_val_score(model, X_test, y_test, scoring = scorer, cv = 5))
return(rmse)
# -
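The sign flip inside `rmse_cv_train`/`rmse_cv_test` works because scikit-learn always maximizes scores, so a `greater_is_better=False` scorer returns negated MSE. A self-contained sketch on synthetic data (names and shapes here are illustrative, not the notebook's real dataset):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# greater_is_better=False makes cross_val_score return NEGATED MSE values
scorer = make_scorer(mean_squared_error, greater_is_better=False)
scores = cross_val_score(Ridge(), X, y, scoring=scorer, cv=5)
assert np.all(scores <= 0)   # hence the minus sign before taking the sqrt
rmse = np.sqrt(-scores)      # one RMSE per fold
assert np.all(rmse >= 0)
```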
# # Ridge Regression
ridge = RidgeCV(alphas = [0.01, 0.03, 0.06, 0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60])
ridge.fit(X_train, y_train)
alpha = ridge.alpha_
print("Best alpha :", alpha)
print("Try again for more precision with alphas centered around " + str(alpha))
ridge = RidgeCV(alphas = [alpha * .6, alpha * .65, alpha * .7, alpha * .75, alpha * .8, alpha * .85,
alpha * .9, alpha * .95, alpha, alpha * 1.05, alpha * 1.1, alpha * 1.15,
alpha * 1.25, alpha * 1.3, alpha * 1.35, alpha * 1.4],
cv = 10)
ridge.fit(X_train, y_train)
alphaRidge = ridge.alpha_
print("Best alpha :", alphaRidge)
ridge = make_pipeline(StandardScaler(), Ridge(alpha = alphaRidge, random_state=1))
ridge.fit(X_train, y_train)
print("Ridge RMSE on Training set (mean):", rmse_cv_train(ridge).mean())
print("Ridge RMSE on Training set (std):", rmse_cv_train(ridge).std())
print("Ridge RMSE on Test set (mean):", rmse_cv_test(ridge).mean())
print("Ridge RMSE on Test set (std):", rmse_cv_test(ridge).std())
y_train_rdg = ridge.predict(X_train)
y_test_rdg = ridge.predict(X_test)
# Plot residuals
plt.scatter(y_train_rdg, y_train_rdg - y_train, c = "blue", marker = "s", label = "Training data")
plt.scatter(y_test_rdg, y_test_rdg - y_test, c = "lightgreen", marker = "s", label = "Validation data")
plt.title("Linear regression with Ridge regularization")
plt.xlabel("Predicted values")
plt.ylabel("Residuals")
plt.legend(loc = "upper left")
plt.hlines(y = 0, xmin = 10.5, xmax = 13.5, color = "red")
plt.show()
# Plot important coefficients
coefs = pd.Series(ridge[-1].coef_, index = X_train.columns)  # ridge is a pipeline; [-1] is the Ridge step
print("Ridge picked " + str(sum(coefs != 0)) + " features and eliminated the other " + \
str(sum(coefs == 0)) + " features")
imp_coefs = pd.concat([coefs.sort_values().head(10),
coefs.sort_values().tail(10)])
imp_coefs.plot(kind = "barh")
plt.title("Coefficients in the Ridge Model")
plt.show()
| notebooks/eda/.ipynb_checkpoints/Compilar os dados v1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# ## Create nd array
#
array_empty=np.empty(shape=(2,3))
array_empty
array_zeros=np.zeros(shape=(2,3))
array_zeros
array_zeros=np.zeros(shape=(2,3),dtype=np.float32)
array_zeros
array_ones=np.ones(shape=(2,3))
array_ones
array_full=np.full(shape=(2,3),fill_value=9)
array_full
# ## _like function
matrix_A=np.array([[1,0,2,3,5],[4,7,8,9,10],[7,6,4,5,3]])
matrix_A
array_empty_like=np.empty_like(matrix_A)
array_empty_like
array_zero_like=np.zeros_like(matrix_A)
array_zero_like
# ## .arange function
list(range(30))
array_rng=np.arange(30)
array_rng
# Note: np.arange(start=30) raises a TypeError because 'stop' is required;
# a single positional argument is always interpreted as stop
array_rng=np.arange(30)
array_rng
array_rng=np.arange(start=0,stop=30)
array_rng
array_rng=np.arange(start=0,stop=30,step=2)
array_rng
array_rng=np.arange(start=0,stop=30,step=2,dtype=np.float64)
array_rng
# ## Random Generators
# ### Defining
from numpy.random import Generator as gen
from numpy.random import PCG64 as pcg
array_RG=gen(pcg())
array_RG.normal(size=(5,5))
array_RG=gen(pcg(seed=365))
array_RG.normal(size=(5,5))
array_RG.normal(size=(5,5))
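The two calls above return different arrays because each draw advances the generator's internal state; re-creating the generator with the same seed restarts the stream. A minimal sketch:

```python
import numpy as np
from numpy.random import Generator, PCG64

# Re-seeding restarts the stream: two fresh generators with the same seed agree
a = Generator(PCG64(seed=365)).normal(size=3)
b = Generator(PCG64(seed=365)).normal(size=3)
assert np.allclose(a, b)

# Consecutive calls on ONE generator advance its state, so the draws differ
rg = Generator(PCG64(seed=365))
first = rg.normal(size=3)
second = rg.normal(size=3)
assert not np.allclose(first, second)
```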
# ### Generating Integers, Probabilities and Random Choice
array_RG=gen(pcg(seed=365))
array_RG.integers(low=10,high=100,size=(5,5))
array_RG=gen(pcg(seed=365))
array_RG.random(size=(5,5))
array_RG=gen(pcg(seed=365))
array_RG.choice([1,2,3,4,5],size=(5,5))
# ### Generating Arrays From Known Distributions
array_RG=gen(pcg(seed=365))
array_RG.poisson(size=(5,5))
array_RG=gen(pcg(seed=365))
array_RG.poisson(lam=10,size=(5,5))
array_RG=gen(pcg(seed=365))
array_RG.binomial(n=100,p=0.5,size=(5,5))
array_RG=gen(pcg(seed=365))
array_RG.logistic(loc=10,scale=2,size=(5,5))
# ### Applications of Random Generator
array_RG=gen(pcg(seed=365))
array_column_1=array_RG.normal(loc=10,scale=2,size=(1000))
array_column_2=array_RG.normal(loc=15,scale=3,size=(1000))
array_column_3=array_RG.logistic(loc=10,scale=2,size=(1000))
array_column_4=array_RG.exponential(scale=4,size=(1000))
array_column_5=array_RG.geometric(p=0.5,size=(1000))
random_test_data=np.array([array_column_1,array_column_2,array_column_3,array_column_4,array_column_5])
random_test_data
random_test_data.shape
random_test_data = random_test_data.transpose()  # transpose() must be called; a bare .transpose is just the method object
random_test_data
np.savetxt("RandomTest.csv",random_test_data,fmt="%s",delimiter=',')
np.genfromtxt("RandomTest.csv",delimiter=',')
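As a quick check that the `savetxt`/`genfromtxt` text round trip preserves values, here is a small self-contained sketch (the file name is a throwaway):

```python
import numpy as np

# Write a small float array to CSV and read it back
data = np.array([[1.5, 2.0, 3.25], [4.0, 5.5, 6.75]])
np.savetxt("roundtrip_check.csv", data, delimiter=",")
loaded = np.genfromtxt("roundtrip_check.csv", delimiter=",")
assert loaded.shape == data.shape
assert np.allclose(data, loaded)  # values survive the round trip
```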
| 365 DS/Random Variable.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import statsmodels.api as sm
data = sm.datasets.get_rdataset('dietox', 'geepack').data
data.to_csv('../example_data/pigs.csv')
sm.datasets.star98.load_pandas().data
| examples/example_data/dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python36
# language: python
# name: python36
# ---
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from tensorflow import keras as keras
import tensorflow as tf
import csv
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
# + _uuid="c4e81433b08c15ddd097225bbf7ce45f902ef885"
IMAGE_WIDTH = 96
IMAGE_HEIGHT = 96
# -
# Build the training input Xtrain with shape [96, 96, 1]
#
# Build the training output Ytrain with shape [-1, 1, row.length-1]
# + _uuid="16a9a99d0a52c3b73bb63c4db1049177f310a350"
def load_dataset():
'''
Load training dataset
'''
Xtrain = []
Ytrain = []
with open('/Users/szkfzx/datasets/FaceDetection/training.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
            img = np.zeros((IMAGE_HEIGHT,IMAGE_WIDTH,1), dtype=np.float32)  # np.float was removed from NumPy; use a concrete dtype
for i, val in enumerate(row["Image"].split(" ")):
img[i//IMAGE_WIDTH,i%IMAGE_WIDTH,0] = val
Yitem = []
failed = False
for coord in row:
if coord == "Image":
continue
if(row[coord].strip()==""):
failed = True
break
Yitem.append(float(row[coord]))
if not failed:
Xtrain.append(img)
Ytrain.append(Yitem)
    return np.array(Xtrain), np.array(Ytrain, dtype=np.float32)  # np.float was removed from NumPy
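The `i // IMAGE_WIDTH, i % IMAGE_WIDTH` indexing above is plain row-major unflattening; a tiny sketch of the arithmetic on a toy 4x2 "image":

```python
# Index i in a flat row-major pixel list maps to (row, col) = (i // W, i % W)
W, H = 4, 2
flat = list(range(W * H))  # stand-in for the space-separated pixel string
img = [[None] * W for _ in range(H)]
for i, val in enumerate(flat):
    img[i // W][i % W] = val
assert img == [[0, 1, 2, 3], [4, 5, 6, 7]]
```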
# + _uuid="70ce218b6fb871851b6e631cb7ad92d0009ea48a"
# Load dataset
Xdata, Ydata = load_dataset()
Xtrain = Xdata[:]
Ytrain = Ydata[:]
# + _uuid="feb76f94370665f0d756937ef4332e463d34a6ad"
def show_image(X, Y):
img = np.copy(X)
for i in range(0,Y.shape[0],2):
if 0 < Y[i+1] < IMAGE_HEIGHT and 0 < Y[i] < IMAGE_WIDTH:
img[int(Y[i+1]),int(Y[i]),0] = 255
plt.imshow(img[:,:,0])
# + _uuid="27351e13ff3b2ddd7768b21bca01bbdd01670091"
# Preview dataset samples
show_image(Xtrain[1], Ytrain[1])
# + _uuid="caabccf3928b2dabdc0e29c0d762405c0d5f6e19"
# Configure Model
model = keras.Sequential([keras.layers.Flatten(input_shape=(IMAGE_HEIGHT, IMAGE_WIDTH,1)),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dropout(0.1),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dense(30)
])
# + _uuid="4151057f0410f52f6f875206f76c626d4f81402b"
# Compile model
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='mse',
metrics=['mae'])
# + _uuid="cb1a37eda3b8f213a788ccfc126c0001d6c85867"
# Train model
model.fit(Xtrain, Ytrain, epochs=500)
# + _uuid="0f32d24cfa561ad3b53390ef8d0c3a7bf9218d51"
# Load test data
def load_testset():
Xtest = []
with open('/Users/szkfzx/datasets/FaceDetection/test.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
            img = np.zeros((IMAGE_HEIGHT,IMAGE_WIDTH,1), dtype=np.float32)  # np.float was removed from NumPy
for i, val in enumerate(row["Image"].split(" ")):
img[i//IMAGE_WIDTH,i%IMAGE_WIDTH,0] = val
Xtest.append(img)
return np.array(Xtest)
Xtest = load_testset()
# + _uuid="028f793e10edae3f5e51d9c417f3ab1de1437ede"
# Preview results on test data
def show_results(image_index):
Ypred = model.predict(Xtest[image_index:(image_index+1)])
show_image(Xtest[image_index], Ypred[0])
# + _uuid="eacd1ec9f9d5906fdb01945d9f3b69c6c5e75cfb"
show_results(8)
| Kaggle/Playgroud/FacialDetection/basic-fully-connected-nn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from apportion import largest_remainder
import random
from tqdm import tqdm
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
# +
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
ax.set(zlabel='KL', xlabel='alpha', ylabel='beta')
x = np.linspace(1, 3)
y = np.linspace(0, 1)
print(x,y)
x, y=np.meshgrid(x,y)
z = (x**2 + y**2)
surf = ax.plot_surface(x, y, z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# +
# %load_ext autoreload
# %autoreload 2
import numpy as np
import pipeline
import logging
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S')
n_tests = 5
steps = 5
m = 20
n = 10000
seats = 100
electoral_threshold = 0.05
# poll_covid = 0.01
political_spectrum = np.array([10, 9, 11, 8, 12, 7, 13, 6, 14, 5, 15, 4, 16, 3, 17, 2, 18, 1, 19, 0])
# (gammas, betas), results = pipeline.test_gamma_beta(electoral_threshold, m, n, n_tests, political_spectrum, seats)
(gammas, betas), results2 = pipeline.test_gamma_beta_parallel(electoral_threshold, m, n, n_tests, political_spectrum, seats, steps)
# -
gammas
# +
# %matplotlib notebook
from matplotlib.ticker import FormatStrFormatter
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
ax.set(zlabel='$KL$', xlabel=r'$\gamma$', ylabel=r'$\beta$', title="sntv", zlim=(0, 3.5))
temp = np.log10(gammas)[0,:]
ax.set_xticks(temp)
ax.set_xticklabels(np.around(gammas[0,:], decimals=4))
# ax.xaxis.set_major_formatter(FormatStrFormatter('%.2f'))
surf = ax.plot_surface(np.around(temp, decimals=4), betas, results2['sntv'], cmap=cm.coolwarm,
linewidth=0, antialiased=True, label='sntv')
# +
# t = np.load('gamma-beta.npz')
# betas = t['beta']
# gammas = t['gamma']
# results = {'sntv-l': t['sntvl']}
# +
# %matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set(zlabel='$KL$', xlabel=r'$\gamma$', ylabel=r'$\beta$', zlim=(0, 4))
temp = np.log10(gammas)[0,:]
ax.set_xticks(temp)
alt_gamma = np.where(gammas[0,:]<0.004, np.around(gammas[0,:], decimals=4), np.around(gammas[0,:], decimals=2))
ax.set_xticklabels(alt_gamma)
surf = ax.plot_surface(temp, betas, results2['sntv-l'], cmap=cm.coolwarm,
linewidth=0, antialiased=True, label='sntv-l')
fig.colorbar(surf, shrink=0.5, aspect=5)
# -
fig.savefig('ag-sntv-liars.pdf')
# +
# # %matplotlib notebook
# fig = plt.figure()
# ax = fig.gca(projection='3d', zlabel='$KL$', xlabel='$\gamma$', ylabel='$\\beta$', title="sntv liars")
# temp = np.log10(gammas)[0,:]
# ax.set_xticks(temp)
# ax.set_xticklabels(gammas[0,:])
# surf = ax.plot_surface(temp, gammas, (gammas+2)*betas, cmap=cm.coolwarm,
# linewidth=0, antialiased=True, label='sntv-l')
# +
# %matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set(zlabel='$KL$', xlabel=r'$\gamma$', ylabel=r'$\beta$', title="stv", zlim=(0, 3.5))
temp = np.log10(gammas)[0,:]
ax.set_xticks(temp)
ax.set_xticklabels(np.around(gammas[0,:], decimals=4))
surf = ax.plot_surface(temp, betas, results2['stv'] ,cmap=cm.coolwarm,
linewidth=1, antialiased=True, label='stv')
# +
# np.savez('gamma-beta2.npz', gamma=gammas, beta=betas, stv=results2['stv'], sntv=results2['sntv'], sntvl=results2['sntv-l'])
# -
results2['stv'].std()
| giovanni_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="JbM-dwi4KV5e"
# INITIALIZING TPU
# + id="XQTIJL7hKXuf" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1622451044793, "user_tz": -330, "elapsed": 65532, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}} outputId="cc28eb4f-0e8b-4628-99b2-86179e3a91c5"
# creating TPU environment to create model architecture and initialize architecture's variable on TPU
import os
import tensorflow as tf
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
# create a distribution stratagy
strategy = tf.distribute.TPUStrategy(resolver)
# + [markdown] id="E97O3IKTK8tf"
# IMPORTING IMPORTANT MODULES
# + id="8GY3J5zrK9mV" executionInfo={"status": "ok", "timestamp": 1622451044796, "user_tz": -330, "elapsed": 56, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
import numpy as np
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras import layers
import time
from tensorflow.keras.models import Sequential, load_model
# + [markdown] id="0BbzinbALCTh"
# LOADING DATASET
# + id="rX5tapq2LFop" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1622451047842, "user_tz": -330, "elapsed": 3097, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}} outputId="955a9c55-c821-4ff2-a358-8740b7c68ea3"
(x,_),(_,_) = fashion_mnist.load_data()
x = x.reshape(x.shape[0], 28, 28, 1).astype('float32')
x = x/np.float32(255)
print(x.shape)
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(x).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="eskcbOzBODeb" executionInfo={"status": "ok", "timestamp": 1622451047847, "user_tz": -330, "elapsed": 66, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}} outputId="ac409bf2-29dd-406b-d064-cc6a7e4ac2e3"
for i in train_dataset:
print(i[0].shape)
plt.imshow(i[0][:,:,0]*255, cmap='gray')
break
# + id="vqcBWvs5CHsQ" executionInfo={"status": "ok", "timestamp": 1622451047849, "user_tz": -330, "elapsed": 34, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
GLOBAL_BATCH_SIZE = BATCH_SIZE * strategy.num_replicas_in_sync
# + [markdown] id="eQeiMZ5VLP-X"
# MODEL CREATION
# + id="-a-xd1FGLRur" executionInfo={"status": "ok", "timestamp": 1622451047853, "user_tz": -330, "elapsed": 36, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "13501727943914436615"}}
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="sigmoid"))  # softmax over a single unit is always 1; binary output needs sigmoid
return model
# + [markdown] id="peON40M7Lftg"
# CUSTOM TRAINING METHOD (OVERRIDING THE KERAS train_step METHOD)
# + id="mWHvKzVGLmEJ" executionInfo={"status": "ok", "timestamp": 1622451047858, "user_tz": -330, "elapsed": 40, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
class GAN(tf.keras.Model):
def __init__(self, discriminator, generator, latent_dim):
super(GAN, self).__init__()
self.discriminator = discriminator
self.generator = generator
self.latent_dim = latent_dim
def compile(self, d_optimizer, g_optimizer, loss_fn):
super(GAN, self).compile()
self.d_optimizer = d_optimizer
self.g_optimizer = g_optimizer
self.loss_fn = loss_fn
self.d_loss_metric = tf.keras.metrics.Mean(name="d_loss")
self.g_loss_metric = tf.keras.metrics.Mean(name="g_loss")
@property
def metrics(self):
return [self.d_loss_metric, self.g_loss_metric]
def train_step(self, real_images):
# Sample random points in the latent space
batch_size = tf.shape(real_images)[0]
#****************************************--DISCRIMINATOR HANDLING--*******************
random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
# Decode them to fake images
generated_images = self.generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat([ tf.zeros((batch_size, 1)), tf.ones((batch_size, 1))], axis=0)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(tf.shape(labels))
# Train the discriminator
with tf.GradientTape() as tape:
predictions = self.discriminator(combined_images)
d_loss = tf.nn.compute_average_loss(self.loss_fn(labels, predictions), global_batch_size=GLOBAL_BATCH_SIZE)
grads = tape.gradient(d_loss, self.discriminator.trainable_weights)
self.d_optimizer.apply_gradients(zip(grads, self.discriminator.trainable_weights))
#***************************************--GENERATOR HANDLING--***********************
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.ones((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = self.discriminator(self.generator(random_latent_vectors))
g_loss = tf.nn.compute_average_loss(self.loss_fn(misleading_labels, predictions), global_batch_size=GLOBAL_BATCH_SIZE)
grads = tape.gradient(g_loss, self.generator.trainable_weights)
self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))
# Update metrics
self.d_loss_metric.update_state(d_loss)
self.g_loss_metric.update_state(g_loss)
return {"d_loss": self.d_loss_metric.result(),"g_loss": self.g_loss_metric.result()}
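The `tf.nn.compute_average_loss` calls above divide by the GLOBAL batch size rather than each replica's local batch size. The arithmetic, in a plain-NumPy sketch (the replica shapes here are illustrative):

```python
import numpy as np

# Two replicas each compute per-example losses on their own shard.
replica_losses = [np.array([0.2, 0.4]), np.array([0.6, 0.8])]
global_batch_size = sum(len(r) for r in replica_losses)  # 4

# Each replica divides its SUM by the global batch size; when the
# distribution strategy sums the replicas' results, the total equals
# the mean loss over the full (global) batch.
per_replica = [r.sum() / global_batch_size for r in replica_losses]
full_batch_mean = np.concatenate(replica_losses).mean()
assert abs(sum(per_replica) - full_batch_mean) < 1e-12
```

Dividing by the per-replica size instead would scale gradients up by the number of replicas, which is why the notebook threads `GLOBAL_BATCH_SIZE` through both loss computations.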
# + [markdown] id="-nF1xODuMgyn"
# CALLBACK METHOD OVERRIDE
#
# + id="iUUHpTtJMkj4" executionInfo={"status": "ok", "timestamp": 1622451583257, "user_tz": -330, "elapsed": 903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
noise_dim = 100
num_examples_to_generate = 16
# You will reuse this seed over time, so it's easier
# to visualize progress in the animated GIF
seed = tf.random.normal([num_examples_to_generate, noise_dim])
class GANMonitor(tf.keras.callbacks.Callback):
def __init__(self, seed):
self.seed = seed
def on_epoch_end(self, epoch, logs=None):
# generated_images = self.model.generator(random_latent_vectors)
# generated_images *= 255
# generated_images.numpy()
# for i in range(self.num_img):
# img = keras.preprocessing.image.array_to_img(generated_images[i])
# img.save("generated_img_%03d_%d.png" % (epoch, i))
# img.show()
    predictions = self.model.generator(self.seed)
fig = plt.figure(figsize=(6, 6))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
#plt.show()
# + [markdown] id="OYPqtJw5MvVM"
# INITIALIZING MODEL
# + id="aodO8vmjMu-1" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1622451588419, "user_tz": -330, "elapsed": 1294, "user": {"displayName": "<NAME>", "photoUrl": "https://<KEY>", "userId": "13501727943914436615"}} outputId="14d4b091-30fc-4c84-ee61-642204cf1443"
with strategy.scope():
#create generator and discriminator model modal
#generator = make_generator_model()
#discriminator = make_discriminator_model()
#use below 2 lines of code if model is already trained and saved somewhere
generator = load_model('/content/drive/MyDrive/Colab Notebooks/gan_/generator_tpu.h5')
discriminator = load_model('/content/drive/MyDrive/Colab Notebooks/gan_/discriminator_tpu.h5')
gan = GAN(discriminator=discriminator, generator=generator, latent_dim=100)
gan.compile(
d_optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001, beta_1=0.5),
g_optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001, beta_1=0.5),
loss_fn=tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
)
# + [markdown] id="c_Mt1l29M3uG"
# TRAINING
# + id="r1kiM6IQMoSz"
gan.fit(train_dataset, epochs=50, batch_size=BATCH_SIZE, callbacks=[GANMonitor(seed=seed)])
# + [markdown] id="ZKzlU3_yUnO8"
# SAVING MODEL
# + id="9cEBDhkmH4yx"
generator.save('/content/drive/MyDrive/Colab Notebooks/gan_/generator_tpu.h5')
discriminator.save('/content/drive/MyDrive/Colab Notebooks/gan_/discriminator_tpu.h5')
# + [markdown] id="b9gmxh7xm8FT"
# VISUALIZING THE GENERATED IMAGES PER EPOCH
# + id="pu2ErocfnCjx" executionInfo={"status": "ok", "timestamp": 1622452335985, "user_tz": -330, "elapsed": 1183, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
import glob
import imageio
anim_file = '/content/drive/MyDrive/Colab Notebooks/gan_/dcgan_tpu.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
# + colab={"base_uri": "https://localhost:8080/", "height": 449, "output_embedded_package_id": "1ITX48Sx9yrMVXbN0oeIN-K_pH4tsk5I2"} id="7J_qxCD9nGy6" executionInfo={"status": "ok", "timestamp": 1622452351516, "user_tz": -330, "elapsed": 4968, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}} outputId="a0b46816-2c67-4deb-f41c-b499047718ba"
from IPython.display import Image
Image(open(anim_file,'rb').read())
# + [markdown] id="q1atgL8TuEqu"
# CHANGING THE GIF FPS FOR BETTER VISUALIZATION
# + id="dw2XpU_7s1fp" executionInfo={"status": "ok", "timestamp": 1622452913258, "user_tz": -330, "elapsed": 1469, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
gif = imageio.mimread(anim_file)
speed_up_gif = '/content/drive/MyDrive/Colab Notebooks/gan_/dcgan_tpu_fps.gif'
imageio.mimsave(speed_up_gif, gif, fps=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 449, "output_embedded_package_id": "1HAokHdDQcQIFIxVyaRvN8CouOUObfUpv"} id="t0nhECYptKqO" executionInfo={"status": "ok", "timestamp": 1622452918545, "user_tz": -330, "elapsed": 4610, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}} outputId="b0612bf0-8560-4cb3-bfb2-4565f9bd8073"
from IPython.display import Image
Image(open(speed_up_gif,'rb').read())
# + [markdown] id="2nQfzVBt0W3D"
# DOWNLOADING DATA FROM KAGGLE
# + colab={"base_uri": "https://localhost:8080/"} id="qCgfhnBTwMMb" executionInfo={"status": "ok", "timestamp": 1622454036820, "user_tz": -330, "elapsed": 1719, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}} outputId="d3e46111-fbd9-4147-95d2-02ef43340e68"
# Download a Kaggle dataset into Google Drive
#config
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/MyDrive/Colab Notebooks/gan_" #path to where kaggle.json is located
#download dataset
# !kaggle datasets download -d soumikrakshit/anime-faces
# + id="y7JlpPtgzHmG" executionInfo={"status": "ok", "timestamp": 1622454363428, "user_tz": -330, "elapsed": 8299, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvgMvfgTvDMbzLoQz5W-DJoVe1anM15Y0Cj5DDQqo=s64", "userId": "13501727943914436615"}}
from zipfile import ZipFile
with ZipFile("anime-faces.zip", "r") as zipobj:
zipobj.extractall("anime_faces")
# + id="Pn9nlqg7zIgp"
| gan_TPU.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Flight Delay Demo - Deep Learning & Labeling
#
# ## Install prerequisites
#
# Before running the notebook, make sure the correct versions of these libraries are installed.
# !pip install --upgrade tensorflow-gpu==1.13.2 tensorflow==1.13.2
# +
import warnings
warnings.filterwarnings("ignore")
import logging
logging.basicConfig(level = logging.ERROR)
# -
# ## Import open source Python libraries
#
# Import open source dependencies and modules that will be used through out this notebook.
import os
import requests
import utils
import numpy as np
from matplotlib import pyplot as plt
import environment_definition
import string_int_label_map_pb2
# ## Import Azure Machine Learning Python SDK
#
# Import Azure Machine Learning SDK modules.
# +
from azureml.core import Workspace, Experiment
from azureml.core.model import Model
from azureml.core.run import Run
from azureml.widgets import RunDetails
from azureml.core.image import ContainerImage
from azureml.train.dnn import TensorFlow
from azureml.core.runconfig import AzureContainerRegistry, DockerEnvironment, EnvironmentDefinition, PythonEnvironment
from azureml.core.compute import AksCompute, ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.webservice import Webservice, AksWebservice
from azureml.core.image import Image
# -
# ## Connect to Azure Machine Learning workspace
#
# In the next cell, we will create a new Workspace config object using the `<subscription_id>`, `<resource_group_name>`, and `<workspace_name>`. This will fetch the matching Workspace and prompt you for authentication. Please click on the link and input the provided details.
#
# For more information on **Workspace**, please visit: [Microsoft Workspace Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py)
#
# `<subscription_id>` = You can get this ID from the landing page of your Resource Group.
#
# `<resource_group_name>` = This is the name of your Resource Group.
#
# `<workspace_name>` = This is the name of your Workspace.
# +
from azureml.core.workspace import Workspace
try:
# Get instance of the Workspace and write it to config file
ws = Workspace(
subscription_id = '<subscription_id>',
resource_group = '<resource_group>',
workspace_name = '<workspace_name>')
# Writes workspace config file
ws.write_config()
print('Library configuration succeeded')
except Exception as e:
print(e)
print('Workspace not found')
# -
# ## Collect and prepare training data
#
# Let's take a look at a subset of images used for training our model.
# <HTML>
# <TR>
# <TD><img src="./images/train/default/000.jpg" /></TD>
# <TD><img src="./images/train/default/001.jpg" /></TD>
# <TD><img src="./images/train/default/002.jpg" /></TD>
# <TD><img src="./images/train/default/003.jpg" /></TD>
# </TR>
# <TR>
# <TD><img src="./images/train/default/004.jpg" /></TD>
# <TD><img src="./images/train/default/005.jpg" /></TD>
# <TD><img src="./images/train/default/006.jpg" /></TD>
# <TD><img src="./images/train/default/007.jpg" /></TD>
# </TR>
# </HTML>
# ## Keras: Data augmentation
#
# The `tf.keras.preprocessing.image.ImageDataGenerator` class generates batches of tensor image data with real-time data augmentation.
# +
import tensorflow as tf
import os
augmented_folder = './images/augmented'
# Working directory
if not os.path.exists(augmented_folder):
os.makedirs(augmented_folder)
gen = tf.keras.preprocessing.image.ImageDataGenerator(rotation_range=17, width_shift_range=0.12,
height_shift_range=0.12, zoom_range=0.12, horizontal_flip=True)
path = 'images/'
i = 0
for batch in gen.flow_from_directory('images/train', target_size=(224,224),
class_mode=None, shuffle=False, batch_size=32,
save_to_dir=augmented_folder, save_prefix='hi'):
i += 1
if i > 10:
break
# -
# ## Keras: Augmented dataset sample
#
# Review images generated by Keras.
# +
from IPython.display import Image, display
from glob import glob
listofImageNames = glob(path+'augmented/*.png', recursive=True)
for imageName in listofImageNames[:1]:
display(Image(filename=imageName))
print(imageName)
# +
import shutil
shutil.rmtree(augmented_folder)
# -
# ## Create Azure Machine Learning experiment
#
# The `Experiment` constructor creates an experiment instance. It takes in the current workspace (the `ws` object created earlier) and an experiment name.
#
# For more information on **Experiment**, please visit: [Microsoft Experiment Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py)
experiment_name = 'flight-delay-tf'
experiment = Experiment(workspace=ws, name=experiment_name)
# ## Create auto-scaling AML Compute GPU cluster
#
# Firstly, check for the existence of the cluster. If it already exists, we are able to reuse it. Checking for the existence of the cluster can be performed by calling the constructor `ComputeTarget()` with the current workspace and name of the cluster.
#
# In case the cluster does not exist, the next step will be to provide a configuration for the new AML cluster by calling the function `AmlCompute.provisioning_configuration()`. It takes as parameters the VM size and the maximum number of nodes that the cluster can scale up to. Once the configuration is defined, `ComputeTarget.create()` should be called with the configuration object and the workspace object.
#
# For more information on **ComputeTarget**, please visit: [Microsoft ComputeTarget Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.computetarget?view=azure-ml-py)
#
# For more information on **AmlCompute**, please visit: [Microsoft AmlCompute Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.akscompute?view=azure-ml-py)
#
#
# **Note:** Please wait for the execution of the cell to finish before moving forward.
# +
# Choose a name for your GPU cluster
cluster_name = "gpucluster"
# Verify that cluster does not exist already
try:
gpu_cluster = ComputeTarget(workspace = ws, name = cluster_name)
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
min_nodes=0,
max_nodes=4,
admin_username="theadmin",
admin_user_password="<PASSWORD>")
gpu_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
gpu_cluster.wait_for_completion(show_output=True)
# -
# ## Upload training data to Azure Machine Learning Data Store
#
# To register our training data with our Workspace we need to get the data into the data store. The Workspace will already have a default data store. The function `ws.get_default_datastore()` returns an instance of the data store associated with the Workspace, to which files will be uploaded by calling `ds.upload()`.
#
# For more information on **Datastore**, please visit: [Microsoft Datastore Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore?view=azure-ml-py)
# Prepare data
ds = ws.get_default_datastore()
ds.upload('./data')
# +
env_def = EnvironmentDefinition()
env_def.docker = environment_definition.docker_config
env_def.python = environment_definition.python_config
print('Base docker image: ' + env_def.docker.base_image)
# -
# ## Train model using TensorFlow estimator on the GPU cluster
#
# Create the TensorFlow estimator and submit the experiment. The TensorFlow instance takes as parameters the `compute_target`, which will be the GPU cluster created previously, and `entry_script`, which points to our main training script: `train.py`. The `inputs` and `environment_definition` parameters take care of mounting the data store to the remote training cluster and of the dependencies required on that cluster for the training to start.
# +
script_params = {
'--model_dir': './outputs',
'--pipeline_config_path': './faster_rcnn_resnet101_bird.config'
}
tf_est = TensorFlow(source_directory = './train/src',
script_params=script_params,
compute_target=gpu_cluster,
entry_script='train.py',
inputs=[ds.as_download(path_on_compute='/data')],
environment_definition=env_def
)
run = experiment.submit(tf_est)
# -
# ## Display train.py
#
# Let's take a look at our training script. As you can see, it's a standard TensorFlow training script.
with open("./train/src/train.py", "r") as f:
    print(f.read())
# ## Experiment run details
#
# While the experiment is running, we can monitor it through the AML widget.
run = Run(experiment=experiment, run_id=run.id)
RunDetails(run).show()
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Start TensorBoard
#
# The `export_to_tensorboard` function exports experiment run history to Tensorboard logs ready for Tensorboard visualization.
#
# For more information on ***tensorboard Package***, please visit: [Microsoft tensorboard Package Documentation](https://docs.microsoft.com/en-us/python/api/azureml-tensorboard/azureml.tensorboard?view=azure-ml-py)
# +
# Export Run History to Tensorboard logs
from azureml.tensorboard.export import export_to_tensorboard
from azureml.tensorboard import Tensorboard
import os
logdir = 'exportedTBlogs'
log_path = os.path.join(os.getcwd(), logdir)
try:
os.stat(log_path)
except os.error:
os.mkdir(log_path)
export_to_tensorboard(run, logdir)
# The Tensorboard constructor takes an array of runs; since we exported the history to local logs above, pass an empty array and point local_root at the log directory
tb = Tensorboard([], local_root=logdir, port=6006)
# If successful, start() returns a string with the URI of the instance.
tb.start()
# -
# ## Stop TensorBoard
#
# The `Tensorboard.stop()` function stops the Tensorboard instance.
tb.stop()
# ## Hyperparameter training
#
# Hyperparameters are adjustable parameters for model training that guide the training process. The HyperDrive package helps automate choosing these parameters.
#
# The `choice` function specifies a discrete set of options to sample from.
#
# The `HyperDriveConfig Class` is a configuration that defines a HyperDrive run. HyperDrive configuration includes information about hyperparameter space sampling, termination policy, primary metric, resume from configuration, estimator, and the compute target to execute the experiment runs on.
#
# The `normal` function specifies a real value that is normally-distributed with mean mu and standard deviation sigma.
#
# The `PrimaryMetricGoal Enum` defines supported metric goals for hyperparameter tuning. A metric goal is used to determine whether a higher value for a metric is better or worse. Metric goals are used when comparing runs based on the primary metric. For example, you may want to maximize accuracy or minimize error.
#
# The `RandomParameterSampling Class` defines random sampling over a hyperparameter search space.
#
# For more information on ***HyperDrive Package***, please visit: [Microsoft HyperDrive Package Documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive?view=azure-ml-py)
# +
from azureml.train.hyperdrive import BanditPolicy, choice, HyperDriveConfig, normal, PrimaryMetricGoal, RandomParameterSampling
param_sampling = RandomParameterSampling({
"--batch_size": choice(1, 4, 8, 16),
"--learning_rate": normal(0.0002, 0.0006)
})
hyperdrive_run_config = HyperDriveConfig(estimator=tf_est,
hyperparameter_sampling=param_sampling,
primary_metric_name="loss",
primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
max_total_runs=10,
max_concurrent_runs=4)
hyperdrive_run = experiment.submit(hyperdrive_run_config)
# +
from azureml.widgets import RunDetails
hyperdrive_run = Run(experiment, run_id=hyperdrive_run.id)
RunDetails(hyperdrive_run).show()
# -
# ## Register model
#
# After the experiment has ended successfully, we will need to download its outputs in order to register the model against our Azure Machine Learning workspace.
#
# The `get_file_names()` function lists the files that are stored in association with the run.
#
# The `download_file()` function downloads an associated file from storage. As parameters it receives the `name` of the artifact to be downloaded, and the `output_file_path` which is the local path where to store the artifact.
#
# For more information on ***Run Class***, please visit: [Microsoft Run Class Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run.run?view=azure-ml-py#download-file-name--output-file-path-none---validate-checksum-false-)
#
files = run.get_file_names()
results = [file for file in files if ('outputs/model' in file or 'outputs/checkpoint' in file or 'outputs/events' in file or 'outputs/graph' in file or 'outputs/frozen_inference_graph' in file)]
run.download_file('outputs/frozen_inference_graph.pb', './outputs/frozen_inference_graph.pb')
# Next, register the model obtained from the best run by calling the function `register_model()`.
#
# The `Model.register()` function registers a model with the provided workspace.
# register the model for deployment
model = Model.register(model_path = "./outputs/frozen_inference_graph.pb",
model_name = "frozen_inference_graph.pb",
description = "Flight Delay Image",
workspace = ws)
# # Deployment
# ## Fetch Azure Kubernetes Cluster
#
# Let's get a reference to our already existing AKS Cluster `flight-delay-aks`.
# +
from azureml.core.compute import AksCompute
from azureml.core.compute import ComputeTarget
from azureml.exceptions import ComputeTargetException
prov_config = AksCompute.provisioning_configuration(location='westus2')
try:
aks_target = AksCompute(ws, 'flight-delay-aks')
except ComputeTargetException:
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = 'flight-delay-aks',
provisioning_configuration = prov_config)
aks_target.wait_for_completion(True)
# -
# Now that the AKS cluster has been deployed, it’s time to create an `InferenceConfig` object by calling its constructor and passing the runtime type, the path to the `entry_script` (score.py), and the `conda_file` (the previously created file that holds the environment dependencies).
#
# Next, define the configuration of the web service to deploy. This is done by calling `AksWebservice.deploy_configuration()` and passing along the number of `cpu_cores` and `memory_gb` that the service needs.
#
# Finally, in order to deploy the model and service to the created AKS cluster, the function `Model.deploy()` should be called, passing along the workspace object, a list of models to deploy, the defined inference configuration, deployment configuration, and the AKS object created in the step above.
#
# For more information on **InferenceConfig**, please visit: [Microsoft InferenceConfig Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py)
#
# For more information on **AksWebService**, please visit: [Microsoft AksWebService Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.akswebservice?view=azure-ml-py)
#
# For more information on **Model**, please visit: [Microsoft Model Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py)
#
#
# **Note:** Please wait for the execution of the cell to finish before moving forward.
# +
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model
from azureml.exceptions import WebserviceException
# Create an inference config object based on the score.py and myenv.yml from previous steps
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="score.yml")
deployment_config = AksWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1)
try:
service = AksWebservice(ws, 'fd-image-service')
print(service.state)
except WebserviceException:
service = Model.deploy(ws,
'fd-image-service',
[model],
inference_config,
deployment_config,
aks_target)
service.wait_for_deployment(show_output = True)
print(service.state)
# -
# ## Test the service
#
# Now we can put some test data into a suitable format and consume the web service. First, obtain an instance of the web service by calling the constructor `AksWebservice()` with the Workspace object and the service name as parameters. Then call the service via POST using the `requests` module: `requests.post()` calls the deployed web service, taking as parameters the service URL, the test data, and a headers dictionary that contains the authentication token.
#
# For more information on **Webservice**, please visit: [Microsoft Webservice Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice?view=azure-ml-py)
# +
# Test the service
test_image = './images/train/default/000.JPG'
image = open(test_image, 'rb')
input_data = image.read()
image.close()
aks_service_name = 'fd-image-service'
aks_service = AksWebservice(workspace=ws, name=aks_service_name)
auth = 'Bearer ' + aks_service.get_keys()[0]
uri = aks_service.scoring_uri
res = requests.post(url=uri,
data=input_data,
headers={'Authorization': auth, 'Content-Type': 'application/octet-stream'})
results = res.json()
# -
# Let's parse the response received from the Webservice.
# +
#import utils
from PIL import Image
# Show the results
image = Image.open(test_image)
image_np = utils.load_image_into_numpy_array(image)
category_index = utils.create_category_index_from_labelmap('./score/samples/label_map.pbtxt', use_display_name=True)
utils.visualize_boxes_and_labels_on_image_array(
image_np,
np.array(results['detection_boxes']),
np.array(results['detection_classes']),
np.array(results['detection_scores']),
category_index,
instance_masks=results.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=(24, 16))
plt.imshow(image_np)
| notebooks/flight-delay-dl/flight-delay-tf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### load mnist
# +
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# -
train_images.shape
# ### NN Model
# +
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_shape=(28*28, )))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
# -
model.compile(loss="categorical_crossentropy", metrics=['accuracy'], optimizer='rmsprop')
train_images = train_images.reshape((60000, 28*28)).astype('float32')/255
test_images = test_images.reshape((10000, 28*28)).astype('float32')/255
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
model.fit(train_images, train_labels, epochs=5, batch_size=128)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Loss: {test_loss}\nAccuracy: {test_acc}')
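The preprocessing used above (flattening, scaling, one-hot encoding) can be sanity-checked with plain NumPy, independently of Keras. This is an added sketch, not part of the original notebook; the shapes mirror the cells above:

```python
import numpy as np

# Two fake 28x28 grayscale "images" with labels, standing in for MNIST.
images = np.random.randint(0, 256, size=(2, 28, 28))
labels = np.array([3, 7])

# Flatten each image to a 784-vector and scale pixel values into [0, 1].
flat = images.reshape((2, 28 * 28)).astype('float32') / 255

# One-hot encode the labels (what keras.utils.to_categorical does).
one_hot = np.zeros((2, 10), dtype='float32')
one_hot[np.arange(2), labels] = 1.0

print(flat.shape)              # (2, 784)
print(one_hot.argmax(axis=1))  # [3 7]
```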
# ### Tensor expression
import numpy as np
x = np.array(12)
x.ndim
x = np.array([1, 2, 3, 4, 5])
x.ndim
x = np.array([[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]])
x.ndim
x = np.array([[[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]],
[[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]])
x.ndim
# ### MNIST image
digit = train_images[4]
import matplotlib.pyplot as plt
plt.imshow(digit.reshape(28, 28), cmap=plt.cm.binary)
plt.show()
# ### Slice
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
my_slice = train_images[10:100]
my_slice.shape
my_slice = train_images[:, 7:-7, 7:-7]  # select the center pixels of each image
my_slice.shape
# ### Broadcasting
# #### Used when adding tensors of different shapes; the smaller tensor is expanded
# 1. New axes are added to the smaller tensor to match the larger tensor
# 2. The smaller tensor's elements are repeated along these new axes to match the larger tensor's shape
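The two broadcasting steps above can be illustrated with a short NumPy sketch (an added example, not part of the original notebook):

```python
import numpy as np

x = np.ones((3, 5))   # larger tensor, shape (3, 5)
y = np.arange(5.0)    # smaller tensor, shape (5,)

# Step 1: y is treated as if it had a new leading axis -> shape (1, 5).
# Step 2: its elements are repeated along that axis to match x's shape (3, 5).
z = x + y
print(z.shape)   # (3, 5)
print(z[0])      # [1. 2. 3. 4. 5.]
```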
| Ch2.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.1.1
# language: julia
# name: julia-1.1
# ---
using Pkg; Pkg.status()
# # Nonlinear regression using automatic differentiation
# Nonlinear regression or nonlinear least squares fits the parameters of a _model function_ to a vector of _observed responses_ and corresponding values of _covariates_. Typically the data are organized in a table such as a `DataFrame`.
#
# A simple example from enzyme kinetics relates the rate or velocity, `v`, of an enzymatic reaction to the concentration, `c`, of the substrate according to the _Michaelis-Menten_ model
# \begin{equation}
# v = \frac{V_m\,c}{K+c}
# \end{equation}
# The parameters of the model are $V_m$, the maximum velocity, and $K$, the Michaelis parameter. Data from such an experiment is available in the `RDatasets` package as the `Puromycin` data set. This contains data from two experiments. We will concentrate on the data for the treated cells.
using BenchmarkTools, DataFrames, DiffResults, ForwardDiff, LinearAlgebra
using RDatasets, StatsPlots, Tables, Zygote
gr();
Pur = groupby(RDatasets.dataset("datasets", "Puromycin"), :State)
@df Pur[1] scatter(:Conc, :Rate) # Data plot for the treated cells
# As can be seen from the plot, the maximum velocity is around 200 and the Michaelis parameter, `K`, which is the concentration at which the velocity is half the maximum, is about 0.05.
# ## Evaluating the mean vector and the Jacobian
# In the statistical model the response vector, $\mathbf{y}$, is the realization of a random variable, $\mathcal{Y}$, that has a "spherical" Gaussian distribution with mean, $\mathbf{\mu}$, determined from the mean function, $\mathbf{\mu}(\mathbf{\theta},\mathbf{C})$. Here $\mathbf{\theta}$ is the parameter vector and $\mathbf{C}$ is the table of covariate values. The "spherical" multivariate Gaussian (or "normal") distribution means that the covariance matrix is a multiple of the identity, in which case contours of constant probability density are spheres centered at $\mathbf{\mu}$.
# \begin{equation}
# \mathcal{Y}\sim\mathcal{N}\left(\mathbf{\mu}(\mathbf{\theta},\mathbf{C}),\sigma^2\mathbf{I}\right)
# \end{equation}
#
# The maximum likelihood estimates of $\mathbf{\theta}$ are the values that minimize the sum of squared residuals
# \begin{equation}
# \widehat{\mathbf{\theta}}=\arg\min_{\mathbf{\theta}} \|\mathbf{y}-\mathbf{\mu}(\mathbf{\theta},\mathbf{C})\|^2
# \end{equation}
#
# A simple iterative scheme for this _nonlinear least squares_ problem is the _Gauss-Newton_ method of successive linear approximation. Writing $\mathbf{\mu}^{(i)}$ for the value of the mean vector at the $i$ iterate, $\mathbf{\theta}^{(i)}$, the increment is the linear least squares solution of the residual, $\mathbf{r}^{(i)}=\mathbf{y}-\mathbf{\mu}^{(i)}$, on the Jacobian, $\mathbf{J}^{(i)}=\partial\mathbf{\mu}(\mathbf{\theta},\mathbf{C})/\partial\mathbf{\theta}|_{\mathbf{\theta}^{(i)}}$
# \begin{equation}
# \mathbf{\mathbf{\delta}^{(i)}}=\arg\min_{\mathbf{\delta}}\|\mathbf{r}^{(i)}-\mathbf{J}^{(i)}\mathbf{\delta}\|^2
# \end{equation}
# ### Vector evaluation
# In the Michaelis-Menten example the mean function could be written for vector evaluation using dot-broadcast fusion
μ(Vm, K, conc) = @. Vm * conc / (K + conc)
# producing the residual
r = Pur[1].Rate - μ(200., 0.05, Pur[1].Conc)
# The `ForwardDiff` package can provide the Jacobian but it requires a unary function on which to operate. This means that the parameters must be passed as an `AbstractVector` and the covariates must be in the closure of the function passed to, say, `ForwardDiff.jacobian!`. Some applications require reuse of the model function for different sets of covariates, such as the treated and untreated cells here, and I think that would entail using a reference to the data table when defining the model function.
const dataref = Ref(first(Pur))
function μ₁(pars)
Vm, K = pars
conc = dataref[].Conc
@. Vm * conc / (K + conc)
end
const θ = [200., 0.05]
const results = DiffResults.JacobianResult(dataref[].Conc, θ)
const cfg = ForwardDiff.JacobianConfig(μ₁, θ);
ForwardDiff.jacobian!(results, μ₁, θ, cfg)
adjoint(results.value)
results.derivs[1]
map!(-, results.value, dataref[].Rate, results.value) # replace μ by residual
sum(abs2, results.value) # current sum of squared residuals
δ = qr(results.derivs[1])\results.value
θ .+= δ
ForwardDiff.jacobian!(results, μ₁, θ, cfg)
sum(abs2, map!(-, results.value, dataref[].Rate, results.value))
# To change to a new data table, reassign `dataref[]` and create a new `JacobianResult`.
dataref[] = last(Pur)
const res2 = DiffResults.JacobianResult(dataref[].Conc, θ)
ForwardDiff.jacobian!(res2, μ₁, θ, cfg)
adjoint(res2.value)
res2.derivs[1]
# Of course, these computations can be made cleaner by defining a data structure and operating on the struct but it seems to be tricky to get the closure of the model function right, if the model function is to be an element of the struct. Because of lexical scoping the closure is defined at the time the function is defined.
# ## Scalar evaluation
# As all the cool kids are switching to the `Zygote` package for automatic differentiation, I read up on it. As far as I can see, Zygote is primarily used to evaluate gradients of scalar functions, which in this application would mean iterating over the rows of the data table. To do so it is worthwhile converting the `DataFrame`, which is column-oriented, to a "row table".
#
# The sum of squared residuals can be accumulated during the loop. Also a Cholesky factor can be used instead of a QR factorization of the Jacobian to evaluate the increment. It is convenient to zero out the Cholesky factor and use `LinearAlgebra.lowrankupdate!` for the update. If the residual value is appended to the gradient in the update vector, `v`, then the Cholesky factor can be solved for the increment in one step.
const rt = rowtable(first(Pur))
const ch = cholesky(zeros(3, 3) + I)
const v = zeros(3);
function updatech!(ch::Cholesky{T, Matrix{T}}, θ::AbstractVector{T}, rowtbl, v::Vector{T}) where {T}
rss = zero(T) # residual sum of squares
fill!(ch.factors, false)
Vm, K = θ
for r in rowtbl
copyto!(v, gradient((Vm, K) -> μ(Vm, K, r.Conc), Vm, K))
resid = r.Rate - μ(Vm, K, r.Conc)
rss += abs2(resid)
v[end] = resid
lowrankupdate!(ch, v)
end
rss
end
updatech!(ch, θ, rt, v) # returns sum of squared residuals at current θ
ch
ldiv!(LowerTriangular(view(ch.factors, 1:2, 1:2)), copyto!(δ, view(ch.factors, 1:2, 3)))
# As I understand it, `Zygote.forwarddiff` could be a better choice of AD algorithm but I don't quite understand how to use it. I thought it could work like this but it doesn't
function forwardupdate!(ch::Cholesky{T, Matrix{T}}, θ::AbstractVector{T}, rowtbl, v::Vector{T}) where {T}
rss = zero(T) # residual sum of squares
fill!(ch.factors, false) # zero out the Cholesky factor
for r in rowtbl
copyto!(v, Zygote.forwarddiff(θ -> μ(θ[1], θ[2], r.Conc), θ))
resid = r.Rate - μ(θ[1], θ[2], r.Conc)
rss += abs2(resid)
v[end] = resid
lowrankupdate!(ch, v)
end
rss
end
forwardupdate!(ch, θ, rt, v)
ch
# Of course the `updatech!` function is sufficiently fast that it is not a problem to use the reverse mode AD. I am just concerned that I may be using a sledgehammer to crack a nut.
@benchmark updatech!($ch, $θ, $rt, $v)
| NonlinearRegression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import sklearn
from sklearn.preprocessing import LabelEncoder
# +
#read file
data=pd.read_csv("/Users/Nandu/Documents/OneDrive/Documents/new era/PinNPark/ML/train_data.csv")
# -
data.head()
# +
#Separate labels and features
y=LabelEncoder().fit_transform(data["Loan_Status"])
x=data.drop('Loan_Status',1)
# +
#y
# -
x.head()
y.shape
# +
def conv_to_num(data):
#convert to categories
x=data
for columns in data.columns:
x[columns]=data[columns].astype('category')
#convert to numerical
for columns in data.columns:
x[columns]=data[columns].cat.codes #missing values??
return x
# -
x.shape
x=conv_to_num(x)
x.head()
x.shape
print(x.dtypes)
x.head()
# +
from sklearn import tree
# -
clf=tree.DecisionTreeClassifier()
clf=clf.fit(x,y)
x.shape
data_test=pd.read_csv("/Users/Nandu/Documents/OneDrive/Documents/new era/PinNPark/ML/test_data.csv")
data_test.head()
data_test.shape
# +
#data_num=data_test
#data_test
# -
temp=data_test.copy()
#data_test
x_test=conv_to_num(temp) # Not a good move: category codes here may not match the training encoding
#temp
x_test.shape
x_test.head()
y_test=clf.predict(x_test)
y_test.dtype
# +
y_result=[]
for i in range(len(y_test)):
    if y_test[i]==0:
        y_result.append("N")
    elif y_test[i]==1:
        y_result.append("Y")
# -
y_result=pd.DataFrame(y_result)
y_result.columns=["Loan_Status"]
y_result.head()
y_result.shape
data_test.head()
data_result=pd.concat([data_test,y_result],1)
data_result.head()
data_result.to_csv("/Users/Nandu/Documents/OneDrive/Documents/new era/PinNPark/ML/train_result.csv")
from sklearn.model_selection import train_test_split
xv_train,xv_test,yv_train,yv_test=train_test_split(x,y,test_size=.5)
xv_train.shape
clf=clf.fit(xv_train,yv_train)
from sklearn.metrics import accuracy_score
print accuracy_score(clf.predict(xv_test),yv_test)
from sklearn.naive_bayes import GaussianNB
model_nb = GaussianNB()
# +
model_nb=model_nb.fit(x,y)
# -
print accuracy_score(model_nb.predict(xv_test),yv_test)
from sklearn.linear_model import LogisticRegression
model_lg=LogisticRegression()
model_lg=model_lg.fit(x,y)
print accuracy_score(model_lg.predict(xv_test),yv_test)
| Loan_optimised.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:namicAnacondaEnv]
# language: python
# name: conda-env-namicAnacondaEnv-py
# ---
## Boilerplate code common to many notebooks. See the TestFilesCommonCode.ipynb for details
from __future__ import print_function
# %run TestFilesCommonCode.ipynb
img_labels_filename = '/Volumes/G-RAID1/Ali/catptmr_fmri/input_atlas/felineAtlasFilledIncludeBoneRevised_HD_approved_01162017_bckforcotxmeasure.nii'
img_labels = sitk.ReadImage(img_labels_filename)
img_labels = sitk.Cast(img_labels,sitk.sitkInt16)
myshow(sitk.LabelToRGB(img_labels))
exclusionLabels=((img_labels == 1) +
(img_labels == 6) +
(img_labels == 7) +
(img_labels == 39) +
(img_labels == 40) +
(img_labels == 45) +
(img_labels == 46) +
(img_labels == 78) +
(img_labels == 79) +
(img_labels == 80))
myshow(sitk.LabelToRGB(exclusionLabels))
important_labels=img_labels*(1-sitk.Cast(exclusionLabels,sitk.sitkInt16))
size = important_labels.GetSize()
#myshow(sitk.Expand(sitk.LabelToRGB(important_labels[size[0]//2,:,::-1]),[3,3,3]))
myshow(sitk.LabelToRGB(important_labels))
sitk.WriteImage(important_labels,'/Volumes/G-RAID1/Ali/catptmr_fmri/input_atlas/felineAtlas_for_correlationSeed.nii')
| Invicro_ModifyLabelMap_cat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import urllib.request
import xarray as xr
import numpy as np
from datetime import datetime, date, time, timedelta
import urllib
import requests
import json
import smtplib
# +
# not using this right now, but consider putting the instance here
def get_key(file_name):
myvars = {}
with open(file_name) as myfile:
for line in myfile:
name, var = line.partition("=")[::2]
myvars[name.strip()] = str(var).rstrip()
return myvars
file_key = "C:/Users/gentemann/Google Drive/f_drive/secret_keys/saildrone.txt"
saildrone_key = get_key(file_key)
file_key = "C:/Users/gentemann/Google Drive/f_drive/secret_keys/gmail_login.txt"
email_key = get_key(file_key)
# ## Use restful API to get USV locations
endtime = datetime.today().strftime('%Y-%m-%d')
starttime = (datetime.today() + timedelta(days=-5)).strftime('%Y-%m-%d')
#all_usv = ['1041','1033','1034','1035','1036','1037']
all_usv = ['1034','1035','1036','1037']
#get token
payload={'key': saildrone_key['key'], 'secret':saildrone_key['secret']}
headers={'Content-Type':'application/json', 'Accept':'application/json'}
url = 'https://developer-mission.saildrone.com/v1/auth'
res = requests.post(url, json=payload, headers=headers)
json_data = json.loads(res.text)
names=[]
inum_usv = len(all_usv)
ilen = 500 #len(usv_data['data'])
usv_lats = np.empty((ilen,inum_usv))*np.nan
usv_lons = np.empty((ilen,inum_usv))*np.nan
usv_time = np.empty((ilen,inum_usv))*np.nan
for iusv in range(inum_usv):
str_usv = all_usv[iusv]
url = 'https://developer-mission.saildrone.com/v1/timeseries/'+str_usv+'?data_set=vehicle&interval=5&start_date='+starttime+'&end_date='+endtime+'&order_by=desc&limit=500&offset=0'
payload = {}
headers = {'Accept':'application/json','authorization':json_data['token']}
res = requests.get(url, json=payload, headers=headers)
usv_data = json.loads(res.text)
#print(usv_data.data)
for i in range(ilen):
usv_lons[i,iusv]=usv_data['data'][i]['gps_lng']
usv_lats[i,iusv]=usv_data['data'][i]['gps_lat']
usv_time[i,iusv]=usv_data['data'][i]['gps_time']
names.append(str_usv)
xlons = xr.DataArray(usv_lons,coords={'time':usv_time[:,0],'trajectory':names},dims=('time','trajectory'))
xlats = xr.DataArray(usv_lats,coords={'time':usv_time[:,0],'trajectory':names},dims=('time','trajectory'))
ds_usv = xr.Dataset({'lon': xlons,'lat':xlats})
# -
msg_body=[]
for i in range(1):
for j in range(inum_usv):
dt = datetime.fromtimestamp(ds_usv.time[i].data)
s = dt.strftime('%Y-%m-%d %H:%M:%S')
msg = all_usv[j]+" "+s+" lon :{0:5.2f}, lat :{1:5.2f} ".format(ds_usv.lon[i,j].data,ds_usv.lat[i,j].data)
msg_body.append(msg)
# +
# Note: this plain-SMTP connection is never used; the actual send below
# opens a fresh SMTP_SSL connection on port 465.
try:
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.ehlo()
except Exception as e:
    print('Something went wrong...', e)
sent_from = email_key['key']
to = ['<EMAIL>', '<EMAIL>']
subject = 'Daily Saildrone Position Update'
body = "\n".join(msg_body)  # msg_body is a list; join it into a single string
email_text = """\
From: %s
To: %s
Subject: %s

%s
""" % (sent_from, ", ".join(to), subject, body)
try:
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(email_key['key'], email_key['secret'])
server.sendmail(sent_from, to, email_text)
server.close()
print('Email sent!')
except Exception as e:
    print('Something went wrong...', e)
# -
| arctic_cruise/.ipynb_checkpoints/Email_current_locations-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/WuilsonEstacio/github-para-estadistica/blob/main/Estadistica3_y_prueba_de_hipotesis_py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="reE5Rs4OOJLl"
import pandas as pd
import numpy as np
from scipy.stats import norm
# + id="gMqgD-5k5lAI"
# Create a Population DataFrame with 10 data
data = pd.DataFrame()
data['Population'] = [47, 48, 85, 20, 19, 13, 72, 16, 50, 60]
# + id="DgUys2qo6B1B" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="a35e310d-2a27-407e-d07a-05d7a4a301a6"
# Draw a sample with replacement, size=5, from Population
a_sample_with_replacement = data['Population'].sample(5, replace=True)
print(a_sample_with_replacement)
# + id="pMV22q_D6CB5" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="a5948018-08de-42fb-b2d6-aa806abb3ed9"
# Draw sample without replacement, size=5 from Population
a_sample_without_replacement = data['Population'].sample(5, replace=False)
print(a_sample_without_replacement)
# + [markdown] id="NLU0yECSGiVi"
#
# # Parameters and Statistics
# + id="yg1FqFhm6CH9" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="7fecca1d-3b6a-4a74-ab1d-ddd859a48589"
# Calculate mean and variance
population_mean = data['Population'].mean()      # mean() computes the mean
population_var = data['Population'].var(ddof=0)  # var() computes the variance
print('Population mean is ', population_mean)
print('Population variance is', population_var)
# + id="9Y4sX1sv6CFS" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="e722ed75-f68f-47f9-bc9a-6e2e55e28b44"
# Calculate the sample mean and sample standard deviation, size = 10
# You will get different means and variances each time you run the following code
a_sample = data['Population'].sample(10, replace=True)
sample_mean = a_sample.mean()
sample_var = a_sample.var()
print('Sample mean is ', sample_mean)
print('Sample variance is', sample_var)
# + [markdown] id="JOZBNSRLL4Mb"
# # Average of an unbiased estimator
# + id="Hgdty5BkLziK"
sample_length = 500
sample_variance_collection=[data['Population'].sample(10, replace=True).var(ddof=1) for i in range(sample_length)]
# + [markdown] id="De8qtMPPMMB6"
# # Variation of Sample
# + id="jQlxVnsxMI2L"
import pandas as pd
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="900sQL71MVUM" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="90b18035-9478-42d3-8cec-6cfaf12e1e11"
# The sample mean and SD keep changing, but always stay within a certain range
Fstsample = pd.DataFrame(np.random.normal(10, 5, size=30))
print('sample mean is ', Fstsample[0].mean())
print('sample SD is ', Fstsample[0].std(ddof=1))
# + [markdown] id="itvObMvXM44b"
# # Empirical Distribution of mean
# + id="zpPH6MaqM1SE"
meanlist = []
for t in range(10000):
    sample = pd.DataFrame(np.random.normal(10, 5, size=30))  # 10 is the mean, 5 is the standard deviation
meanlist.append(sample[0].mean())
# + id="fisinzK0NJrk"
collection = pd.DataFrame()
collection['meanlist'] = meanlist
# + id="ksf4DcceNMTg" colab={"base_uri": "https://localhost:8080/", "height": 772} outputId="543f55fb-30cc-4734-839c-3b953f024c72"
collection['meanlist'].hist(bins=100, density=True, figsize=(15,8))  # normed was removed from matplotlib; use density
# + [markdown] id="uX6MgHrjN3CD"
# # Sampling from arbitrary distribution
# + id="Yu2GALxlNz0t" colab={"base_uri": "https://localhost:8080/", "height": 806} outputId="5d35d40c-9146-48fe-f912-49d65b66a3fe"
# See what the central limit theorem tells you: if the sample size is large enough,
# the distribution of the sample mean is approximately normal.
# apop is not normal, but try changing the sample size from 100 to a larger number:
# the distribution of the sample mean of apop becomes normal.
sample_size = 100
samplemeanlist = []
apop = pd.DataFrame([1, 0, 1, 0, 1])
for t in range(10000):
sample = apop[0].sample(sample_size, replace=True) # small sample size
samplemeanlist.append(sample.mean())
acollec = pd.DataFrame()
acollec['meanlist'] = samplemeanlist
acollec.hist(bins=100, density=True, figsize=(15,8))
# + id="qvU-2nkqOC0t"
from scipy.stats import norm
# + id="BoeSBtOzOFc2" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="5b566e85-4ca4-4773-a559-35a1a2b15750"
ms = pd.read_csv('/content/microsoft.csv',index_col = 0)
ms.head()
# + [markdown] id="8jttY0unOTC1"
#
# # Estimate the average stock return with 90% Confidence Interval
# + id="_Akhg8F7OU7D"
# we will use log return for average stock return of Microsoft
ms['logReturn'] = np.log(ms['Close'].shift(-1)) - np.log(ms['Close'])
# + [markdown] id="SeK1xlIr2Vvz"
# By default, norm.ppf uses mean = 0 and stddev = 1, which is the "standard" normal distribution. You can use a different mean and standard deviation by specifying the loc and scale arguments, respectively.
# If you look at the source code of scipy.stats.norm, you will find that the ppf method ultimately calls scipy.special.ndtri. So, to compute the inverse of the CDF of the standard normal distribution, you can use that function directly:
# + id="aSaHA-98OdWl"
# Let's build a 90% confidence interval for log return
sample_size = ms['logReturn'].shape[0]  # shape is a tuple giving the number of elements in each dimension
sample_mean = ms['logReturn'].mean()
sample_std = ms['logReturn'].std(ddof=1) / sample_size**0.5
# left and right quantile (5% in each tail for a 90% interval)
z_left = norm.ppf(0.05)
z_right = norm.ppf(0.95)
# upper and lower bound
interval_left = sample_mean + z_left*sample_std
interval_right = sample_mean + z_right*sample_std
# + id="_IUFvV1eOdeU" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9c7f5b75-a676-41aa-dace-6f0146a1d594"
# 90% confidence interval tells you that there will be 90% chance that the average stock return lies between "interval_left"
# and "interval_right".
print('90% confidence interval is ', (interval_left, interval_right))
# + [markdown] id="yqz4ivN2OqwX"
#
# # Hypothesis testing
# + id="uOVtX3Z8O_Ju"
# import microsoft.csv, and add a new feature - logreturn
ms = pd.read_csv('microsoft.csv', index_col = 0)
ms['logReturn'] = np.log(ms['Close'].shift(-1)) - np.log(ms['Close'])
# + id="MV4laFlIO_hT" colab={"base_uri": "https://localhost:8080/", "height": 490} outputId="ee863157-f095-465e-a131-c17dc6057bb9"
# Log return goes up and down during the period
ms['logReturn'].plot(figsize=(20, 8))
plt.axhline(0, color='red')
plt.show()
# + [markdown] id="jTcXc3VGPf2l"
#
# # Steps involved in testing a claim by hypothesis testing
# # Step 1:
# Set hypothesis
# $H_0 : \mu = 0$ $H_a : \mu \neq 0$
#
# H0 means the average stock return is 0; Ha means the average stock return is not equal to 0.
# + [markdown] id="VaLn6oawQF5G"
# Step 2: Calculate test statistic
# + [markdown] id="O0Qc7u3jAe-k"
# If Ha: mu is not equal to 0, it is a two-tailed test and p-value = 2*(1 - norm.cdf(np.abs(z), 0, 1))
#
# If Ha: mu > 0, it is an upper-tail test and p-value = 1 - norm.cdf(z, 0, 1)
#
# If Ha: mu < 0, it is a lower-tail test and p-value = norm.cdf(z, 0, 1)
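# The three rules above can be wrapped in a small helper. This function is
# hypothetical (not part of the course code) and simply restates the rules
# using scipy's norm.cdf:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical helper implementing the p-value rules stated above.
def z_p_value(z, alternative='two-sided'):
    if alternative == 'two-sided':   # Ha: mu != 0
        return 2 * (1 - norm.cdf(np.abs(z), 0, 1))
    elif alternative == 'greater':   # Ha: mu > 0
        return 1 - norm.cdf(z, 0, 1)
    elif alternative == 'less':      # Ha: mu < 0
        return norm.cdf(z, 0, 1)
    raise ValueError("unknown alternative: " + alternative)

print(z_p_value(1.96))  # close to 0.05
```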
# + id="bKkdtOOgO_n1" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="126c537f-4815-4c27-da0b-1977b6f8f5eb"
# here we use a z-distribution, which is appropriate for large samples.
sample_mean = ms['logReturn'].mean()      # sample_mean is the sample mean, the average daily return.
sample_std = ms['logReturn'].std(ddof=1)  # sample_std is the sample standard deviation; n is the sample size.
n = ms['logReturn'].shape[0]
# if sample size n is large enough, we can use z-distribution, instead of t-distribtuion
# mu = 0 under the null hypothesis
zhat = (sample_mean - 0)/(sample_std/n**0.5)  # z = (x - mu)/(sigma/sqrt(n)); mu is zero under the null hypothesis
print(zhat)
# + [markdown] id="PMicFsgUQP59"
# # Step 3: Set decision criteria
# + id="HqYE2dYbQQKI" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9c5ced39-ef8f-43e9-c9b1-d7e3f50f78f4"
# significance level
alpha = 0.05
zleft = norm.ppf(alpha/2, 0, 1)
zright = -zleft # z-distribution is symmetric
print(zleft, zright)
# + [markdown] id="gWZ4yZR7QQUl"
# # Step 4: Make decision - shall we reject H0?
# + id="J2yYd15UQQc3" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="020a9fc0-b340-42a3-8791-7088816df082"
print('At significant level of {}, shall we reject: {}'.format(alpha, zhat>zright or zhat<zleft))
# + [markdown] id="kevsiI1a-8qJ"
# # An alternative method: p-value
# + id="dz8aemSB-806"
# step 3 (p-value); Ha is two-sided, so double the upper-tail probability
p = 2 * (1 - norm.cdf(np.abs(zhat), 0, 1))
print(p)
# + id="6kU5dqaV-8_P"
# step 4
print('At significant level of {}, shall we reject: {}'.format(alpha, p < alpha))
| Estadistica3_y_prueba_de_hipotesis_py.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Two and Three Dimensional Grids
#
# In this lesson, we discuss the details of how to extend what we have learned to two and three dimensional grids. Throughout the lessons so far, we have been careful to avoid making too many assumptions that were specific to the fact that the cases were one-dimensional. It is therefore quite straightforward to extend what we have done so far to higher dimensional grids, provided they have an ordered, orthogonal structure.
#
# This leads us to a distinction between what are called "structured" and "unstructured" grids. It is important to note that these labels are primarily associated with how the grid cells are labelled, not their actual topologies (although structured grids are restricted by the topologies that can be represented with structured storage schemes). The figure below shows two identical grids where one is stored as a structured grid (i.e. with ordered row and column labelling) and one that is stored as an unstructured grid (i.e. with arbitrary cell labelling).
#
# 
#
# Clearly, for grids that are composed of arbitrary polyhedra with no inherent structure, there is no option other than storing the grid with an unstructured labelling scheme, as shown below.
#
# 
#
# In this lesson, we will first look at how to work with higher dimensional structured grids, then we will discuss some of the details of unstructured grids.
# ## Structured Grids
#
# ### General Discretization
#
# We will now discretize a generic transport equation over a two-dimensional structured grid. The extension to three dimensions from this is then almost trivial. From Lesson 1, recall the generic transport equation:
#
# $$
# \frac{\partial{\phi}}{{\partial t}} + \nabla\cdot\left(\mathbf{u}\phi\right) + \nabla\cdot\mathbf{J}_\phi = S_\phi
# $$
#
# Carrying through the space-time integration, we arrive at:
#
# $$
# \frac{\phi^{t+\Delta t/2}-\phi^{t-\Delta t/2}}{\Delta t}
# + \sum_{i=0}^{N_{ip}-1} \mathbf{u}_{ip}\cdot\mathbf{n}_{ip}\phi_{ip} A_{ip}
# = \sum_{i=0}^{N_{ip}-1} \mathbf{J}_{\phi,ip}\cdot\mathbf{n}_{ip}A_{ip}
# + S_\phi V_P
# $$
#
# Suppose we want to make this general form represent the conservation of mass equation, then we set $\phi=\rho$, $\mathbf{J}_{\phi} = 0$ and $ S_\phi = 0$. If we then label the integration points as $w$ (west), $e$ (east), $s$ (south), and $n$ (north), we arrive at:
#
# $$
# \frac{\rho^{t+\Delta t/2}-\rho^{t-\Delta t/2}}{\Delta t}
# + \dot{m}_e - \dot{m}_w + \dot{m}_n - \dot{m}_s
# = 0
# $$
#
# Assuming density is constant we arrive at:
#
# $$
# \dot{m}_e - \dot{m}_w + \dot{m}_n - \dot{m}_s = 0
# $$
#
# To make the general form represent the conservation of momentum equation in the $x$-direction, we set $\phi = \rho u$,
# $\mathbf{J}_{\phi} = \mu \nabla u$, and $S_\phi = -\partial p/\partial x$
#
# $$
# \frac{\phi^{t+\Delta t/2}-\phi^{t-\Delta t/2}}{\Delta t}
# + \dot{m}_e u_e - \dot{m}_w u_w + \dot{m}_n u_n - \dot{m}_s u_s
# = \mu \left.\frac{\partial u}{\partial x}\right|_e - \mu \left.\frac{\partial u}{\partial x}\right|_w
# + \mu \left.\frac{\partial u}{\partial y}\right|_n - \mu \left.\frac{\partial u}{\partial y}\right|_s
# - \frac{\partial p}{\partial x} V_P
# $$
#
# Similarly, we can derive the momentum equation for the $y$ component of velocity. To complete the discretization of the momentum equation, we should subtract the mass equation, multiplied by the appropriate velocity component at cell $P$, from each momentum equation. Then, we choose a time integration scheme to complete the transient term, choose an advection scheme to complete the advection term (although we will still linearize based on UDS), and approximate the derivative terms using finite differences.
#
# Without going through all of the details, the momentum equations in the $x$ and $y$ directions can be written in terms of their linearization coefficients (similar to Lesson 5):
#
# $$
# a_P u_P = - a_W u_W - a_E u_E - a_S u_S - a_N u_N + b_u - \frac{p_E - p_W}{2\Delta x}V_P
# $$
#
# $$
# a_P v_P = - a_W v_W - a_E v_E - a_S v_S - a_N v_N + b_v - \frac{p_N - p_S}{2\Delta y}V_P
# $$
#
# Similar to the one-dimensional case, an oscillatory pressure field can be detected as smooth if we are not careful. In two dimensions, the situation is actually worse because oscillatory modes can develop in both directions. The diagram below shows an example that would be accepted by the solver as a smooth pressure field:
#
# 
#
# As in one dimension, this problem can be overcome by (i) using a staggered grid or (ii) using a collocated grid with different advected and advecting velocities. Since the staggered grid becomes more complicated as the number of dimensions increases, we will only consider the collocated approach. The derivation follows a very similar pattern to what was shown in Lesson 5. The resulting expressions for the advecting velocities in each direction are:
#
# $$
# \hat{u}_e = \frac{1}{2}\left(u_P + u_E \right)
# - \hat{d}_e^u\left[\left.\frac{dp}{dx}\right|_e - \frac{1}{2}\left(\left.\frac{dp}{dx}\right|_P + \left.\frac{dp}{dx}\right|_E \right)\right]
# $$
#
# $$
# \hat{v}_n = \frac{1}{2}\left(v_P + v_N \right)
# - \hat{d}_n^v\left[\left.\frac{dp}{dy}\right|_n - \frac{1}{2}\left(\left.\frac{dp}{dy}\right|_P + \left.\frac{dp}{dy}\right|_N \right)\right]
# $$
#
# where the superscript on $\hat{d}$ denotes the equation with which it is associated. Similar to one-dimension, the coupling can either be done in a direct or segregated method (e.g. SIMPLE or SIMPLEC).
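# The advecting-velocity expression for $\hat{u}_e$ translates directly into
# code. A minimal sketch for a single east face, with made-up scalar inputs
# (not a full solver):

```python
# Sketch of the collocated-grid advecting velocity from the expression
# above; all inputs are hypothetical values for a single east face.
def advecting_velocity(u_P, u_E, dpdx_e, dpdx_P, dpdx_E, d_hat_e):
    # cell-average velocity plus a pressure-redistribution correction
    return 0.5 * (u_P + u_E) - d_hat_e * (dpdx_e - 0.5 * (dpdx_P + dpdx_E))

# For a linear (smooth) pressure field the correction term vanishes:
print(advecting_velocity(1.0, 2.0, 0.5, 0.5, 0.5, 0.1))  # -> 1.5
# For an oscillatory field the face gradient differs from the cell average,
# and the correction term acts on the difference:
print(advecting_velocity(1.0, 2.0, 5.0, -5.0, -5.0, 0.1))  # -> 0.5
```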
#
# ### False Diffusion
#
# We have already discussed false diffusion in one dimension and found that although a Taylor series analysis shows it is a serious problem, it is not as bad as the analysis would indicate. We found that using UDS for linearization and correcting the advective fluxes with a higher order method was an effective method of getting good results in these cases.
#
# In two (and three) dimensions the problem of false diffusion comes from a different source than in one dimension, and is associated with cases where the flow streamlines are not well aligned with the grid lines. For steady advection of a scalar quantity with no sources and negligible real diffusion in comparison to advection, the discrete transport equation is given as:
#
# $$
# \dot{m}_e \phi_e - \dot{m}_w \phi_w + \dot{m}_n \phi_n - \dot{m}_s \phi_s = 0
# $$
#
# If we use UDS for advection, and we assume a positive flow in both the $x$ and $y$ directions, we get:
#
# $$
# \dot{m}_e \phi_P - \dot{m}_w \phi_W + \dot{m}_n \phi_P - \dot{m}_s \phi_S = 0
# $$
#
# Solving for $\phi_P$ we get:
#
# $$
# \phi_P = \frac{\dot{m}_w}{\dot{m}_e + \dot{m}_n} \phi_W + \frac{\dot{m}_s}{\dot{m}_e + \dot{m}_n} \phi_S
# $$
#
# If we consider a flow at 45 degrees to the $x$ axis, then $\dot{m}_e = \dot{m}_w = \dot{m}_n = \dot{m}_s$ and the solution becomes:
#
# $$
# \phi_P = \frac{1}{2} \phi_W + \frac{1}{2} \phi_S
# $$
#
# If a value of 0 is advected from the bottom surface of the domain and a value of 1 is advected from the left surface of the domain, the exact solution is a step profile at any cross-section perpendicular to the flow direction, as shown below:
#
# 
#
# The code cell below shows the actual solution to the problem:
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Create an array to hold the solution
phi = np.zeros((7, 7))
# Fill in the advected values on the left
phi[:,0] = 1
# Compute the solution starting from the bottom left
for j in reversed(range(phi.shape[0]-1)):
for i in range(1,phi.shape[1]):
phi[j,i] = 0.5*phi[j,i-1] + 0.5*phi[j+1,i]
# Print the solution matrix
print(phi)
# Plot the solution along the diagonal cross-section
sol = np.diag(phi)
x = np.arange(sol.size)  # Scale of axis is arbitrary
plt.plot(x, sol, label='Solution')
# Plot the best possible numerical solution based on this grid
best = np.where(x < x.size/2.0, 1, 0)
plt.plot(x, best, label='Best Numerical')
# Plot the exact solution based on a fine grid
x_exact = np.linspace(0, x.size, 1000)
exact = np.where(x_exact < x.size/2.0, 1, 0)
plt.plot(x_exact, exact, label='Exact')
# Show the plot with legend
plt.legend()
plt.show()
# -
# It can clearly be seen that the solution looks quite diffusive. Since there is no actual diffusion, all of this represents false diffusion. In order to get a good solution the false diffusion coefficient $\Gamma_{false}$ should be much less than the real diffusion coefficient, $\Gamma_{real}$.
#
# An approximate expression for the false diffusion coefficient in two dimensions can be found to be:
#
# $$
# \Gamma_{false} = \frac{\rho |\mathbf{u}| \Delta x \Delta y \sin(2\theta)}{4 (\Delta y \sin^3(\theta) + \Delta x \cos^3(\theta))}
# $$
#
# where $\Delta x$ and $\Delta y$ are the grid spacings in each direction and $\theta$ is the angle that the velocity makes with the $x$ axis.
#
# Below, we plot the value of $\Gamma_{false}$ for different angles and grid spacings (assuming equal grid spacings in $x$ and $y$).
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Assume unit values of density and velocity magnitudes
# Set the parameters of study
delta = [0.01, 0.05, 0.1]
theta = np.linspace(0, np.pi/2, 100)
# Calculate the false diffusion coefficients
for d in delta:
gamma = d*d*np.sin(2*theta)/4/(d*np.power(np.sin(theta), 3) + d*np.power(np.cos(theta),3))
plt.plot(theta, gamma, label="dx = " + str(d))
# Show the plot
plt.xlabel(r"$\theta$")
plt.ylabel(r"$\Gamma_{false}$")
plt.legend()
plt.show()
# -
# It can be seen in the plot above that false diffusion is most severe when the flow is at 45 degrees to the grid lines and is essentially zero when the flow is parallel to the grid lines. It is also shown that refining the grid spacing reduces false diffusion.
#
# In order to improve accuracy we can use higher order advection schemes to reduce the effects of false diffusion as much as possible, while also ensuring the grid is fine enough.
# ## Non-Orthogonal Unstructured Grids
#
# One of the differences between structured and unstructured grids is that there is no natural ordering available in an unstructured grid. Unlike a structured grid, where the neighbouring control volumes can be found by an appropriate shifting of the given cell index, unstructured grids must store a map of the cell connectivity. That is to say, for each cell, there must be a list of all of its neighbouring cells.
#
# Other issues that are faced in non-orthogonal unstructured grids are the calculation of grid geometry, interpolation, and gradient reconstruction. Those topics will be discussed below.
#
# ### Grid Geometry
#
# In general, an unstructured grid will be defined by a set of points representing the corners of the control volume. These points are connected by edges which define a set of faces. Each face then belongs to two control volumes, one on each side. To calculate the grid geometry, we start with the faces and then build the volumes. Here we will assume that the faces are arbitrary polygons that combine to make arbitrary polyhedral control volumes.
#
# To calculate the face geometry, we start by choosing a single (arbitrary) corner node and connect it with each of the other corner nodes, creating a set of triangular faces. Using cross products, we can calculate the area of each triangle.
#
# 
#
# For the example above:
#
# $$
# A_0 = \frac{1}{2} \| (\mathbf{x}_1 - \mathbf{x}_0) \times (\mathbf{x}_2 - \mathbf{x}_0) \|
# $$
#
# $$
# A_1 = \frac{1}{2} \| (\mathbf{x}_2 - \mathbf{x}_0) \times (\mathbf{x}_3 - \mathbf{x}_0) \|
# $$
#
# $$
# A_2 = \frac{1}{2} \| (\mathbf{x}_3 - \mathbf{x}_0) \times (\mathbf{x}_4 - \mathbf{x}_0) \|
# $$
#
# Or, in general for the triangle with index $i$:
#
# $$
# A_i = \frac{1}{2} \| (\mathbf{x}_{i+1} - \mathbf{x}_0) \times (\mathbf{x}_{i+2} - \mathbf{x}_0) \|
# $$
#
# In a general case with $N_c$ corner nodes, the total area of the face associated with the integration point $ip$ can be calculated as:
#
# $$
# A_{ip} = \sum_{i=0}^{N_c-3} A_i
# = \frac{1}{2} \sum_{i=0}^{N_c-3} \| (\mathbf{x}_{i+1} - \mathbf{x}_0) \times (\mathbf{x}_{i+2} - \mathbf{x}_0) \|
# $$
#
# We also need to find the centroid of the face, which is the position of the integration point (recall Lesson 1, where we showed this positioning is critical for second order accuracy). To find the centroid, we use the area-weighted average of the centroids of each of the sub-divided triangular faces, defined above. The centroid of a triangle is defined by the average of its corner positions:
#
# $$
# \mathbf{x}_{c,i} = \frac{1}{3} (\mathbf{x}_0 + \mathbf{x}_{i+1} + \mathbf{x}_{i+2})
# $$
#
# where $i = 0,1,...,N_c-3$.
#
# The integration point location is therefore:
#
# $$
# \mathbf{x}_{ip} = \frac{1}{A_{ip}} \sum_{i=0}^{N_c-3} A_i \mathbf{x}_{c,i}
# $$
#
# We will also need the normal vector from the face, which can be calculated similarly to the face area, since the cross products define vectors normal to each triangular sub-face. To obtain a unit normal vector, each vector is scaled by its magnitude, i.e.:
#
# $$
# \mathbf{n}_i = \frac{(\mathbf{x}_{i+1} - \mathbf{x}_0) \times (\mathbf{x}_{i+2} - \mathbf{x}_0)}{\| (\mathbf{x}_{i+1} - \mathbf{x}_0) \times (\mathbf{x}_{i+2} - \mathbf{x}_0) \|}
# $$
#
# Then, similar to the centroid, an area-weighted average is used to compute the normal vector at the integration point:
#
# $$
# \mathbf{n}_{ip} = \frac{1}{A_{ip}} \sum_{i=0}^{N_c-3} A_i \mathbf{n}_{i}
# $$
#
# Note that this assumes the face is nearly planar. If there is a chance that the face is highly warped, it may be a good idea to repeat this process for each possible choice of $\mathbf{x}_0$ and average all of the results to get the best possible estimate of the normal vector.
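# The face area, centroid, and normal calculations above can be collected
# into a short routine. A minimal NumPy sketch, assuming the corners are
# given in order around a nearly planar face:

```python
import numpy as np

# Sketch of the triangle-fan face geometry described above.
def face_geometry(corners):
    """corners: (Nc, 3) array of corner coordinates, ordered around the face."""
    x0 = corners[0]
    area = 0.0
    centroid = np.zeros(3)
    normal = np.zeros(3)
    for i in range(len(corners) - 2):
        cross = np.cross(corners[i + 1] - x0, corners[i + 2] - x0)
        a_i = 0.5 * np.linalg.norm(cross)              # triangle area
        area += a_i
        centroid += a_i * (x0 + corners[i + 1] + corners[i + 2]) / 3.0
        normal += a_i * cross / np.linalg.norm(cross)  # area-weighted unit normal
    return area, centroid / area, normal / area

# Unit square in the x-y plane: area 1, centroid (0.5, 0.5, 0), normal +z
square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
print(face_geometry(square))
```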
#
# Now that all of the face geometry is defined, we can calculate the geometry of the cell. The volume of the cell is defined as:
#
# $$
# V_P = \int_V dV
# $$
#
# Ideally, we want to relate this integral to the face geometry. An interesting trick can be applied to this integral by noting the following:
#
# $$
# \nabla\cdot\mathbf{x}
# = \frac{\partial x}{\partial x} + \frac{\partial y}{\partial y} + \frac{\partial z}{\partial z}
# = 3
# $$
#
# As a result, the volume integral can be re-written as:
#
# $$
# V_P = \frac{1}{3} \int_V \nabla\cdot\mathbf{x} dV
# $$
#
# Essentially, we have just multiplied and divided the equation by the value 3, which has no effect. However, this has introduced a divergence operator into the volume integral that can be transformed into a surface integral by Gauss' theorem:
#
# $$
# V_P = \frac{1}{3} \int_S \mathbf{x}\cdot\mathbf{n} dS
# $$
#
# where $\mathbf{n}$ is the unit normal vector directed away from the surface of the control volume. This can then be replaced by a discrete summation over all of the integration points:
#
# $$
# V_P = \frac{1}{3} \sum_{ip=0}^{N_{ip}-1} \mathbf{x}_{ip}\cdot\mathbf{n}_{ip} A_{ip}
# $$
#
# All of the quantities in the summation are properties of the face geometry that are known. Therefore, the volume of the cell can be calculated in this way.
#
# The definition of the centroid of the volume $P$ is given as:
#
# $$
# \mathbf{x}_P = \frac{1}{V_P} \int_V \mathbf{x} dV
# $$
#
# Once again, we would like to express this as the divergence of a vector such that we can convert it into a surface integral depending on quantities at the integration point. Once again, we perform a particular manipulation to the equation to accomplish this. In this case, consider the following:
#
# $$
# \nabla\cdot(\mathbf{x}\mathbf{x})
# = \mathbf{x}\nabla\cdot\mathbf{x} + \mathbf{x}\cdot\nabla\mathbf{x}
# $$
#
# In the first term on the right side of the equation above, we have $\nabla\cdot\mathbf{x}$, which we have already shown is equal to 3. Therefore, we can say:
#
# $$
# \nabla\cdot(\mathbf{x}\mathbf{x})
# = 3\mathbf{x} + \mathbf{x}\cdot\nabla\mathbf{x}
# $$
#
# Expanding the second term on the right side of the equation above:
#
# $$
# \mathbf{x}\cdot\nabla\mathbf{x}
# = \left(x\frac{\partial}{\partial x} + y\frac{\partial}{\partial y} + z\frac{\partial}{\partial z}\right)\mathbf{x}
# = \left(x\frac{\partial \mathbf{x}}{\partial x} + y\frac{\partial \mathbf{x}}{\partial y} + z\frac{\partial \mathbf{x}}{\partial z}\right)
# = \left(x\left[\begin{matrix} 1 \\ 0 \\ 0\end{matrix}\right] + y\left[\begin{matrix} 0 \\ 1 \\ 0\end{matrix}\right] + z\left[\begin{matrix} 0 \\ 0 \\ 1\end{matrix}\right]\right)
# = \left[\begin{matrix} x \\ y \\ z\end{matrix}\right]
# = \mathbf{x}
# $$
#
# Therefore:
#
# $$
# \nabla\cdot(\mathbf{x}\mathbf{x}) = 4\mathbf{x}
# $$
#
# We can then rewrite the expression for the cell centroid as:
#
# $$
# \mathbf{x}_P
# = \frac{1}{4 V_P} \int_V \nabla\cdot(\mathbf{x}\mathbf{x}) dV
# = \frac{1}{4 V_P} \int_S (\mathbf{x}\mathbf{x})\cdot\mathbf{n} dS
# $$
#
# Expressing as a discrete summation over the integration point faces:
#
# $$
# \mathbf{x}_P
# = \frac{1}{4 V_P} \sum_{ip=0}^{N_{ip}-1} \mathbf{x}_{ip}\mathbf{x}_{ip}\cdot\mathbf{n}_{ip} A_{ip}
# $$
#
# This defines all of the required face and cell geometry required for unstructured grid calculations.
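# The volume and centroid formulas can be checked on a simple cell. A
# minimal sketch for a unit cube whose face centroids, outward normals,
# and areas are written out by hand (hypothetical inputs, not a mesh
# data structure):

```python
import numpy as np

# Sketch of the volume and centroid summations above, for a unit cube.
face_centroids = np.array([
    [0.5, 0.5, 0.0], [0.5, 0.5, 1.0],   # bottom, top
    [0.5, 0.0, 0.5], [0.5, 1.0, 0.5],   # front, back
    [0.0, 0.5, 0.5], [1.0, 0.5, 0.5],   # left, right
])
face_normals = np.array([
    [0.0, 0.0, -1.0], [0.0, 0.0, 1.0],
    [0.0, -1.0, 0.0], [0.0, 1.0, 0.0],
    [-1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
])
face_areas = np.ones(6)

# x_ip . n_ip for each face
dots = np.einsum('ij,ij->i', face_centroids, face_normals)
V_P = np.sum(dots * face_areas) / 3.0                       # volume: 1.0
x_P = face_centroids.T @ (dots * face_areas) / (4.0 * V_P)  # centroid: (0.5, 0.5, 0.5)
print(V_P, x_P)
```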
#
# ### Interpolations
#
# In order to perform interpolations on the grid, we define the following points associated with a particular control volume face:
#
# Label | Description |
# :-----:| :-----------------------------------------------------------:|
# $P$ | Control volume under consideration |
# $nb$ | Neighbouring control volume sharing the face containing $ip$ |
# $ip$ | Integration point location (face centroid) |
# $f$ | Point along the vector connecting $P$ to $nb$ |
#
# Then, we define the following displacement vectors:
#
# Label | Description |
# :------------------:| :-----------------------------------:|
# $\mathbf{D}_{P,nb}$ | Displacement vector from $P$ to $nb$ |
# $\mathbf{D}_{f,ip}$ | Displacement vector from $f$ to $ip$ |
#
# These are illustrated further in the diagram below:
#
# 
#
# In practice, $f$ could be located anywhere along the vector $\mathbf{D}_{P,nb}$, but the best practice is to place it such that $\mathbf{D}_{f,ip}$ is perpendicular to $\mathbf{D}_{P,nb}$, i.e. $\mathbf{D}_{f,ip}\cdot\mathbf{D}_{P,nb}=0$. This minimizes the size of $\mathbf{D}_{f,ip}$, which minimizes the size of the gradient correction term, which will be shown below. Also, if the grid happens to be orthogonal, this ensures that $\mathbf{D}_{f,ip}$ is exactly zero.
#
# Based on the placement of $f$ we define the quantity $f_{ip}$ to represent the location of $f$ as a function of $\mathbf{D}_{P,nb}$:
#
# $$
# \mathbf{x}_f = \mathbf{x}_P + f_{ip}\mathbf{D}_{P,nb}
# $$
#
# A general second order interpolation of a value $\phi$ to the integration point can be formulated as:
#
# $$
# \phi_{ip} = (1-f_{ip})\phi_P + f_{ip}\phi_{nb}
# + \mathbf{D}_{f,ip}\cdot\left[(1-f_{ip})\left.\nabla\phi\right|_P + f_{ip}\left.\nabla\phi\right|_{nb}\right]
# $$
#
# where the first term is an inverse distance interpolation to the point $f$ and the second term is a non-orthogonal correction from $f$ to $ip$. In the non-orthogonal correction term, the gradient at $f$ is estimated based on an inverse distance interpolation along $\mathbf{D}_{P,nb}$.
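# A minimal sketch of this interpolation for a single face, with made-up
# input values:

```python
import numpy as np

# Sketch of the second-order face interpolation above; inputs are
# hypothetical values for one face.
def interpolate_to_ip(phi_P, phi_nb, grad_P, grad_nb, f_ip, D_f_ip):
    phi_f = (1 - f_ip) * phi_P + f_ip * phi_nb     # inverse-distance value at f
    grad_f = (1 - f_ip) * grad_P + f_ip * grad_nb  # interpolated gradient at f
    return phi_f + np.dot(D_f_ip, grad_f)          # non-orthogonal correction f -> ip

# On an orthogonal grid D_f_ip = 0, so the correction term vanishes:
print(interpolate_to_ip(1.0, 2.0, np.array([1.0, 0.0, 0.0]),
                        np.array([1.0, 0.0, 0.0]), 0.5, np.zeros(3)))  # -> 1.5
```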
#
# It is now clear that we need to know the gradients of all variables in order to perform interpolations on the grid. This will be considered next.
#
# ### Gradient Reconstruction
#
# There are several different ways that the gradient can be reconstructed. Here we will focus on Gauss-based methods, since they are relatively simple to explain. However, it is worth noting that there are methods based on least-squares that are both popular and effective.
#
# The Gauss-based gradient reconstruction method is based on Gauss' theorem, which allows us to write:
#
# $$
# \int_V \nabla\phi dV = \int_S \phi\mathbf{n} dS
# $$
#
# If we assume $\nabla\phi$ to be piecewise constant in each cell, the equation above can be re-written for the cell $P$ as:
#
# $$
# \left.\nabla\phi\right|_P V_P = \sum_{ip=0}^{N_{ip}-1} \phi_{ip}\mathbf{n}_{ip} A_{ip}
# $$
#
# where the surface integral has been replaced with a discrete summation. Solving for the gradient:
#
# $$
# \left.\nabla\phi\right|_P
# = \frac{1}{V_P} \sum_{ip=0}^{N_{ip}-1} \phi_{ip}\mathbf{n}_{ip} A_{ip}
# $$
#
# The problem with the expression above is that the interpolation of $\phi_{ip}$ depends on the gradient, making this a non-linear system. There are a few solutions to this problem:
#
# - Ignore the gradients in the calculation of $\phi_{ip}$ and simply use the inverse distance approximation based on $\phi_P$ and $\phi_{nb}$. This is not very accurate, since the gradients will be at most first order accurate. This is the current default behaviour in ANSYS Fluent's "Green-Gauss Cell-Based" method (in fact they assume $f_{ip}=0.5$, so the values are not even accurate on an orthogonal grid).
# - Include the most recent gradients in the calculation of $\phi_{ip}$, and solve the system iteratively until it converges. This method can be computationally expensive, since the gradients may need to be updated many times. The convergence of this method has been observed to be quite poor (it may diverge easily), so it is not recommended.
# - Calculate the gradients in multiple stages, first using no gradient corrections. This gives a low-order estimate of the gradients. Then, repeat the process using the first-pass gradients in the correction term. Optionally, the process can be repeated a few times with the previous gradient solution used in the correction term. For convergence of this method, it is important to always use the gradients from the last complete gradient evaluation; since the order of update can be quite random, using a mix of some new and some old values causes instability in the solution. This method is fairly efficient if the number of gradient updates is kept low (e.g. a maximum of two or three updates after the initial pass), and the accuracy is much better than ignoring the gradient correction altogether.
# - Create a linear system so that all of the gradients can be solved simultaneously. This method is effective, but requires a significant amount of additional memory, additional coding complexity, and can be computationally expensive.
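#
# The staged (multi-pass) approach can be sketched as follows. This is a minimal illustrative implementation, not the course code: the mesh data structures (`faces`, `bfaces`, `nA`, etc.) are assumptions made for the example, and a simple arithmetic face average stands in for the inverse-distance interpolation.

```python
import numpy as np

def green_gauss_gradients(phi, faces, bfaces, xc, xf, nA, vol, n_passes=3):
    # phi    : (ncell,) cell-centre values
    # faces  : interior faces as (P, nb) cell-index pairs
    # bfaces : boundary faces as (P, phi_b, nA_b) tuples
    # xc, xf : cell-centre / interior-face-centre coordinates, shape (*, dim)
    # nA     : outward normal times area (w.r.t. cell P) per interior face
    # vol    : (ncell,) cell volumes
    ncell, dim = xc.shape
    grad = np.zeros((ncell, dim))  # first pass uses no gradient correction
    for _ in range(n_passes):
        acc = np.zeros((ncell, dim))
        for k, (P, nb) in enumerate(faces):
            x_mid = 0.5 * (xc[P] + xc[nb])
            # face value: simple average plus a skewness correction that
            # always uses gradients from the last complete evaluation
            phi_ip = 0.5 * (phi[P] + phi[nb]) \
                   + 0.5 * (grad[P] + grad[nb]) @ (xf[k] - x_mid)
            acc[P] += phi_ip * nA[k]
            acc[nb] -= phi_ip * nA[k]
        for P, phi_b, nA_b in bfaces:
            acc[P] += phi_b * nA_b  # boundary faces use the boundary value
        grad = acc / vol[:, None]
    return grad
```

# On a uniform 1D mesh with a linear field the routine recovers the exact gradient, which is a useful sanity check before moving to skewed grids.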
#
# ### Discretization Details
#
# The transient and source terms (including pressure source terms) can be treated similarly to structured grids, since they are volume-based terms. Advection and diffusion terms are surface terms, and therefore must be considered differently for non-orthogonal unstructured grids. These details will be discussed briefly below.
#
# ### Advection Terms
#
# The discretized advection term from Lesson 1 was given as:
#
# $$
# \int_V \nabla\cdot\left(\mathbf{u}\phi\right) dV
# = \sum_{i=0}^{N_{ip}-1} \mathbf{u}_{ip}\cdot\mathbf{n}_{ip}\phi_{ip} A_{ip}
# $$
#
# In general, the value $\mathbf{u}_{ip}$ is replaced by the advecting velocity $\hat{\mathbf{u}}_{ip}$, as discussed in Lesson 5. Although it won't be discussed here, a similar definition for $\hat{\mathbf{u}}_{ip}$ can be developed for unstructured grids. Interpolation of $\phi_{ip}$ carries some of the same stability constraints as in one dimension. Therefore it is common to use a second-order upwind interpolation, defined by:
#
# $$
# \phi_{ip} = \phi_u + \left.\nabla\phi\right|_u\cdot\mathbf{D}_{ip,u}
# $$
#
# where $u$ represents the upwind cell, i.e.:
#
# $$
# u = \left\{ \begin{matrix} P \text{ if } \dot{m}_{ip} \ge 0 \\ nb \text{ if } \dot{m}_{ip} < 0\end{matrix} \right.
# $$
#
# Linearization is carried out with respect to $\phi_u$, which ensures the linearization coefficients have the correct sign, similar to the UDS scheme discussed previously. There are of course many other possible advection schemes, including CDS and QUICK, which can be adapted to general unstructured grids. It should be noted that sometimes a "flux limiter" needs to be applied to ensure the integration point value is bounded by the surrounding cell values. This is particularly important for flows with discontinuities (e.g. shocks), but is also useful to compensate for the fact that gradients may not always be accurate and could cause unbounded face values to occur.
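#
# A minimal sketch of the second-order upwind face value with a crude bounding limiter is shown below; the function name and signature are illustrative assumptions, not from any particular code, and the "limiter" here simply clamps the face value to the range of the neighbouring cell values.

```python
import numpy as np

def sou_face_value(mdot, phi_P, phi_nb, grad_P, grad_nb,
                   D_P_ip, D_nb_ip, limit=True):
    # D_P_ip / D_nb_ip: vectors from the respective cell centre to the
    # integration point; mdot >= 0 means flow from P towards nb
    if mdot >= 0:
        phi_ip = phi_P + np.dot(grad_P, D_P_ip)   # upwind cell is P
    else:
        phi_ip = phi_nb + np.dot(grad_nb, D_nb_ip)  # upwind cell is nb
    if limit:
        # bound the face value by the surrounding cell values
        lo, hi = min(phi_P, phi_nb), max(phi_P, phi_nb)
        phi_ip = min(max(phi_ip, lo), hi)
    return phi_ip
```

# For a linear field the reconstruction is exact from either side, while an overestimated gradient is caught by the clamp.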
#
#
# ### Diffusion Terms
#
# From Lesson 1, the discretized diffusion term was given as:
#
# $$
# \int_V\nabla\cdot\mathbf{J}_\phi dV
# = \sum_{i=0}^{N_{ip}-1} \mathbf{J}_{\phi,ip}\cdot\mathbf{n}_{ip}A_{ip}
# $$
#
# Typically, the diffusive flux $\mathbf{J}$ is proportional to the gradient of $\phi$, e.g. Fourier's law or Fick's law. Therefore we assume:
#
# $$
# \mathbf{J}_{\phi,ip} = -\Gamma_P\left.\nabla\phi\right|_{ip}
# $$
#
# Calculation of the gradient at the integration point is based on the enforcement of a continuous flux across all faces. The continuity of diffusive flux at a face is expressed mathematically as
#
# $$
# \Gamma_P \left. \nabla \phi \right|_{ip,P} \cdot \mathbf{n}_{ip}
# = \Gamma_{nb} \left. \nabla \phi \right|_{ip,nb} \cdot \mathbf{n}_{ip}
# $$
#
# The derivatives normal to the integration point are computed by extrapolating from the cell-centers to a point located on a line that intersects $ip$ and is normal to the control surface, as shown in the figure below.
#
# 
#
# A finite-difference approximation can then be used to evaluate the normal derivative along this line as:
#
# $$
# \left. \nabla \phi \right|_{ip,P} \cdot \mathbf{n}_{ip}
# = \frac{\phi_{ip} - \left[ \phi_P + \left. \nabla \phi \right|_P
# \cdot (\mathbf{D}_{P,ip} - (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip})\mathbf{n}_{ip} ) \right]}{\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}}
# $$
#
# Forming a similar expression for the control volume $nb$ and equating the two through the flux balance expression, results in the following expression for the integration point value that satisfies the heat flux from both sides of the control surface:
#
# $$
# \phi_{ip} =
# \frac{\Gamma_{nb} (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip})}
# {\Gamma_{nb} (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}) - \Gamma_P (\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})} \phi_{nb}
# - \frac{\Gamma_P (\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})}
# {\Gamma_{nb} (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}) - \Gamma_P (\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})} \phi_P \\
# + \frac{\Gamma_{nb} (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip})(\mathbf{D}_{nb,ip} - (\mathbf{D}_{nb,ip}\cdot\mathbf{n}_{ip})\mathbf{n}_{ip})}
# {\Gamma_{nb} (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}) - \Gamma_P (\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})} \cdot \left. \nabla \phi \right|_{nb} \\
# - \frac{\Gamma_P (\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})(\mathbf{D}_{P,ip} - (\mathbf{D}_{P,ip}\cdot\mathbf{n}_{ip})\mathbf{n}_{ip})}
# {\Gamma_{nb} (\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}) - \Gamma_P (\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})} \cdot \left. \nabla \phi \right|_P
# $$
#
# Substituting the expression above back into the equation for the normal derivative results in the following expression for the normal derivative, in terms of the cell-centered values, which ensures a flux balance across the control surface:
#
# $$
# \left. \nabla \phi \right|_{ip,P} \cdot \mathbf{n}_{ip} =
# \frac{\phi_{nb} - \phi_P}{(\mathbf{D}_{P,ip}\cdot\mathbf{n}_{ip}) - \frac{\Gamma_P}{\Gamma_{nb}}(\mathbf{D}_{nb,ip}\cdot\mathbf{n}_{ip})}
# + \frac{(\mathbf{D}_{nb,ip}-(\mathbf{D}_{nb,ip}\cdot\mathbf{n}_{ip})\mathbf{n}_{ip})}
# {(\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}) - \frac{\Gamma_P}{\Gamma_{nb}}(\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})} \cdot\left.\nabla \phi \right|_{nb} \\
# - \frac{(\mathbf{D}_{P,ip}-(\mathbf{D}_{P,ip}\cdot\mathbf{n}_{ip})\mathbf{n}_{ip})}
# {(\mathbf{D}_{P,ip} \cdot \mathbf{n}_{ip}) - \frac{\Gamma_P}{\Gamma_{nb}}(\mathbf{D}_{nb,ip} \cdot \mathbf{n}_{ip})} \cdot\left.\nabla \phi \right|_P
# $$
#
# This expression can then be used to calculate the diffusive flux from the $P$ cell, with a similar expression being used for $nb$, which ensures the flux is consistent. The two gradient terms in the equation above account for non-orthogonality in the grid. In the limit of a completely orthogonal grid, the equation above reduces to the harmonic mean formulation of Patankar. In the limit of an orthogonal grid with constant diffusivity, $\Gamma$, this expression is identical to that used in our one-dimensional codes.
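#
# The final expression for the normal derivative on the $P$ side can be sketched directly; this is an illustrative helper under assumed argument names, where the tangential parts of the $\mathbf{D}$ vectors carry the non-orthogonal corrections.

```python
import numpy as np

def normal_derivative_P(phi_P, phi_nb, grad_P, grad_nb,
                        D_P_ip, D_nb_ip, n_ip, gamma_P, gamma_nb):
    # flux-consistent normal derivative on the P side of a face
    dPn  = np.dot(D_P_ip,  n_ip)
    dnbn = np.dot(D_nb_ip, n_ip)
    denom = dPn - (gamma_P / gamma_nb) * dnbn
    # non-orthogonal (tangential) parts of the cell-to-face vectors
    tang_P  = D_P_ip  - dPn  * n_ip
    tang_nb = D_nb_ip - dnbn * n_ip
    return ((phi_nb - phi_P)
            + np.dot(tang_nb, grad_nb)
            - np.dot(tang_P,  grad_P)) / denom
```

# On an orthogonal 1D face the tangential terms vanish, and with unequal diffusivities the flux $\Gamma_P\,\partial\phi/\partial n$ matches the harmonic-mean interface flux.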
#
# ## Summary
#
# With this lesson, you now have most of the information required if you were to extend our one dimensional code to higher dimensions. You also now have an understanding of the additional considerations that come into play in unstructured CFD codes.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
library(ggplot2)
library(tibble)
library(microbenchmark)
# ### Data generator
generate_data <- function(n=200, d=100, s=2, seed=42)
{
set.seed(seed)
beta <- numeric(length = d)
beta[1:s] <- 1
X <- matrix(rnorm(n*d), nrow=n, ncol=d)
y <- c(X %*% beta) + rnorm(n)
list(X=X, y=y, beta=beta)
}
data <- generate_data(n=1000, d=400, s=4, seed=58)
# ### Helpers
linspace <- function(x1, x2, n=100) {
stopifnot(is.numeric(x1), is.numeric(x2), length(x1)==1, length(x2)==1)
n <- floor(n)
if (n <= 1) x2
else seq(x1, x2, length.out=n)
}
hess.loss.max <- function(X)
{
hess.loss = 2 * (t(X) %*% X)
hess.ev = eigen(hess.loss)$values
return(max(hess.ev))
}
l1.norm <- function(a) {sum(abs(a))}
l2.norm <- function(a) {sqrt(sum(a*a))}
loss.lasso <- function(X, y, beta, lam){0.5*l2.norm(X%*%beta-y)**2+lam*l1.norm(beta)}
backtrack <- function(f, grad, x, eta, k)
{
i <- 1
alpha <- c(1)
search_vec <- -grad
t <- -k * t(search_vec)%*%grad
while (f(x)-f(x+alpha[i]*search_vec)<alpha[i]*t)
{
alpha <- c(alpha, eta*alpha[i])
i <- i+1
}
return(tail(alpha, 1))
}
# ### GD steps
gd_step.sub_grad <- function(X, y, beta_, t, lam) {beta_-t* (t(X)%*%(X%*%beta_-y)+lam*sign(beta_))}
prox.l1 <- function(x, lam){sign(x)*pmax(abs(x)-lam, 0)}
gd_step.prox <- function(X, y, beta_, t, lam) {prox.l1(beta_-t*t(X)%*%(X%*%beta_-y), lam*t)}
gd_step.acc_prox <- function(X, y, beta_, beta_prev, t, lam, k)
{
v <- beta_+(k/(k+3))*(beta_-beta_prev)
return(prox.l1(v-t*t(X)%*%(X%*%v-y), lam*t))
}
# ### Fitters
fit.lasso <- function(alg, X, y,
beta_true=NULL, lam=3, max_iter=50, early_stop=0, eta=0.1, k=0.3, back_track=FALSE, t=1)
{
d <- dim(X)[2]
beta = numeric(length=d)
betas = list(beta, beta)
loss <- NULL
error <- NULL
iter <- 0
t<-t/hess.loss.max(X)
while(iter<max_iter)
{
iter <- iter+1
y_hat <- X%*%beta
if(back_track)
{t<-backtrack(function(b){loss.lasso(X, y, b, lam)}, t(X)%*%(X%*%beta-y)+lam*sign(beta), beta, eta, k)}
if(alg=='prox'){beta <- gd_step.prox(X, y, beta, t, lam)}
else if(alg=='sub_grad'){beta <- gd_step.sub_grad(X, y, beta, t, lam)}
else if(alg=='acc_prox')
{
betas <- append(betas, list(gd_step.acc_prox(X, y, rev(betas)[[1]], rev(betas)[[2]], t, lam, iter)))
if(length(betas)>2){betas<-betas[-1]}
beta <- rev(betas)[[1]]
}
else {stop('invalid algorithm')}
loss <- c(loss, loss.lasso(X, y, beta, lam))
if(!is.null(beta_true))
{
log_loss <- log(tail(loss, 1))-log(loss.lasso(X, y, beta_true, lam))
error <- c(error, log_loss)
}
if(iter==early_stop){break}
}
return(list(y_pred=X%*%beta, beta=beta, loss=loss/length(y), error=error))
}
data <- generate_data(n=400, d=400, s=300, seed=58)
model.prox <- fit.lasso('prox', data$X, data$y, data$beta)
model.sub_grad <- fit.lasso('sub_grad', data$X, data$y, data$beta)
model.acc_prox <- fit.lasso('acc_prox', data$X, data$y, data$beta)
prox.overfit <- which.max(model.prox$error<0)
sub_grad.overfit <- which.max(model.sub_grad$error<0)
acc_prox.overfit <- which.max(model.acc_prox$error<0)
df <- tibble(n.iter = 1:length(model.prox$error),
prox.error = model.prox$error,
sub_grad.error = model.sub_grad$error,
acc_prox.error = model.acc_prox$error)
# pdf('30conv4.pdf', width=10, height=6)
ggplot(data=df, aes(x=n.iter)) +
geom_line(aes(y=prox.error, color='proximal'), lwd=1.5) +
geom_line(aes(y=sub_grad.error, color='subgradient'), lwd=1.5) +
geom_line(aes(y=acc_prox.error, color='accelerated proximal'), lwd=1.5) +
ylab('Log error') +
xlab('epoch') +
theme_bw() +
theme(legend.position='bottom', text = element_text(size = 16))
# dev.off()  # only call this after opening the pdf() device above
L <- hess.loss.max(data$X)
1/L
n.reps <- 1000
bench <- summary(microbenchmark(
gd_step.prox(data$X, data$y, numeric(length=dim(data$X)[2]), 0.01, 1),
gd_step.sub_grad(data$X, data$y, numeric(length=dim(data$X)[2]), 0.01, 1),
gd_step.acc_prox(data$X, data$y, numeric(length=dim(data$X)[2]), numeric(length=dim(data$X)[2])+1/dim(data$X)[2],0.01, 0.1, 2),
unit='ms',
times=n.reps))
print(tibble(iter=bench$mean,
overfit=c(prox.overfit, sub_grad.overfit, acc_prox.overfit),
total=c(prox.overfit*bench$mean[1],
sub_grad.overfit*bench$mean[2],
acc_prox.overfit*bench$mean[3])))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Session 2: Working with Basic Operators
#
# > In this session we learn about Spark's "basic operators" and the "DataFrame"
#
# ## Contents
# * [1. Basic Operators](#1.-Basic-Operators)
# - [1.1 DataFrame Functions](#1.1-DataFrame-Functions)
# - [1.2 Column Functions](#1.2-Column-Functions)
# - [1.3 Other Functions](#1.3-Other-Functions)
# * [2. Characteristics of RDDs](#2.-Characteristics-of-RDDs)
# - [2.1 Data Transformation with RDDs](#2.1-Data-Transformation-with-RDDs)
# - [2.2 Data Transformation with the Structured API](#2.2-Data-Transformation-with-the-Structured-API)
# * [3. Data Types](#3.-Data-Types)
# * [4. Core DataFrame Operators](#4.-Core-DataFrame-Operators)
# - [4.1 Creating a Table from a File](#4.1-Creating-a-Table-from-a-File)
# - [4.2 Selecting Columns (select, selectExpr)](#4.2-Selecting-Columns-(select,-selectExpr))
# - [4.3 Using Literal Values](#4.3-Using-Literal-Values)
# - [4.4 Adding Columns](#4.4-Adding-Columns)
# - [4.5 Renaming Columns](#4.5-Renaming-Columns)
# - [4.6 Dropping Columns](#4.6-Dropping-Columns)
# - [4.7 Changing a Column's Data Type](#4.7-Changing-a-Column's-Data-Type)
# - [4.8 Filtering Records](#4.8-Filtering-Records)
# - [4.9 Distinct Values (DISTINCT)](#4.9-Distinct-Values-(DISTINCT))
# - [4.10 Sorting (SORT)](#4.10-Sorting-(SORT))
# - [4.11 Limiting Rows (LIMIT)](#4.11-Limiting-Rows-(LIMIT))
# * [5. Other DataFrame Operators](#5.-Other-DataFrame-Operators)
# - [5.1 Advantages of Defining a Schema in Advance](#5.1-Advantages-of-Defining-a-Schema-in-Advance)
# - [5.2 Two Ways to Define a Schema](#5.2-Two-Ways-to-Define-a-Schema)
# - [5.3 Nested Array Schemas](#5.3-Nested-Array-Schemas)
# - [5.4 Columns and Expressions](#5.4-Columns-and-Expressions)
# - [5.5 Creating and Working with Rows](#5.5-Creating-and-Working-with-Rows)
# - [5.6 Saving Parquet Files or Tables](#5.6-Saving-Parquet-Files-or-Tables)
# - [5.7 Projections and Filters](#5.7-Projections-and-Filters)
# - [5.8 Date Functions](#5.8-Date-Functions)
# * [6. The Dataset API](#6.-The-Dataset-API)
# - [6.1 Datasets vs. DataFrames](#6.1-Datasets-vs.-DataFrames)
# - [6.2 Datasets, DataFrames and RDDs](#6.2-Datasets,-DataFrames-and-RDDs)
# * [7. The Catalyst Optimizer](#7.-The-Catalyst-Optimizer)
# - [7.1 Analysis](#7.1-Analysis)
# - [7.2 Logical Optimization](#7.2-Logical-Optimization)
# - [7.3 Physical Planning](#7.3-Physical-Planning)
# - [7.4 Code Generation](#7.4-Code-Generation)
# * [8. Exercises](#8.-Exercises)
# * [References](#References)
# +
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.types import *
from IPython.display import display, display_pretty, clear_output, JSON
spark = (
SparkSession
.builder
.config("spark.sql.session.timeZone", "Asia/Seoul")
.getOrCreate()
)
# Configure table-style output of DataFrames in the notebook
spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # display enabled
spark.conf.set("spark.sql.repl.eagerEval.truncate", 100) # display output columns size
# -
# ## 1. Basic Operators
# ---
# ### 1.1 DataFrame Functions
# | Function | Description | Notes |
# | - | - | - |
# | df.printSchema() | Prints the schema. | - |
# | df.schema | Returns the StructType schema | - |
# | df.columns | Returns the column names | - |
# | df.show(n) | Displays n rows | - |
# | df.first() | Returns the first Row of the DataFrame | - |
# | df.head(n) | Returns the first n Rows of the DataFrame | - |
# | df.createOrReplaceTempView | Creates a temporary view | - |
# | df.union(newdf) | Performs a union of two DataFrames | - |
# | df.limit(n) | Limits the number of rows returned | T |
# | df.repartition(n) | Redistributes partitions; triggers a shuffle | - |
# | df.coalesce() | Merges partitions without shuffling | Effectively reduces the number of reducers in the final stage, so watch out for performance degradation |
# | df.collect() | Collects and returns all data | A |
# | df.take(n) | Returns the top n rows | A |
#
# ---
# ### 1.2 Column Functions
# | Function | Description | Notes |
# | - | - | - |
# | df.select | Takes columns or expressions | - |
# | df.selectExpr | Takes string expressions = df.select(expr()) | - |
# | df.withColumn(name, expression) | Add a column, compare, or rename | - |
# | df.withColumnRenamed(old_name, new_name) | Rename a column | - |
# | df.drop() | Drop a column | - |
# | df.where | Filter rows | - |
# | df.filter | Filter rows | - |
# | df.sort, df.orderBy | Sort | - |
# | df.sortWithinPartitions | Sort within each partition | - |
#
# ---
# ### 1.3 Other Functions
# | Function | Description | Notes |
# | - | - | - |
# | expr("someCol - 5") | Expression | - |
# | lit() | Literal | - |
# | cast() | Change a column's data type | - |
# | distinct() | Unique rows | - |
# | desc(), asc() | Sort order | - |
#
# ## 2. Characteristics of RDDs
#
# | Feature | Description | Notes |
# |---|---|---|
# | dependencies | resiliency | By keeping dependency information as a lineage, an RDD can be recomputed at any time, giving it resiliency |
# | partitions | parallelize computation | Data is stored and managed in partitions, which makes parallel processing possible |
# | compute function | Iterator\[T\] | A function can be applied to all data stored in an RDD through an iterator |
#
# * On the other hand, Spark cannot see inside the compute function, which makes errors hard to diagnose; in a scripting language such as Python, records are only recognized as generic objects, which limits interoperability; and objects of type T are merely serialized and passed along, so Spark knows nothing about the data type T
#
# > Compare transforming data through RDDs with transforming it through the structured APIs: the high-level DSL operators allow a much simpler expression of the same logic.
#
# ### 2.1 Data Transformation with RDDs (for reference)
dataRDD = spark.sparkContext.parallelize([("Cat", 30), ("Dog", 28), ("Monkey", 28), ("Cat", 24), ("Dog", 10)])
agesRDD = (
dataRDD.map(lambda x: (x[0], (x[1], 1)))
.reduceByKey(lambda v1, v2: (v1[0] + v2[0], v1[1] + v2[1]))
.map(lambda v: (v[0], v[1][0]/v[1][1]))
)
agesRDD.toDF(["Name", "Age"]).show()
# ### 2.2 Data Transformation with the Structured API
spark = SparkSession.builder.appName("Average Animal Lifespan").getOrCreate()
animal = spark.createDataFrame([("Cat", 30), ("Dog", 28), ("Monkey", 28), ("Cat", 24), ("Dog", 10)], ["Name", "Age"])
ages = animal.select("Name", "Age").groupBy("Name").agg(avg("Age").alias("Age"))
ages.show(truncate=False)
# ## 3. Data Types
# > DataFrames are immutable and maintain the lineage of all transformations; changing or adding columns creates a new DataFrame.
#
# | python | scala |
# |---|---|
# |  |  |
# |  |  |
#
# ## 4. Core DataFrame Operators
#
# ### 4.1 Creating a Table from a File
# +
print("# Reading raw data or running a Spark SQL query always produces a DataFrame")
df = spark.read.json("data/flight-data/json/2015-summary.json")
df.createOrReplaceTempView("2015_summary")
sql_result = spark.sql("SELECT * FROM 2015_summary").show(5)
# -
# ### 4.2 Selecting Columns (select, selectExpr)
# > In all of the examples below, columns could also be selected with select(col("name")), but **selectExpr("name") is more concise, so we will use expressions wherever possible from now on** <br>
# In a column expression, each column must be expressed separately. <br>
# Correct: "col1", "col2" <br>
# Incorrect: "col1, col2"
# +
from pyspark.sql.functions import *
print("# select accepts only columns; to use functions or other expressions you must import pyspark.sql.functions and understand how each function behaves")
df.select(upper(col("DEST_COUNTRY_NAME")), "ORIGIN_COUNTRY_NAME").show(2)
print("# selectExpr accepts any expression without additional imports")
df.selectExpr("upper(DEST_COUNTRY_NAME)", "ORIGIN_COUNTRY_NAME").show(2)
# +
print("# Column aliases and * for all columns can also be used")
df.selectExpr("DEST_COUNTRY_NAME as newColmnName", "DEST_COUNTRY_NAME").show(2)
df.selectExpr("*", "(DEST_COUNTRY_NAME = ORIGIN_COUNTRY_NAME) as withinCountry").show(2)
# -
# ### <font color=green>1. [Basic]</font> Read the JSON data at "data/flight-data/json/2015-summary.json" and
# #### 1. Print the schema
# #### 2. Show 10 rows of data
# #### 3. Using selectExpr or a Spark SQL query, select two columns: DEST_COUNTRY_NAME in upper case and ORIGIN_COUNTRY_NAME in lower case
#
# <details><summary>[Exercise 1] Check the expected output </summary>
#
# > If your code is written along the lines of the snippet below, it is correct
#
#
# ```python
# df1 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .json("data/flight-data/json/2015-summary.json")
# )
#
# df1.printSchema()
# answer = df1.createOrReplaceTempView("2015_summary")
# spark.sql("select upper(DEST_COUNTRY_NAME), lower(ORIGIN_COUNTRY_NAME) from 2015_summary").show(10)
# ```
#
# </details>
#
# +
# Write your exercise code here and run it (Shift+Enter)
df1 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.json("data/flight-data/json/2015-summary.json")
)
df1.printSchema()
answer = df1.createOrReplaceTempView("2015_summary")
spark.sql("select upper(DEST_COUNTRY_NAME), lower(ORIGIN_COUNTRY_NAME) from 2015_summary").show(10)
# -
# ### 4.3 Using Literal Values
# +
# Add a constant-valued column using lit (a literal)
from pyspark.sql.functions import lit
# df.select(expr("*"), lit(1).alias("One")).show(2)
df.selectExpr("*", "1 as One").show(2)
# -
# ### 4.4 Adding Columns
print("# Add a column with withColumn(name, expression)")
df.withColumn("numberOne", lit(1)).show(2)
print("# Return a boolean value by comparing two columns")
df.withColumn("withinCountry", expr("ORIGIN_COUNTRY_NAME == DEST_COUNTRY_NAME")).show(2)
# +
print("# Create a new column from an existing column via an expression (the original column could then be dropped)")
before = df
before.printSchema()
after = before.withColumn("Destination", expr("DEST_COUNTRY_NAME"))
after.printSchema()
# -
# ### <font color=green>2. [Basic]</font> Read the JSON data at "data/flight-data/json/2015-summary.json" and
# #### 1. Print the schema
# #### 2. Show 10 rows of data
# #### 3. Keeping the existing columns, add ORIGIN_COUNTRY_NAME_LOWER and DEST_COUNTRY_NAME_UPPER (the lower-cased ORIGIN_COUNTRY_NAME and upper-cased DEST_COUNTRY_NAME) and output all four columns
#
# <details><summary>[Exercise 2] Check the expected output </summary>
#
# > If your code is written along the lines of the snippet below, it is correct
#
#
# ```python
# df2 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .json("data/flight-data/json/2015-summary.json")
# )
#
# df2.printSchema()
# answer = df2.createOrReplaceTempView("2015_summary")
# spark.sql("""select
# ORIGIN_COUNTRY_NAME,
# DEST_COUNTRY_NAME,
# lower(ORIGIN_COUNTRY_NAME) as ORIGIN_COUNTRY_NAME_LOWER,
# upper(DEST_COUNTRY_NAME) as DEST_COUNTRY_NAME_UPPER
# from 2015_summary"""
# ).show(10)
# ```
#
# </details>
#
# +
# Write your exercise code here and run it (Shift+Enter)
df2 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.json("data/flight-data/json/2015-summary.json")
)
df2.printSchema()
answer = df2.createOrReplaceTempView("2015_summary")
spark.sql("""select
ORIGIN_COUNTRY_NAME,
DEST_COUNTRY_NAME,
lower(ORIGIN_COUNTRY_NAME) as ORIGIN_COUNTRY_NAME_LOWER,
upper(DEST_COUNTRY_NAME) as DEST_COUNTRY_NAME_UPPER
from 2015_summary"""
).show(10)
# -
# ### 4.5 Renaming Columns
print("# Rename a column")
df.withColumnRenamed("DEST_COUNTRY_NAME", "Destination").columns
# ### 4.6 Dropping Columns
print("# Drop a specific column")
df.printSchema()
df.drop("ORIGIN_COUNTRY_NAME").columns
# +
print("# By default Spark is case-insensitive about column names, but this can be changed via a configuration option")
spark.conf.set('spark.sql.caseSensitive', True)
caseSensitive = df.drop("dest_country_name")
caseSensitive.printSchema()
spark.conf.set('spark.sql.caseSensitive', False)
caseInsensitive = df.drop("dest_country_name")
caseInsensitive.printSchema()
# -
print("# Several columns can be dropped at once")
df.printSchema()
df.drop("ORIGIN_COUNTRY_NAME", "DEST_COUNTRY_NAME").columns # drop several columns
# ### <font color=blue>3. [Intermediate]</font> Read the JSON data at "data/flight-data/json/2015-summary.json" and
# #### 1. Rename the ORIGIN_COUNTRY_NAME column to Origin
# #### 2. Add a DestUpper column containing DEST_COUNTRY_NAME converted to upper case
# #### 3. Drop the DEST_COUNTRY_NAME column and output the data of the DataFrame with only the Origin and DestUpper columns remaining
# #### 4. Print the final schema
#
# <details><summary>[Exercise 3] Check the expected output </summary>
#
# > If your code is written along the lines of the snippet below, it is correct
#
#
# ```python
# df3 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .json("data/flight-data/json/2015-summary.json")
# )
#
# df3.printSchema()
# answer = (df3
# .withColumnRenamed("ORIGIN_COUNTRY_NAME", "Origin")
# .withColumn("DestUpper", upper("DEST_COUNTRY_NAME"))
# .drop("DEST_COUNTRY_NAME", "count")
# )
# answer.show()
# answer.printSchema()
# ```
#
# </details>
#
# +
# Write your exercise code here and run it (Shift+Enter)
df3 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.json("data/flight-data/json/2015-summary.json")
)
df3.printSchema()
answer = (df3
.withColumnRenamed("ORIGIN_COUNTRY_NAME", "Origin")
.withColumn("DestUpper", upper("DEST_COUNTRY_NAME"))
.drop("DEST_COUNTRY_NAME", "count")
)
answer.show()
answer.printSchema()
# -
# ### 4.7 Changing a Column's Data Type
# +
print("# Change a column's data type")
df.printSchema()
int2str = df.withColumn("str_count", col("count").cast("string"))
int2str.show(5)
int2str.printSchema()
str2int = int2str.withColumn("int_count", col("str_count").cast("int"))
str2int.show(5)
str2int.printSchema()
# -
# ### 4.8 Filtering Records
# +
print("# where and filter are equivalent")
df.where("count < 2").show(2)
df.filter("count < 2").show(2)
print("# Several filters can also be applied to the same expression")
df.where(col("count") < 2).where(col("ORIGIN_COUNTRY_NAME") != "Croatia").show(2)
# -
# ### 4.9 Distinct Values (DISTINCT)
# +
""" the distinct function """
print(df.select("ORIGIN_COUNTRY_NAME", "DEST_COUNTRY_NAME").distinct().count())
print(df.select("ORIGIN_COUNTRY_NAME").distinct().count())
# -
# ### <font color=green>4. [Basic]</font> Read the JSON data at "data/flight-data/json/2015-summary.json" and
# #### 1. Print the schema
# #### 2. Show 10 rows of data
# #### 3. Output the ORIGIN_COUNTRY_NAME values whose count is at least 5000 and less than 100000, removing duplicates
#
# <details><summary>[Exercise 4] Check the expected output </summary>
#
# > If your code is written along the lines of the snippet below, it is correct
#
#
# ```python
# df4 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .json("data/flight-data/json/2015-summary.json")
# )
#
# df4.printSchema()
# df4.show(10)
# df4.selectExpr("min(count)", "max(count)").show()
# answer = df4.where(expr("count >= 5000 and count < 100000")).select("ORIGIN_COUNTRY_NAME")
# answer.distinct()
# ```
#
# </details>
#
# +
# Write your exercise code here and run it (Shift+Enter)
df4 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.json("data/flight-data/json/2015-summary.json")
)
df4.printSchema()
df4.show(10)
df4.selectExpr("min(count)", "max(count)").show()
answer = df4.where(expr("count >= 5000 and count < 100000")).select("ORIGIN_COUNTRY_NAME")
answer.distinct()
# -
# ### 4.10 Sorting (SORT)
print("# sort and orderBy have the same effect")
df.sort("count").show(2)
df.orderBy("count", "DEST_COUNTRY_NAME").show(2)
df.orderBy(col("count"), col("DEST_COUNTRY_NAME")).show(2)
from pyspark.sql.functions import *
print("# The asc_nulls_first, desc_nulls_first, asc_nulls_last and desc_nulls_last methods control where nulls appear in the sort order")
df.sort("DEST_COUNTRY_NAME").show(1)
df.sort(df["DEST_COUNTRY_NAME"].asc_nulls_first()).show(1)
df.sort(df.DEST_COUNTRY_NAME.asc_nulls_first()).show(1)
print("# When sorting, beware of reserved-word column names; using expr or the structured API explicitly is a good habit")
from pyspark.sql.functions import desc, asc
df.orderBy(df["count"].desc()).show(2)
df.orderBy(df.ORIGIN_COUNTRY_NAME.desc(), df.DEST_COUNTRY_NAME.asc()).show(2)
df.orderBy(expr("ORIGIN_COUNTRY_NAME DESC"), expr("DEST_COUNTRY_NAME ASC")).show(2)
# ### 4.11 Limiting Rows (LIMIT)
df.limit(5).show()
df.orderBy(expr("count desc")).limit(6).show()
# ### <font color=red>5. [Advanced]</font> Read the JSON data at "data/flight-data/json/2015-summary.json" and
# #### 1. Print the schema
# #### 2. Show 10 rows of data
# #### 3. Add a cnt column holding count divided by 100 (integer division); the expression is `expr("floor(count / 100)")` - hint: withColumn("name", "expression")
# #### 4. Sort by cnt in descending order; when ties occur, sort by ORIGIN_COUNTRY_NAME ascending and DEST_COUNTRY_NAME descending, then output the result
# #### 5. Limit the output to the top 10 rows and print it with the display function
#
# <details><summary>[Exercise 5] Check the expected output </summary>
#
# > If your code is written along the lines of the snippet below, it is correct
#
#
# ```python
# df5 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .json("data/flight-data/json/2015-summary.json")
# )
#
# df5.printSchema()
# answer = df5.withColumn("cnt", expr("floor(count / 100)")).orderBy(desc("cnt"), asc("ORIGIN_COUNTRY_NAME"), desc("DEST_COUNTRY_NAME")).limit(10)
# display(answer)
# ```
#
# </details>
#
# +
# Write your exercise code here and run it (Shift+Enter)
df5 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.json("data/flight-data/json/2015-summary.json")
)
df5.printSchema()
answer = df5.withColumn("cnt", expr("floor(count / 100)")).orderBy(desc("cnt"), asc("ORIGIN_COUNTRY_NAME"), desc("DEST_COUNTRY_NAME")).limit(10)
display(answer)
# -
# ## 5. Other DataFrame Operators
#
# ### 5.1 Advantages of Defining a Schema in Advance
# * No need to worry about data type inference
# * Saves the resources otherwise spent on a separate schema-inference pass
# * Errors in data that does not match the schema are detected quickly
#
# ### 5.2 Two Ways to Define a Schema
# * 1. Define it programmatically
# * 2. Use a DDL string
# +
from pyspark.sql.types import *
from pyspark.sql import Row
data = [
["정휘센", "안녕하세요 정휘센 입니다", 300],
["김싸이언", "안녕하세요 김싸이언 입니다", 200],
["유코드제로", "안녕하세요 유코드제로 입니다", 100]
]
print("# 1. Programming Style")
schema1 = StructType([
StructField("author", StringType(), False),
StructField("title", StringType(), False),
StructField("pages", IntegerType(), False),
])
print(schema1)
df1 = spark.createDataFrame(data, schema1)
df1.printSchema()
df1.show(truncate=False)
rows = [
Row("정휘센", "안녕하세요 정휘센 입니다", 300),
Row("김싸이언", "안녕하세요 김싸이언 입니다", 200),
Row("유코드제로", "안녕하세요 유코드제로 입니다", 100)
]
print("\n# 2. DDL Style")
schema2 = "`author` string, `title` string, `pages` int"
print(schema2)
df2 = spark.createDataFrame(rows, schema2)
df2.printSchema()
df2.show(truncate=False)
assert(df1.subtract(df2).count() == 0)
assert(df2.subtract(df1).count() == 0)
# -
# ### <font color=red>6. [Advanced]</font> Create a DataFrame using Rows and a string (DDL) schema
# #### 1. Schema: id int, name string, payment int
# #### 2. Create about three rows of sample data and build the DataFrame
# #### 3. Print the schema
# #### 4. Output the data
#
# <details><summary>[Exercise 6] Check the expected output </summary>
#
# > If your code is written along the lines of the snippet below, it is correct
#
#
# ```python
# df6 = [
# Row(1, "엘지전자", 1000),
# Row(2, "엘지화학", 2000),
# Row(3, "엘지디스플레이", 3000)
# ]
# sc6 = "`id` int, `name` string, `payment` int"
# answer = spark.createDataFrame(df6, sc6)
# answer.printSchema()
# display(answer)
# ```
#
# </details>
#
# Write your exercise code here and run it (Shift+Enter)
df6 = [
Row(1, "엘지전자", 1000),
Row(2, "엘지화학", 2000),
Row(3, "엘지디스플레이", 3000)
]
sc6 = "`id` int, `name` string, `payment` int"
answer = spark.createDataFrame(df6, sc6)
answer.printSchema()
display(answer)
# ### 5.3 Nested Array Schemas
schema = StructType([
StructField("Id", IntegerType(), False),
StructField("First", StringType(), False),
StructField("Last", StringType(), False),
StructField("Url", StringType(), False),
StructField("Published", StringType(), False),
StructField("Hits", IntegerType(), False),
StructField("Campaigns", ArrayType(StringType()), False),
])
blogDF = spark.read.schema(schema).json("data/learning-spark/blogs.json")
blogDF.printSchema()
blogDF.show(1, truncate=False)
# ### 5.4 Columns and Expressions
# > Columns are objects with public methods, and pyspark.sql.functions.expr() lets you use expressions directly
#
# * The Column class in particular exposes a wide range of operators.
from pyspark.sql.functions import Column
print(blogDF.columns)
# help(Column)
blogDF.withColumn("AuthorsId", (concat(expr("First"), lit("."), expr("Last"), lit("@"), expr("Id"))))\
.select(col("AuthorsId"))\
.show(4)
blogDF.select(expr("Hits")).show(2)
blogDF.select(col("Hits")).show(2)
blogDF.select("Hits").show(2)
blogDF.sort(col("Id").desc()).show()
# ### 5.5 Creating and Working with Rows
# * The columns of a Row can be accessed by index.
# +
from pyspark.sql import Row
blog_row = Row(6, "Reynold", "Xin", "https://tinyurl.6", 255568, "3/2/2015",
["twitter", "LinkedIn"])
print(blog_row[1])
rows = [Row("<NAME>", "CA"), Row("<NAME>", "CA")]
authors_df = spark.createDataFrame(rows, ["Authors", "State"])
authors_df.show()
# +
# In Python, define a schema
from pyspark.sql.types import *
# Programmatic way to define a schema
fire_schema = StructType([StructField('CallNumber', IntegerType(), True),
StructField('UnitID', StringType(), True),
StructField('IncidentNumber', IntegerType(), True),
StructField('CallType', StringType(), True),
StructField('CallDate', StringType(), True),
StructField('WatchDate', StringType(), True),
StructField('CallFinalDisposition', StringType(), True),
StructField('AvailableDtTm', StringType(), True),
StructField('Address', StringType(), True),
StructField('City', StringType(), True),
StructField('Zipcode', IntegerType(), True),
StructField('Battalion', StringType(), True),
StructField('StationArea', StringType(), True),
StructField('Box', StringType(), True),
StructField('OriginalPriority', StringType(), True),
StructField('Priority', StringType(), True),
StructField('FinalPriority', IntegerType(), True),
StructField('ALSUnit', BooleanType(), True),
StructField('CallTypeGroup', StringType(), True),
StructField('NumAlarms', IntegerType(), True),
StructField('UnitType', StringType(), True),
StructField('UnitSequenceInCallDispatch', IntegerType(), True),
StructField('FirePreventionDistrict', StringType(), True),
StructField('SupervisorDistrict', StringType(), True),
StructField('Neighborhood', StringType(), True),
StructField('Location', StringType(), True),
StructField('RowID', StringType(), True),
StructField('Delay', FloatType(), True)])
# Use the DataFrameReader interface to read a CSV file
sf_fire_file = "data/learning-spark/sf-fire-calls.csv"
fire_df = spark.read.csv(sf_fire_file, header=True, schema=fire_schema)
fire_df.select("CallNumber", "UnitID", "IncidentNumber", "CallType", "CallDate", "RowID").show(10, truncate=False)
# -
# ### 5.6 Saving Parquet Files or Tables
# * save writes Parquet files to the given path, while saveAsTable creates the table under the location set by "spark.sql.warehouse.dir"
parquetPath="target/sf_fire_calls"
fire_df.write.format("parquet").mode("overwrite").save(parquetPath)
# !rm -rf "/home/jovyan/work/spark-warehouse/sf_fire_calls"
parquetTable="sf_fire_calls"
# spark.conf.set("spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation","true")
fire_df.write.format("parquet").saveAsTable(parquetTable)
# ### 5.7 Projections and Filters
# > A *projection* returns a subset of columns, while a *filter* (selection) returns only the rows matching a given relational condition.
# +
few_fire_df = (
fire_df
.select("IncidentNumber", "AvailableDtTm", "CallType")
.where(col("CallType") != "Medical Incident")
)
print(few_fire_df)
few_fire_df.show(5, truncate=False)
new_fire_df = fire_df.withColumnRenamed("Delay", "ResponseDelayedinMins")
(
new_fire_df
.select("ResponseDelayedinMins")
.where(col("ResponseDelayedinMins") > 5)
.show(5, False)
)
# -
# ### 5.8 Date Functions
# * Because the dates arrive as strings, date functions such as to_timestamp() and to_date() are needed to represent and work with them.
# - Once a column has been converted to a timestamp, date functions such as year, month and dayofmonth can be used for a variety of exercises
# +
fire_ts_df = (new_fire_df
.withColumn("IncidentDate", to_timestamp(col("CallDate"), "MM/dd/yyyy"))
.drop("CallDate")
.withColumn("OnWatchDate", to_timestamp(col("WatchDate"), "MM/dd/yyyy"))
.drop("WatchDate")
.withColumn("AvailableDtTS", to_timestamp(col("AvailableDtTm"),"MM/dd/yyyy hh:mm:ss a"))
.drop("AvailableDtTm")
)
fire_ts_df.select("IncidentDate", "OnWatchDate", "AvailableDtTS").show(5, truncate=False)
from pyspark.sql.functions import *
(
fire_ts_df
.select(year('IncidentDate'), month('IncidentDate'), dayofmonth("IncidentDate"))
.distinct()
.orderBy(year('IncidentDate'), month('IncidentDate'), dayofmonth("IncidentDate"))
.show(5)
)
# -
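The "MM/dd/yyyy" pattern passed to to_timestamp() above has a direct analogue in plain Python's strptime ("%m/%d/%Y"). A minimal standalone sketch of the same parse-then-extract idea, using made-up sample dates rather than the fire-call data:

```python
from datetime import datetime

# Plain-Python sketch of the parse-then-extract step above.
# The sample dates are made up; "%m/%d/%Y" mirrors Spark's "MM/dd/yyyy".
raw_dates = ["01/11/2002", "11/30/2018", "01/11/2002"]
parsed = [datetime.strptime(s, "%m/%d/%Y") for s in raw_dates]

# Once parsed, year/month/day fall out of the datetime objects,
# just as year()/month()/dayofmonth() do after to_timestamp().
distinct_years = sorted({d.year for d in parsed})
```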
# ### <font color=green>7. [Basic]</font> Save a DataFrame created from the data below in Parquet format
# #### 1. Save it as Parquet files under the "target/lgde_user" path
# #### 2. Save it as the "lgde_user" table
#
# <details><summary>[Exercise 7] Check the expected output </summary>
#
# > Your answer is correct if it is written in a similar way to the code below
#
# ```python
# df6 = [
# Row(1, "엘지전자", 1000),
# Row(2, "엘지화학", 2000),
# Row(3, "엘지디스플레이", 3000)
# ]
# sc6 = "`id` int, `name` string, `payment` int"
# answer = spark.createDataFrame(df6, sc6)
# answer.printSchema()
# display(answer)
#
# answer.write.format("parquet").save("target/lgde_user")
# answer.write.format("parquet").saveAsTable("lgde_user")
# ```
#
# </details>
#
# +
# Write and run your practice code here (Shift+Enter)
df6 = [
Row(1, "엘지전자", 1000),
Row(2, "엘지화학", 2000),
Row(3, "엘지디스플레이", 3000)
]
sc6 = "`id` int, `name` string, `payment` int"
answer = spark.createDataFrame(df6, sc6)
answer.printSchema()
display(answer)
answer.write.format("parquet").save("target/lgde_user")
answer.write.format("parquet").saveAsTable("lgde_user")
# -
# ### <font color=green>8. [Basic]</font> Read and print the data created in Exercise 7
# #### 1. Read the Parquet files under the "target/lgde_user" path and print the schema and data
# #### 2. Read the table saved as "lgde_user" with Spark SQL and print the schema and data
#
# <details><summary>[Exercise 8] Check the expected output </summary>
#
# > Your answer is correct if it is written in a similar way to the code below
#
# ```python
# df8 = (
# spark
# .read.parquet("target/lgde_user")
# )
# df8.printSchema()
# display(df8)
#
# answer = spark.sql("select * from lgde_user")
# display(answer)
# ```
#
# </details>
#
# +
# Write and run your practice code here (Shift+Enter)
df8 = (
spark
.read.parquet("target/lgde_user")
)
df8.printSchema()
display(df8)
answer = spark.sql("select * from lgde_user")
display(answer)
# -
# ## 6. The Dataset API
# > Because Python and R are not compile-time type-safe, typed Datasets are not available in those languages. Even when Datasets are used, the Spark SQL engine creates, transforms, serializes, and deserializes the objects, and, **just as with DataFrames, memory is managed off-heap**, using Dataset encoders
#
# ### 6.1 Datasets vs. DataFrames
#
# | Structured APIs | SQL vs. Dataframe vs. Datasets |
# |---|---|
# |  |  |
#
# * Typed objects compared across languages
# 
#
# * Scala: declared via a case class
# ```scala
# case class DeviceIoTData (
# battery_level: Long,
# c02_level: Long,
# cca2: String,
# cca3: String,
# cn: String,
# device_id: Long,
# device_name: String,
# humidity: Long,
# ip: String,
# latitude: Double,
# lcd: String,
# longitude: Double,
# scale:String,
# temp: Long,
# timestamp: Long)
# ```
#
# * Read the data and convert it to the DeviceIoTData class
# ```scala
# val ds = spark.read.json("/databricks-datasets/learning-spark-v2/iot-devices/iot_devices.json").as[DeviceIoTData]
# val filterTempDS = ds.filter(d => d.temp > 30 && d.humidity > 70)
# ```
# * With Datasets you use ordinary functions such as filter(), map(), groupBy(), select(), and take()
# ```scala
# case class DeviceTempByCountry(temp: Long, device_name: String, device_id: Long, cca3: String)
# val dsTemp = ds.filter(d => {d.temp > 25})
# .map(d => (d.temp, d.device_name, d.device_id, d.cca3))
# .toDF("temp", "device_name", "device_id", "cca3")
# .as[DeviceTempByCountry]
# ```
#
# ### 6.2 Datasets, DataFrames, and RDDs
# * Datasets
# - when you need compile-time type safety
# * DataFrames
# - when you want to use SQL-like queries
# - when you want unification, code optimization, and API-based modularity
# - when you have to use R or Python
# - when space and speed efficiency matter
# * RDDs
# - when you use third-party packages that rely on RDDs
# - when you can forgo code, space, and speed optimizations
# - when you must tell Spark exactly how to execute a query
#
#
# * Are RDDs, DataFrames, and Datasets really different?
# - DataFrames and Datasets are implemented on top of RDDs; they are decomposed into compact RDD code during whole-stage code generation.
#
# > DataFrames and Datasets are built on top of RDDs, and they get decomposed to compact RDD code during whole-stage code generation, which we discuss in the next section
#
# * Spark SQL
# 
# ## 7. The Catalyst Optimizer (for reference)
# > The core of the Spark SQL engine; it works in four main phases.
#
# ### 7.1 Analysis
# * Spark builds an abstract syntax tree (AST): every table and column name is resolved into a tree-shaped structure by the internal *Catalog*, which tracks column names, data types, and functions, along with database and table names
#
# ### 7.2 Logical Optimization
# * The Catalyst optimizer first constructs multiple logical plans and then uses a cost-based optimizer (CBO) to assign a cost to each one. These plans are arranged as operator trees such as "Figure 3-5"; at this stage optimizations such as **constant folding, predicate pushdown, projection pruning, and Boolean expression simplification** are applied
#
# ### 7.3 Physical Planning
# * For the logical plan chosen by the CBO, the Spark SQL engine generates an optimal physical plan using the operators available in the Spark execution engine
#
# ### 7.4 Code Generation
# * In the final phase, Project Tungsten's whole-stage code generation [acts much like a compiler](https://databricks.com/blog/2016/05/23/apache-spark-as-a-compiler-joining-a-billion-rows-per-second-on-a-laptop.html), producing optimal Java bytecode to run over the datasets loaded in memory.
#
# * What is ***whole-stage code generation***?
# - A physical query optimization phase that collapses the entire query into a single function
# - It performs optimizations such as eliminating virtual function calls and keeping intermediate data in CPU registers
# - The Spark 2.0 Tungsten engine was improved to generate compact RDD code this way
mnm_df = (
spark.read
.option("header", "true")
.option("inferSchema", "true")
.csv("data/databricks/mnm_dataset.csv")
)
count_mnm_df = (
mnm_df.select("State", "Color", "Count")
.groupBy("State", "Color", "Count")
.agg(sum("Count")
.alias("Total"))
.orderBy("Total", ascending=False)
)
count_mnm_df.explain(True)
mnm_df.createOrReplaceTempView("mnm_dataset")
count_mnm_df = spark.sql("""
SELECT State, Color, Count, sum(Count) AS Total
FROM mnm_dataset
GROUP BY State, Color, Count
ORDER BY Total DESC
""")
count_mnm_df.explain(True)
# * When joining, filtering, and projecting over two tables as below, the following optimizations **reduce disk and network I/O**.
# - Predicate pushdown: read only the rows matching the filter condition instead of the whole data source
# - Column pruning: read only the required columns instead of every field in the data source
#
# ```scala
# val users = spark.read.parquet("/users/parquet/path")
# val events = spark.read.parquet("/events/parquet/path")
# val joinedDF = users.join(events, users("id") === events("uid"))
# .filter(events("date") > "2015-01-01")
# ```
#
# 
# ### <font color=blue>9. [Intermediate]</font> Read the CSV file stored at "data/tbl_user.csv" and
# #### 1. Print the schema
# #### 2. Print 10 rows of data
# #### 3. Using the signup-date column (u_signup), print the 5 most recently signed-up users
#
# <details><summary>[Exercise 9] Check the expected output </summary>
#
# > Your answer is correct if it is written in a similar way to the code below
#
# ```python
# df9 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .csv("data/tbl_user.csv")
# )
# df9.printSchema()
# df9.show(10)
# df9.createOrReplaceTempView("tbl_user")
# answer = spark.sql("select * from tbl_user order by u_signup desc limit 5")
# display(answer)
# ```
#
# </details>
#
# Write and run your practice code here (Shift+Enter)
df9 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.csv("data/tbl_user.csv")
)
df9.printSchema()
df9.show(10)
df9.createOrReplaceTempView("tbl_user")
answer = spark.sql("select * from tbl_user order by u_signup desc limit 5")
display(answer)
# ### <font color=blue>10. [Intermediate]</font> Read the CSV file stored at "data/tbl_purchase.csv" and
# #### 1. Print the schema
# #### 2. Print 10 rows of data
# #### 3. Using the price column (p_amount), print the top 3 products priced above 2,000,000 won
#
# <details><summary>[Exercise 10] Check the expected output </summary>
#
# > Your answer is correct if it is written in a similar way to the code below
#
# ```python
# df10 = (
# spark
# .read
# .option("header", "true")
# .option("inferSchema", "true")
# .csv("data/tbl_purchase.csv")
# )
# df10.printSchema()
# df10.show(10)
# df10.createOrReplaceTempView("tbl_purchase")
# answer = spark.sql("select * from tbl_purchase where p_amount > 2000000 order by p_amount desc limit 3")
# display(answer)
# ```
#
# </details>
#
# Write and run your practice code here (Shift+Enter)
df10 = (
spark
.read
.option("header", "true")
.option("inferSchema", "true")
.csv("data/tbl_purchase.csv")
)
df10.printSchema()
df10.show(10)
df10.createOrReplaceTempView("tbl_purchase")
answer = spark.sql("select * from tbl_purchase where p_amount > 2000000 order by p_amount desc limit 3")
display(answer)
# ## References
#
# #### 1. [Spark Programming Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html)
# #### 2. [PySpark SQL Modules Documentation](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html)
# #### 3. <a href="https://spark.apache.org/docs/3.0.1/api/sql/" target="_blank">PySpark 3.0.1 Builtin Functions</a>
# File: day3/notebooks/lgde-spark-core-2-operators-answer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="OUPr730mxaYZ" outputId="320efa3d-f520-4542-fc59-864f22281181"
69+69
# + colab={"base_uri": "https://localhost:8080/"} id="fwaV64-N0rz3" outputId="7d6ef233-ce39-49b2-c594-6da97bcf3a67"
69*69
# + colab={"base_uri": "https://localhost:8080/"} id="sfu4ame51Vme" outputId="a93f18c4-229f-4b0e-b9d4-538dc761d87a"
69-69
# + colab={"base_uri": "https://localhost:8080/"} id="2vBiZSp-1cVG" outputId="a08ebada-0b66-4ef8-9957-736635368e21"
69/13
# + id="EJPOCbtl1tU4"
x=69
y=96
# + id="dHdrZqRE2NaQ"
x=y
# + colab={"base_uri": "https://localhost:8080/"} id="KwmQ0t4O2Qqu" outputId="d13117c4-e689-4dde-f2d3-789be6bb8714"
x+y
# + colab={"base_uri": "https://localhost:8080/"} id="GXNzxFfR2Scu" outputId="2fb1d5ce-ced0-4453-ee91-0db18f730b46"
x-y
# + colab={"base_uri": "https://localhost:8080/"} id="8eCGPzsQ2VU9" outputId="18a27d8b-9863-4d26-fb36-145089e22212"
x/y
# + colab={"base_uri": "https://localhost:8080/"} id="SSF1TBoB2ZQm" outputId="eb8fba68-2493-4a3a-f2f4-75cd4f87eebd"
x*y
# + id="fu4yoYxP2a8e"
p=25000
n=5
r=8
# + id="m5EPSwvY3tJo"
i=p*n*r/100
# + colab={"base_uri": "https://localhost:8080/"} id="wlJUap4o38vO" outputId="2bcdac9c-a535-40ec-ca66-5d47077d5960"
i
# + id="Zg4y0MmD3_py"
x='AKASH'
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="cNtobxPj67Jx" outputId="ba140fb8-3f6f-4e6b-8ae6-9424f002a435"
x
# + id="UcNf-NmJ7ESP"
x=10
y=50.6
# + id="rG3ek2-1798X"
z='velur'
# + colab={"base_uri": "https://localhost:8080/"} id="cgsFss-k8JSn" outputId="8961b0c0-aec0-4fab-a53b-7596019effca"
type(x)
# + colab={"base_uri": "https://localhost:8080/"} id="GlacXlHJ8YMn" outputId="4b9fb651-3fab-49de-b3ea-fc390da821c9"
type(y)
# + colab={"base_uri": "https://localhost:8080/"} id="Lk7GJMLd8bJY" outputId="ee50cb52-61a0-4345-9f4d-04dd81a40d07"
type(z)
# + colab={"base_uri": "https://localhost:8080/"} id="b237cU9-8gmQ" outputId="77261e81-e92f-46ed-cccb-0e1e319d9af5"
x=input('enter your favourite director')
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="KcWdwoWq-ahS" outputId="3f09a13d-72eb-4ef5-c0c1-09aff51bd07a"
x
# + id="SQ-eIf0__bcq"
x='india'
y='kerala'
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="Ww3bViFFHR5q" outputId="b606071b-7b81-4672-ef52-2d90265b5416"
x
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="vpSrwrCNHXXa" outputId="3ede0ef2-cc63-465f-c1ee-fa683077ff01"
y
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="gq1N9WIwHm4y" outputId="7961c65c-9bf3-4ce9-f8ca-52bc013f9e22"
x+y
# + colab={"base_uri": "https://localhost:8080/"} id="u5rPWFyDHvhq" outputId="14e3eb75-8b7d-404e-e868-5fd1de5fc461"
x=input('enter the first number')
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="BxH3D7z6Ir_L" outputId="07acfdb7-0b6c-4fb0-b4b8-6b2254f6e73a"
x
# + colab={"base_uri": "https://localhost:8080/"} id="BflTUx9dI2aq" outputId="11295b83-4d9b-46fa-eaa6-dee825fd2bdb"
y=input('enter the second number')
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="C5j1W_0JI9Vh" outputId="afe6a94a-6323-413d-c94a-31f654a76c9b"
y
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="de9-KQ0EI_FJ" outputId="b4ad99c9-9202-4400-9733-bcea172dd419"
x+y
# + colab={"base_uri": "https://localhost:8080/"} id="-ATvRWbJLShz" outputId="b74a986a-1bd8-41a0-8765-bbb56ca84189"
type(x)
type(y)
# + colab={"base_uri": "https://localhost:8080/"} id="HI80sKgXLeFC" outputId="f74c7b2f-5850-4b79-cd5f-1edd0284a6a7"
int(x)+int(y)
# + colab={"base_uri": "https://localhost:8080/"} id="m8TYcDD8LkBi" outputId="e4968158-c69a-4c32-d487-bc6448af0a10"
x=input('enter first number')
# + colab={"base_uri": "https://localhost:8080/"} id="2sYoY9g6NFQy" outputId="d030f6a3-7655-4d56-eba1-ed41cb63c9b8"
y=input('enter second number')
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="iV_WjNCDNM8r" outputId="e8c7a94c-fcab-4c2e-d1cf-3872a72694cf"
x
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="mcGOT75PNSpj" outputId="a28136fd-5708-415f-eee1-ccf5598a8969"
y
# + colab={"base_uri": "https://localhost:8080/"} id="o92CFTVnNT4r" outputId="8b0d2aa5-232d-4cd6-ea46-be2e701c71b9"
int(x)+int(y)
# + colab={"base_uri": "https://localhost:8080/"} id="zKMmji31Od0C" outputId="d960d5bf-5ca9-42e5-9ae5-51b7b284cf5a"
x=int(input('enter first number'))
y=int(input('enter second number'))
total=x+y  # renamed from `sum` to avoid shadowing the built-in
sub=x-y
division=x/y
multiplication=x*y
print(total)
print(sub)
print(division)
print(multiplication)
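The same four operations can also be wrapped in a reusable function; a small sketch with assumed example values standing in for the input() prompts:

```python
# Function-based sketch of the calculator cell above (assumed inputs).
def calculator(x, y):
    return x + y, x - y, x / y, x * y

total, difference, quotient, product = calculator(8, 2)
```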
# + id="tdVcNWnbQSt7"
# File: Untitled0.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="6EBVfVBaI8b4" colab_type="text"
# ##### Run this cell to set your notebook up (only mandatory if rlss2019-docker image is not used)
# + id="_q0MdZeQI8b8" colab_type="code" colab={}
# !apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
# !git clone https://github.com/yfletberliac/rlss2019-hands-on.git > /dev/null 2>&1
# !pip install -q torch==1.1.0 torchvision pyvirtualdisplay piglet > /dev/null 2>&1
# + [markdown] id="YQetcFxYI8cE" colab_type="text"
# # <font color='#ed7d31'>Deep Q Networks</font>
# ------------
# You can find the original paper [here](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf).
# + [markdown] id="3bQlci-wI8cH" colab_type="text"
# ## <font color='#ed7d31'>Preliminaries: Q Learning</font>
# + [markdown] id="3uKRby3QI8cJ" colab_type="text"
# #### <font color='#ed7d31'>Q-Value</font>
# + [markdown] id="OF83QDN8I8cL" colab_type="text"
# **Q-Value** is a measure of the overall expected reward assuming the agent is in state $s$ and performs action $a$, and then continues playing until the end of the episode following some policy $\pi$. It is defined mathematically as:
#
# \begin{equation}
# Q^{\pi}\left(s_{t}, a_{t}\right)=E\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} R_{t+3}+\ldots | s_{t}, a_{t}\right]
# \end{equation}
#
# where $R_{t+1}$ is the immediate reward received after performing action $a_{t}$ in state $s_{t}$ and $\gamma$ is the discount factor and controls the importance of the future rewards versus the immediate ones: the lower the discount factor is, the less important future rewards are.
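The discounted sum inside that expectation can be checked numerically; a toy sketch with an assumed reward sequence and $\gamma = 0.5$:

```python
# Toy check of the discounted-return definition above (made-up rewards).
gamma = 0.5
rewards = [1.0, 1.0, 1.0]  # R_{t+1}, R_{t+2}, R_{t+3}
G = sum(gamma**k * r for k, r in enumerate(rewards))  # 1 + 0.5 + 0.25
```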
# + [markdown] id="WNo2x5ctI8cN" colab_type="text"
# #### <font color='#ed7d31'>Bellman Optimality Equation</font>
# + [markdown] id="qgGvMVM8I8cO" colab_type="text"
# Formally, the Bellman equation defines the relationships between a given state (or, in our case, a **state-action pair**) and its successors. While many forms exist, one of the most common is the **Bellman Optimality Equation** for the optimal **Q-Value**, which is given by:
#
# \begin{equation}
# Q^{*}(s, a)=\sum_{s^{\prime}, r} p\left(s^{\prime}, r | s, a\right)\left[r+\gamma \max _{a^{\prime}} Q^{*}\left(s^{\prime}, a^{\prime}\right)\right]
# \end{equation}
#
# Of course, when no uncertainty exists (transition probabilities are either 0 or 1), we have:
#
# \begin{equation}
# Q^{*}(s, a)=r(s, a)+\gamma \max _{a^{\prime}} Q^{*}\left(s^{\prime}, a^{\prime}\right)
# \end{equation}
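Numerically, the stochastic form reduces to a probability-weighted sum over outcomes; a toy sketch with an assumed two-outcome transition model $p(s', r \mid s, a)$:

```python
# Toy numbers for p(s', r | s, a): (probability, reward, max_a' Q*(s', a')).
gamma = 0.9
outcomes = [(0.8, 1.0, 10.0), (0.2, 0.0, 0.0)]
q_star = sum(p * (r + gamma * max_q) for p, r, max_q in outcomes)
```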
# + [markdown] id="krQO9lXaI8cQ" colab_type="text"
# #### <font color='#ed7d31'>Q-Value Iteration</font>
# + [markdown] id="YzmGUwfPI8cS" colab_type="text"
# We define the corresponding Bellman backup operator:
# \begin{equation}
# [\mathcal{T} Q]\left(s, a\right)=r(s, a)+\gamma \max _{a^{\prime}} Q\left(s^{\prime}, a^{\prime}\right)
# \end{equation}
# + [markdown] id="Jr6RCm78I8cU" colab_type="text"
# $Q$ is a fixed point of $\mathcal{T}$:
# \begin{equation}
# \mathcal{T} Q^{*}=Q^{*}
# \end{equation}
# + [markdown] id="QFZT3OkxI8cY" colab_type="text"
# If we apply the Bellman operator $\mathcal{T}$ repeatedly to any initial $Q$, the series converges to $Q^{*}$:
# \begin{equation}
# Q, \mathcal{T} Q, \mathcal{T}^{2} Q, \cdots \rightarrow Q^{*}
# \end{equation}
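This convergence is easy to verify on a toy tabular problem; a sketch with an assumed one-state MDP (two actions, deterministic self-transitions, purely for illustration):

```python
# Repeatedly apply the Bellman backup T on a toy one-state MDP.
gamma = 0.5
reward = {"a0": 1.0, "a1": 0.0}          # r(s, a), assumed toy values
Q = {a: 0.0 for a in reward}             # arbitrary initial Q
for _ in range(60):
    v = max(Q.values())                  # max_a' Q(s', a'); s' == s here
    Q = {a: reward[a] + gamma * v for a in reward}
# Fixed point: Q*(a0) = 1 + 0.5 * 2 = 2, Q*(a1) = 0 + 0.5 * 2 = 1
```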
# + [markdown] id="uZBnvkiLI8cY" colab_type="text"
# # <font color='#ed7d31'>Imports</font>
# + id="ltRkWsfyI8ca" colab_type="code" colab={}
import sys
sys.path.insert(0, './rlss2019-hands-on/utils')
# If using the Docker image, replace by:
# sys.path.insert(0, '../utils')
import gym, random, os.path, math, glob, csv, base64
from pathlib import Path
from timeit import default_timer as timer
from datetime import timedelta
import numpy as np
import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
import matplotlib
# %matplotlib inline
from qfettes_plot import plot_all_data
from qfettes_wrappers import *
from openai_wrappers import make_atari, wrap_deepmind
from gym.wrappers import Monitor
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay
from IPython.display import clear_output
# + [markdown] id="OOS4HGeAI8cf" colab_type="text"
# ------------
# + [markdown] id="aRFJRl5SI8cg" colab_type="text"
# # <font color='#ed7d31'>Deep Q learning</font>
# + [markdown] id="mGvghDlPI8ci" colab_type="text"
# Usually in Deep RL, the **Q-Value** is defined as $Q(s,a;\theta)$ where $\theta$ represents the parameters of the function approximation used.
#
# <img src="https://github.com/yfletberliac/rlss2019-hands-on/blob/master/labs/imgs/approx.png?raw=1" alt="Drawing" width="200"/>
#
# For *MuJoCo* or *Roboschool* environments, we usually use a simple 2- or 3-layer MLP whereas when using **raw pixels for observations** such as in *Atari 2600* games, we usually use a 1-, 2- or 3-layer CNN.
#
# In our case, since we want to train DQN on *CartPole*, we will use a 3-layer perceptron for our function approximation.
# + [markdown] id="KaSmqUFuI8cj" colab_type="text"
# ## <font color='#ed7d31'>Network declaration</font>
# + [markdown] id="j6NZQAw6I8cl" colab_type="text"
# In this section, we build $Q(s,a;\theta)$ function approximation. Since the input is composed of 4 scalars, namely:
# <center>[position of cart, velocity of cart, angle of pole, rotation rate of pole]</center>
# we build a FCN -> ReLU -> FCN -> ReLU -> FCN neural network. As an exercise, change the architecture of the network:
#
# 1. Change the two fully-connected layers from 8 hidden neurons to 16
# 2. Add a third layer to the network, with no activation function and `self.num_actions` as the output size
# + id="dfWfc7aYI8cn" colab_type="code" colab={}
class DQN(nn.Module):
def __init__(self, input_shape, num_actions):
super().__init__()
self.input_shape = input_shape
self.num_actions = num_actions
self.fc1 = nn.Linear(self.input_shape[0], 8)
self.fc2 = nn.Linear(8, 8)
self.fc3 = nn.Linear(8, self.num_actions)
def forward(self, x):
x = F.relu(self.fc2(F.relu(self.fc1(x))))
x = self.fc3(x)
return x
# + [markdown] id="ZsuemynWI8cw" colab_type="text"
# ## <font color='#ed7d31'>Safety checks</font>
# + [markdown] id="snREUsj-I8cy" colab_type="text"
# #### <font color='#ed7d31'>Network architecture</font>
# + [markdown] id="x4y2zaTBI8c0" colab_type="text"
# As a *safety check*, inspect the resulting network in the next cell. For instance, the total number of trainable parameters should change with the architecture. Check the correctness of `in_features` and `out_features`.
# + id="wtorqEU-I8c1" colab_type="code" colab={}
env_id = 'CartPole-v0'
env = gym.make(env_id)
network = DQN(env.observation_space.shape, env.action_space.n)
print("Observation space:\n", env.observation_space.shape, "\n")
print("Network architecture:\n", network, "\n")
model_parameters = filter(lambda p: p.requires_grad, network.parameters())
print("Total number of trainable parameters:\n", sum([np.prod(p.size()) for p in model_parameters]))
# + [markdown] id="Z33fZrIfI8c6" colab_type="text"
# #### <font color='#ed7d31'>Run a Policy with Random Actions</font>
# + [markdown] id="6wTyHNXfI8c9" colab_type="text"
# What does the working environment look like? It's always useful to know the details of the environment you train your policy on, for instance its dynamics and the size of the action and observation spaces. Below we display a few episodes of a random policy on `CartPole-v0`.
# + id="5zczY4-qI8c_" colab_type="code" colab={}
display = Display(visible=0, size=(1400, 900))
display.start()
def show_video():
html = []
for mp4 in Path("videos").glob("*.mp4"):
video_b64 = base64.b64encode(mp4.read_bytes())
html.append('''<video alt="{}" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{}" type="video/mp4" />
</video>'''.format(mp4, video_b64.decode('ascii')))
ipythondisplay.display(ipythondisplay.HTML(data="<br>".join(html)))
env = Monitor(env, './videos', force=True, video_callable=lambda episode: True)
for episode in range(2):
done = False
obs = env.reset()
while not done:
action = env.action_space.sample()
obs, reward, done, info = env.step(action)
env.close()
show_video()
# + [markdown] id="2OOOnyXOI8dD" colab_type="text"
# We can see the episode ending prematurely because the pole drops.
# + [markdown] id="E8Kh6ie2I8dE" colab_type="text"
# -----
# <font color='#ed7d31'>**Question**:</font>
#
# It is also important to identify some of the characteristics of the problem. `CartPole-v0` can be described as a **fully-observable**, **deterministic**, **continuous state space**, with a **discrete action space** and **frequent rewards**. Take some time to understand each of these terms :-) Try to find the opposite term for each of them, e.g. deterministic <> stochastic.
# + [markdown] id="syet2wKRI8dF" colab_type="text"
# ## <font color='#ed7d31'>Experience Replay Memory</font>
# + [markdown] id="C0nOts_GI8dG" colab_type="text"
# Since RL tasks usually have no pre-generated training set to learn from, in off-policy learning our agent must keep records of all the state-transitions it encounters so it can **learn from them later**. The memory buffer used to store these is often referred to as the **Experience Replay Memory**. There are several types and architectures of memory buffers; two very common ones are:
# - the *cyclic memory buffers*: they make sure the agent keeps training on its recent behavior rather than on transitions that may no longer be relevant
# - the *reservoir-sampling-based memory buffers*: they guarantee each recorded state-transition has an equal probability of being inserted into the buffer
#
# We use a combination of both.
# + id="aEN7GuNZI8dH" colab_type="code" colab={}
class ExperienceReplayMemory:
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
def push(self, transition):
self.memory.append(transition)
# This function needs an `if` statement in order to keep the capacity to its limit. Write it below.
# Hint: `del something` will delete something if something is an array
if len(self.memory) > self.capacity:
del self.memory[0]
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
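The capacity check in push() above can equivalently be expressed with collections.deque(maxlen=...), which evicts the oldest entry automatically; a standalone sketch (not the class the agent below uses):

```python
import random
from collections import deque

# Cyclic buffer sketch: deque(maxlen=...) drops the oldest transition
# automatically, matching the `del self.memory[0]` branch above.
class CyclicReplay:
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)

    def push(self, transition):
        self.memory.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.memory), batch_size)

    def __len__(self):
        return len(self.memory)

buf = CyclicReplay(capacity=3)
for t in range(5):
    buf.push(t)   # transitions 0 and 1 are evicted
```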
# + [markdown] id="LS4Bs-DOI8dM" colab_type="text"
# ------------
# + [markdown] id="zG5kz3TqI8dO" colab_type="text"
# Now we have:
# - the **DQN** network,
# - the **ExperienceReplayMemory**.
#
# Let's build the **Agent** class !
# + [markdown] id="EiZPkKJQI8dP" colab_type="text"
# ## <font color='#ed7d31'>Agent declaration</font>
# + id="TvzHcWleI8dQ" colab_type="code" colab={}
class Agent(object):
def __init__(self, config, env, log_dir='/tmp/gym'):
self.log_dir = log_dir
self.rewards = []
self.action_log_frequency = config.ACTION_SELECTION_COUNT_FREQUENCY
self.action_selections = [0 for _ in range(env.action_space.n)]
# Define the DQN networks
def declare_networks(self):
self.model = DQN(self.num_feats, self.num_actions)
# Create `self.target_model` with the same network architecture
self.target_model = DQN(self.num_feats, self.num_actions)
# Define the Replay Memory
def declare_memory(self):
self.memory = ExperienceReplayMemory(self.experience_replay_size)
# Append the new transition to the Replay Memory
def append_to_replay(self, s, a, r, s_):
self.memory.push((s, a, r, s_))
# Sample transitions from the Replay Memory
def sample_minibatch(self):
transitions = self.memory.sample(self.batch_size)
batch_state, batch_action, batch_reward, batch_next_state = zip(*transitions)
shape = (-1,)+self.num_feats
batch_state = torch.tensor(batch_state, device=self.device, dtype=torch.float).view(shape)
batch_action = torch.tensor(batch_action, device=self.device, dtype=torch.long).squeeze().view(-1, 1)
batch_reward = torch.tensor(batch_reward, device=self.device, dtype=torch.float).squeeze().view(-1, 1)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, batch_next_state)), device=self.device, dtype=torch.uint8)
# Sometimes every next state in the batch is terminal (None)
try:
non_final_next_states = torch.tensor([s for s in batch_next_state if s is not None], device=self.device, dtype=torch.float).view(shape)
empty_next_state_values = False
except:
non_final_next_states = None
empty_next_state_values = True
return batch_state, batch_action, batch_reward, non_final_next_states, non_final_mask, empty_next_state_values
# Sample action
def get_action(self, s, eps=0.1):
with torch.no_grad():
# Epsilon-greedy
if np.random.random() >= eps:
X = torch.tensor([s], device=self.device, dtype=torch.float)
a = self.model(X).max(1)[1].view(1, 1)
return a.item()
else:
return np.random.randint(0, self.num_actions)
# + [markdown] id="BkUmobU-I8dY" colab_type="text"
# -----
# <font color='#ed7d31'>**Question**:</font>
#
# Remember we define the objective function as
# \begin{equation}
# J=\left(r+\gamma \max _{a^{\prime}} Q\left(s^{\prime}, a^{\prime}, \mathbf{\theta}^{-}\right)-Q(s, a, \mathbf{\theta})\right)^{2},
# \end{equation}
# where $\theta^{-}$ are the target parameters.
#
# Why do we need a target network in the first place ?
# + [markdown] id="Wyl5We9QI8da" colab_type="text"
# ## <font color='#ed7d31'>Learning</font>
# + id="xSp4-kRrI8db" colab_type="code" colab={}
class Learning(Agent):
def __init__(self, env=None, config=None, log_dir='/tmp/gym'):
super().__init__(config=config, env=env, log_dir=log_dir)
# Compute loss from the Bellman Optimality Equation
def compute_loss(self, batch_vars):
batch_state, batch_action, batch_reward, non_final_next_states, non_final_mask, empty_next_state_values = batch_vars
# Estimate
current_q_values = self.model(batch_state).gather(1, batch_action)
# Target
with torch.no_grad():
max_next_q_values = torch.zeros(self.batch_size, device=self.device, dtype=torch.float).unsqueeze(dim=1)
if not empty_next_state_values:
max_next_action = self.get_max_next_state_action(non_final_next_states)
max_next_q_values[non_final_mask] = self.target_model(non_final_next_states).gather(1, max_next_action)
# From the equation above, write the value `expected_q_values`.
expected_q_values = batch_reward + self.gamma*max_next_q_values
# From the equation above, write the value `diff`.
diff = (expected_q_values - current_q_values)
loss = self.MSE(diff)
loss = loss.mean()
return loss
# Update both networks (the agent and the target)
def update(self, s, a, r, s_, sample_idx=0):
self.append_to_replay(s, a, r, s_)
# When not to update ?
# There is a concise way to write to skip the update, fill in the 2 blanks in the `if` statement below.
# Hint: the sample count should be < the learn_start hyperparameter and respect the update_freq.
# if ... or ...:
if sample_idx < self.learn_start or sample_idx % self.update_freq != 0:
return None
batch_vars = self.sample_minibatch()
loss = self.compute_loss(batch_vars)
# Optimize the model
self.optimizer.zero_grad()
loss.backward()
for param in self.model.parameters():
param.grad.data.clamp_(-1, 1)
self.optimizer.step()
self.update_target_model()
self.save_td(loss.item(), sample_idx)
def update_target_model(self):
# Copy weights from model to target_model following `target_net_update_freq`.
self.update_count+=1
if self.update_count % self.target_net_update_freq == 0:
self.target_model.load_state_dict(self.model.state_dict())
# + [markdown] id="-_2J59ARI8dg" colab_type="text"
# ## <font color='#ed7d31'>Model declaration</font>
# + id="ObPKoEy4I8dh" colab_type="code" colab={}
class Model(Learning):
def __init__(self, env=None, config=None, log_dir='/tmp/gym'):
super().__init__(config=config, env=env, log_dir=log_dir)
self.device = config.device
# Hyperparameters
self.gamma = config.GAMMA
self.target_net_update_freq = config.TARGET_NET_UPDATE_FREQ
self.experience_replay_size = config.EXP_REPLAY_SIZE
self.batch_size = config.BATCH_SIZE
self.learn_start = config.LEARN_START
self.update_freq = config.UPDATE_FREQ
# Environment specific parameters
self.num_feats = env.observation_space.shape
self.num_actions = env.action_space.n
self.env = env
self.declare_networks()
self.declare_memory()
self.target_model.load_state_dict(self.model.state_dict())
self.optimizer = optim.Adam(self.model.parameters(), lr=config.LR)
# Move to correct device
self.model = self.model.to(self.device)
self.target_model.to(self.device)
self.model.train()
self.target_model.train()
self.update_count = 0
def save_td(self, td, tstep):
with open(os.path.join(self.log_dir, 'td.csv'), 'a') as f:
writer = csv.writer(f)
writer.writerow((tstep, td))
def get_max_next_state_action(self, next_states):
return self.target_model(next_states).max(dim=1)[1].view(-1, 1)
def MSE(self, x):
return 0.5 * x.pow(2)
def save_reward(self, reward):
self.rewards.append(reward)
def save_action(self, action, tstep):
self.action_selections[int(action)] += 1.0/self.action_log_frequency
if (tstep+1) % self.action_log_frequency == 0:
with open(os.path.join(self.log_dir, 'action_log.csv'), 'a') as f:
writer = csv.writer(f)
writer.writerow(list([tstep]+self.action_selections))
self.action_selections = [0 for _ in range(len(self.action_selections))]
def save_w(self):
if not os.path.exists("../saved_agents"):
os.makedirs("../saved_agents")
torch.save(self.model.state_dict(), '../saved_agents/model.dump')
torch.save(self.optimizer.state_dict(), '../saved_agents/optim.dump')
# + [markdown] id="1Yx9mlzcI8dm" colab_type="text"
# ## <font color='#ed7d31'>Hyperparameters</font>
# + id="2SXOGjXbI8dn" colab_type="code" colab={}
class Config(object):
def __init__(self):
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Main agent variables
self.GAMMA=0.99
self.LR=1e-3
# Epsilon variables
self.epsilon_start = 1.0
self.epsilon_final = 0.01
self.epsilon_decay = 10000
self.epsilon_by_sample = lambda sample_idx: config.epsilon_final + (config.epsilon_start - config.epsilon_final) * math.exp(-1. * sample_idx / config.epsilon_decay)
# Memory
self.TARGET_NET_UPDATE_FREQ = 1000
self.EXP_REPLAY_SIZE = 10000
self.BATCH_SIZE = 64
# Learning control variables
self.LEARN_START = 1000
self.MAX_SAMPLES = 50000
self.UPDATE_FREQ = 1
# Data logging parameters
self.ACTION_SELECTION_COUNT_FREQUENCY = 1000
config = Config()
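The epsilon_by_sample lambda above implements an exponential decay from epsilon_start to epsilon_final; a standalone sketch of the same schedule (the defaults mirror the Config values):

```python
import math

# eps(t) = eps_final + (eps_start - eps_final) * exp(-t / decay)
def epsilon_by_sample(t, start=1.0, final=0.01, decay=10000):
    return final + (start - final) * math.exp(-t / decay)
```

At t = 0 this returns epsilon_start exactly, and it decays monotonically toward epsilon_final, so early training is mostly exploration and late training mostly exploitation.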
# + [markdown] id="ZkCyxs4gI8dr" colab_type="text"
# ## <font color='#ed7d31'>Training</font>
# + id="n6eOiLKRI8ds" colab_type="code" colab={}
import gym
from openai_monitor import Monitor
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
start=timer()
log_dir = "/tmp/gym/"
try:
os.makedirs(log_dir)
except OSError:
files = glob.glob(os.path.join(log_dir, '*.monitor.csv')) \
+ glob.glob(os.path.join(log_dir, '*td.csv')) \
+ glob.glob(os.path.join(log_dir, '*action_log.csv'))
for f in files:
os.remove(f)
env_id = 'CartPole-v0'
env = gym.make(env_id)
env = Monitor(env, os.path.join(log_dir, env_id))
model = Model(env=env, config=config, log_dir=log_dir)
episode_reward = 0
observation = env.reset()
for sample_idx in range(1, config.MAX_SAMPLES + 1):
epsilon = config.epsilon_by_sample(sample_idx)
action = model.get_action(observation, epsilon)
# Log action selection
model.save_action(action, sample_idx)
prev_observation=observation
observation, reward, done, _ = env.step(action)
observation = None if done else observation
model.update(prev_observation, action, reward, observation, sample_idx)
episode_reward += reward
if done:
observation = env.reset()
model.save_reward(episode_reward)
episode_reward = 0
if sample_idx % 1000 == 0:
try:
clear_output(True)
plot_all_data(log_dir, env_id, 'DQN', config.MAX_SAMPLES, bin_size=(10, 100, 100, 1), smooth=1, time=timedelta(seconds=int(timer()-start)), ipynb=True)
except IOError:
pass
model.save_w()
env.close()
# + [markdown] id="IkPiA0YwI8dv" colab_type="text"
# By observing the plots, does the learning appear to be stable?
#
# If your answer is *yes*, then start a second run, and a third, with the same hyperparameters. ;-)
# + [markdown] id="w6QVJjZ8I8dw" colab_type="text"
# ## <font color='#ed7d31'>Visualize the agent</font>
# + id="axHJNh-vI8dz" colab_type="code" colab={}
from gym.wrappers import Monitor
# Loading the agent
fname_model = "../saved_agents/model.dump"
fname_optim = "../saved_agents/optim.dump"
log_dir = "/tmp/gym/"
model = Model(env=env, config=config, log_dir=log_dir)
if os.path.isfile(fname_model):
model.model.load_state_dict(torch.load(fname_model))
model.target_model.load_state_dict(model.model.state_dict())
if os.path.isfile(fname_optim):
model.optimizer.load_state_dict(torch.load(fname_optim))
env_id = 'CartPole-v0'
env = gym.make(env_id)
env = Monitor(env, './videos', force=True, video_callable=lambda episode: True)
for episode in range(3):
done = False
obs = env.reset()
while not done:
action = model.get_action(obs)
obs, _, done, _ = env.step(action)
env.close()
show_video()
# + [markdown] id="3w1_RPnHI8d1" colab_type="text"
# You can experiment with modifying the hyperparameters (learning rate, batch size, experience replay size, etc.) to see if you can make its behaviour improve!
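# One lightweight way to structure such experiments is to enumerate candidate settings up front. This is only a sketch: the keys mirror the `Config` attributes defined earlier, and copying each setting onto a fresh `Config` before rerunning the training loop is assumed.

```python
from itertools import product

# Illustrative candidate values only.
learning_rates = [1e-4, 1e-3, 1e-2]
batch_sizes = [32, 64]

# One settings dict per combination; each would be applied to a fresh
# Config instance before rerunning the training loop above.
sweep = [{"LR": lr, "BATCH_SIZE": bs}
         for lr, bs in product(learning_rates, batch_sizes)]
print(len(sweep))  # 6 combinations
```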
# + [markdown] id="Isg6kOBaI8d2" colab_type="text"
# -------------
| Reinforcement Learning Summer School 2019 (Lille, France)/practical_drl_2/practical_drl_2_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import folium
from folium.plugins import MarkerCluster
from folium.plugins import Draw
from folium import plugins
from folium.plugins import MeasureControl
import pandas as pd
import branca
import numpy as np
import vincent
import os
import json
import matplotlib.pyplot as plt
from folium.plugins import FloatImage
print(folium.__version__)
# +
#attr is the attribution information for the base map visualized below
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
'UTEM Cartografia, © <a href="https://github.com/camiloceacarvajal?tab=repositories/attributions">CamiloGithub</a>')
#attr2 is the attribution for the Google Earth base layer, linking to the author's GitHub
attr2 = ('© <a href="https://earth.google.com/web/">google</a> '
'UTEM Cartografia, © <a href="https://github.com/camiloceacarvajal?tab=repositories/attributions">CamiloGithub</a>')
#define where information is extracted from
cementerio_data = pd.read_csv("Tablacementerio.csv")
#define the map and its initial characteristics such as scale, zoom and starting coordinates
m = folium.Map(cementerio_data[['Latitudes','Longitudes']].mean().tolist(),
attr=None,tiles = None, zoom_start=13,control_scale=True,prefer_canvas=True,detect_retina = True)
#thunderforest base map
folium.raster_layers.TileLayer(
tiles='https://tile.thunderforest.com/transport/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0',
attr= attr,name='Mapa Base OSM',
max_zoom=21,subdomains=['mt0', 'mt1', 'mt2', 'mt3'],
overlay=False,control=True,
).add_to(m)
#Google base tile layer
folium.raster_layers.TileLayer(
tiles='http://{s}.google.com/vt/lyrs=s&x={x}&y={y}&z={z}',
attr= attr2,name='Google Earth',
max_zoom=21,subdomains=['mt0', 'mt1', 'mt2', 'mt3'],
overlay=False,control=True,
).add_to(m)
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
position='topright',title='Expandir',
title_cancel='salir',force_separate_button=True).add_to(m)
#read the attribute table (same CSV as above)
cementerio_data = pd.read_csv("Tablacementerio.csv")
mc = MarkerCluster(name="Latas_bebida_cerveza")
#loop to generate markers whose popups render each row as an HTML table
for N,data in cementerio_data.iterrows():
station_html = folium.Html('<b>%s</b>' %(pd.DataFrame(data).to_html()),script=True)
mc.add_child(folium.Marker([data.Latitudes,data.Longitudes],
popup = folium.Popup(station_html),icon=folium.Icon(color='green'if data.Codigo == 'cerveza'
else 'red',prefix = 'glyphicon',icon='tree-deciduous')
))
url = ('https://raw.githubusercontent.com/camiloceacarvajal/folium-python/master/descarga.jpeg')
FloatImage(url, bottom=4, left=92).add_to(m)
#layer controller
m.add_child(mc)
m.add_child(folium.LayerControl())
#mark latitude longitude of the point or place
m.add_child(folium.LatLngPopup())
#add a drawing toolbar
Draw(export=None).add_to(m)
#save the map to an HTML file
m.save('Preproyectolatas.02.html')
m
# -
| Proyectolatas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text Analysis
# ---
#
# ## Introduction
#
# Text Analysis is used for summarizing or extracting useful information from a large amount of unstructured text stored in documents. This opens up the opportunity of using text data alongside more conventional data sources (e.g., surveys and administrative data). The goal of text analysis is to take a large corpus of complex and unstructured text data and extract important and meaningful messages in a comprehensible way.
#
#
# Text Analysis can help with the following tasks:
#
# * **Information retrieval**: Help find relevant information in large databases, such as in a systematic literature review.
#
# * **Clustering and text categorization**: Techniques like topic modeling can summarize a large corpus of text by finding the most important phrases.
#
# * **Text Summarization**: Create category-sensitive text summaries of a large corpus of text.
#
# * **Machine Translation**: Translate from one language to another.
#
# In this tutorial, we are going to analyze job advertisements from 2010-2015 using topic modeling to examine the content of our data and document classification to tag the type of job in the advertisement. First we will go over how to transform our data into a matrix that can be read in by an algorithm.
#
#
#
# ## Glossary of Terms
#
# * **Corpus**: A corpus of documents is the set of all documents in the dataset.
#
# * **Tokenize**: Tokenization is the process by which text is separated into meaningful terms or phrases. In English this is fairly trivial, as words are separated by whitespace.
#
# * **Stemming**: Stemming is a type of text normalization where words that have different forms but the same essential meaning are normalized to the original dictionary form of a word. For example "go," "went," and "goes" all stem from the lemma "go."
#
# * **TFIDF**: TFIDF (Term frequency-inverse document frequency) is an example of feature engineering where the most important words are extracted by taking into account their frequency in individual documents and in the corpus of documents as a whole.
#
# * **Topic Modeling**: Topic modeling is an unsupervised learning method where groups of co-occurring words are clustered into topics. Typically, the words in a cluster should be related and make sense (e.g., boat, ship, captain). Individual documents can then fall into multiple topics.
#
# * **LDA**: LDA (latent Dirichlet allocation) is a type of probabilistic model commonly used for topic modelling.
#
# * **Stop Words**: Stop words are words that have little semantic meaning like prepositions, articles and common nouns. They can often be ignored.
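# To make the TFIDF entry above concrete, here is a minimal pure-Python sketch of one classic tf-idf weighting (library implementations such as sklearn's `TfidfTransformer` differ in smoothing and normalization details):

```python
import math
from collections import Counter

corpus = [
    "the boat sailed",
    "the captain sailed the boat",
    "the ship and the captain",
]
docs = [doc.split() for doc in corpus]
n_docs = len(docs)

def tf_idf(term, doc):
    tf = Counter(doc)[term] / len(doc)      # term frequency within this document
    df = sum(1 for d in docs if term in d)  # documents containing the term
    idf = math.log(n_docs / df)             # inverse document frequency
    return tf * idf

print(tf_idf("the", docs[0]))   # 0.0 -- appears in every document, so idf = 0
print(tf_idf("ship", docs[2]))  # positive weight for a rare, distinctive term
```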
# +
# %pylab inline
import nltk
import ujson
import re
import time
import progressbar
import pandas as pd
from __future__ import print_function
from six.moves import zip, range
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, roc_auc_score, auc
from sklearn import preprocessing
from collections import Counter, OrderedDict
from nltk.corpus import stopwords
from nltk import PorterStemmer
nltk.download('stopwords') #download the latest stopwords
# -
# # Load the Data
#
# Our dataset for this tutorial is a subset of job-ads data from 2010-2015 compiled by the Commonwealth of Virginia. The full data, and how this subset was created, can be found in the data folder of this tutorial.
df_jobs_data = pd.read_csv('./data/jobs_subset.csv')
# # Explore the Data
df_jobs_data.head()
# Our table has 4 fields: `normalizedTitle_onetName`, `normalizedTitle_onetCode`, `jobDescription`, `title`.
#
# Onet is an online database that contains hundreds of occupational definitions. https://en.wikipedia.org/wiki/Occupational_Information_Network
#
# The normalizedTitle_onetName and the normalizedTitle_onetCode are derived from the Onet database. We will use the names in the document-tagging portion of the tutorial. The jobDescription is the actual job description, and the title is derived from the jobDescription.
# ### How many unique job titles are in this dataset?
df_jobs_data.normalizedTitle_onetName.unique()
df_jobs_data.title.unique()
df_jobs_data.title.unique().shape
# There are 5 unique categories of jobs using the ONet classification. There are too many unique job titles in the title field to display. We can see the shape of the array of unique titles is 2496 titles.
#
# Each job description contains a great deal of information in unstructured text. We can use text analysis to find overarching concepts in our corpus. This will allow us to discover the most important words and phrases in the job descriptions and give us a big-picture view of the content in our collection.
#
#
# # Topic Modeling
#
# We are going to apply topic modeling, an unsupervised learning method, to our corpus to find the high-level topics as a first pass at exploring our data. As we apply topic modeling we will discuss ways of cleaning and preprocessing our data to get the best results.
#
# Topic modeling is a broad subfield of machine learning and natural language processing. We are going to focus on one approach, Latent Dirichlet allocation (LDA). LDA is a fully Bayesian extension of probabilistic latent semantic indexing, itself a probabilistic extension of latent semantic analysis.
#
# In topic modeling we first assume the existence of topics in the corpus and that some small number of topics can explain it. Topics, in this case, are ranked lists of words from our corpus, with the highest-probability words at the top. A single document can be explained by multiple topics. For instance, an article on net neutrality has to do with both technology and politics. The set of topics used by a document is known as the document's allocation; hence the name latent Dirichlet allocation: each document has an allocation of latent topics drawn from a Dirichlet distribution.
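# The idea of a per-document allocation can be pictured with a tiny hand-made example (the numbers below are illustrative, not a fitted model): each row of a document-topic matrix holds one document's topic proportions and sums to 1, and the dominant topic per document is the row-wise argmax.

```python
import numpy as np

# Rows: documents; columns: topics. Illustrative allocations only.
doctopic = np.array([
    [0.70, 0.20, 0.10],   # mostly topic 0
    [0.05, 0.05, 0.90],   # mostly topic 2
    [0.45, 0.45, 0.10],   # split between topics 0 and 1
])

assert np.allclose(doctopic.sum(axis=1), 1.0)  # each allocation is a distribution
majority_topic = doctopic.argmax(axis=1)       # dominant topic per document
print(majority_topic)  # [0 2 0]
```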
#
# ## Processing text data
#
# The first important step in working with text data is cleaning and processing the data. This includes, but is not limited to, *forming a corpus of text, tokenization, removing stop words, finding words collocated together (N-grams), and stemming and lemmatization*. Each of these steps will be discussed below. The ultimate goal is to transform our text data into a form an algorithm can work with, because a sequence of symbols cannot be fed directly into an algorithm. Algorithms expect numerical feature vectors of fixed size rather than documents of variable length. We will be transforming our text corpus into a *bag of n-grams* to be further analyzed. In this form our text data is represented as a matrix where each row refers to a specific job description (document) and each column is the occurrence of a word (feature).
#
#
# ### Bag of n-gram representation example
#
# Ultimately, we want to take our collection of documents (our corpus) and convert it into a matrix. Fortunately, sklearn has a pre-built object, `CountVectorizer`, that can tokenize, eliminate stop words, identify n-grams, and stem our corpus, outputting a matrix in one step. Before we apply the vectorizer to our corpus of data, we are going to apply it to a toy example so we can understand the output and how a bag of words is represented.
def create_bag_of_words(corpus,
NGRAM_RANGE=(0,1),
stop_words = None,
stem = False,
MIN_DF = 0.05,
MAX_DF = 0.95,
USE_IDF=False):
"""
Turn a corpus of text into a bag-of-words.
Parameters
-----------
corpus: ls
list of documents in corpus
NGRAM_RANGE: tuple
range of N-grams, default (0,1)
stop_words: ls
list of commonly occurring words that have little semantic
value
stem: bool
use a stemmer to stem words
MIN_DF: float
exclude words that have a frequency less than the threshold
MAX_DF: float
exclude words that have a frequency greater than the threshold
Returns
-------
bag_of_words: scipy sparse matrix
scipy sparse matrix of text
features:
ls of words
"""
#parameters for vectorizer
ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
if stem:
tokenize = lambda x: [stemmer.stem(i) for i in x.split()]
else:
tokenize = None
vectorizer = CountVectorizer(analyzer=ANALYZER,
tokenizer=tokenize,
ngram_range=NGRAM_RANGE,
stop_words = stop_words,
strip_accents=STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF)
bag_of_words = vectorizer.fit_transform( corpus ) #transform our corpus into a bag of words
features = vectorizer.get_feature_names()
if USE_IDF:
NORM = None #normalization is turned off
SMOOTH_IDF = True #prevents division-by-zero errors
SUBLINEAR_IDF = True #replace TF with 1 + log(TF)
transformer = TfidfTransformer(norm = NORM, smooth_idf = SMOOTH_IDF, sublinear_tf = SUBLINEAR_IDF)
#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
tfidf = transformer.fit_transform(bag_of_words)
return tfidf, features
else:
return bag_of_words, features
toy_corpus = ['this is document one', 'this is document two', 'text analysis on documents is fun']
toy_bag_of_words, toy_features = create_bag_of_words(toy_corpus)
# The `CountVectorizer` outputs a matrix -- in this case a sparse matrix, one with many more 0s than 1s. To save space, scipy has special methods for storing sparse matrices efficiently rather than saving many, many 0s.
toy_corpus
np_bag_of_words = toy_bag_of_words.toarray()
np_bag_of_words
toy_features
# Our data has been transformed into a 3x9 matrix where each row corresponds to a document and the columns correspond to the features. A 1 indicates the presence of a feature (word) in the document; a 0 indicates the word is not present. Our toy corpus is now ready to be analyzed. We illustrated the bag of n-grams with a toy example because the matrix for a much larger corpus would be much larger and harder to interpret.
# ##### word counts
#
# As an initial look into the data we can examine the top few words in our corpus. We can sum the columns of the bag_of_words and then convert the result to a numpy array. From here we can zip the features and word counts into a dictionary
# and display the results.
def get_word_counts(bag_of_words, feature_names):
"""
Get the ordered word counts from a bag_of_words
Parameters
----------
bag_of_words: obj
scipy sparse matrix from CountVectorizer
feature_names: ls
list of words
Returns
-------
word_counts: dict
Dictionary of word counts
"""
np_bag_of_words = bag_of_words.toarray()
word_count = np.sum(np_bag_of_words,axis=0)
np_word_count = np.asarray(word_count).ravel()
dict_word_counts = dict( zip(feature_names, np_word_count) )
orddict_word_counts = OrderedDict(
sorted(dict_word_counts.items(), key=lambda x: x[1], reverse=True), )
return orddict_word_counts
get_word_counts(toy_bag_of_words, toy_features)
# ### Text Corpora
#
# First we need to form our corpus, a set of multiple similar documents. In our case, our corpus is the set of all job descriptions. We can pull out the job descriptions from the data frame by accessing the underlying numpy array using the `.values` attribute.
corpus = df_jobs_data['jobDescription'].values #pull all the jobDescriptions and put them in a numpy array
corpus
def create_topics(tfidf, features, N_TOPICS=3, N_TOP_WORDS=5,):
"""
Given a matrix of features of text data generate topics
Parameters
-----------
tfidf: scipy sparse matrix
sparse matrix of text features
N_TOPICS: int
number of topics (default 3)
N_TOP_WORDS: int
number of top words to display in each topic (default 5)
Returns
-------
ls_keywords: ls
list of keywords for each topics
doctopic: array
numpy array with percentages of topic that fit each category
N_TOPICS: int
number of assumed topics
N_TOP_WORDS: int
Number of top words in a given topic.
"""
with progressbar.ProgressBar(max_value=progressbar.UnknownLength) as bar:
i=0
lda = LatentDirichletAllocation( n_topics= N_TOPICS,
learning_method='online') #create an object that will create 5 topics
bar.update(i)
i+=1
doctopic = lda.fit_transform( tfidf )
bar.update(i)
i+=1
ls_keywords = []
for i,topic in enumerate(lda.components_):
word_idx = np.argsort(topic)[::-1][:N_TOP_WORDS]
keywords = ', '.join( features[i] for i in word_idx)
ls_keywords.append(keywords)
print(i, keywords)
bar.update(i)
i+=1
return ls_keywords, doctopic
corpus_bag_of_words, corpus_features = create_bag_of_words(corpus)
# Let's examine our features.
corpus_features
# The first aspect of the feature list that should stand out is that the first few entries are numbers with no real semantic meaning. There are also other useless words, such as prepositions and articles, that have no semantic meaning. The words *ability* and *abilities*, or *accuracy* and *accurate*, are quite similar and mean the same thing. We should try cleaning our corpus of these types of words, as they just add noise to our analysis. Nevertheless, let's try creating topics and see the quality of the results.
get_word_counts(corpus_bag_of_words, corpus_features)
# Our top words are articles, prepositions and conjunctions that do not tell us anything about our corpus. Let's march on and create topics anyway.
ls_corpus_keywords, corpus_doctopic = create_topics(corpus_bag_of_words, corpus_features)
# Looking at these topics we have no real knowledge of what is in our corpus, with the exception that there are job ads written in Spanish. The problem is that the top words in the topics are conjunctions and prepositions that carry no semantic information. We have to clean and process our data to get more meaningful information.
# ### Text Cleaning and Normalization
#
# To clean and normalize text we will remove all special characters, numbers, and punctuation. Then we will make all the text lowercase to normalize the text; this is so words like "the" and "The" will be counted as the same in our analysis. To remove the special characters, numbers and punctuation we will use regular expressions.
#
# #### Regular Expressions
#
# >"Some people, when confronted with a problem, think
# >'I know, I'll use regular expressions.' Now they have two problems."
# > -- Jamie Zawinski
#
# Regular expressions, or regexes, match a certain amount of text in a document based on a set of rules and syntax. The name "regular expressions" actually comes from the mathematical theory they are based on. These rules are useful for pulling out useful information from a large amount of text (e.g., email addresses, html tags, credit card numbers). Regexes often match text much more quickly than plain-text searching and can reduce development time, but some regular expressions become quite complicated, and it may then be a better idea to write the logic in plain Python code. Any developer should keep in mind that there is a trade-off between optimization and understandability. In Python, a general philosophy is that code is meant to be as understandable by *people* as possible, so you should tend toward the understandability side of things rather than overly optimizing your code. Your future self, code reviewers, people who inherit your code, and anyone else who has to make sense of your code in the future will appreciate it.
#
# For our purposes we are going to use a regular expression to match all characters that are not letters -- punctuation, quotes, special characters and numbers -- replace them with spaces, and then make all the remaining characters lowercase.
#
# A full tutorial on regular expressions is outside the scope of this tutorial. There are many good tutorials online, and there is also a great interactive tool for developing and checking regular expressions, regex101.com.
#
# We will be using the `re` library in python for regular expression matching.
# +
#get rid of the punctuations and set all characters to lowercase
RE_PREPROCESS = r'\W+|\d+' #regular expression that matches runs of non-word characters or digits
#get rid of punctuation and make everything lowercase
#the code belows works by looping through the array of text
#for a given piece of text we invoke the `re.sub` command, passing in the regular expression and a space ' ' to
#substitute all the matching characters with
#we then invoke the `lower()` method on the output of the re.sub command
#to make all the remaining characters lowercase
#the cleaned document is then stored in a list
#once this list has been filled it is stored in a numpy array
processed_corpus = np.array( [ re.sub(RE_PREPROCESS, ' ', comment).lower() for comment in corpus] )
# -
# #### first description before cleaning
corpus[0]
# #### first description after cleaning
processed_corpus[0]
# All lowercase, and all numbers and special characters have been removed. Our text is now normalized.
# ### Tokenization
#
# Now that we have cleaned our text we can tokenize it by deciding which terms and phrases are the most meaningful. In this case we want to split our text into individual words. Our words are separated by spaces, so as an example we can use the `.split()` method to turn a document into a list of words, splitting on spaces. Normally the `CountVectorizer` handles this for us.
tokens = processed_corpus[0].split()
tokens
# ### Stopwords
#
# Stopwords are words that have very little semantic meaning and are found throughout a text. Having the word *the* or *of* will tell us nothing about our corpus, nor will they be meaningful features. Examples of stopwords are prepositions, articles and common nouns. To process the corpus even further we can eliminate these stopwords by checking whether they are in a list of commonly occurring stopwords.
#
eng_stopwords = stopwords.words('english')
#sample of stopwords
eng_stopwords[::10]
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,stop_words=eng_stopwords)
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
# Much better! Now let's see how this affects the topics that are produced. The top 20 words, though, are likely to appear in nearly all of the job ads, so let's add them to the stopwords to remove them as well.
top_20_words = list(dict_processed_word_counts.keys())[:20]
domain_specific_stopwords = eng_stopwords + top_20_words
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords)
dict_processed_word_counts = get_word_counts(processed_bag_of_words, processed_features)
dict_processed_word_counts
# This is a bit better. Let's see what topics we produce.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features)
# Now we are starting to get somewhere! There are a lot of jobs that have to do with law, engineering and medicine. We should increase the number of topics and the number of words for each topic to see if we can understand more from our corpus.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 5,
N_TOP_WORDS= 10)
# Adding more topics has revealed two larger subtopics. Let's see if using 10 topics will tell us more.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# It looks like we have a good set of topics. Some of the top words are quite similar, such as engineering and engineer. We can reduce those words to their stem to further refine our features.
# ### Stemming and lemmatization
#
# We can further process our text through *stemming and lemmatization*. Words can take on multiple forms with limited change to their meaning. For example, "systems", "systematic" and "system" are all different words, but they all have the same meaning, so we can replace them with "system" without losing much meaning. The lemma is the original dictionary form of a word (e.g., the lemma of "lying" is "lie"). There are several well-known stemming algorithms -- Porter, Snowball, Lancaster -- that all have strengths and weaknesses. For this tutorial we are using the Porter stemmer.
stemmer = PorterStemmer()
print(stemmer.stem('lies'))
print(stemmer.stem("lying"))
print(stemmer.stem('systematic'))
print(stemmer.stem("running"))
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True)
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# Now it appears we have picked up some extra topics that describe the educational requirements of a job ad or the equal-opportunity clause of a job ad.
# #### N-grams
#
# Individual words are not always the correct unit of analysis. Prematurely removing stopwords can lead to losing phrases such as "kick the bucket", "commander in chief", or "sleeps with the fishes". Identifying these N-grams requires looking for patterns of words that often appear together in fixed patterns.
#
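# A quick pure-Python illustration of how bigrams are formed from a token list (`CountVectorizer` does this internally when `ngram_range` includes 2):

```python
tokens = ["commander", "in", "chief", "of", "the", "army"]

# Pair each token with its successor to form bigrams.
bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
print(bigrams)  # ['commander in', 'in chief', 'chief of', 'of the', 'the army']
```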
# Now let's transform our corpus into a bag of n-grams which in this case is a bag of bi-grams or bag of 2-grams.
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True,
NGRAM_RANGE=(0,2))
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
# Notice one of the top words in one of the topics is "northrop grumman", a bi-gram!
# #### TFIDF (Term Frequency Inverse Document Frequency)
#
# A final step in cleaning and processing our text data is TFIDF. TFIDF (Term frequency-inverse document frequency) is an example of feature engineering where the most important words are extracted by taking into account their frequency in individual documents and in the corpus as a whole. Words that appear in all documents are de-emphasized while more meaningful words are emphasized.
processed_bag_of_words, processed_features = create_bag_of_words(processed_corpus,
stop_words=domain_specific_stopwords,
stem=True,
NGRAM_RANGE=(0,2),
USE_IDF = True)
dict_word_counts = get_word_counts(processed_bag_of_words,
processed_features)
dict_word_counts
# The word counts have been reweighted to emphasize the more meaningful words of the corpus while de-emphasizing those that are found throughout the corpus.
processed_keywords, processed_doctopic = create_topics(processed_bag_of_words,
processed_features,
N_TOPICS = 10,
N_TOP_WORDS= 15)
#grab the topic_id of the majority topic for each document and store it in a list
ls_topic_id = [np.argsort(processed_doctopic[comment_id])[::-1][0] for comment_id in range(len(corpus))]
df_jobs_data['topic_id'] = ls_topic_id #add to the dataframe so we can compare with the job titles
# Now that each row is tagged with a topic id, let's see how well the topics explain the job advertisements.
topic_num = 0
print(processed_keywords[topic_num])
df_jobs_data[ df_jobs_data.topic_id == topic_num ].head(10)
# # Supervised Learning: Document Classification.
#
# Now we turn our attention to supervised learning. Previously, using topic modelling, we were inferring relationships within the data. In supervised learning, we produce a label, *y*, given some data *x*. In order to produce labels we first need examples our algorithm can learn from, a training set. Developing a training set can be very expensive, as it can require a large amount of human labor or linguistic expertise. Document classification is the case where our x are documents and our y is the type of each document (e.g., the title of a job position). A common example of document classification is spam detection in emails. In sentiment analysis our x is again the documents, and y is the state of the author. This can range from an author being happy or unhappy with a product to the author being politically conservative or liberal. There is also part-of-speech tagging, where our x are individual words and y is the part of speech.
#
# In this section we are going to train a classifier to classify job titles using our jobs dataset.
#
#
#
# ## Load the Data
df_train = pd.read_csv('./data/train_corpus_document_tagging.csv')
df_test = pd.read_csv('./data/test_corpus_document_tagging.csv')
df_train.head()
df_train['normalizedTitle_onetName'].unique()
Counter(df_train['normalizedTitle_onetName'].values)
df_test.head()
df_test['normalizedTitle_onetName'].unique()
Counter(df_test['normalizedTitle_onetName'].values)
# Our data is job advertisements for credit analysts and financial examiners.
# ## Process our Data
#
# In order to feed our data into a classifier we need to pull out the labels, our y's, and a clean corpus of documents, our x's, for the training and testing sets.
train_labels = df_train.normalizedTitle_onetName.values
train_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_train.jobDescription.values])
test_labels = df_test.normalizedTitle_onetName.values
test_corpus = np.array( [re.sub(RE_PREPROCESS, ' ', text).lower() for text in df_test.jobDescription.values])
labels = np.append(train_labels, test_labels)
# Just as we did in the unsupervised learning section, we have to transform our data. This time we have to transform our testing and training sets into two separate bags-of-words. The classifier will learn from the training set, and we will evaluate the classifier's performance on the testing set.
# +
#parameters for vectorizer
ANALYZER = "word" #unit of features are single words rather than phrases of words
STRIP_ACCENTS = 'unicode'
TOKENIZER = None
NGRAM_RANGE = (0,2) #Range for phrases of words
MIN_DF = 0.01 # Exclude words that have a frequency less than the threshold
MAX_DF = 0.8 # Exclude words that have a frequency greater than the threshold
vectorizer = CountVectorizer(analyzer=ANALYZER,
tokenizer=None, # alternatively tokenize_and_stem but it will be slower
ngram_range=NGRAM_RANGE,
stop_words = stopwords.words('english'),
strip_accents=STRIP_ACCENTS,
min_df = MIN_DF,
max_df = MAX_DF)
# +
NORM = None  # no extra normalization of the TFIDF vectors
SMOOTH_IDF = True  # prevents division-by-zero errors
SUBLINEAR_IDF = True  # replace TF with 1 + log(TF)
USE_IDF = True  # flag to control whether to use TFIDF
transformer = TfidfTransformer(norm=NORM, smooth_idf=SMOOTH_IDF, sublinear_tf=SUBLINEAR_IDF)
#get the bag-of-words from the vectorizer and
#then use TFIDF to limit the tokens found throughout the text
start_time = time.time()
train_bag_of_words = vectorizer.fit_transform(train_corpus)  # fit on the training data only
test_bag_of_words = vectorizer.transform(test_corpus)
if USE_IDF:
    train_tfidf = transformer.fit_transform(train_bag_of_words)
    test_tfidf = transformer.transform(test_bag_of_words)
features = vectorizer.get_feature_names()
print('Time Elapsed: {0:.2f}s'.format(
    time.time() - start_time))
# -
# We cannot pass the label "Credit Analyst" or "Financial Examiner" into the classifier. Instead we need to encode them as 0s and 1s using the LabelEncoder from sklearn.
#relabel our labels as a 0 or 1
le = preprocessing.LabelEncoder()
le.fit(labels)
labels_binary = le.transform(labels)
# We also need to create arrays of indices so we can access the training and testing sets accordingly.
train_size = df_train.shape[0]
train_set_idx = np.arange(0,train_size)
test_set_idx = np.arange(train_size, len(labels))
train_labels_binary = labels_binary[train_set_idx]
test_labels_binary = labels_binary[test_set_idx]
# The classifier we are using in this example is LogisticRegression. As we saw in the Machine Learning tutorial, we first decide on a classifier, then we fit the classifier to create a model. We can then test the model by passing in the features of the testing set. The model will output the probability of each document being classified as a Credit Analyst or Financial Examiner ad.
clf = LogisticRegression(penalty='l1')  # note: newer sklearn also requires solver='liblinear' for l1
mdl = clf.fit(train_tfidf, labels_binary[train_set_idx])  # train the classifier to get the model
y_score = mdl.predict_proba(test_tfidf)  # probability of each ad being for a Credit Analyst or Financial Examiner
# ### Evaluation
#
def plot_precision_recall_n(y_true, y_prob, model_name):
    """
    y_true: ls
        list of ground-truth labels
    y_prob: ls
        list of predicted probabilities from the model
    model_name: str
        model name (e.g., LR_123)
    """
    from sklearn.metrics import precision_recall_curve
    y_score = y_prob
    precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)
    precision_curve = precision_curve[:-1]
    recall_curve = recall_curve[:-1]
    pct_above_per_thresh = []
    number_scored = len(y_score)
    for value in pr_thresholds:
        num_above_thresh = len(y_score[y_score >= value])
        pct_above_thresh = num_above_thresh / float(number_scored)
        pct_above_per_thresh.append(pct_above_thresh)
    pct_above_per_thresh = np.array(pct_above_per_thresh)
    plt.clf()
    fig, ax1 = plt.subplots()
    ax1.plot(pct_above_per_thresh, precision_curve, 'b')
    ax1.set_xlabel('percent of population')
    ax1.set_ylabel('precision', color='b')
    ax1.set_ylim(0, 1.05)
    ax2 = ax1.twinx()
    ax2.plot(pct_above_per_thresh, recall_curve, 'r')
    ax2.set_ylabel('recall', color='r')
    ax2.set_ylim(0, 1.05)
    plt.title(model_name)
    plt.show()
plot_precision_recall_n(labels_binary[test_set_idx], y_score[:,1], 'LR')
# If we examine our precision-recall curve we can see that precision is 1 and recall is 0.8 up to 40 percent of the population. Unlike the previous example, here we are not using a precision-at-k curve to prioritize our resources; instead we can use it to see which parts of the corpus can be tagged by the classifier and which should undergo a manual clerical review. Based on this we can decide which documents should be manually tagged by a person during a clerical review, say, the portion of the population above 40%.
#
# Alternatively, we can try to maximize the entire precision-recall space. In this case we need a different metric.
def plot_precision_recall(y_true, y_score):
    """
    Plot a precision-recall curve
    Parameters
    ----------
    y_true: ls
        ground-truth labels
    y_score: ls
        score output from the model
    """
    precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score[:, 1])
    plt.plot(recall_curve, precision_curve)
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    auc_val = auc(recall_curve, precision_curve)
    print('AUC-PR: {0:.2f}'.format(auc_val))
    plt.show()
plt.clf()
plot_precision_recall(labels_binary[test_set_idx],y_score)
# If we look at the area under the curve, 0.96, we see we have a very good classifier. The AUC shows how accurate our scores are under different cut-off thresholds. If you recall from the Machine Learning tutorial, the model outputs a score, and we then set a cutoff to bin each score as a 0 or 1. The closer our scores are to the true values, the more resilient they are to different cutoffs. For instance, if our scores were perfect, our AUC would be 1.
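# The cutoff idea can be illustrated on a few hypothetical scores (made-up numbers for illustration, not the jobs data): we bin each score as 0 or 1 at a cutoff and compare with the truth.

```python
import numpy as np

# Hypothetical model scores and ground-truth labels (not the jobs data).
scores = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.10])
y_true = np.array([1,    1,    1,    0,    1,    0])

def accuracy_at_cutoff(scores, y_true, cutoff):
    """Bin each score as 0/1 at the cutoff and compare with the truth."""
    y_pred = (scores >= cutoff).astype(int)
    return (y_pred == y_true).mean()

for cutoff in (0.25, 0.5, 0.75):
    print(cutoff, accuracy_at_cutoff(scores, y_true, cutoff))
```

# A metric like AUC summarizes how this accuracy behaves across all such cutoffs at once.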
# ## Feature Importances
def display_feature_importances(coef, features, labels, num_features=10):
    """
    output feature importances
    Parameters
    ----------
    coef: numpy array
        feature importances
    features: ls
        feature names
    labels: ls
        labels for the classifier
    num_features: int
        number of features to output (default 10)
    """
    dict_feature_importances = dict(zip(features, coef))
    orddict_feature_importances = OrderedDict(
        sorted(dict_feature_importances.items(), key=lambda x: x[1]))
    ls_sorted_features = list(orddict_feature_importances.keys())
    label0_features = ls_sorted_features[:num_features]
    label1_features = ls_sorted_features[-num_features:]
    print(labels[0], label0_features)
    print(labels[1], label1_features)
display_feature_importances(mdl.coef_.ravel(), features, ['Credit Analysts','Financial Examiner'])
# The feature importances tell us which words are the most relevant for predicting the type of job ad. We would expect words like credit, customer and candidate to be found in ads for a Credit Analyst, while words like review, officer and compliance would be found in ads for a Financial Examiner.
# ## Cross-validation
#
# Recall from the machine learning tutorial that we are seeking to find the most general pattern in the data, in order to have the most general model that will be successful at classifying new unseen data. Our previous strategy above was the *out-of-sample holdout set*. With this strategy we try to find a general pattern by randomly dividing our data into a test and training set based on some percentage split (e.g., 50-50 or 80-20). We train on the training set and evaluate on the test set, where we pretend the test set is unseen data. A significant drawback of this approach is that we may be lucky or unlucky with our random split. A possible solution is to create many random splits into training and testing sets and evaluate each split to estimate the performance of a given model.
#
# A more sophisticated holdout training and testing procedure is *cross-validation*. In cross-validation we split our data into k-folds or k-partions, usually 5 or 10 folds. We then iterate k times. In each iteration one of the folds is used as a testing set and the rest of the folds are combined to form the training set. We can then evaluate the performance at each iteration to estimate the performance of a given method. An advantage of using cross-validation is all examples of data are used in the training set at least once.
#
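# The k-fold mechanics described above can be sketched with plain NumPy (a minimal illustration of the splitting itself, separate from the notebook's StratifiedKFold): shuffle the indices, cut them into k folds, and use each fold once as the test set.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled folds; yield (train, test) pairs."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Every example appears in exactly one test fold.
all_test = np.concatenate([test for _, test in kfold_indices(10, 5)])
print(sorted(all_test.tolist()))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

# This is why, unlike a single random holdout, every example is tested exactly once across the k iterations.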
def create_test_train_bag_of_words(train_corpus, test_corpus):
    """
    Create test and training set bag-of-words
    Parameters
    ----------
    train_corpus: ls
        raw text of the training corpus.
    test_corpus: ls
        raw text of the testing corpus.
    Returns
    -------
    (train_tfidf, test_tfidf): scipy sparse matrix
        TFIDF-weighted bag-of-words representation of the train and test corpus
    features: ls
        ls of words used as features.
    """
    #parameters for vectorizer
    ANALYZER = "word"  # unit of features are single words rather than phrases of words
    STRIP_ACCENTS = 'unicode'
    TOKENIZER = None
    NGRAM_RANGE = (1, 2)  # range for phrases of words (the lower bound must be >= 1)
    MIN_DF = 0.01  # exclude words that have a frequency less than the threshold
    MAX_DF = 0.8   # exclude words that have a frequency greater than the threshold
    vectorizer = CountVectorizer(analyzer=ANALYZER,
                                 tokenizer=None,  # alternatively tokenize_and_stem, but it will be slower
                                 ngram_range=NGRAM_RANGE,
                                 stop_words=stopwords.words('english'),
                                 strip_accents=STRIP_ACCENTS,
                                 min_df=MIN_DF,
                                 max_df=MAX_DF)
    NORM = None  # no extra normalization of the TFIDF vectors
    SMOOTH_IDF = True  # prevents division-by-zero errors
    SUBLINEAR_IDF = True  # replace TF with 1 + log(TF)
    USE_IDF = True  # flag to control whether to use TFIDF
    transformer = TfidfTransformer(norm=NORM, smooth_idf=SMOOTH_IDF, sublinear_tf=SUBLINEAR_IDF)
    #get the bag-of-words from the vectorizer and
    #then use TFIDF to limit the tokens found throughout the text
    train_bag_of_words = vectorizer.fit_transform(train_corpus)
    test_bag_of_words = vectorizer.transform(test_corpus)
    if USE_IDF:
        train_tfidf = transformer.fit_transform(train_bag_of_words)
        test_tfidf = transformer.transform(test_bag_of_words)
    features = vectorizer.get_feature_names()
    return train_tfidf, test_tfidf, features
# +
# note: sklearn.cross_validation is the legacy module name;
# in sklearn >= 0.20 use: from sklearn.model_selection import StratifiedKFold
from sklearn.cross_validation import StratifiedKFold
train_labels_binary = le.transform(train_labels)
cv = StratifiedKFold(train_labels_binary, n_folds=5)
for i, (train, test) in enumerate(cv):
    cv_train = train_corpus[train]
    cv_test = train_corpus[test]
    bag_of_words_train, bag_of_words_test, feature_names = create_test_train_bag_of_words(cv_train,
                                                                                          cv_test)
    probas_ = clf.fit(bag_of_words_train,
                      train_labels_binary[train]).predict_proba(bag_of_words_test)
    cv_test_labels = train_labels_binary[test]
    precision_curve, recall_curve, pr_thresholds = precision_recall_curve(cv_test_labels,
                                                                          probas_[:, 1])
    auc_val = auc(recall_curve, precision_curve)
    plt.plot(recall_curve, precision_curve, label='AUC-PR {0} {1:.2f}'.format(i, auc_val))
plt.ylim(0, 1.05)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc="lower left", fontsize='x-small')
# -
# In this case we did 5-fold cross-validation and plotted precision-recall curves for each iteration. We can then average the AUC-PR of each iteration to estimate the performance of our method.
# ## Examples of tagging
# +
num_comments = 2
label0_comment_idx = y_score[:, 1].argsort()[:num_comments]   # most confidently Credit Analyst
label1_comment_idx = y_score[:, 1].argsort()[-num_comments:]  # most confidently Financial Examiner
test_set_labels = labels[test_set_idx]
#convert back to the indices of the original dataset
top_comments_testing_set_idx = np.concatenate([label0_comment_idx,
                                               label1_comment_idx])
#these are the 4 documents the model is most sure of
for i in top_comments_testing_set_idx:
    print(
        u"""{}:{}\n---\n{}\n===""".format(test_set_labels[i],
                                          y_score[i, 1],
                                          test_corpus[i]))
# -
# These are the top-2 examples for each label that the model is most sure of. We can see our important feature words in the ads and how the model classified these advertisements.
# # Further Resources
# A great resource for NLP in python is
# [Natural Language Processing with Python](https://www.amazon.com/Natural-Language-Processing-Python-Analyzing/dp/0596516495)
# # Exercises
# Work through the Reddit_TextAnalysis.ipynb notebook.
| curriculum/2_data_exploration_and_analysis/text-analysis/JobsData_TextAnalysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# encoding: UTF-8
# %matplotlib inline
"""
展示如何执行参数优化。
"""
from __future__ import division
from __future__ import print_function
from vnpy.trader.app.ctaStrategy.ctaBacktesting import BacktestingEngine, MINUTE_DB_NAME, OptimizationSetting
from vnpy.trader.app.ctaStrategy.ctaBase import loadContractDetail
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import csv
import time
import sys
def run_optimization(strategy_name, symbol, start_date, end_date,
                     parameter, slippage=1, rate=0.3/10000):
    """Run the optimization."""
    #--------------------------------------------------
    # load the strategy class
    file_name = 'strategy' + strategy_name.replace('Strategy', '')
    file_name = 'vnpy.trader.app.ctaStrategy.strategy.{}'.format(file_name)
    import_file = 'from {} import {}'.format(file_name, strategy_name)
    # if the module has already been imported, remove the existing module
    if file_name in sys.modules.keys():
        del sys.modules[file_name]
    exec(import_file)
    strategy_moudle = eval(strategy_name)
    #--------------------------------------------------
    # set backtesting parameters
    start_date = start_date  # backtest start date
    end_date = end_date      # backtest end date
    symbol = symbol          # contract to backtest
    slippage = slippage      # slippage, in price ticks
    rate = rate              # commission rate
    con_d = loadContractDetail(symbol)
    size = con_d['trade_size']        # contract size
    price_tick = con_d['price_tick']  # minimum price increment
    # set the optimization parameters and target
    setting = OptimizationSetting()               # create an optimization settings object
    setting.setOptimizeTarget('returnRiskRatio')  # rank results by return/risk ratio
    # check whether an ordering constraint exists between parameters
    # (setting must be created before these attributes are assigned)
    if 'largerParameter' in parameter.keys():
        setting.largerParameter = parameter['largerParameter']
        setting.smallerParameter = parameter['smallerParameter']
        del parameter['largerParameter']
        del parameter['smallerParameter']
    for p in parameter:
        setting.addParameter(p, parameter[p][0],
                             parameter[p][1], parameter[p][2])  # parameter to optimize: start, end, step
    #--------------------------------------------------------------------------------------------------
    # create the backtesting engine
    engine = BacktestingEngine()
    # set the engine to bar (K-line) mode
    engine.setBacktestingMode(engine.BAR_MODE)
    # set the date range of the historical data
    engine.setStartDate(start_date)
    engine.setEndDate(end_date)
    # set product-related parameters
    engine.setSlippage(slippage * price_tick)  # slippage
    engine.setRate(rate)                       # commission
    engine.setSize(size)                       # contract size
    engine.setPriceTick(price_tick)            # minimum price increment
    # set the historical database to use
    engine.setDatabase(MINUTE_DB_NAME, symbol)
    # run the optimization
    # benchmark environment: i7-3770 @ 3.4 GHz, 8 cores, 16 GB RAM, Windows 7 Professional
    # other programs were running during the test, so timings are indicative only
    start = time.time()
    # single-process optimization (results printed automatically), took 359 s:
    # result_list = engine.runOptimization(strategy_moudle, setting)
    # multi-process optimization, took 89 s:
    result_list = engine.runParallelOptimization(strategy_moudle, setting)
    # save the results
    file_name = strategy_moudle.__name__ + '.csv'
    cols = ['parameter', 'optTarget'] + list(result_list[0][2].keys())
    s_data = pd.DataFrame(columns=cols)
    for r in result_list:
        new_r = dict({'parameter': r[0]}, **{'optTarget': r[1]})
        new_r = dict(new_r, **r[2])
        s_data = s_data.append(pd.Series(new_r), ignore_index=True)
    s_data.to_csv(file_name)
    # draw a heat map for each pair of optimized parameters
    r1 = eval(result_list[0][0])
    keys = list(r1.keys())
    indexs = []
    for k in keys:
        d = []
        for res in result_list:
            dic = eval(res[0])
            d.append(dic[k])
        indexs.append(d)
    for i in range(len(keys) - 1):
        for j in range(i + 1, len(keys)):
            a = list(set(indexs[i]))
            b = list(set(indexs[j]))
            a.sort()
            b.sort()
            data = pd.DataFrame(np.zeros((len(a), len(b))), index=a, columns=b)
            for r in result_list:
                dic = eval(r[0])
                data.loc[dic[keys[i]], dic[keys[j]]] = float(r[1])
            f, ax = plt.subplots(figsize=(10, 4))
            cmap = sns.cubehelix_palette(start=1, rot=3, gamma=0.8, as_cmap=True)
            sns.heatmap(data, cmap=cmap, linewidths=0.05, ax=ax)
            ax.set_title('Optimization')
            ax.set_ylabel(keys[i])
            ax.set_xlabel(keys[j])
            plt.show()
    print(u'Elapsed: %s' % (time.time() - start))
# +
strategy_name = 'MarketStableStrategy'
start_date = '20160101'
end_date = '20181010'
symbol = 'IF000'
parameter = {'threshold': (0.07, 0.1, 0.01),
'lossLimit': (0.004, 0.006, 0.001)}
run_optimization(strategy_name, symbol, start_date,
end_date, parameter)
# -
| auto_trade_system/CtaBacktesting/run_optimization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''base'': conda)'
# language: python
# name: python37364bitbasecondac31dea6cb7d64c599821de538c12ddbc
# ---
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
import numpy as np
# +
import pandas as pd
import statsmodels.api as sm
def forward_regression(X, y,
                       threshold_in,
                       verbose=False):
    initial_list = []
    included = list(initial_list)
    while True:
        changed = False
        excluded = list(set(X.columns) - set(included))
        new_pval = pd.Series(index=excluded)
        for new_column in excluded:
            model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included + [new_column]]))).fit()
            new_pval[new_column] = model.pvalues[new_column]
        best_pval = new_pval.min()
        if best_pval < threshold_in:
            best_feature = new_pval.idxmin()
            included.append(best_feature)
            changed = True
            if verbose:
                print('Add {:30} with p-value {:.6}'.format(best_feature, best_pval))
        if not changed:
            break
    return included
def backward_regression(X, y,
                        threshold_out,
                        verbose=False):
    included = list(X.columns)
    while True:
        changed = False
        model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included]))).fit()
        # use all coefs except intercept
        pvalues = model.pvalues.iloc[1:]
        worst_pval = pvalues.max()  # null if pvalues is empty
        if worst_pval > threshold_out:
            changed = True
            worst_feature = pvalues.idxmax()
            included.remove(worst_feature)
            if verbose:
                print('Drop {:30} with p-value {:.6}'.format(worst_feature, worst_pval))
        if not changed:
            break
    return included
# -
df = pd.read_excel(r'data_statsproj.xlsx')
print (df)
sec=1
y='BusStat2'
df_reg=df.loc[df.Section==1]
# +
lasso = Lasso()
parameters = {"alpha":[1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]}
lasso_regression = GridSearchCV(lasso, parameters, scoring='neg_mean_squared_error', cv=3)
reg_out_sec1_lasso=lasso_regression.fit(df_reg[['SocialMediaHrs','SleepTime','SleepHrs','Attention','Gender']].values, df_reg[[y]])
reg_out_sec1_lasso.best_estimator_.coef_
# -
lasso_regression.best_params_
linreg=LinearRegression()
reg_out_sec1=linreg.fit(df_reg[['SocialMediaHrs','SleepTime','Attention']].values.astype(float), df_reg[[y]])
reg_out_sec1.coef_
from statsmodels.regression import linear_model
import statsmodels.api as sm
model = sm.OLS( df_reg[[y]], df_reg[['SocialMediaHrs','SleepTime','Attention']].values.astype(float))
results = model.fit()
print(results.params)
print(results.pvalues)
print(results.summary())
fig = plt.figure(figsize=(12,8))
# plot_regress_exog expects the fitted results object and an exog name;
# the design matrix was passed as a bare array, so its columns are named x1..x3
fig = sm.graphics.plot_regress_exog(results, 'x1', fig=fig)
plt.savefig('test2offline_resid.png', dpi=600)
out=forward_regression(df_reg[['SocialMediaHrs','SleepTime','SleepHrs','Attention','Gender']],df_reg[['BusStat1']],threshold_in=0.9,verbose=True)
| test2_offline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "skip"}
# %%HTML
<style>
.rendered_html table, .rendered_html th, .rendered_html tr, .rendered_html td {
font-size: 100%;
}
</style>
# + [markdown] slideshow={"slide_type": "slide"}
# # Numerical Methods
#
# ## Elements of numerical analysis
#
# ### dr hab. inż. <NAME>, Prof. nadzw.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## General information
# - Department of Automatic Control and Robotics, C3, room 214
# - Office hours
#   - Thursdays 11:00-12:00
#     (unless there is a Faculty Board meeting or a seminar)
#   - <EMAIL>
# - lectures available here: https://github.com/jerzybaranowski/public_lectures
# + [markdown] slideshow={"slide_type": "slide"}
# # Number representation
# + [markdown] slideshow={"slide_type": "slide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Binary code
#
# - Writing a number using two symbols, **1** and **0**
# - The basis of the modern way of representing information
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Ancient history
#
# - Pingala, Chandaḥśāstra and prosody
#   - ca. 4th century BCE
#   - Used notation in the form of zeros and ones to describe poetic metre
# - China, hexagrams, <NAME>, I-Ching
# - Leibniz
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Boolean algebra
#
# $$
# \begin{align}
# x \land y & = xy & \mathrm{Conjunction}\\
# x \lor y & = x+y-xy & \mathrm{Disjunction}\\
# \neg x & =1-x & \mathrm{Negation}\\
# x \rightarrow y & = (\neg x\lor y) & \mathrm{Implication}\\
# x \oplus y & = (x \lor y)\land\neg(x\land y) & \mathrm{EXOR}\\
# x = y & = \neg(x\oplus y) & \mathrm{Equivalence}\\
# \end{align}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Somewhat less ancient history
# - 1937 Shannon – relay-based implementation of binary operations and Boolean algebra
# - 1937 Stibitz – first relay computer (addition)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Binary code
# | **0**   | **0**   | **1**   | **0**   | **1**   | **0**   | **1**   | **1**   |
# |---------|---------|---------|---------|---------|---------|---------|---------|
# | $2^{7}$ | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |
#
# Which gives $ =2^5+2^3+2^1+2^0=32+8+2+1=43$
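A quick check of the slide's example in Python (the built-in `int` parses base-2 strings directly; the manual sum reproduces the positional weights from the table):

```python
# Verify the slide's example: binary 00101011 equals 43.
bits = '00101011'
value = int(bits, 2)  # built-in base-2 parsing
# Manual positional computation: bit i (from the left) has weight 2**(7 - i).
manual = sum(int(b) << (7 - i) for i, b in enumerate(bits))
print(value, manual, bin(43))  # → 43 43 0b101011
```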
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Natural numbers
# - In general the range is 0 to 2<sup>n</sup>-1
# - 8 bit – range 0 to 255
# - 16 bit – range 0 to 65,535 (short, int)
# - 32 bit – range 0 to 4,294,967,295 (long)
#
# In Python and Matlab we do not worry much about types, unless we enforce them
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Operations on binary numbers
#
# - Addition
#   - 0+0=0
#   - 0+1=1
#   - 1+0=1
#   - 1+1=0, carry 1
# - Just like long addition
#
# ``  1 1 1 1 1    ``(carried digits)
# ``    0 1 1 0 1  ``(13<sub>10</sub>)
# ``+   1 0 1 1 1  ``(23<sub>10</sub>)
# ``------------   ``
# ``=1 0 0 1 0 0   `` (36<sub>10</sub>)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Operations on binary numbers
#
# - Subtraction
#   - 0-0=0
#   - 0-1=1, borrow 1
#   - 1-0=1
#   - 1-1=0
# - Analogously to long subtraction
#
# ``      *   * * *  ``(borrows)
# ``  1 1 0 1 1 1 0  ``(110<sub>10</sub>)
# ``-     1 0 1 1 1  ``(23<sub>10</sub>)
# ``---------------  ``
# ``= 1 0 1 0 1 1 1  `` (87<sub>10</sub>)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## What about negative numbers?
#
# We extend the notation with a so-called sign bit
#
# | **1**   | **0**   | **1**   | **0**   | **1**   | **0**   | **1**   | **1**   |
# |---------|---------|---------|---------|---------|---------|---------|---------|
# | S       | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |
#
# Which gives $ =(-1)^1(2^5+2^3+2^1+2^0)=-(32+8+2+1)=-43$
#
# The ranges change:
# - 8 bit (-128 to 127)
# - 16 bit (−32,768 to 32,767)
# - etc.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Problems
#
# - Impractical notation
# - Results of operations have to be re-encoded
# - Potentially more error-prone
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Two's complement code (U2)
# | **1**    | **1**   | **1**   | **1**   | **1**   | **0**   | **1**   | **1**   |
# |----------|---------|---------|---------|---------|---------|---------|---------|
# | $-2^{7}$ | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |
#
# Which gives $ =-2^7+2^6+2^5+2^4+2^3+2^1+2^0$
#
# $=-128+64+32+16+8+2+1=-5$
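The U2 (two's complement) interpretation above can be reproduced in a few lines: read the bits as an unsigned number and, if the leading bit is 1, subtract 2^n.

```python
def u2_value(bits):
    """Interpret a bit string as an n-bit two's-complement number."""
    n = len(bits)
    raw = int(bits, 2)  # unsigned value of the bit pattern
    # A leading 1 means the -2**(n-1) weight is active: subtract 2**n overall.
    return raw - (1 << n) if bits[0] == '1' else raw

print(u2_value('11111011'))  # → -5, as on the slide
print(u2_value('00101011'))  # → 43, positive numbers are unchanged
```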
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Very easy conversion
# - Positive numbers stay the same as they were
# - To turn a number into its negative, simply invert all the bits and add 1 to the result (*in both directions*)
#
# | **0**    | **0**   | **0**   | **0**   | **0**   | **1**   | **0**   | **1**   | 5<sub>10</sub>  | original  |
# |----------|---------|---------|---------|---------|---------|---------|---------|-----------------|-----------|
# | 1        | 1       | 1       | 1       | 1       | 0       | 1       | 0       |                 | inversion |
# | **1**    | **1**   | **1**   | **1**   | **1**   | **0**   | **1**   | **1**   | -5<sub>10</sub> | add 1     |
# | 0        | 0       | 0       | 0       | 0       | 1       | 0       | 0       |                 | inversion |
# | **0**    | **0**   | **0**   | **0**   | **0**   | **1**   | **0**   | **1**   | 5<sub>10</sub>  | add 1     |
# | -$2^{7}$ | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |                 |           |
# + [markdown] slideshow={"slide_type": "subslide"}
# ## What do we gain?
# - Subtraction becomes addition (almost)
# $$ A - B = A + \neg B + 1$$
# - Example: 13 – 7 (on 8 bits)
#
# ``  1 1 1 1 1      ``(carried digits)
# ``  0 0 0 0 1 1 0 1``(13<sub>10</sub>)
# ``  1 1 1 1 1 0 0 0``(negation of 7<sub>10</sub>)
# ``+               1``(the extra one)
# ``-----------------``
# ``= 0 0 0 0 0 1 1 0`` (6<sub>10</sub>)
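The trick A − B = A + ¬B + 1 can be checked directly, masking to an 8-bit word so the arithmetic wraps around just like in hardware:

```python
MASK = 0xFF  # 8-bit word

def sub_via_complement(a, b):
    """Compute a - b as a + (bitwise NOT of b) + 1, all modulo 2**8."""
    return (a + ((~b) & MASK) + 1) & MASK

print(sub_via_complement(13, 7))  # → 6, matching the slide
print(sub_via_complement(0, 1))   # → 255, i.e. -1 in 8-bit U2
```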
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Operations on binary numbers
# Multiplication also resembles long multiplication
#
# ``          1 0 1 1`` 11<sub>10</sub>
# ``        * 1 0 1 0`` 10<sub>10</sub>
# `` ----------------``
# ``          0 0 0 0``
# ``    +   1 0 1 1  ``
# ``    + 0 0 0 0    ``
# ``  + 1 0 1 1      ``
# `` ----------------``
# ``  = 1 1 0 1 1 1 0`` 110<sub>10</sub>
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## What about fractions?
# There are two ways of writing non-integer numbers
# - Fixed point
# - Floating point
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Fixed-point notation
# | **1**   | **0**   | **1**    | **1**    | **1**    | **0**    | **0**    | **0**    |
# |---------|---------|----------|----------|----------|----------|----------|----------|
# | $2^{1}$ | $2^{0}$ | $2^{-1}$ | $2^{-2}$ | $2^{-3}$ | $2^{-4}$ | $2^{-5}$ | $2^{-6}$ |
#
# $$
# 2^1+2^{-1}+2^{-2}+2^{-3}=2+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}=2.875
# $$
#
#
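The slide's fixed-point layout (2 integer bits, 6 fractional bits) is just an integer scaled by 2^6, which is easy to mimic:

```python
FRAC_BITS = 6
SCALE = 1 << FRAC_BITS  # 2**6 = 64

def to_fixed(x):
    """Encode a real number as an integer scaled by 2**FRAC_BITS."""
    return int(round(x * SCALE))

def from_fixed(q):
    """Decode a fixed-point integer back into a real number."""
    return q / SCALE

q = to_fixed(2.875)
print(q, bin(q), from_fixed(q))  # → 184 0b10111000 2.875
```

Note that `0b10111000` matches the bit pattern in the table above.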
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Advantages of fixed-point notation
# - No difference in encoding
# - Constant, known precision, which we can shape fairly precisely
# - Relative simplicity
# - Low hardware requirements
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Drawbacks of fixed-point notation
# Precision problems, e.g. the number 0.1 cannot be represented exactly
# - With 3 fractional bits the difference is 0.025
# - With 7 fractional bits the difference is about 0.001
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## How do we perform operations?
# - We perform operations treating a fixed-point number as an ordinary binary one
# - The U2 code still works
# - Remember that the number is then multiplied by 2<sup>n</sup>, where n is the number of fractional bits
# - Both operands must have the same number of integer and fractional bits
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Fixed-point operations
# - Addition is performed identically
# - For multiplication, the result must be divided by 2<sup>n</sup>
# - Multiplying fixed-point numbers by a power of 2 is just a bit shift (very easy to implement)
#
# | 1       | 0       | 1        | 1        | 1        | 0        | 0        | 0        |                           |
# |---------|---------|----------|----------|----------|----------|----------|----------|---------------------------|
# | 0       | 0       | 1        | 0        | 1        | 1        | 1        | 0        | Division by $2^{2}$       |
# | $2^{1}$ | $2^{0}$ | $2^{-1}$ | $2^{-2}$ | $2^{-3}$ | $2^{-4}$ | $2^{-5}$ | $2^{-6}$ |                           |
# + [markdown] slideshow={"slide_type": "slide"}
# ## Floating-point format
# - A more advanced way of representing numbers
# - Standardized by an IEEE norm
# - In some respects more accurate
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Floating-point format
#
# Representation of a number
#
# $$
# x=S\cdot M\cdot B^E
# $$
#
# - S – sign
# - M – mantissa (also called the *fraction* or significand)
# - B – base (usually 2, less often 10)
# - E – exponent
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Mantissa
# - The number responsible for the fractional part of the representation
# - Fixed-point format, usually a number in the interval [1,2)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Base and exponent
#
# - Allow a wide range of magnitudes to be covered
# - Because of the encoding, the base is usually 2
# - The exponent can be negative or positive.
# - The exponent is encoded in U2, or a bias is introduced
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Operations on floating-point numbers
# Addition and subtraction
#
# $$
# x_1\pm x_2=\left(M_1\pm M_2\cdot B^{E_2-E_1}\right)\cdot B^{E_1}
# $$
#
# Multiplication and division
#
# $$
# x_1\cdot x_2=(S_1\cdot S_2)\cdot (M_1\cdot M_2)\cdot B^{E_1+E_2}
# $$
#
# $$
# x_1 / x_2=(S_1\cdot S_2)\cdot (M_1/ M_2)\cdot B^{E_1-E_2}
# $$
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Division
# - Once fractional numbers can be represented, the division operation can be formulated.
# - There are many algorithms, e.g.
#   - *restoring division*
#   - *non-restoring division*
#   - SRT
#   - the Newton–Raphson algorithm
#   - the Goldschmidt algorithm
# - They are already implemented; one division typically requires 3-4 multiplications
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Important formats – IEEE single precision
# 
# - 8 exponent bits, exponent biased by 127 (mapping −126 to 127 onto 1 to 254)
# - 24 mantissa bits, but only the 23 after the point are stored; the digit before the point is always 1
# - Special encodings for infinities and errors
# - in NumPy - ``float32``
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Important formats – IEEE double precision
# 
# - 11 exponent bits, exponent biased by 1023 (mapping −1022 to 1023 onto 1 to 2046)
# - 53 mantissa bits, but only the 52 after the point are stored; the digit before the point is always 1
# - Special encodings for infinities and errors
# - in NumPy - ``float64``; in fact essentially every number in Python and Matlab is a double, unless we force otherwise
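NumPy exposes these format parameters directly via `np.finfo`, and also lets us see that 0.1 has no exact binary representation in either format:

```python
import numpy as np

# Mantissa bits (after the point) and machine epsilon for both IEEE formats.
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, info.bits, info.nmant, info.eps)

# 0.1 is not exactly representable in binary floating point:
print('%.20f' % np.float64(0.1))  # → 0.10000000000000000555
```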
# + [markdown] slideshow={"slide_type": "slide"}
# ## Displaying numbers
# - Plain notation
# - Engineering notation
#   - $3700=3.7\cdot10^3$, $0.12=120\cdot10^{-3}$
# - Scientific notation
#   - ``3700=3.7E3``, ``0.12=1.2E-1``
# + [markdown] slideshow={"slide_type": "slide"}
# # Numerical errors
# + [markdown] slideshow={"slide_type": "slide"}
# ## Basic definitions
# Exact value
# $$y=\tilde{y}+\varepsilon$$
# - $\tilde{y}$ - approximate value
# - $\varepsilon$ - error
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Absolute error
# The absolute value of the difference between the exact and the approximate solution
# $$ \varepsilon=|y-\tilde{y}|$$
# + [markdown] slideshow={"slide_type": "-"}
# ## Relative error
# The ratio of the absolute error to the absolute value of the solution
# $$\eta=\frac{|y-\tilde{y}|}{|y|}=\left|\frac{y-\tilde{y}}{y}\right|=\left|1-\frac{\tilde{y}}{y}\right|$$
# Sometimes the relative error is expressed as a percentage
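Both definitions are one-liners in Python; applied to the square-root example on the next slide they reproduce its numbers:

```python
import math

def abs_error(y, y_approx):
    """Absolute error |y - y_approx|."""
    return abs(y - y_approx)

def rel_error(y, y_approx):
    """Relative error |y - y_approx| / |y|."""
    return abs(y - y_approx) / abs(y)

y = math.sqrt(122)
print(round(abs_error(y, 11), 5), round(rel_error(y, 11), 5))  # → 0.04536 0.00411
```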
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Examples
# Square root of 122
#
# $$
# \begin{align}
# y{}&=\sqrt{122}\approx 11.04536\\
# \tilde{y}{}&=11\\
# \varepsilon{}&=|y-\tilde{y}|=0.04536\\
# \eta{}&=\frac{|y-\tilde{y}|}{|y|}=0.00411
# \end{align}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Examples
# Number of citizens of <NAME> (as of the last national census, 2011)
#
# $$
# \begin{align}
# y{}&=38\ 538\ 447\\
# \tilde{y}{}&=38\ 500\ 000\\
# \varepsilon{}&=|y-\tilde{y}|=38\ 447\\
# \eta{}&=\frac{|y-\tilde{y}|}{|y|}=9.97627\cdot10^{-4}\approx 0.001
# \end{align}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Examples
# Computing the gravitational constant
# $$
# \begin{align}
# y{}&=6.673841\cdot10^{-11}\\
# \tilde{y}{}&=6.7\cdot10^{-11}\\
# \varepsilon{}&=|y-\tilde{y}|=2.6159\cdot10^{-13}\\
# \eta{}&=\frac{|y-\tilde{y}|}{|y|}=0.00391
# \end{align}
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# ## Sources of errors
# Errors arising when formulating the problem
# - Measurement errors
# - Errors resulting from adopting particular approximations of the physical phenomena
#
# Errors arising during computation
# - Gross errors (mistakes)
# - Method (truncation) errors
# - Rounding errors
# + [markdown] slideshow={"slide_type": "slide"}
# ## Gross errors
# - A mistake when typing a formula into the computer,
# e.g. ``x=A/b`` instead of ``x=A\b``
# - Incorrect implementation of an algorithm
# - Wrong order of operations
# + [markdown] slideshow={"slide_type": "slide"}
# ## Method (truncation) errors
# - Truncation errors are an inherent part of numerical computation.
# - A truncation error is the error resulting from the fact that obtaining the exact solution would require performing infinitely many operations
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Examples of method errors
# It can be shown that
# $$
# \begin{align}
# \sin x={}&x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\ldots=\\
# ={}&\sum\limits_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{(2n+1)!}
# \end{align}
# $$
# A truncation error is incurred by the approximation
# $$
# \sin x\approx x-\frac{x^3}{3!}+\frac{x^5}{5!}
# $$
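#
# The size of this truncation error can be checked numerically. A minimal sketch (our own illustration, not part of the lecture): for $x=0.5$ the first omitted term, $x^7/7!\approx 1.55\cdot10^{-6}$, bounds the error of the three-term approximation.

```python
import math

x = 0.5
# three-term Taylor approximation of sin(x)
approx = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)
trunc_err = abs(math.sin(x) - approx)
print(trunc_err)  # about 1.5e-6, below the first omitted term x**7/7!
```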
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Examples of method errors
#
# The bisection method
# + slideshow={"slide_type": "-"}
def bisection(f, a, b, N):
    """Approximate a root of f in [a, b] using N bisection steps."""
    a_n = a
    b_n = b
    for n in range(1, N + 1):
        m_n = (a_n + b_n) / 2       # midpoint of the current interval
        f_m_n = f(m_n)
        if f(a_n) * f_m_n < 0:      # the root lies in the left half
            b_n = m_n
        elif f(b_n) * f_m_n < 0:    # the root lies in the right half
            a_n = m_n
        else:                       # f(m_n) == 0: exact root found
            return m_n
    return (a_n + b_n) / 2
# + [markdown] slideshow={"slide_type": "subslide"}
# We look for a root of the polynomial $x^2-2$ in the interval $[1,2]$. The solution is $\sqrt{2}$.
# + slideshow={"slide_type": "-"}
f = lambda x: x**2 - 2 # define the function
bisection(f,1,2,5) # 5 steps
# -
bisection(f,1,2,10) # 10 steps
bisection(f,1,2,15) # 15 steps
import numpy as np
np.sqrt(2)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Method error - a summary
# - Practically every numerical method has some method error
# - Good algorithms, however, provide an estimate of it, so we know how far we are from the solution even if we stop the computation early
# + [markdown] slideshow={"slide_type": "slide"}
# ## Rounding errors
# Another source of errors that cannot be fully removed, and over which we have less control than over the method error
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Rounding and significant digits
# A number $\tilde{y}=\mathrm{rd}(y)$ is correctly rounded to *d* decimal places if
#
# $$
# \varepsilon=|y-\tilde{y}|\leq\frac{1}{2}\cdot10^{-d}
# $$
# The *k*-th decimal digit of $\tilde{y}$ is called significant if
# $$|y-\tilde{y}|\leq\frac{1}{2}\cdot10^{-k}$$
# and
# $$|\tilde{y}|\geq10^{-k}
# $$
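#
# A quick numerical check of this definition (our own sketch, reusing the $\sqrt{122}$ example): rounding to $d$ decimal places keeps the error within $\frac{1}{2}\cdot10^{-d}$.

```python
import math

y = math.sqrt(122)
for d in range(1, 5):
    y_tilde = round(y, d)           # rd(y): round to d decimal places
    err = abs(y - y_tilde)
    print(d, y_tilde, err)
```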
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Floating-point arithmetic in practice
# $$
# \begin{align}
# \mathrm{fl}(x+y)={}&\mathrm{rd}(x+y)\\
# \mathrm{fl}(x-y)={}&\mathrm{rd}(x-y)\\
# \mathrm{fl}(x\cdot y)={}&\mathrm{rd}(x\cdot y)\\
# \mathrm{fl}(x/y)={}&\mathrm{rd}(x/y)\\
# \end{align}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Machine numbers
# - A machine number is a number that can be represented exactly in the computer. We denote the set of such numbers by A
# - Machine precision (machine epsilon), eps or $\varepsilon_m$, is defined as:
# $$
# \mathrm{eps}=\min\{x\in{A}\colon \mathrm{fl}(1+x)>1,\ x>0\}
# $$
# In other words, it is the smallest number we can add to 1 and obtain something greater than 1.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Machine epsilon in different formats
#
# It depends on the number of bits in the fractional part (the mantissa)
# - Single precision: $\varepsilon_m=2^{-24}\approx 5.96\cdot10^{-8}$
# - Double precision: $\varepsilon_m=2^{-53}\approx 1.11\cdot10^{-16}$
#
# Example
# -
a=10**(-15)
b=10**(-17)
1+a>1,1+b>1
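#
# The double-precision threshold can also be verified directly (our own sketch): $1+2^{-53}$ rounds back to $1$ under round-to-nearest, while $1+2^{-52}$ does not.

```python
# 2**-53 is exactly halfway between 1 and the next double;
# IEEE 754 ties round to even, so the sum rounds back to 1.0
print(1.0 + 2.0**-53 > 1.0)  # False
print(1.0 + 2.0**-52 > 1.0)  # True
```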
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Maximum representation error
# For every real number $x$ there exists a number $\varepsilon$ with $|\varepsilon|<\varepsilon_m$ such that
# $\mathrm{fl}(x)=x(1+\varepsilon)$
#
# This means that **the relative error between a real number and its nearest floating-point representation is always smaller than $\varepsilon_m$**
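#
# A quick check of this bound (our own sketch): representing $0.1$ in single precision gives a relative error below that format's $\varepsilon_m=2^{-24}$.

```python
import numpy as np

x = 0.1
rel_err = abs(float(np.float32(x)) - x) / x  # |fl(x) - x| / |x|
print(rel_err, rel_err < 2.0**-24)
```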
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Wilkinson's lemma
# Rounding errors made while performing floating-point operations are equivalent to substitute perturbations of the numbers on which the operations are performed
#
# $$
# \begin{align}
# \mathrm{fl}(x+y)={}&(x+y)(1+\varepsilon_1)\\
# \mathrm{fl}(x-y)={}&(x-y)(1+\varepsilon_2)\\
# \mathrm{fl}(x\cdot y)={}&(x\cdot y)(1+\varepsilon_3)\\
# \mathrm{fl}(x/y)={}&(x/y)(1+\varepsilon_4)\\
# |\varepsilon_i|<{}&\varepsilon_m
# \end{align}
# $$
# (for each pair of numbers $x,\ y$ the substitute perturbations $\varepsilon_i$ are different)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## A consequence of Wilkinson's lemma
# The laws of associativity and distributivity of arithmetic operations do not, in general, hold for floating-point computations
#
# ### Example
# -
a=np.float32(0.23371258*10**(-4))
b=np.float32(0.33678429*10**(2))
c=np.float32(-0.33677811*10**(2))
print([a,b,c])
# We want to compute ``a+b+c``
# + [markdown] slideshow={"slide_type": "subslide"}
# ## The computations
# + slideshow={"slide_type": "-"}
## Approach 1
d=b+c
wynik_1=a+d
print(wynik_1)
# + slideshow={"slide_type": "-"}
## Approach 2
e=a+b
wynik_2=e+c
print(wynik_2)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## What just happened here?
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Consequences of floating-point computations
# + slideshow={"slide_type": "-"}
m_a, e_a = np.frexp(a)
print(m_a,e_a)
m_b,e_b = np.frexp(b)
print(m_b,e_b)
m_c,e_c = np.frexp(c)
print(m_c,e_c)
# -
# The exponent of ``a`` differs from the exponents of ``b`` and ``c`` by 21. This means that, of the 23 mantissa bits of ``a``, only the 2 most significant ones remain after aligning it to a common exponent with ``b``.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Consequences, cont.
# When we add a small number to a large one, we must always expect some rounding; that is normal. Here, however, the two large numbers ``b`` and ``c`` have opposite signs and are close in absolute value. The result of that operation:
# -
m_d,e_d = np.frexp(d)
print(m_d,e_d)
print(wynik_2)
# Consequently, when adding ``a`` to ``d``, rounding costs us only 5 mantissa bits of ``a``.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## How large is our error (relative to a more precise computation)?
# +
a_dbl=(0.23371258*10**(-4))
b_dbl=(0.33678429*10**(2))
c_dbl=(-0.33677811*10**(2))
d_dbl=b_dbl+c_dbl
wynik_dbl=a_dbl+d_dbl
epsilon_1=np.abs((wynik_1)-wynik_dbl)
eta_1=epsilon_1/np.abs(wynik_dbl)
print("Method 1: absolute error %10.2e, relative error %10.2e"%(epsilon_1,eta_1))
epsilon_2=np.abs((wynik_2)-wynik_dbl)
eta_2=epsilon_2/np.abs(wynik_dbl)
print("Method 2: absolute error %10.2e, relative error %10.2e"%(epsilon_2,eta_2))
# + [markdown] slideshow={"slide_type": "slide"}
# # Analysis of numerical algorithms
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Big-O notation
# - We say that a quantity depending on a parameter, e.g. $F(n)$, satisfies
# $$
# F(n)=O(G(n))
# $$
# if there exists a constant $C$ such that, for $n$ tending to infinity (sufficiently large), we have
# $$F(n)\leq C\,G(n)$$
# - If we are interested in $O(c)$, where $c$ is a constant, the relation must hold regardless of the size of the parameter.
# - When the error is $O(n^2)$, we say colloquially that the error is of order $n^2$
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Assessing an algorithm
# - Our goal is to compute some quantity $f(x)$ that depends on the input data $x$
# - Computer calculations are always approximate, so we denote the algorithm that computes $f(x)$ by $f^*(x)$
# - The data stored in the computer are also rounded, so we denote them by $x^*$
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Conditioning of a problem
#
# - We say that the problem $f(x)$ is well-conditioned if a small change in $x$ causes a small change in $f(x)$
# - The problem is ill-conditioned if a small change in $x$ causes a large change in $f(x)$
# - Conditioning is measured by the constant $\kappa$ (kappa), which (informally) is the largest ratio of the perturbation of $f(x)$ to the smallest perturbation of $x$ that produced it.
# - The constant $\kappa$ can be computed only for some problems
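#
# For matrices, $\kappa$ is available directly (our own sketch): a nearly singular matrix has a huge condition number, so small perturbations of the data can be amplified enormously when solving $Ax=b$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])      # almost linearly dependent rows
print(np.linalg.cond(A))           # about 4e4: an ill-conditioned problem
print(np.linalg.cond(np.eye(2)))   # 1.0: a perfectly conditioned problem
```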
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Accuracy of an algorithm
# - An algorithm is accurate if
# $$
# \frac{\Vert f^*(x)-f(x) \Vert}{\Vert f(x)\Vert}=O(\varepsilon_m)
# $$
# - Guaranteeing that an algorithm is accurate in this sense is extremely difficult, especially for ill-conditioned problems
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Stability of an algorithm
#
# We say that an algorithm is stable if for every $x$
# $$
# \frac{\Vert f^*(x)-f(x^*) \Vert}{\Vert f(x^*)\Vert}=O(\varepsilon_m)
# $$
# holds for some $x^*$ such that
# $$\frac{\Vert x-x^* \Vert}{\Vert x\Vert}=O(\varepsilon_m)$$
# In other words:
# **A stable algorithm gives nearly the right answer to nearly the right question**
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Backward stability of an algorithm
# An algorithm is backward stable if for every $x$
# $$f^*(x)=f(x^*)$$
# holds for some $x^*$ such that
# $$\frac{\Vert x-x^* \Vert}{\Vert x\Vert}=O(\varepsilon_m)$$
# In other words:
# **A backward stable algorithm gives exactly the right answer to nearly the right question**
#
#
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Accuracy of backward stable algorithms for ill-conditioned problems
# If an algorithm is backward stable, its relative error degrades in proportion to the condition number, i.e. it is $O(\kappa\varepsilon_m)$
| Metody Numeryczne/Lecture 1 (errors and stuff)/Lecture 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create, train, and test the multi layer perceptron model
#install python libraries (optional)
# !pip install --upgrade scikit-learn
# !pip install pandas
# !pip install pyyaml h5py
# !pip install seaborn
# +
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
import sklearn
from sklearn.metrics import auc, average_precision_score, roc_curve, precision_recall_curve, roc_auc_score, confusion_matrix
from sklearn.preprocessing import StandardScaler
# other libraries
import numpy as np
import pandas as pd
import sys
import pickle
#plotting
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
print('tensorflow-' + tf.__version__)
print('python-' + sys.version)
print('sklearn-' + sklearn.__version__)
print('numpy-' + np.__version__)
print('pandas-' + pd.__version__)
# +
#load numeric column names for scaling
with open('numeric_columns.pickle', 'rb') as f:
nu_cols = pickle.load(f)
def onc_plot_cm(y_true, y_predictions, proba_threshold=0.5):
'''
Plot the confusion matrix
'''
cm = confusion_matrix(y_true, y_predictions > proba_threshold)
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(proba_threshold))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Survival Detected (True Negatives): ', cm[0][0])
print('Legitimate Survival Incorrectly Detected (False Positives): ', cm[0][1])
print('Deceased Missed (False Negatives): ', cm[1][0])
print('Deceased Detected (True Positives): ', cm[1][1])
print('Total Deceased: ', np.sum(cm[1]))
print('Total Survived: ', np.sum(cm[0]))
plt.show()
def onc_plot_roc(train_y, train_predictions, test_y, test_predictions, **kwargs):
'''
Plot the training and test set roc curves and return the test ROC curve results
'''
train_false_positives, train_true_positives, _ = roc_curve(train_y, train_predictions)
train_roc_auc_score = auc(train_false_positives, train_true_positives)
test_false_positives, test_true_positives, _ = roc_curve(test_y, test_predictions)
test_roc_auc_score = auc(test_false_positives, test_true_positives)
plt.plot(
100*train_false_positives, 100*train_true_positives,
label=r'Train ROC MLP model (AUC = %0.3f)' % (train_roc_auc_score),
linewidth=2,
linestyle='--'
)
plt.plot(
100*test_false_positives, 100*test_true_positives,
label=r'Test ROC MLP model (AUC = %0.3f)' % (test_roc_auc_score),
linewidth=2,
**kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
#plt.xlim([-0.5,20])
#plt.ylim([80,100.5])
plt.grid(False)
ax = plt.gca()
ax.set_aspect('equal')
plt.show()
return(test_false_positives, test_true_positives, test_roc_auc_score)
def onc_plot_precision_recall(train_y, train_predictions, test_y, test_predictions, **kwargs):
'''
Plot the training and test set pr curves and return the test pr curve results
'''
train_precision, train_recall, _ = precision_recall_curve(train_y, train_predictions)
train_ap_score = average_precision_score(train_y, train_predictions)
test_precision, test_recall, _ = precision_recall_curve(test_y, test_predictions)
test_ap_score = average_precision_score(test_y, test_predictions)
    # plot precision (not 1-precision) so the curve matches the axis label below
    plt.plot(
        100*train_recall, 100*train_precision,
        label=r'Train Precision-Recall Curve MLP model (AUC = %0.3f)' % (train_ap_score),
        linewidth=2,
        linestyle='--'
    )
    plt.plot(
        100*test_recall, 100*test_precision,
        label=r'Test Precision-Recall Curve MLP model (AUC = %0.3f)' % (test_ap_score),
        linewidth=2,
        **kwargs)
plt.ylabel('Precision (PPV) [%]')
plt.xlabel('Recall (Sensitivity) [%]')
#plt.xlim([-0.5,20])
#plt.ylim([80,100.5])
plt.grid(False)
ax = plt.gca()
ax.set_aspect('equal')
plt.show()
return(test_precision, test_recall, test_ap_score)
# -
# # final model
# +
METRICS = [
tf.keras.metrics.TruePositives(name='tp'),
tf.keras.metrics.FalsePositives(name='fp'),
tf.keras.metrics.TrueNegatives(name='tn'),
tf.keras.metrics.FalseNegatives(name='fn'),
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='auc'),
tf.keras.metrics.AUC(name='auc_pr',
num_thresholds=200,
curve="PR",
summation_method="interpolation",
dtype=None,
thresholds=None,
multi_label=False,
label_weights=None)
]
# put in the best hyperparameters from the cross validation tuning
def final_build_mlp(
layers=2,
neurons=16,
output_bias=None,
optimizer='Adam',
activation='relu',
learn_rate=.0002,
dropout_rate=0.2,
kernel_regularizer='l2',
metrics=METRICS
):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = tf.keras.Sequential()
#add one or more dense layers
for i in range(layers):
model.add(Dense(
neurons,
activation=activation,
input_shape=(294,),
kernel_regularizer=kernel_regularizer)
)
model.add(Dropout(dropout_rate))
model.add(Dense(
1,
activation='sigmoid',
bias_initializer=output_bias))
    opt = Adam(learning_rate=learn_rate)
model.compile(
optimizer=opt,
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
# -
def get_data(imp):
with open('complete' + str(imp) + '.pickle', 'rb') as f:
dataset = pickle.load(f)
    ##Training set = subsets 0-4
X_train = dataset[dataset.subset <= 4].copy().sort_values(by = 'usrds_id')
y_train = np.array(X_train.pop('died_in_90'))
##validation set = subsets 5-6
X_val = dataset[(dataset.subset == 6) | (dataset.subset == 5)].copy().sort_values(by = 'usrds_id')
y_val = np.array(X_val.pop('died_in_90'))
print('shape val ' + str(X_val.shape))
# test set = subsets 7 8 9
# sorting by usrds_id is important so that we can calculate the fairness (or you could just run the predictions again.)
X_test = dataset[dataset.subset > 6].copy().sort_values(by = 'usrds_id')
y_test = np.array(X_test.pop('died_in_90'))
# scale the numeric features by fitting on the training set
scaler = StandardScaler()
X_train[nu_cols] = scaler.fit_transform(X_train[nu_cols])
X_train = np.array(X_train.drop(columns=['subset','usrds_id','impnum']))
print('scaled shape train ' + str(X_train.shape))
# use the model from scaling on the training data to transform val and test sets
X_val[nu_cols] = scaler.transform(X_val[nu_cols])
X_val = np.array(X_val.drop(columns=['subset','usrds_id','impnum']))
print('scaled shape val ' + str(X_val.shape))
X_test[nu_cols] = scaler.transform(X_test[nu_cols])
X_test = np.array(X_test.drop(columns=['subset','usrds_id','impnum']))
print('scaled shape test ' + str(X_test.shape))
return(X_train, y_train, X_val, y_val, X_test, y_test)
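# `get_data` fits the `StandardScaler` on the training split only and reuses those
# statistics for the validation and test splits, which avoids leaking information
# from held-out data into training. A minimal illustration of that pattern (our own
# toy data, not the USRDS features):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(np.array([[0.0], [10.0]]))   # train stats: mean=5, std=5
transformed = scaler.transform(np.array([[5.0], [20.0]]))  # held-out rows use TRAIN stats
print(transformed)  # [[0.], [3.]] -- 20 maps to (20-5)/5 = 3, not to its own z-score
```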
# # Train model and test
# +
date_final = 2021
imp = 0
y_test_all = []
y_pred_proba_all = []
fpr_list=[]
tpr_list=[]
roc_auc_list=[]
seed = 78
np.random.seed(seed)
prec_list = []
rec_list = []
thresh_list = []
ap_score_list = []
#for each imputed set
for i in range(5):
imp = imp + 1
X_train, y_train, X_val, y_val, X_test, y_test = get_data(imp)
#optimal params for the .fit
class_weight_10 = {0: 1, 1: 10}
epochs_final = 10
batches = 256
#instantiate model defined above
final_model = final_build_mlp()
#train the final model on the train/validation sets
final_history = final_model.fit(
X_train,
y_train,
batch_size=batches,
epochs=epochs_final,
validation_data=(X_val, y_val),
class_weight=class_weight_10)
#results from training
train_predictions_final = final_model.predict(
X_train,
batch_size=batches
)
#test the model on new data (test set)
test_predictions_final = final_model.predict(
X_test,
batch_size=batches
)
final_eval = final_model.evaluate(
X_test,
y_test,
batch_size=batches,
verbose=1
)
#print results of test set
res = {}
for name, value in zip(final_model.metrics_names, final_eval):
print(name, ': ', value)
        res[name] = value
#plot confusion matrix results
onc_plot_cm(y_test, test_predictions_final)
#plot roc auc results
test_false_positives, test_true_positives, test_roc_auc_score = onc_plot_roc(
y_train.ravel(),
train_predictions_final,
y_test.ravel(),
test_predictions_final
)
#collect results of the test roc_curve for saving
fpr_list.append(test_false_positives)
tpr_list.append(test_true_positives)
roc_auc_list.append(test_roc_auc_score)
#collect results of the test precision recall curve for saving
test_precision, test_recall, test_ap_score = onc_plot_precision_recall(
y_train.ravel(),
train_predictions_final,
y_test.ravel(),
test_predictions_final)
prec_list.append(test_precision)
rec_list.append(test_recall)
ap_score_list.append(test_ap_score)
#collect results
y_test_all.append(y_test.ravel())
y_pred_proba_all.append(test_predictions_final)
#save dicts of results
with open(str(date_final)+'_MLP_final_results_imp_' + str(imp) + '.pickle', 'wb') as f:
pickle.dump(res, f)
# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
final_model.save(str(date_final)+'_MLP_final_model_imp_' + str(imp) + '.h5')
with open(str(date_final)+'_MLP_final_eval_imp_' + str(imp) + '.pickle','wb') as f:
pickle.dump(final_eval, f)
# save metrics from all imputations for plotting
with open(str(date_final)+'_MLP_final_ytest_all.pickle', 'wb') as f:
pickle.dump(y_test_all, f)
with open(str(date_final)+'_MLP_final_ypred_all.pickle', 'wb') as f:
pickle.dump(y_pred_proba_all, f)
#save roc auc data
with open(str(date_final)+'_MLP_final_fpr.pickle', 'wb') as f:
pickle.dump(fpr_list, f)
with open(str(date_final)+'_MLP_final_tpr.pickle', 'wb') as f:
pickle.dump(tpr_list, f)
with open(str(date_final)+'_MLP_final_auc.pickle', 'wb') as f:
pickle.dump(roc_auc_list, f)
#save precision recall curve AUC data
with open(str(date_final)+'_MLP_final_prec.pickle', 'wb') as f:
pickle.dump(prec_list, f)
with open(str(date_final)+'_MLP_final_recall.pickle', 'wb') as f:
pickle.dump(rec_list, f)
with open(str(date_final)+'_MLP_final_avgprec_thresh.pickle', 'wb') as f:
pickle.dump(thresh_list, f)
with open(str(date_final)+'_MLP_final_avgprec_score.pickle', 'wb') as f:
pickle.dump(ap_score_list, f)
# -
final_model.save(str(date_final)+'_MLP_final_model_imp_' + str(imp) + '.h5')
| multilayer_perceptron/3_mlp_final_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import pandas as pd
#input file fields as they are saved into the UKF output file
my_cols=['NIS_radar','NIS_laser']
with open('obj_pose-laser-radar-ukf-output.txt') as f:
table_ekf_output = pd.read_table(f, sep='\t', header=None, names=my_cols, lineterminator='\n')
# -
#check the parsed file
table_ekf_output[0:5]
plt.plot(table_ekf_output['NIS_radar'])
plt.plot((0, len(table_ekf_output['NIS_radar'])), (7.815, 7.815), 'k-')
plt.title('NIS for radar')
plt.figure()  # start a new figure so the lidar plot does not draw over the radar plot
plt.plot(table_ekf_output['NIS_laser'])
plt.plot((0, len(table_ekf_output['NIS_laser'])), (5.991, 5.991), 'k-')
plt.title('NIS for lidar')
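# The horizontal reference lines are the 95th-percentile chi-square values for the
# measurement dimensions (3 degrees of freedom for radar: rho, phi, rho-dot; 2 for
# lidar: px, py). They can be recomputed rather than hard-coded (a sketch, assuming
# scipy is available):

```python
from scipy.stats import chi2

nis_95_radar = chi2.ppf(0.95, df=3)
nis_95_lidar = chi2.ppf(0.95, df=2)
print(round(nis_95_radar, 3), round(nis_95_lidar, 3))  # 7.815 5.991
```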
| NIS_visualisation.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Results report
#
# ## 0. Introduction
# This document presents a summary of the results obtained with the implementation of the block-elimination method, using the SVD approximation computed via the **One-Sided Jacobi** algorithm. Note that the matrices used have dimensions of at most $10^3 \times 10^3$ entries.
#
# ## 1. Considerations
#
# The numerical experiments carried out for this report are based on the following premises:
#
# [Pending: to be written]
#
# In particular, for each experiment performed we report:
#
# * the parameters used in the simulations,
# * the dimensions of the matrices and vectors involved, as well as the pseudo-random procedure that generated them,
# * 1) the time required to run the experiments,
# * 2) the condition number of the pseudo-random matrices, and
# * 3) the approximate condition number of the matrix.
#
# ### 1. Considerations regarding the infrastructure used
#
# The characteristics of the machine(s) on which the experiments were run are:
#
# [Pending: to be written]
#
# ## 2. Numerical experiments
#
# [Pending: to be written]
#
# In this regard, we note that the experiments were designed to test ....
#
# [Pending: to be written]
#
# **We load previously developed code**
#
# * **utils.R:** contains...
# * **00-load.R:** contains...
# +
## Install the required packages
rm(list = ls())
paquetes <- c('matrixcalc')
instalar <- function(paquete) {
  if (!require(paquete, character.only = TRUE, quietly = TRUE, warn.conflicts = FALSE)) {
    install.packages(as.character(paquete), dependencies = TRUE, repos = "http://cran.us.r-project.org")
    library(paquete, character.only = TRUE, quietly = TRUE, warn.conflicts = FALSE)
  }
}
lapply(paquetes, instalar)
# +
## Load the required packages
library("matrixcalc")
#source("metadata.R")
source("utils.R")
source("00-load.R")
# -
# ### 2.1 Experiment 1
#
# **Objective:** [Pending: to be written]
#
#
# +
set.seed(231)
n = 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-8
z <- eliminacion_bloques(A, b, n/2, TOL, 5)
norm(A%*%z - b, "2")/norm(b, "2")
# -
# ### 2.2 Experiment 2
#
# **Objective:** [Pending: to be written]
#
# ### 2.3 Experiment 3
#
# **Objective:** [Pending: to be written]
#
#
# ## 3. Main findings
#
# The following findings stand out from the implementation developed to solve a linear system $Ax=b$ with the block-elimination method, based on solving smaller systems by approximating the associated SVD with the **One-Sided Jacobi** algorithm:
#
# * Finding 1: [Pending]
# * Finding 2: [Pending]
# * Finding 3: [Pending]
# * Finding 4: [Pending]
#
# Regarding the problems and issues we faced while implementing this algorithm:
# ## 4. Future work
#
# [Pending]
#
# ## 5. Conclusions
#
# [Pending]
#
#
| results/.ipynb_checkpoints/Reporte_resultados-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown]
# # CELESTIAL MECHANICS
#
# <NAME> 161003449
# <NAME> 161003415
#
# Task: extract the equations from the celestial mechanics document.
#
# Measured parameters of the binary pulsar:
#
# * Eccentricity: $e = 0.617155 \pm 0.000007$
# * Orbital period: $T = 27\,906.98172 \pm 0.00005\ \mathrm{s}$
# * Projected semi-major axis of the pulsar: $a_1\sin i = 2.3424 \pm 0.0007$ light-seconds $= (7.0223 \pm 0.002)\times 10^{10}\ \mathrm{cm}$
# * Rate of periastron advance: $\dot\omega = 4.226 \pm 0.002\ \mathrm{yr}^{-1}$
# * Sine of the inclination angle: $\sin i = 0.81 \pm 0.16$
# * Mass function: $f = 0.13126 \pm 0.00002\ M_\odot$
#
# The position of the new centre of mass after the explosion is given by
# $$\vec R=\frac{(qm_1)\vec r_1+m_2\vec r_2}{qm_1+m_2}$$
# where $qm_1$ is the mass remaining after the explosion of $m_1$.
#
# The total energy of the relative motion of the binary system after the explosion is
# $$E=\frac{1}{2}(qm_1)v_1^2+\frac{1}{2}m_2v_2^2-\frac{1}{2}(qm_1+m_2)\dot R^2-\frac{G(qm_1)m_2}{a_0}$$
# It is related to the relative velocity of the binary before the explosion by
# $$v^2=\frac{G(m_1+m_2)}{a_0}$$
# The total energy of the binary after the explosion can be reduced to
# $$E=\frac{1}{2}mv^2\,\frac{q\,(m_1-m_2-2qm_1)}{qm_1+m_2}$$
#
# ## THE BLAAUW CRITERION
# It states that, for the system to become unbound, the mass lost must exceed the mass remaining in the binary:
# $$(m_1-qm_1)>(qm_1+m_2)$$
# The Blaauw criterion is derived most easily by requiring that the square of the relative velocity before the explosion
# exceed the square of the escape velocity after the explosion:
# $$\frac{G(m_1+m_2)}{a_0}=v^2>v_{esc}^2=\frac{2G(qm_1+m_2)}{a_0}$$
# The angular momentum of the final binary about the centre of mass is given by
# $$l'=l_0\,\frac{q(m_1+m_2)}{qm_1+m_2}\equiv l_0\,f_1(m_1,m_2,q)$$
# The eccentricity of the final orbit satisfies
# $$a(1-e^2)=\frac{l'^2}{m'k'}$$
# Then, if the initial orbit is circular, it is given by
# $$a_0=\frac{l_0^2}{mk}$$
# where $m$ is the reduced mass of the original binary and $k=Gm_1m_2$.
# For a given value of $m_1/m_2$ and of the eccentricity of the final orbit, there is only one allowed value of $q$, given by
# $$q=\frac{1}{1+e}\left[1-\frac{e}{m_1/m_2}\right]$$
# The areal velocity of the original binary:
# $$\frac{dA_0}{dt}=\frac{1}{2}r_0^2\dot\theta_0=\frac{l_0}{2m}$$
# The velocity of the centre of mass after the explosion can be obtained from
# $$V_{cm}=\frac{(1-q)m_2v_2}{qm_1+m_2}$$
# The centre of mass moves in the direction of motion of the unexploded star with velocity
# $$V_{cm}=\frac{m(1-q)}{qm_1+m_2}\left(\frac{G(m_1+m_2)}{a_0}\right)^{1/2}$$
# The minimum relative velocity of the binary:
# $$V_{min}=\left(\frac{1-e}{1+e}\right)^{1/2}v$$
# The equations of motion are
# $$\frac{d}{dt}\left(\frac{dx_1}{dt}\right)=\frac{Gm_2}{r^2}\cos\theta,\qquad
# \frac{d}{dt}\left(\frac{dy_1}{dt}\right)=\frac{Gm_2}{r^2}\sin\theta$$
# The angle subtended by the two bodies is obtained, with $|w_2-w_1|=v_\infty$, from
# $$\cos\varphi=\frac{v_1}{(v_1^2+v_\infty^2)^{1/2}}=\frac{m_2}{[m_1^2-2qm_1(m_1+m_2)]^{1/2}}$$
# Kepler's equation:
# $$E-e\sin E=\frac{2\pi}{T}(t-t_0)=n(t-t_0)=M$$
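#
# Kepler's equation is transcendental in the eccentric anomaly, so in practice it is solved iteratively. A minimal Newton-iteration sketch (our own code; `solve_kepler` is not from the source document):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve E - e*sin(E) = M for the eccentric anomaly E (radians)."""
    E = M if e < 0.8 else math.pi          # standard starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.617155)        # eccentricity of the binary above
print(E - 0.617155 * math.sin(E))          # recovers M = 1.0
```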
| tarea5/Tarea5-04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Confidence Graphs: Representing Model Uncertainty in Deep Learning
#
# **<NAME>** <br>
# <<EMAIL>> • https://mlwave.com
#
# **<NAME>**<br>
# <<EMAIL>> • https://matheusfacure.github.io/
#
# _Last run 2021-10-08 with kmapper version 2.0.1_
#
# ## Introduction
#
# Variational inference [(MacKay, 2003)](http://www.inference.org.uk/mackay/itila/) gives a computationally tractible measure of uncertainty/confidence/variance for machine learning models, including complex black-box models, like those used in the fields of gradient boosting [(Chen et al, 2016)](https://dl.acm.org/citation.cfm?id=2939785) and deep learning [(Schmidhuber, 2014)](https://arxiv.org/abs/1404.7828).
#
# The $MAPPER$ algorithm [(Singh et al, 2007)](https://research.math.osu.edu/tgda/mapperPBG.pdf) \[.pdf\] from Topological Data Analysis [(Carlsson, 2009)](http://www.ams.org/journals/bull/2009-46-02/S0273-0979-09-01249-X/) turns any data or function output into a graph (or simplicial complex) which is used for data exploration [(Lum et al, 2009)](https://www.nature.com/articles/srep01236), error analysis [(Carlsson et al, 2018)](https://arxiv.org/abs/1803.00384), serving as input for higher-level machine learning algorithms [(Hofer et al, 2017)](https://arxiv.org/abs/1707.04041), and more.
#
# Dropout [(Srivastava et al, 2014)](http://jmlr.org/papers/v15/srivastava14a.html) can be viewed as an ensemble of many different sub-networks inside a single neural network, which, much like bootstrap aggregation of decision trees [(Breiman, 1996)](https://dl.acm.org/citation.cfm?id=231989), aims to combat overfit. Viewed as such, dropout is applicable as a Bayesian approximation [(Rubin, 1984)](https://projecteuclid.org/euclid.aos/1176346785) in the variational inference framework [(Gal, 2016)](http://www.cs.ox.ac.uk/people/yarin.gal/website/thesis/thesis.pdf) (.pdf)
#
# Interpretability is useful for detecting bias in and debugging errors of machine learning models. Many methods exist, such as tree paths [(Saabas, 2014)](http://blog.datadive.net/interpreting-random-forests/), saliency maps, permutation feature importance [(Altmann et al, 2010)](https://academic.oup.com/bioinformatics/article/26/10/1340/193348), locally-fit white box models [(van Veen, 2015)](https://github.com/MLWave/Black-Boxxy) [(Ribeiro et al, 2016)](https://arxiv.org/abs/1602.04938). More recent efforts aim to combine a variety of methods [(Korobov et al, 2016)](https://github.com/TeamHG-Memex/eli5) [(Olah et al, 2018)](https://distill.pub/2018/building-blocks/).
#
# ## Motivation
#
# Error analysis surfaces different subsets/types of the data where a model makes fundamental errors. When building policies and making financial decisions based on the output of a model it is not only useful to study the errors of a model, but also the confidence:
# - Correct, but low-confidence, predictions for a cluster of data tells us where to focus our active learning [(Dasgupta et al, 2009)](http://hunch.net/~active_learning/) - and data collection efforts, so as to make the model more certain.
# - Incorrect, but high-confidence predictions, surface fundamental error types that can more readily be fixed by a correction layer [(Schapire, 1999)](http://rob.schapire.net/papers/Schapire99c.pdf) \[.pdf\], or redoing feature engineering [(Guyon et al, 2006)](https://dl.acm.org/citation.cfm?id=1208773).
# - Every profit-maximizing model has a prediction threshold where a decision is made [(Hardt et al, 2016)](https://arxiv.org/abs/1610.02413). However, given two equal predictions, the more confident predictions are preferred.
# - Interpretability methods have focused either on explaining the model in general, or explaining a single sample. To our knowledge, not much focus has gone into a holistic view of modeled data, including explanations for subsets of similar samples (for whatever pragmatic definition of "similar", like "similar age", "similar spend", "similar transaction behavior"). The combination of interpretability and unsupervised exploratory analysis is attractive, because it catches unexpected behavior early on, as opposed to acting on faulty model output, and digging down to find a cause.
#
#
# ## Experimental setup
#
# We will use the MNIST dataset [(LeCun et al, 1999)](http://yann.lecun.com/exdb/mnist/), Keras [(Chollet et al, 2015)](https://keras.io/) with TensorFlow [(Abadi et al, 2016)](https://arxiv.org/abs/1603.04467), NumPy [(van der Walt et al., 2011)](https://arxiv.org/abs/1102.1523), Pandas [(McKinney, 2010)](http://conference.scipy.org/proceedings/scipy2010/mckinney.html), Scikit-Learn [(Pedregosa et al, 2011)](http://scikit-learn.org/), Matplotlib [(Hunter, 2007)](https://matplotlib.org/), and KeplerMapper [(Saul et al, 2017)](https://github.com/MLWave/kepler-mapper).
#
# - To classify between the digits 3 and 5, we will train a Multi-Layer Perceptron [(Ivakhnenko et al, 1965)](http://www.worldcat.org/title/cybernetic-predicting-devices/oclc/23815433) with 2 hidden layers, Backprop [(LeCun et al, 1998)](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) (pdf), RELU activation [(Nair et al, 2010)](https://dl.acm.org/citation.cfm?id=3104425), ADAM optimizer [(Kingma et al, 2014)](https://arxiv.org/abs/1412.6980), dropout of 0.5, and softmax output.
#
# - We perform a 1000 forward passes to get the standard deviation and variance ratio of our predictions as per [(Gal, 2016, page 51)](http://www.cs.ox.ac.uk/people/yarin.gal/website/thesis/thesis.pdf) [.pdf].
#
# - Closely following the $FiFa$ method from [(Carlsson et al, 2018, page 4)](https://arxiv.org/abs/1803.00384) we then apply $MAPPER$ with the 2D filter function `[predicted probability(x), confidence(x)]` to project the data. We cover this projection with 10 intervals per dimension, each overlapping by 10%. We cluster with agglomerative clustering (`n_clusters=3`) [(Ward, 1963)](https://www.jstor.org/stable/2282967) and use the penultimate layer as the inverse $X$. To guide exploration, we color the graph nodes by `mean absolute error(x)`.
#
# - We also ask predictions for the digit 4 which was never seen during training [(Larochelle et al, 2008)](https://dl.acm.org/citation.cfm?id=1620172), to see how this influences the confidence of the network, and to compare the graphs outputted by KeplerMapper.
#
# - For every graph node we show the original images. Binary classification on MNIST digits is easy enough to resort to a simple interpretability method to show what distinguishes the cluster from the rest of the data: We order each feature by z-score and highlight the top 10% features [(Singh, 2016)](https://www.ayasdi.com/blog/bigdata/5191-2/).
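# Before the full pipeline below, the standard deviation and variance ratio summaries can be sketched on a toy array. This is a minimal illustration with hypothetical data, not part of the pipeline: rows stand in for samples, columns for stochastic forward passes.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-in for stochastic forward passes: 5 samples x 1000 passes
probs = rng.uniform(0, 1, size=(5, 1000))

std = probs.std(axis=1)                      # predictive standard deviation per sample
hard = probs > 0.5                           # hard decision for every pass
mode = (hard.mean(axis=1) > 0.5)[:, None]    # majority vote (mode) per sample
var_ratio = 1 - (hard == mode).mean(axis=1)  # fraction of passes disagreeing with the mode
```

# A sample whose passes always agree has a variance ratio of 0; maximal disagreement approaches 0.5.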
# +
# %matplotlib inline
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
tf.compat.v1.disable_eager_execution()
import kmapper as km
import numpy as np
import pandas as pd
from sklearn import metrics, cluster, preprocessing
import xgboost as xgb
from matplotlib import pyplot as plt
plt.style.use("ggplot")
# -
# ## Preparing Data
#
# We create train and test data sets for the digits 3, 4, and 5.
# +
# get the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_strange = X_train[y_train == 4]
y_strange = y_train[y_train == 4]
X_train = X_train[np.logical_or(y_train == 3, y_train == 5)]
y_train = y_train[np.logical_or(y_train == 3, y_train == 5)]
X_test = X_test[np.logical_or(y_test == 3, y_test == 5)]
y_test = y_test[np.logical_or(y_test == 3, y_test == 5)]
X_strange = X_strange[:X_test.shape[0]]
y_strange = y_strange[:X_test.shape[0]]
X_train = X_train.reshape(-1, 784)
X_test = X_test.reshape(-1, 784)
X_strange = X_strange.reshape(-1, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_strange = X_strange.astype('float32')
X_train /= 255
X_test /= 255
X_strange /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(X_strange.shape[0], 'strange samples')
# convert class vectors to binary class matrices
y_train = (y_train == 3).astype(int)
y_test = (y_test == 3).astype(int)
y_mean_test = y_test.mean()
print(y_mean_test, 'y test mean')
# -
# ## Model
# The model is a basic 2-hidden-layer MLP with RELU activation, the ADAM optimizer, and a sigmoid output. Dropout is applied to every layer but the final one.
# +
batch_size = 128
num_classes = 1
epochs = 10
model = Sequential()
model.add(Dropout(0.5, input_shape=(784,)))
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',
optimizer=Adam(),
metrics=['accuracy'])
# -
# ## Fitting and evaluation
history = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(X_test, y_test))
score = model.evaluate(X_test, y_test, verbose=0)
score
# ## Perform 1000 forward passes on test set and calculate Variance Ratio and Standard Dev
# +
model.layers[-1].output
#K.function([model.layers[0].input, K.learning_phase()], [model.layers[-1].output])
# +
FP = 1000
predict_stochastic = K.function([model.layers[0].input, K.learning_phase()], [model.layers[-1].output])
y_pred_test = np.array([predict_stochastic([X_test, 1]) for _ in range(FP)])
y_pred_stochastic_test = y_pred_test.reshape(-1,y_test.shape[0]).T
y_pred_std_test = np.std(y_pred_stochastic_test, axis=1)
y_pred_mean_test = np.mean(y_pred_stochastic_test, axis=1)
y_pred_mode_test = (np.mean(y_pred_stochastic_test > .5, axis=1) > .5).astype(int).reshape(-1,1)
y_pred_var_ratio_test = 1 - np.mean((y_pred_stochastic_test > .5) == y_pred_mode_test, axis=1)
test_analysis = pd.DataFrame({
"y_true": y_test,
"y_pred": y_pred_mean_test,
"VR": y_pred_var_ratio_test,
"STD": y_pred_std_test
})
print(metrics.accuracy_score(y_true=y_test, y_pred=y_pred_mean_test > .5))
print(test_analysis.describe())
# -
# ## Plot test set confidence
# +
prediction_cut_off = (test_analysis.y_pred < .96) & (test_analysis.y_pred > .94)
std_diff = test_analysis.STD[prediction_cut_off].max() - test_analysis.STD[prediction_cut_off].min()
vr_diff = test_analysis.VR[prediction_cut_off].max() - test_analysis.VR[prediction_cut_off].min()
num_preds = test_analysis.STD[prediction_cut_off].shape[0]
# STD plot
plt.figure(figsize=(16,8))
plt.suptitle("Standard Deviation of Test Predictions", fontsize=18, weight="bold")
plt.title("For the %d predictions between 0.94 and 0.96 the STD varies with %f"%(num_preds, std_diff),
style="italic")
plt.xlabel("Standard Deviation")
plt.ylabel("Predicted Probability")
plt.scatter(test_analysis.STD, test_analysis.y_pred, alpha=.3)
plt.scatter(test_analysis.STD[prediction_cut_off],
test_analysis.y_pred[prediction_cut_off])
plt.show()
# VR plot
plt.figure(figsize=(16,8))
plt.suptitle("Variance Ratio of Test Predictions", fontsize=18, weight="bold")
plt.title("For the %d predictions between 0.94 and 0.96 the Variance Ratio varies with %f"%(num_preds, vr_diff),
style="italic")
plt.xlabel("Variance Ratio")
plt.ylabel("Predicted Probability")
plt.scatter(test_analysis.VR, test_analysis.y_pred, alpha=.3)
plt.scatter(test_analysis.VR[prediction_cut_off],
test_analysis.y_pred[prediction_cut_off])
plt.show()
# -
# ## Apply $MAPPER$
#
# ### Take penultimate layer activations from test set for the inverse $X$
# +
predict_penultimate_layer = K.function([model.layers[0].input, K.learning_phase()], [model.layers[-2].output])
X_inverse_test = np.array(predict_penultimate_layer([X_test, 1]))[0]
print((X_inverse_test.shape, "X_inverse_test shape"))
# -
# ### Take STD and error as the projected $X$
X_projected_test = np.c_[test_analysis.STD, test_analysis.y_true - test_analysis.y_pred]
print((X_projected_test.shape, "X_projected_test shape"))
# ### Create the confidence graph $G$
mapper = km.KeplerMapper(verbose=2)
G = mapper.map(X_projected_test,
X_inverse_test,
cover = km.Cover(n_cubes=10,
perc_overlap=0.5),
clusterer=cluster.AgglomerativeClustering(n_clusters=2)
)
# ### Create color function output (absolute error)
color_function_output = np.abs(y_test - test_analysis.y_pred)
# ### Create image tooltips for samples that are interpretable for humans
# +
import io
import base64
from skimage.util import img_as_ubyte
from PIL import Image
# Create z-scores
hard_predictions = (test_analysis.y_pred > 0.5).astype(int)
o = np.std(X_test, axis=0)
u = np.mean(X_test[hard_predictions == 0], axis=0)
v = np.mean(X_test[hard_predictions == 1], axis=0)
z_scores = (u-v)/o
# scores with lowest z-scores (error) first
scores_0 = sorted([(score,i) for i, score in enumerate(z_scores) if str(score) != "nan"],
reverse=False)
# scores with highest z-scores (error) first
scores_1 = sorted([(score,i) for i, score in enumerate(z_scores) if str(score) != "nan"],
reverse=True)
# Fill RGBA image array with top 200 scores for positive and negative
img_array_0 = np.zeros((28,28,4))
img_array_1 = np.zeros((28,28,4))
## Color the 200 regions with lowest error yellow, with decreasing transparency
for e, (score, i) in enumerate(scores_0[:200]):
y = i % 28
x = int((i - (i % 28))/28)
img_array_0[x][y] = [255,255,0,205-e]
## Color the 200 regions with highest error red, with decreasing transparency
for e, (score, i) in enumerate(scores_1[:200]):
y = i % 28
x = int((i - (i % 28))/28)
img_array_1[x][y] = [255,0,0,205-e]
img_array = (img_array_0 + img_array_1)
# lighten intensity of colors a bit for RGB channels
img_array[:,:,:3] = img_array[:,:,:3] / 2
# Get base64 encoded version of this. Will be displayed under each tooltip image.
output = io.BytesIO()
img_array = img_as_ubyte(img_array.astype('uint8'))
img = Image.fromarray(img_array, 'RGBA').resize((64,64))
img.save(output, format="PNG")
contents = output.getvalue()
explanation_img_encoded = base64.b64encode(contents)
output.close()
from IPython import display
display.Image(base64.b64decode(explanation_img_encoded))
# -
# The test data -- each sample is a 28x28 matrix of grayscale pixel intensities scaled to [0, 1].
# Example below for test sample 0.
Image.fromarray(img_as_ubyte(X_test[0].reshape((28,28))), 'L').resize((64,64))
# Create tooltips for each digit.
# Overlay the "explanation image" on top of each test data point.
tooltip_s = []
for ys, image_data in zip(y_test, X_test):
output = io.BytesIO()
_image_data = img_as_ubyte(image_data.reshape((28,28))) # Data was a flat row of "pixels".
img = Image.fromarray(_image_data, 'L').resize((64,64))
img.save(output, format="PNG")
contents = output.getvalue()
img_encoded = base64.b64encode(contents)
img_tag = """<div style="width:71px;
height:71px;
overflow:hidden;
float:left;
position: relative;">
<img src="data:image/png;base64,%s" style="position:absolute; top:0; right:0" />
<img src="data:image/png;base64,%s" style="position:absolute; top:0; right:0;
opacity:.75; width: 64px; height: 64px;" />
<div style="position: relative; top: 0; left: 1px; font-size:9px">%s</div>
</div>"""%((img_encoded.decode('utf-8'),
explanation_img_encoded.decode('utf-8'),
ys))
tooltip_s.append(img_tag)
output.close()
tooltip_s = np.array(tooltip_s)
# ### Visualize
_ = mapper.visualize(G,
lens=X_projected_test,
lens_names=["Uncertainty", "Error"],
custom_tooltips=tooltip_s,
color_values=color_function_output.values,
color_function_name=['Absolute error'],
title="Confidence Graph for a MLP trained on MNIST",
path_html="output/confidence_graph_output.html")
# [View the visualization.](../_static/confidence_graph_output.html)
# +
# Uncomment and run the below to view the visualization within the Jupyter notebook
#
# from kmapper import jupyter
# jupyter.display("output/confidence_graph_output.html")
# -
# ## Image of former output
# 
#
# ## Link to former output
# http://mlwave.github.io/tda/confidence-graphs.html
# ## Changelog
#
# ### 2021.10.8 -- <NAME> (@deargle)
# - add `tf.compat.v1.disable_eager_execution()` to make runnable by tensorflow 2.x
# - add `color_function_name` to make compatible with kmapper v2.x
# - change perc_overlap from 0.8 to 0.5 to match the output at <http://mlwave.github.io/tda/confidence-graphs.html> (note that both differ from the descriptive text at the top of the document -- unsure why)
# - replace deprecated `scipy.misc` image functions with `pillow.Image` for loading from an array and for resizing, and img_as_ubyte for bytescaling
# - increase explanation image alpha channel and overlay opacity to match the output at <http://mlwave.github.io/tda/confidence-graphs.html>
#
# Note that the introductory descriptive text suggests that AgglomerativeClustering n_clusters will be 3, but the code sets it to 2. I suspect that the descriptive text is wrong and the code is right.
| docs/notebooks/Confidence-Graphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# # Characterization of Systems in the Time Domain
#
# *This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [<EMAIL>](mailto:<EMAIL>).*
# -
# ## Step Response
#
# The response of an LTI system to a [Heaviside signal](../continuous_signals/standard_signals.ipynb#Heaviside-Signal) as input signal is known as [*step response*](https://en.wikipedia.org/wiki/Step_response). It is defined as
#
# \begin{equation}
# h_\epsilon(t) = \mathcal{H} \{ \epsilon(t) \}
# \end{equation}
#
# The step response characterizes the properties of a system when the input signal is 'switched on' at $t=0$. It is related to the impulse response by
#
# \begin{equation}
# h_\epsilon(t) = \epsilon(t) * h(t) = \int_{-\infty}^{t} h(\tau) \; d\tau
# \end{equation}
#
# This implies that the impulse response is the derivative of the step response
#
# \begin{equation}
# h(t) = \frac{d h_\epsilon(t)}{dt}
# \end{equation}
#
# Using this result, the output signal $y(t) = \mathcal{H} \{ x(t) \}$ in terms of the step response reads
#
# \begin{equation}
# y(t) = x(t) * \frac{d h_\epsilon(t)}{dt} = \frac{d x(t)}{dt} * h_\epsilon(t)
# \end{equation}
#
# Since a Dirac impulse cannot be realized in practice, the step response is an alternative way to measure the properties of an LTI system. It plays an important role in the theory of [control systems](https://en.wikipedia.org/wiki/Control_system).
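# These relations can be checked numerically. The following is a small sketch (not part of the lecture code) that assumes a first-order lowpass with impulse response $h(t) = e^{-t} \epsilon(t)$: integrating the impulse response yields the step response, and differentiating the step response recovers $h(t)$.

```python
import numpy as np

t = np.linspace(0, 5, 1001)
dt = t[1] - t[0]

h = np.exp(-t)                  # assumed impulse response h(t) = e^{-t} for t >= 0
h_eps = np.cumsum(h) * dt       # step response: running integral of the impulse response
h_rec = np.gradient(h_eps, dt)  # differentiating the step response recovers h(t)
```

# Up to discretization error, `h_rec` matches `h`, and `h_eps` approaches the total integral of $h(t)$.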
# + [markdown] nbsphinx="hidden"
# **Copyright**
#
# The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by <NAME>.
| systems_time_domain/step_response.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regular expressions
from tock import *
# Regular expressions in Tock use the following operators:
#
# - `|` or `∪` for union
# - concatenation for concatenation
# - `*` for Kleene star
#
# This is very similar to Unix regular expressions, but because a symbol can have more than one character, consecutive symbols must be separated by a space. Also, for the empty string, you must write `ε` (or `&`). The empty set is written as `∅`.
#
# To create a regular expression from a string (Sipser, Example 1.56):
r = RegularExpression.from_str('(a b|a)*')
r
# However, there isn't much you can do with a `RegularExpression` object other than to convert it to an NFA.
#
# ## From regular expressions to NFAs
m = from_regexp(r) # from RegularExpression object
m = from_regexp('(a b|a)*') # a str is automatically parsed into a RegularExpression
# The regular expression is converted into a finite automaton, which you can view, as usual, as either a graph or a table.
to_graph(m)
to_table(m)
# The states are numbered according to the position in the regular expression they came from (so that listing them in alphabetical order is natural). The letter suffixes are explained below.
#
# We can also pass the `display_steps=True` option to show the automata created for all the subexpressions.
m = from_regexp('(a b|a)*', display_steps=True)
# ## From NFAs to regular expressions
#
# The `to_regexp` function converts in the opposite direction:
e = to_regexp(m)
e
# The resulting regular expression depends a lot on the order in which states are eliminated; Tock eliminates states in reverse alphabetical order.
#
# Again, the `display_steps` option causes all the intermediate steps of the conversion to be displayed.
e = to_regexp(m, display_steps=True)
| docs/source/tutorial/Regexps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
'''
importing needed libraries
'''
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import accuracy_score
import warnings
warnings.filterwarnings('ignore')
'''
reading dataframe using pandas
'''
# make pandas show all columns so we can keep track of all of the features
pd.set_option("display.max_columns", None)
listings = pd.read_csv("BonstonDataset/listings.csv")
listings['city_cleansed']=listings['city'].copy()
# -
'''
removing redundant city data and making this feature unique by filtering the city names
'''
# +
'''
removing redundant city values to clean the data
'''
listings['city_cleansed']=listings['city'].copy()
def replace_name(df,old_str,new_str):
df['city_cleansed']=df['city_cleansed'].replace(old_str,new_str)
return df['city_cleansed']
listings['city_cleansed']=replace_name(listings,'ALLSTON','Allston')
listings['city_cleansed']=replace_name(listings,'Roslindale, Boston','Roslindale')
listings['city_cleansed']=replace_name(listings,'dorchester, boston ','Dorchester')
listings['city_cleansed']=replace_name(listings,['Boston ', 'boston'],'Boston')
listings['city_cleansed']=replace_name(listings,'east Boston','East Boston')
listings['city_cleansed']=replace_name(listings,'Boston (Charlestown)','Charlestown')
listings['city_cleansed']=replace_name(listings,'ROXBURY CROSSING','Roxbury Crossing')
listings['city_cleansed']=replace_name(listings,'Brighton ','Brighton')
listings['city_cleansed']=replace_name(listings, ['Jamaica Plain, Boston', 'Jamaica Plain (Boston)',
'Jamaica Plain ', 'Jamaica plain ', 'Boston (Jamaica Plain)',], 'Jamaica Plain')
# +
'''
removing '$', ',' and the trailing cents from the price column so it can be parsed to a number
plotting the cleansed cities by mean price
'''
listings['price']=listings['price'].map(lambda x:int(x[1:-3].replace(',','')))
listings.groupby('city_cleansed')['price'].mean().plot(x='city_cleansed',y='price',kind='bar')
# -
# plot mean price grouped by room type
listings.groupby('room_type')['price'].mean().plot(x='room_type',y='price',kind='bar')
# +
listings['bathrooms']=listings['bathrooms'].fillna(listings['bathrooms'].median())
print(listings['bathrooms'].median())
listings.groupby('bathrooms')['price'].plot(x='bathrooms',y='price',kind='bar')
# -
listings.groupby(['review_scores_rating','host_is_superhost','beds','bedrooms','bathrooms'])['price'].mean().unstack().plot()
# +
listings['bathrooms']=listings['bathrooms'].fillna(listings['bathrooms'].median())
np.sum(listings['bathrooms'].isna().any())
# +
'''
calculating the distribution of room types using a pie plot
'''
room_type_count=listings['room_type'].value_counts()
room_type_count.plot.pie(figsize=(8,8),fontsize=12,autopct='%.2f',title='room type distribution')
# +
'''
calculating the correlation matrix over host listings count, host total listings count, accommodates,
number of bathrooms, bedrooms, beds, price, guests included, number of reviews, and review scores rating
'''
corr=listings[['host_listings_count', 'host_total_listings_count', 'accommodates',
'bathrooms', 'bedrooms', 'beds', 'price', 'guests_included', 'number_of_reviews',
'review_scores_rating']].corr()
sns.heatmap(corr,annot=True,cmap='cubehelix',vmax=1,fmt='.2f')
# -
'''plotting the distribution of property types in the dataset'''
property_type=listings['property_type'].value_counts()
property_type.plot.bar(color='blue',figsize=(10,5),title='Boston property type')
sns.heatmap(listings.groupby(['property_type','room_type']).price.mean().unstack(),annot=True,fmt='g')
sns.heatmap(listings.groupby(['neighbourhood_cleansed','room_type']).price.mean().unstack(),annot=True,fmt='g')
sns.heatmap(listings.groupby(['neighbourhood_cleansed','property_type']).price.mean().unstack(),annot=True)
sns.heatmap(listings.groupby(['city_cleansed','room_type']).price.mean().unstack(),annot=True)
listings.groupby('neighbourhood_cleansed')['price'].mean().plot(x='neighbourhood_cleansed',y='price',kind='bar')
# +
'''
cleaning the amenities column to create new features
after stripping the braces and quotes we build a numpy matrix of the unique amenity values, which is used to create new categorical features
'''
listings['amenities']=listings['amenities'].map(lambda amns: '|'.join([x.replace('}','').replace('{','').replace('"','') for x in amns.split(',')]))
amenities=np.unique(np.concatenate(listings['amenities'].map(lambda amns:amns.split('|')).values))
amenities_matrix=np.array([listings['amenities'].map(lambda amns:amn in amns).values for amn in amenities])
# +
'''
creating features dataframe used in training and testing
'''
features=listings[['host_listings_count', 'host_total_listings_count', 'accommodates',
'bathrooms', 'bedrooms', 'beds', 'price', 'guests_included', 'number_of_reviews',
'review_scores_rating']]
listings['amenities'].map(lambda amns:amns.split('|')).head()
# +
amenity_arr = np.array([listings['amenities'].map(lambda amns: amn in amns) for amn in amenities])
features=pd.concat([features,pd.DataFrame(data=amenity_arr.T,columns=amenities)],axis=1)
for tf_feature in ['host_is_superhost', 'host_identity_verified', 'host_has_profile_pic',
'is_location_exact', 'requires_license', 'instant_bookable',
'require_guest_profile_picture', 'require_guest_phone_verification']:
features[tf_feature]=listings[tf_feature].map(lambda s:False if s=='f' else True)
for categorical_feature in ['neighbourhood_cleansed', 'property_type', 'room_type', 'bed_type']:
features=pd.concat([features,pd.get_dummies(listings[categorical_feature])],axis=1)
features.head()
# +
for col in features.columns[features.isnull().any()]:
print(col)
for col in features.columns[features.isnull().any()]:
features[col]=features[col].fillna(features[col].median())
features['price'].sort_values().reset_index(drop=True).plot()
# +
'''
using random forest regression; its RMSE is in the ~41-dollar range. The data is split into train and test sets.
'''
# n_jobs=-1 uses all available CPU cores in parallel
clf=RandomForestRegressor(n_jobs=-1)
'''
only listings priced at or below $300 are used, which improves accuracy
'''
x=features.query('price<=300')
y=x['price'].values
x=x.loc[:,x.columns!='price'].values
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=42)
clf.fit(x_train,y_train)
preds=clf.predict(x_test)
mse=mean_squared_error(y_test,preds)
mse**(1/2)
# -
# linear regression was also tried; its RMSE is in the ~42-dollar range
lr=LinearRegression()
lr.fit(x_train,y_train)
mse=mean_squared_error(y_test,lr.predict(x_test))
mse**(1/2)
| Boston.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tiffypabo/OOP-58001/blob/main/Fundamentals_of_Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="qqMnHEpy976E"
# #Fundamentals of Python
# + [markdown] id="i1TqASBl-DxA"
# Python Variables
# + colab={"base_uri": "https://localhost:8080/"} id="iuO3rOcA-GiF" outputId="f4e468c4-031b-4232-fd27-e14cf0c02538"
x = float (1)
a, b = 0, 1
a, b, c = "Sally", "John", "Anna"
print('This is a sample')
print(a)
print(c)
# + colab={"base_uri": "https://localhost:8080/"} id="8k0gG5Zx-xOR" outputId="ec0091d5-eb6d-4c44-f42c-0ec7d7d524a8"
a, b, c = "Sally", "John", "Ana"
a, b, c = 0, -1, 2
print('This is a sample')
print(a)
print(c)
# + [markdown] id="pzThsvSv_nef"
# Casting
# + colab={"base_uri": "https://localhost:8080/"} id="Ihu680U8_ogS" outputId="048f4084-e664-40a6-b627-f4c3f704f4b4"
print(x)
# + [markdown] id="uIV7S7bu_srq"
# Type() Function
# + colab={"base_uri": "https://localhost:8080/"} id="BUu5qxmg_vSN" outputId="187fa2f0-ff0a-41ba-c600-0d31a56b5c4f"
y = "Johnny"
print(type(y))
print(type(x))
# + [markdown] id="zr9kX2E7__5A"
# Double quotes and Single quotes
# + colab={"base_uri": "https://localhost:8080/"} id="QphcdwPnADeO" outputId="7900c4b4-3cc3-4b20-e81f-b7d97f74b851"
h = "Maria"
v = 1
V = 2
print(h)
print(v)
print(V)
# + [markdown] id="KlAGRBAVATMk"
# Multiple Variables
# + colab={"base_uri": "https://localhost:8080/"} id="Mc5cvjT4AVi2" outputId="deecacb9-8a0a-47c7-f575-efc2ccece5a7"
x,y,z = "one", "two", "three"
print(x)
print(y)
print(z)
print(x,y,z)
# + [markdown] id="JEJSN4bJAm2-"
# One Value to Multiple Variables
# + colab={"base_uri": "https://localhost:8080/"} id="zDm0tUZkArEt" outputId="e7a1700d-2b7c-4218-f64b-da8c26d7162e"
x = y = z = "Stella"
print(x,y,z)
# + [markdown] id="VdfAUlKVAzJ7"
# Output Variables
# + colab={"base_uri": "https://localhost:8080/"} id="irw8KbJwA22W" outputId="6baa966a-a84d-4bb7-d287-4e6a0c2d9242"
x = "enjoying"
print("Python is"+" "+ x)
x = "Hi"
y = "have a nice day <3"
print(x+" "+ y)
# + [markdown] id="USXS3RV8B2HF"
# Arithmetic Operations
# + colab={"base_uri": "https://localhost:8080/"} id="34X8Wtb7B4TG" outputId="4a705052-90c7-4e10-b3a2-a745be3b029d"
f = 1
g = 2
i = 6
print(f+g)
print(f-g)
print(f*i)
print(int(i/g))
print(3/g)
print(3%g)
print(3//g)
print(3**6)
# + [markdown] id="Fmye8dCuCbRW"
# Assignment Operators
# + colab={"base_uri": "https://localhost:8080/"} id="AOswM4IWCdqm" outputId="3245078a-756d-43b5-f3b5-0e66aeb01c60"
k = 2
l = 3
k+=3 #same as k=k+3
print(k)
print(l>>1)
# + [markdown] id="dgiFE8yWC1IO"
# Bitwise Operators
# + colab={"base_uri": "https://localhost:8080/"} id="9k3CuunNC3CV" outputId="5ba988da-8a3f-433c-ac7e-27c30a94b609"
k=2
l=3
print(k>>2) #shift right twice
print(k<<2) #shift left twice
# + [markdown] id="Ix6_qfI-DDtP"
# Relational Operators
# + colab={"base_uri": "https://localhost:8080/"} id="Qo4Kh8unDHaG" outputId="fc0239bd-3b04-43e6-b649-860b45317107"
print(v>k) #v=1, k=2
print(v==k)
# + [markdown] id="TRZQfgpiDR84"
# Logical Operators
# + colab={"base_uri": "https://localhost:8080/"} id="JLZQx_qXDUo2" outputId="4464d29b-3437-41c8-e25e-09cb17c568fc"
print(v<k and k==k)
print(v<k or k==v)
print(not (v<k or k==v))
# + [markdown] id="unBbDl3DDkfK"
# Identity Operators
# + colab={"base_uri": "https://localhost:8080/"} id="_JfORrXkDnTe" outputId="b3919bf1-203e-4cea-a727-62d3c8e504bb"
print(v is k)
print(v is not k)
| Fundamentals_of_Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting to Know Contextual Vectors
#
# So far we have seen many word vector representations, including Word2Vec, GloVe, and FastText. In all of these methods the vector for a given word is the same across the entire corpus of documents. The word "bank" may refer to a financial institution or to the bank of a river, yet for the above-mentioned techniques the vector of "bank" is the same in both cases. This property of a word having different meanings depending on context is called polysemy. ELMo, which produces context-dependent vectors, was proposed in the paper Deep contextualized word representations by <NAME> and coworkers.
#
#
# + [markdown] colab_type="text" id="S7jnD7KSPmOM"
# ELMo stands for Embeddings from Language Models, from the paper Deep contextualized word representations. The aim is to learn representations that model syntax, semantics, and polysemy.
# + [markdown] colab_type="text" id="HnNMc90fqBkl"
# ## Installation
#
# Allen AI has released an official implementation of ELMo. Using this API you can load a pre-trained model and get contextual embeddings for the tokens in a given sentence.
# + colab={} colab_type="code" id="bsJ8IksA3EZa"
# !pip install allennlp
# !pip install google
# + colab={} colab_type="code" id="B3529t4J3ER1"
import google
from allennlp.commands.elmo import ElmoEmbedder
import scipy
elmo = ElmoEmbedder()
# + [markdown] colab_type="text" id="4qXV5gOoqPMT"
# ### 1) Getting Embeddings
#
# We have four words in the sentence. From the theory we already know that ELMo generates 3 embeddings for each word: two from the biLSTM layers and one from the CNN layer. Each of these embeddings has a size of 1024, which is the largest number of convolution filters used in the ELMo model.
# + colab={} colab_type="code" id="1D74pREyqQ_1"
vectors = elmo.embed_sentence(["My", "name", "is", "Sunil"])
# + colab={} colab_type="code" id="hsmCg5-0qny7"
vectors.shape
# + [markdown] colab_type="text" id="SjpfGO-pqRF1"
# ### 2) Checking the Contextual Claim
# + colab={} colab_type="code" id="RK_2jAbn3MyX"
def get_similarity(token1, token2,token1_location,token2_location):
vectors = elmo.embed_sentence(token1)
assert(len(vectors) == 3) # one for each layer in the ELMo output
assert(len(vectors[0]) == len(token1)) # the vector elements correspond with the input tokens
vectors2 = elmo.embed_sentence(token2)
print("="*50)
print("Entity 1 : ",token1[token1_location], " | Entity2 : ", token2[token2_location])
print("Shape of one of the LSTM vector : ", vectors[2][token1_location].shape)
print("="*50)
    print("cosine distance of 2nd biLSTM layer vector", scipy.spatial.distance.cosine(vectors[2][token1_location], vectors2[2][token2_location]))
    print("cosine distance of 1st biLSTM layer vector", scipy.spatial.distance.cosine(vectors[1][token1_location], vectors2[1][token2_location]))
    print("cosine distance of CNN layer vector", scipy.spatial.distance.cosine(vectors[0][token1_location], vectors2[0][token2_location]))
return
# + colab={} colab_type="code" id="QOqVw0QO3M1d"
get_similarity(["I","ate","an","Apple","."], ["I", "have","an","iPhone","made","by","Apple","Inc","."],3,6)
# + [markdown] colab_type="text" id="0vfXmPMNBwjG"
# It is clear that the embedding for the word "Apple" is different in the two sentences: the cosine distances between the outputs generated by the biLSTM layers are non-zero. The CNN layer is not contextual, hence the cosine distance between the two "Apple" vectors from that layer is zero.
# -
# ---
#
# Alternatively, ELMo can be used through Zalando's flair API, a very simple framework for state-of-the-art Natural Language Processing (NLP). flair is an open-source project and can be accessed at https://github.com/zalandoresearch/flair.
# ### Installation
# !pip install flair
# +
from flair.data import Sentence
from flair.embeddings import ELMoEmbeddings

# init embedding
embedding = ELMoEmbeddings()

# create a sentence
sentence = Sentence('The grass is green .')

# embed words in sentence
print(embedding.embed(sentence))
# -
#
| Chapter06/elmo_bilm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -
# # Vertex Pipelines: Vertex AI Hyperparameter Tuning Job
#
# ## Overview
# This notebook shows how to use the `HyperparameterTuningJobRunOp` to run a hyperparameter tuning job in Vertex AI for a TensorFlow model. While this lab uses TensorFlow for the model code, you could easily replace it with another framework. This sample notebook is based on the [Vertex AI:Hyperparameter Tuning Codelab](https://codelabs.developers.google.com/vertex_hyperparameter_tuning).
#
# To learn more about Vertex AI Hyperparameter Tuning Job see [Vertex AI Hyperparameter Tuning Job](https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning).
#
# For the `HyperparameterTuningJobRunOp` interface, please see the [source code here](https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/experimental/hyperparameter_tuning_job/component.py).
# ### Install additional packages
# !pip3 install -U google-cloud-pipeline-components -q
# !pip3 install -U google-cloud-aiplatform -q
# !pip3 install -U kfp -q
# Restart the kernel after pip installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# ### Create directory structure
# !mkdir horses_or_humans
# !mkdir horses_or_humans/trainer
# ## Containerize training application code
#
# The training application code (the inner script) will be packaged in a Docker container and pushed to the Google Container Registry. The hyperparameter tuning job is then submitted to Vertex AI using the `HyperparameterTuningJobRunOp` in a Kubeflow pipeline. With this approach, you can tune hyperparameters for a model built with any framework.
#
# First, the files below will be created under a `horses_or_humans` directory:
# + Dockerfile
# + trainer/
# + task.py
# ### Set your Project ID and Pipeline Root
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
REGION = "us-central1"
# ### Create a Dockerfile
# +
# %%file horses_or_humans/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5
WORKDIR /
# Installs hypertune library
RUN pip install cloudml-hypertune
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
# -
# The Dockerfile uses the [Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image](https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container#choose_a_container_image_type?utm_campaign=CDR_sar_aiml_ucaiplabs_011321&utm_source=external&utm_medium=web). The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile sets up the entrypoint for the training code.
# ### Add model training code
# +
# %%file horses_or_humans/trainer/task.py
import tensorflow as tf
import tensorflow_datasets as tfds
import argparse
import hypertune
NUM_EPOCHS = 10
def get_args():
'''Parses args. Must include all hyperparameters you want to tune.'''
parser = argparse.ArgumentParser()
parser.add_argument(
'--learning_rate',
required=True,
type=float,
help='learning rate')
parser.add_argument(
'--momentum',
required=True,
type=float,
help='SGD momentum value')
parser.add_argument(
'--num_neurons',
required=True,
type=int,
help='number of units in last hidden layer')
args = parser.parse_args()
return args
def preprocess_data(image, label):
'''Resizes and scales images.'''
image = tf.image.resize(image, (150,150))
return tf.cast(image, tf.float32) / 255., label
def create_dataset():
'''Loads Horses Or Humans dataset and preprocesses data.'''
data, info = tfds.load(name='horses_or_humans', as_supervised=True, with_info=True)
# Create train dataset
train_data = data['train'].map(preprocess_data)
train_data = train_data.shuffle(1000)
train_data = train_data.batch(64)
# Create validation dataset
validation_data = data['test'].map(preprocess_data)
validation_data = validation_data.batch(64)
return train_data, validation_data
def create_model(num_neurons, learning_rate, momentum):
    '''Defines and compiles model.'''
inputs = tf.keras.Input(shape=(150, 150, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(num_neurons, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=momentum),
metrics=['accuracy'])
return model
def main():
args = get_args()
train_data, validation_data = create_dataset()
model = create_model(args.num_neurons, args.learning_rate, args.momentum)
history = model.fit(train_data, epochs=NUM_EPOCHS, validation_data=validation_data)
# DEFINE METRIC
hp_metric = history.history['val_accuracy'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=hp_metric,
global_step=NUM_EPOCHS)
if __name__ == "__main__":
main()
# -
# The Python file `task.py` is an inner script that contains the model training code. There are a few components that are specific to using the hyperparameter tuning service.
#
# 1. The script imports the `hypertune` library. Note that the Dockerfile included instructions to pip install this library.
#
#
# 2. The function `get_args()` defines a command-line argument for each hyperparameter you want to tune. In this example, the hyperparameters that will be tuned are the learning rate, the momentum value in the optimizer, and the number of neurons in the last hidden layer of the model. While these are the only hyperparameters targeted here, you are free to modify others. The value passed in those arguments is then used to set the corresponding hyperparameter in the code.
#
# 3. At the end of the `main()` function, the `hypertune` library is used to define the metric you want to optimize. In TensorFlow, the keras `model.fit` method returns a `History` object. The `History.history` attribute is a record of training loss values and metrics values at successive epochs. If you pass validation data to `model.fit` the `History.history` attribute will include validation loss and metrics values as well. For example, if you trained a model for three epochs with validation data and provided `accuracy` as a metric, the `History.history` attribute would look similar to the following dictionary.
# ```
# {
# "accuracy": [
# 0.7795261740684509,
# 0.9471358060836792,
# 0.9870933294296265
# ],
# "loss": [
# 0.6340447664260864,
# 0.16712145507335663,
# 0.04546636343002319
# ],
# "val_accuracy": [
# 0.3795261740684509,
# 0.4471358060836792,
# 0.4870933294296265
# ],
# "val_loss": [
# 2.044623374938965,
# 4.100203514099121,
# 3.0728273391723633
# ]
# }
# ```
# If you want the hyperparameter tuning service to discover the values that maximize the model's validation accuracy, you can define the metric as the last entry (or `NUM_EPOCHS - 1`) of the `val_accuracy` list. Then, pass this metric to an instance of `HyperTune`. You can pick whatever string you like for the `hyperparameter_metric_tag`, but you’ll need to use the string again later when you kick off the hyperparameter tuning job.
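# The metric-selection step can be isolated in a small sketch. The `history` dictionary below is a hand-made stand-in for `History.history`; the `hypertune` calls (which mirror `task.py` above) are left as comments:

```python
NUM_EPOCHS = 3
# Stand-in for keras History.history after a 3-epoch run with validation data.
history = {
    "val_accuracy": [0.3795, 0.4471, 0.4871],
    "val_loss": [2.0446, 4.1002, 3.0728],
}

# The metric reported to the tuning service is the final validation accuracy.
hp_metric = history["val_accuracy"][NUM_EPOCHS - 1]
print(hp_metric)  # 0.4871

# hpt = hypertune.HyperTune()
# hpt.report_hyperparameter_tuning_metric(
#     hyperparameter_metric_tag='accuracy',
#     metric_value=hp_metric,
#     global_step=NUM_EPOCHS)
```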
# ### Build and push the container to the Google Container Registry
IMAGE_URI=f"gcr.io/{PROJECT_ID}/horse-human:hypertune"
# %cd horses_or_humans
# !docker build ./ -t {IMAGE_URI}
# !docker push {IMAGE_URI}
# ## Launch Hyperparameter Tuning Job
# ### Import libraries
from google.cloud.aiplatform import hyperparameter_tuning as hpt
from google_cloud_pipeline_components.experimental import hyperparameter_tuning_job
from kfp.components import load_component_from_url
from kfp.v2 import dsl
from kfp.v2 import compiler
from kfp.v2.google.client import AIPlatformClient
# ### Instantiate an API client object
api_client = AIPlatformClient(
project_id=PROJECT_ID,
region=REGION,
)
# ### Define specs for Hyperparameter Tuning
# +
# The spec of the worker pools including machine type and Docker image
worker_pool_specs = [{
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": "NVIDIA_TESLA_T4",
"accelerator_count": 1
},
"replica_count": 1,
"container_spec": {
"image_uri": IMAGE_URI
}
}]
# Dictionary representing metrics to optimize.
# The dictionary key is the metric_id, which is reported by your training job,
# and the dictionary value is the optimization goal of the metric.
metric_spec={'accuracy':'maximize'}
# List serialized from the parameter dictionary. The dictionary
# represents parameters to optimize. The dictionary key is the parameter_id,
# which is passed into your training job as a command line key word argument, and the
# dictionary value is the parameter specification of the metric.
parameter_spec = hyperparameter_tuning_job.serialize_parameters({
"learning_rate": hpt.DoubleParameterSpec(min=0.001, max=1, scale="log"),
"momentum": hpt.DoubleParameterSpec(min=0, max=1, scale="linear"),
"num_neurons": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None)
})
# -
# ### Define the pipeline
# +
PIPELINE_ROOT = 'gs://[your-base-output-directory]' #@param {type:"string"}
HyperparameterTuningJobRunOp = load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/hyperparameter_tuning_job/component.yaml')
@dsl.pipeline(pipeline_root=PIPELINE_ROOT, name='hp-tune-pipeline')
def hp_tune_pipeline():
hp_tuning_task = HyperparameterTuningJobRunOp(
display_name='hp-job',
project=PROJECT_ID,
location=REGION,
worker_pool_specs=worker_pool_specs,
study_spec_metrics=metric_spec,
study_spec_parameters=parameter_spec,
max_trial_count=15,
parallel_trial_count=3,
base_output_directory=PIPELINE_ROOT
)
# -
# ### Compile and run the pipeline
# +
compiler.Compiler().compile(
pipeline_func=hp_tune_pipeline, package_path="hp_tune_pipeline_job.json"
)
response = api_client.create_run_from_job_spec(
job_spec_path="hp_tune_pipeline_job.json",
# pipeline_root=PIPELINE_ROOT # this argument is necessary if you did not specify PIPELINE_ROOT as part of the pipeline definition.
)
| components/google-cloud/google_cloud_pipeline_components/experimental/hyperparameter_tuning_job/hp_tuning_job_sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graph Colouring
#
# ### Definition
#
# We are given an undirected graph with vertex set $V$ and edge set $E$ and a set of $n$ colours.
#
# Our aim is to find whether we can colour every node of the graph in one of these $n$ colours such that no edge connects nodes of the same colour.
#
# ### Applications
# Graph Colouring appears in a variety of real life problems like map colouring, scheduling, register allocation, frequency assignment, communication networks and timetables.
#
# ### Path to solving the problem
# Graph Colouring can be formulated as a minimization problem and its cost function can be cast to a QUBO problem with its respective Hamiltonian (see the [Introduction](./introduction_combinatorial_optimization.ipynb) and a [reference](https://arxiv.org/abs/1302.5843)),
# $$ \displaystyle \large
# H = \textstyle\sum\limits_{v}\displaystyle \left( 1 -\textstyle\sum\limits_{i=1}^{n} x_{v,i} \right) ^2 + \textstyle\sum\limits_{uv \in E} \textstyle\sum\limits_{i=1}^{n} x_{u,i} x_{v,i}
# $$
#
# where $v, u \in V$ and $x_{v,i}$ is a binary variable, which is $1$ if vertex $v$ has colour $i$ and $0$ otherwise.
#
# The QLM allows us to encode a problem in this Hamiltonian form by using the `GraphColouring` class for a given graph and a number of colours. We can then create a job from the problem and send it to a heuristic Simulated Quantum Annealer (SQA) wrapped inside a Quantum Processing Unit (QPU), like the rest of the QPUs on the QLM. The SQA minimizes $H$, hence finding the best solution to our problem.
#
# For a more detailed explanation and a step-by-step guidance, please follow the sections below.
#
# ### Quantum resources
# To represent the problem as a QUBO, the QLM needs $nN$ spins, where $N$ is the number of vertices of the graph. The classical complexity of the [best known approximation algorithm](https://www.sciencedirect.com/science/article/abs/pii/0020019093902466?via%3Dihub) for this problem is $O(N(\log\log N)^2(\log N)^3)$.
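# Before moving to the QLM, the Hamiltonian above can be evaluated classically with a few lines of plain `numpy`. This is only an illustrative sketch (using the small example graph from the next section), not part of the QLM workflow:

```python
import numpy as np

# Evaluate H for a candidate colouring x, where x is a
# (vertices x colours) 0/1 matrix with x[v, i] = 1 iff vertex v has colour i.
def hamiltonian(x, edges):
    # First term: penalise every vertex that does not have exactly one colour.
    one_colour_penalty = np.sum((1 - x.sum(axis=1)) ** 2)
    # Second term: penalise every edge whose endpoints share a colour.
    conflict_penalty = sum(np.dot(x[u], x[v]) for u, v in edges)
    return one_colour_penalty + conflict_penalty

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

# A proper colouring: nodes 0 and 3 share a colour, nodes 1 and 2 differ.
x_good = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
print(hamiltonian(x_good, edges))  # 0

# An improper colouring: nodes 0 and 1 both take colour 0.
x_bad = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(hamiltonian(x_bad, edges))  # 1
```

A configuration bringing $H$ down to $0$ is exactly a proper colouring, which is what the annealer searches for.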
# # Example problem
#
# Imagine we are given $3$ colours and a graph with $4$ vertices and $5$ edges, as shown below (left). This problem has a simple solution $-$ nodes $0$ and $3$ will share one colour and the other two colours are assigned to nodes $1$ and $2$ (right).
#
# <br><img src="./graph_colouring_example_solution2.png" style="width: 850px"><br>
#
# Let us describe how one can reach this answer using tools from the QLM.
# However, the approach will be applicable to finding the graph colouring of whatever graph we are given!
#
# We will use the `networkx` library to specify our graph (or in fact any graph, as the library is quite rich).
# +
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
# Specify the graph
# First example
graph = nx.Graph()
graph.add_nodes_from(np.arange(4))
graph.add_edges_from([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
# # Second example - one can try with 4 or 5 colours
# graph = nx.gnm_random_graph(15, 40)
# Specify the number of colours
number_of_colours = 3
number_of_nodes = len(graph.nodes())
number_of_spins = number_of_colours * number_of_nodes
# Draw the graph
nodes_positions = nx.spring_layout(graph, iterations=number_of_nodes * 60)
plt.figure(figsize=(10, 6))
nx.draw_networkx(graph,
pos=nodes_positions,
node_color='#4EEA6A',
node_size=600,
font_size=16)
plt.show()
# -
# To encode the problem in a QUBO form, as described above, we will call the `GraphColouring` class.
# +
from qat.opt import GraphColouring
graph_colouring_problem = GraphColouring(graph, number_of_colours)
# -
# # Solution
# Once an instance of the problem is created, we can proceed to compute the solution of the problem by following the steps:
#
# 1. Extract the best SQA parameters found for Graph Colouring by calling the method `get_best_parameters()`.
#
# The number of Monte Carlo updates is the total number of updates performed for each temperature (and gamma) on the spins of the equivalent 2D classical system. These updates are the product of the number of annealing steps $-$ `n_steps`, the number of "Trotter replicas" $-$ `n_trotters`, and the problem size, i.e. the number of qubits needed. Hence, we can use these parameters to get the best inferred value for `n_steps`. In general, the more steps there are, the finer and better the annealing will be; however, the process will take longer to complete.
#
# Similarly for the `n_trotters` field in `SQAQPU` $-$ the higher it is, the better the final solution could be, but the more time taken by the annealer to reach an answer.
#
#
# 2. Create a temperature and a gamma schedule for the annealing.
#
# We use the extracted max and min temperatures and gammas to create a (linear) temperature and a (linear) gamma schedule. These schedules evolve in time from higher to lower values since we simulate the reduction of temperatures and magnetic fields. If one wishes to vary them it may help if the min values are close to $0$, as this will cause the Hamiltonian to reach a lower energy state, potentially closer to its ground state (where the solution is encoded).
#
# It should be noted that non-linear schedules may be investigated too, but for the same number of steps they could lead to a slower annealing. The best min and max values for gamma and the temperature were found for linear schedules.
#
#
# 3. Generate the SQAQPU and create a job for the problem. The job is then sent to the QPU and the annealing is performed.
#
#
# 4. Present the solution spin configuration.
#
#
# 5. Show a dictionary of vertices for each colour.
#
#
# 6. Draw the coloured graph.
#
# When we look at the final spin configuration, we see spins arranged in rows and columns. The rows represent the vertices, i.e. the second row (counting from $0$) is for the second vertex (again counting from $0$). The spins of each row are then a one-hot encoding of the colour of that vertex, but with spin values, i.e. $\{1, -1\}$ instead of $\{1, 0\}$.
#
# So if a spin at position $(2,1)$ is $1$, this means that the second vertex has the first colour (again counting from $0$).
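# As a standalone illustration of this decoding rule (assuming each row of the reshaped configuration contains exactly one $+1$ spin):

```python
import numpy as np

# Hand-made spin configuration for 4 vertices and 3 colours,
# matching the example solution: vertices 0 and 3 share colour 0.
solution_configuration = np.array([ 1, -1, -1,   # vertex 0 -> colour 0
                                   -1,  1, -1,   # vertex 1 -> colour 1
                                   -1, -1,  1,   # vertex 2 -> colour 2
                                    1, -1, -1])  # vertex 3 -> colour 0
number_of_colours = 3
rows = solution_configuration.reshape(-1, number_of_colours)
# The colour of each vertex is the column index holding the +1 spin.
colours = [int(np.argmax(row)) for row in rows]
print(colours)  # [0, 1, 2, 0]
```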
# +
from qat.core import Variable
from qat.sqa import SQAQPU
from qat.sqa.sqa_qpu import integer_to_spins
# 1. Extract parameters for SQA
problem_parameters_dict = graph_colouring_problem.get_best_parameters()
n_monte_carlo_updates = problem_parameters_dict["n_monte_carlo_updates"]
n_trotters = problem_parameters_dict["n_trotters"]
n_steps = int(n_monte_carlo_updates /
(n_trotters * number_of_spins)) # the last one is the number of spins, i.e. the problem size
temp_max = problem_parameters_dict["temp_max"]
temp_min = problem_parameters_dict["temp_min"]
gamma_max = problem_parameters_dict["gamma_max"]
gamma_min = problem_parameters_dict["gamma_min"]
# 2. Create a temperature and a gamma schedule
tmax = 1.0
t = Variable("t", float)
temp_t = temp_min * (t / tmax) + temp_max * (1 - t / tmax)
gamma_t = gamma_min * (t / tmax) + gamma_max * (1 - t / tmax)
# 3. Create a job and send it to a QPU
problem_job = graph_colouring_problem.to_job(gamma_t=gamma_t, tmax=tmax, nbshots=1)
sqa_qpu = SQAQPU(temp_t=temp_t, n_steps=n_steps, n_trotters=n_trotters)
problem_result = sqa_qpu.submit(problem_job)
# 4. Present best configuration
state_int = problem_result.raw_data[0].state.int # raw_data is a list of Samples - one per shot
solution_configuration = integer_to_spins(state_int, number_of_spins)
solution_configuration_reshaped = solution_configuration.reshape((number_of_nodes, number_of_colours))
print("Solution configuration: \n" + str(solution_configuration_reshaped) + "\n")
# 5. Show a list of nodes for each colour
from itertools import product
vertices_dictionary = {colour:[] for colour in range(number_of_colours)}
for row, col in product(range(number_of_nodes), range(number_of_colours)):
if solution_configuration[row * number_of_colours + col] == 1:
vertices_dictionary[col].append(row)
print("Dictionary of vertices for each colour:\n" + str(vertices_dictionary) + "\n")
# 6. Draw the coloured graph
import random
colour_list = ['#'+''.join(random.sample('0123456789ABCDEF', 6)) for i in range(number_of_colours)]
plt.figure(figsize=(10, 6))
for node in graph.nodes():
    graph.nodes[node].pop('colour', None)  # safely drop any colour left from a previous run
for i in range (number_of_colours):
colour_i = colour_list[i]
nodes_with_colour_i = vertices_dictionary[i]
for node in nodes_with_colour_i: graph.nodes[node]['colour'] = i
nx.draw_networkx(graph,
pos=nodes_positions,
nodelist=nodes_with_colour_i,
node_color=colour_i,
node_size=600,
font_size=16)
nx.draw_networkx_edges(graph, pos=nodes_positions)
plt.show()
# -
# # Solution analysis
#
# In order to examine the output colouring, one may choose to visually inspect the graphs. However, this becomes impractical for very big graphs with large dictionaries. We can therefore add some simple checks to assess the solution.
#
# One such check would be if each vertex has exactly one colour (which may not be the case, for example when the value $1$ appears more than once in a row of the solution spin array).
# +
corrupted_nodes_list = []
corrupted_nodes_colours_dict = {}
for node in range(number_of_nodes):
node_colourings_row = solution_configuration_reshaped[node][:]
colours_array = np.argwhere(node_colourings_row == 1)
number_of_colours_of_node = len(colours_array)
if number_of_colours_of_node != 1:
corrupted_nodes_list.append(node)
corrupted_nodes_colours_dict[node] = colours_array.flatten().tolist()
if len(corrupted_nodes_list) == 0:
print("Each vertex is assigned only one colour !")
else:
print("There are " + "\033[1m" + str(len(corrupted_nodes_list)) +
"\033[0;0m" + " vertices not assigned exactly one colour:\n" +
str(corrupted_nodes_list))
print("\nVertices and their colours:")
for node, colours in corrupted_nodes_colours_dict.items(): print(node, ':', colours)
# -
# Yet, it may be that the final colour distribution nevertheless produces a valid colouring. So let us check if each edge connects vertices of different colours. If this is not the case, we can output the corrupted edges and their respective colours.
# +
corrupted_edges_colours_dict = {}
for (node_i, node_j) in graph.edges():
if 'colour' not in graph.nodes[node_i]:
if 'colour' not in graph.nodes[node_j]:
corrupted_edges_colours_dict[(node_i, node_j)] = ["no colour", "no colour"]
else:
corrupted_edges_colours_dict[(node_i, node_j)] = ["no colour", graph.nodes[node_j]['colour']]
else:
node_i_colour = graph.nodes[node_i]['colour']
if 'colour' not in graph.nodes[node_j]:
corrupted_edges_colours_dict[(node_i, node_j)] = [node_i_colour, "no colour"]
else:
node_j_colour = graph.nodes[node_j]['colour']
if node_i_colour == node_j_colour:
corrupted_edges_colours_dict[(node_i, node_j)] = [node_i_colour, node_j_colour]
if len(corrupted_edges_colours_dict) == 0:
print("The graph is perfectly coloured !")
else:
print("There are " + "\033[1m" + str(len(corrupted_edges_colours_dict)) +
"\033[0;0m" + " corrupted edges with colours:")
for edge, colours in corrupted_edges_colours_dict.items(): print(edge, ' : ', colours)
| misc/notebooks/tutorials/combinatorial_optimization/graph_colouring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import libraries
import pandas as pd
import numpy as np
# ### Combine summary statistics and subsetting
#load dataset
pop = pd.read_csv('../datasets/clean/pop.csv', index_col=0, header=0)
display(pop.head())
# For each element, aggregate the population value: get min, max, mean, and median
pop_stats = pop.groupby('Element')['Value'].agg([min, max, np.mean, np.median])
print(pop_stats)
# #### Pivot table
# Pivot for mean and median population value for each element
mean_pop_by_element = pop.pivot_table(values = 'Value', index ='Element', aggfunc=[np.mean, np.median])
#the default aggfunc is mean
print(mean_pop_by_element)
# Print the mean population value for each element and country; fill missing values with 0s;
print(pop.pivot_table(values="Value", index="Element", columns="Area", fill_value=0))
# sum all rows and cols
print('\nTotal from mean:\n', pop.pivot_table(values="Value", index="Element", columns="Area", fill_value=0).sum(0).sum())
# ### Index
# +
# see head of original pop dataframe again to compare
display(pop.head())
# Index by element
pop_e = pop.set_index('Element')
display(pop_e.head())
# Reset the index, keeping its contents
display(pop_e.reset_index(drop=False).head())
# Reset the index, dropping its contents
display(pop_e.reset_index(drop=True).head())
# -
# to subset using a list of values we use `isin`
pop_ru = pop[pop['Element'].isin(['Rural population','Urban population'])]
display(pop_ru.head())
# if the elements of the list are in the index we can just use simple notation
display(pop_e.loc[['Rural population','Urban population']])
# multilevel index, access with tuples
pop_ea = pop.set_index(['Element', 'Area'])
display(pop_ea.head())
display(pop_ea.loc[['Rural population', 'Urban population']])
display(pop_ea.loc[[('Rural population', 'Afghanistan'), ('Rural population', 'Zimbabwe')]])
display(pop_ea.loc[pd.IndexSlice[:, 'Zimbabwe'],:]) # to ignore index at level 0
pop.dtypes
# subset by setting a range
display(pop.loc[:,'Year':'Value'].head())
display(pop_e[(pop_e.Area == 'Zimbabwe') & (pop_e.Year == 2018)].loc['Total Population - Both sexes':'Rural population'])
| notes_on_data_manipulation/notes_on_Pandas2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# ### Create Causal Graphs using ggdag and ggplot2
install.packages("devtools", destdir = "C:/Users/Karli/OneDrive - uni-bonn.de/Uni Bonn//2. Semester/Microeconometrics/Replication study/Github Replication/student-project-LeonardMK/CRAN-packages")
library(devtools)
install_version("Zelig", "4.2-1")
library(Zelig)
| .ipynb_checkpoints/Causal Graphs-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gathering Data and Generating Images for Classification
from IPython.display import FileLink, FileLinks
# +
import os
import shutil
import sys
from time import time
from uuid import uuid4
import numpy as np
import pandas as pd
from data_manager import file_processor
# from returns_quantization import add_returns_in_place
# from utils import *
import datetime
import matplotlib
# +
# np.set_printoptions(threshold=np.nan)
# pd.set_option('display.height', 1000)
# pd.set_option('display.max_rows', 500)
# pd.set_option('display.max_columns', 500)
# pd.set_option('display.width', 1000)
matplotlib.use('Agg')
# -
PATH = 'data/btc/'
# ## Gather Cryptocurrency Data from Exchange APIs
# +
# # TODO?
# -
# # Functions
def compute_returns(p):
close_prices = p['price_close']
close_prices_returns = 100 * ((close_prices.shift(-1) - close_prices) / close_prices).fillna(0.0)
return close_prices_returns.shift(1).fillna(0)
def plot_p(df):
import matplotlib.pyplot as plt
    from mpl_finance import candlestick2_ohlc  # matplotlib.finance was removed from matplotlib; the standalone mpl_finance package provides the same API
fig, ax = plt.subplots()
candlestick2_ohlc(ax,
df['price_open'].values,
df['price_high'].values,
df['price_low'].values,
df['price_close'].values,
width=0.6,
colorup='g',
colordown='r',
alpha=1)
plt.show()
print('Done.')
def save_to_file(df, filename):
import matplotlib.pyplot as plt
    from mpl_finance import candlestick2_ohlc  # matplotlib.finance was removed from matplotlib; the standalone mpl_finance package provides the same API
fig, ax = plt.subplots()
candlestick2_ohlc(ax,
df['price_open'].values,
df['price_high'].values,
df['price_low'].values,
df['price_close'].values,
width=0.6,
colorup='g',
colordown='r',
alpha=1)
plt.savefig(filename)
plt.close(fig)
def mkdir_p(path):
import os
import errno
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
# # Process Tick Data
data_file = f'{PATH}coinbaseUSD.csv'
data_output_folder = f'{PATH}btcgraphs/'
# ### This is the `file_processor` function
# def file_processor(data_file):
# +
print('Reading bitcoin market data file here: {}.'.format(data_file))
# create df from tick data
# [unix timestamp, price, volume]
# use the timestamp as the index
d = pd.read_table(data_file, sep=',', header=None, index_col=0, names=['price', 'volume'])
# map the index to datetime
d.index = d.index.map(lambda ts: datetime.datetime.fromtimestamp(int(ts)))
d.index.names = ['DateTime_UTC']
# split the prices into 5 minute groups
p = pd.DataFrame(d['price'].resample('5Min').ohlc().bfill())
p.columns = ['price_open', 'price_high', 'price_low', 'price_close']
# sum volume by 5 minute chunks
v = pd.DataFrame(d['volume'].resample('5Min').sum())
v.columns = ['volume']
p['volume'] = v['volume']
# # drop NaN values.
# p = p.dropna()
p.isnull().sum()
print('Done')
# -
print(p.isnull().sum())
# +
# choosing everything starting after 2015.... no data for first 6 days unfortunately... might need to find new data source
# p = p.loc[p.index >= datetime.datetime(2015,1,1,0,0,0)]
# p.head(n=5)
# -
# # Generate the Data
# ### This is the generate_cnn_dataset function
# def generate_cnn_dataset(data_folder, bitcoin_file, get_class_name):
data_folder = data_output_folder
# compute_returns(p)
close_prices = p['price_close']
close_prices_returns = 100 * ((close_prices.shift(-1) - close_prices) / close_prices).fillna(0.0)
close_prices_returns = close_prices_returns.shift(1).fillna(0)
close_prices_returns.head(n=5)
# +
# def add_returns_in_place(p):
# close_prices_returns = compute_returns(p)
num_bins = 10
returns_bins = pd.qcut(close_prices_returns, num_bins)
bins_categories = returns_bins.values.categories
returns_labels = pd.qcut(close_prices_returns, num_bins, labels=False)
p['close_price_returns'] = close_prices_returns
p['close_price_returns_bins'] = returns_bins
p['close_price_returns_labels'] = returns_labels
# -
bins_categories
p.tail(n=20)
# return df, bins_categories
p.to_csv(f"{PATH}btc-out.csv", sep = "\t")
# +
# btc_df, levels = add_returns_in_place(btc_df)
levels = bins_categories
print('-' * 80)
print('Those values should be roughly equal to 1/len(levels):')
for ii in range(len(levels)):
print(ii, np.mean((p['close_price_returns_labels'] == ii).values))
print(levels)
print('-' * 80)
# -
# Two class UP/DOWN version
def get_price_direction(btc_df, btc_slice, i, slice_size):
last_price = btc_slice[-2:-1]['price_close'].values[0]
next_price = btc_df[i + slice_size:i + slice_size + 1]['price_close'].values[0]
if last_price < next_price:
class_name = 'UP'
else:
class_name = 'DOWN'
return class_name
# Three class version UP/DOWN/HOLD
movement_threshold = 1e-4 # this is a $1.00 movement at BTC = $10,000
def get_price_direction2(btc_df, btc_slice, i, slice_size):
last_price = btc_slice[-2:-1]['price_close'].values[0]
next_price = btc_df[i + slice_size:i + slice_size + 1]['price_close'].values[0]
dif = next_price - last_price
if dif > movement_threshold:
class_name = 'UP'
elif dif < -movement_threshold:
class_name = 'DOWN'
else:
class_name = 'HOLD'
return class_name
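# A quick sanity check of this three-class rule on hand-made toy prices (illustrative only; the function body is condensed from the definition above):

```python
import pandas as pd

movement_threshold = 1e-4

def get_price_direction2(btc_df, btc_slice, i, slice_size):
    # Compare the second-to-last close in the slice with the close
    # immediately after the slice, as in the function above.
    last_price = btc_slice[-2:-1]['price_close'].values[0]
    next_price = btc_df[i + slice_size:i + slice_size + 1]['price_close'].values[0]
    dif = next_price - last_price
    if dif > movement_threshold:
        return 'UP'
    elif dif < -movement_threshold:
        return 'DOWN'
    return 'HOLD'

# Toy closes: a rise, a flat stretch, then a fall.
btc_df = pd.DataFrame({'price_close': [100.0, 100.5, 100.5, 100.5, 99.0]})
slice_size = 2
labels = [get_price_direction2(btc_df, btc_df[i:i + slice_size], i, slice_size)
          for i in range(len(btc_df) - slice_size)]
print(labels)  # ['UP', 'HOLD', 'DOWN']
```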
# +
# number of periods in our input samples
slice_size = 40
# 1/10 of the data "chunks" will be held out for testing
test_every_steps = 10
# number of 5-minute periods we are creating chunks from,
# need to not start chunk within last 40 or will run out of space
n = len(p) - slice_size
shutil.rmtree(data_folder, ignore_errors=True)
# this is the number of samples we are going to make from the data
cycles = 1e6
# -
btc_df = p
for epoch in range(int(cycles)):
st = time()
# choose a random starting point
i = np.random.choice(n)
    # take the following slice_size (40) time periods
btc_slice = btc_df[i:i + slice_size]
if btc_slice.isnull().values.any():
# sometimes prices are discontinuous and nothing happened in one 5min bucket.
# in that case, we consider this slice as wrong and we raise an exception.
# it's likely to happen at the beginning of the data set where the volumes are low.
raise Exception('NaN values detected. Please remove them.')
class_name = get_price_direction(btc_df, btc_slice, i, slice_size)
save_dir = os.path.join(data_folder, 'train', class_name)
if epoch % test_every_steps == 0:
save_dir = os.path.join(data_folder, 'test', class_name)
mkdir_p(save_dir)
filename = save_dir + '/' + str(uuid4()) + '.png'
save_to_file(btc_slice, filename=filename)
print('epoch = {0}, time = {1:.3f}, filename = {2}'.format(str(epoch).zfill(8), time() - st, filename))
| old_notebooks/generate-data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import os
import cPickle as pickle
import numpy as np
import pandas as pd
from sklearn.cross_validation import train_test_split
import sys
sys.path.append('../')
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pylab as plt
from src.TTRegression import TTRegression
import urllib
# -
train_fraction = 0.8
def get_dummies(d, col):
dd = pd.get_dummies(d.ix[:, col])
dd.columns = [str(col) + "_%s" % c for c in dd.columns]
return(dd)
# +
# Reproducability.
np.random.seed(0)
dataset_path = 'car.data'
if (not os.path.isfile(dataset_path)):
dataset_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data'
print('Downloading data from %s' % dataset_url)
urllib.urlretrieve(dataset_url, dataset_path)
print('... loading data')
car_names = ['buying', 'maint', 'doors', 'persons', 'lug_boot', 'safety', 'target']
car_data = pd.read_csv(dataset_path, names=car_names, header=None)
print "dataset len: %d\n" % len(car_data)
print "Original targets:"
print car_data.target.value_counts()
# Make binary classification problem.
car_target = car_data['target']
car_target_binarized = (car_target.values != 'unacc') * 1
car_features = car_data.ix[:, :6]
car_features_one_hot = pd.concat([get_dummies(car_features, col) for col in list(car_features.columns.values)], axis = 1)
car_features_one_hot = car_features_one_hot.as_matrix()
# Shuffle.
idx_perm = np.random.permutation(len(car_data))
X, y = car_features_one_hot[idx_perm, :], car_target_binarized[idx_perm]
num_objects = y.size
train_size = np.round(num_objects * train_fraction).astype(int)
X_train = X[:train_size, :]
y_train = y[:train_size]
X_val = X[train_size:, :]
y_val = y[train_size:]
# -
print(X_train.shape)
print(X_val.shape)
# # Train
# +
plain_sgd = {}
riemannian_sgd = {}
for batch_size in [-1, 100, 500]:
# To use the same order of looping through objects for all runs.
np.random.seed(0)
model = TTRegression('all-subsets', 'logistic', 4, 'sgd', max_iter=10000, verbose=1,
fit_intercept=False, batch_size=batch_size, reg=0.)
model.fit_log_val(X_train, y_train, X_val, y_val)
plain_sgd[batch_size] = model
np.random.seed(0)
# To use the same order of looping through objects for all runs.
    riemannian_model = TTRegression('all-subsets', 'logistic', 4, 'riemannian-sgd', max_iter=800, verbose=1,
                                    batch_size=batch_size, fit_intercept=False, reg=0.)
    riemannian_model.fit_log_val(X_train, y_train, X_val, y_val)
    riemannian_sgd[batch_size] = riemannian_model
# -
# ## Train from random init
# +
import tt
np.random.seed(0)
w_init = tt.rand(2 * np.ones(X.shape[1]), r=4)
# Divide by norm to make sure the norm is reasonable,
# round to make sure all the ranks are valid.
w_init = ((1 / w_init.norm()) * w_init).round(eps=0)
plain_sgd_rand = {}
riemannian_sgd_rand = {}
batch_size = -1
# To use the same order of looping through objects for all runs.
np.random.seed(0)
model_rand = TTRegression('all-subsets', 'logistic', 4, 'sgd', max_iter=5000, verbose=1,
fit_intercept=False, batch_size=batch_size, reg=0., coef0=w_init)
model_rand.fit_log_val(X_train, y_train, X_val, y_val)
plain_sgd_rand[batch_size] = model_rand
np.random.seed(0)
# To use the same order of looping through objects for all runs.
riemannian_model_rand = TTRegression('all-subsets', 'logistic', 4, 'riemannian-sgd', max_iter=1600, verbose=1,
batch_size=batch_size, fit_intercept=False, reg=0., coef0=w_init)
riemannian_model_rand.fit_log_val(X_train, y_train, X_val, y_val)
riemannian_sgd_rand[batch_size] = riemannian_model_rand
# -
# # Save
with open('data/riemannian_vs_baseline_car.pickle', 'wb') as f:
obj = {'plain_sgd': plain_sgd, 'riemannian_sgd': riemannian_sgd,
'plain_sgd_rand': plain_sgd_rand, 'riemannian_sgd_rand': riemannian_sgd_rand,
'X_train': X_train, 'y_train': y_train, 'X_val': X_val, 'y_val': y_val}
pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)
# # Plot train
# +
params = {
'axes.labelsize': 8,
'font.size': 8,
'legend.fontsize': 10,
'xtick.labelsize': 10,
'ytick.labelsize': 10,
'text.usetex': False,
'figure.figsize': [4, 3]
}
mpl.rcParams.update(params)
colors = [(31, 119, 180), (44, 160, 44), (255, 127, 14), (255, 187, 120)]
# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
for i in range(len(colors)):
r, g, b = colors[i]
colors[i] = (r / 255., g / 255., b / 255.)
# +
with open('data/riemannian_vs_baseline_car.pickle', 'rb') as f:
logs = pickle.load(f)
fig = plt.figure()
plt.loglog(logs['plain_sgd'][-1].logger.time_hist,
logs['plain_sgd'][-1].logger.loss_hist['train']['logistic'], label='Cores GD',
linewidth=2, color=colors[0])
plt.loglog(logs['plain_sgd'][100].logger.time_hist, logs['plain_sgd'][100].logger.loss_hist['train']['logistic'],
label='Cores SGD 100', linewidth=2, color=colors[1])
plt.loglog(logs['plain_sgd'][500].logger.time_hist, logs['plain_sgd'][500].logger.loss_hist['train']['logistic'],
label='Cores SGD 500', linewidth=2, color=colors[2])
grid = np.array([0.01, 1, 5, 30, 60]) / 2.5
x = logs['riemannian_sgd'][-1].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd'][-1].logger.time_hist,
logs['riemannian_sgd'][-1].logger.loss_hist['train']['logistic'],
marker='o', markevery=marker_indices, label='Riemann GD', linewidth=2, color=colors[0])
grid = np.array([0.05, 2, 12, 30]) / 2.5
x = logs['riemannian_sgd'][100].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd'][100].logger.time_hist,
logs['riemannian_sgd'][100].logger.loss_hist['train']['logistic'],
marker='o', markevery=marker_indices, label='Riemann 100', linewidth=2, color=colors[1])
grid = np.array([0.1, 7.5, 60]) / 2.5
x = logs['riemannian_sgd'][500].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd'][500].logger.time_hist,
logs['riemannian_sgd'][500].logger.loss_hist['train']['logistic'],
marker='o', markevery=marker_indices, label='Riemann 500', linewidth=2, color=colors[2])
grid = np.array([0.1, 3, 20, 53])
x = logs['riemannian_sgd_rand'][-1].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd_rand'][-1].logger.time_hist,
           logs['riemannian_sgd_rand'][-1].logger.loss_hist['train']['logistic'],
           marker='s', markevery=marker_indices, label='Riemann GD rand init', linewidth=2, color=colors[0])
# plt.loglog(plain_sgd_rand[-1].logger.time_hist,
# plain_sgd_rand[-1].logger.loss_hist['train']['logistic'],
# marker='v', markevery=marker_indices, label='Cores GD rand init', linewidth=2, color=colors[0])
legend = plt.legend(loc='upper left', bbox_to_anchor=(1, 1.04), frameon=False)
plt.xlabel('time (s)')
plt.ylabel('logistic loss')
plt.minorticks_off()
ax = plt.gca()
ax.set_xlim([0.02, 100])
ax.set_ylim([1e-17, 2])
fig.tight_layout()
# -
fig.savefig('data/riemannian_vs_plain_car_train.pdf', bbox_extra_artists=(legend,), bbox_inches='tight')
# # Plot validation
# +
with open('data/riemannian_vs_baseline_car.pickle', 'rb') as f:
logs = pickle.load(f)
fig = plt.figure()
plt.loglog(logs['plain_sgd'][-1].logger.time_hist,
logs['plain_sgd'][-1].logger.loss_hist['valid']['logistic'], label='Cores GD',
linewidth=2, color=colors[0])
plt.loglog(logs['plain_sgd'][100].logger.time_hist, logs['plain_sgd'][100].logger.loss_hist['valid']['logistic'],
label='Cores SGD 100', linewidth=2, color=colors[1])
plt.loglog(logs['plain_sgd'][500].logger.time_hist, logs['plain_sgd'][500].logger.loss_hist['valid']['logistic'],
label='Cores SGD 500', linewidth=2, color=colors[2])
grid = np.array([0.01, 1, 5, 30, 60]) / 2.5
x = logs['riemannian_sgd'][-1].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd'][-1].logger.time_hist,
logs['riemannian_sgd'][-1].logger.loss_hist['valid']['logistic'],
marker='o', markevery=marker_indices, label='Riemann GD', linewidth=2, color=colors[0])
grid = np.array([0.05, 2, 12, 30]) / 2.5
x = logs['riemannian_sgd'][100].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd'][100].logger.time_hist,
logs['riemannian_sgd'][100].logger.loss_hist['valid']['logistic'],
marker='o', markevery=marker_indices, label='Riemann 100', linewidth=2, color=colors[1])
grid = np.array([0.1, 7.5, 60]) / 2.5
x = logs['riemannian_sgd'][500].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd'][500].logger.time_hist,
logs['riemannian_sgd'][500].logger.loss_hist['valid']['logistic'],
marker='o', markevery=marker_indices, label='Riemann 500', linewidth=2, color=colors[2])
grid = np.array([0.1, 3, 20, 53])
x = logs['riemannian_sgd_rand'][-1].logger.time_hist
marker_indices = np.searchsorted(x, grid)
plt.loglog(logs['riemannian_sgd_rand'][-1].logger.time_hist,
           logs['riemannian_sgd_rand'][-1].logger.loss_hist['valid']['logistic'],
           marker='s', markevery=marker_indices, label='Riemann GD rand init', linewidth=2, color=colors[0])
# plt.loglog(plain_sgd_rand[-1].logger.time_hist,
# plain_sgd_rand[-1].logger.loss_hist['valid']['logistic'],
# marker='v', label='Cores GD rand init', linewidth=2, color=colors[0])
legend = plt.legend(loc='upper left', bbox_to_anchor=(1, 1.04), frameon=False)
plt.xlabel('time (s)')
plt.ylabel('validation logistic loss')
plt.minorticks_off()
ax = plt.gca()
ax.set_xlim([0.02, 100])
ax.set_ylim([1e-17, 2])
fig.tight_layout()
# -
fig.savefig('data/riemannian_vs_plain_car_validation.pdf', bbox_extra_artists=(legend,), bbox_inches='tight')
| experiments/Riemannian_vs_baseline_car.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Style Transfer with Deep Neural Networks
#
#
# In this notebook, I’ll *recreate* a style transfer method that is outlined in the paper, [Image Style Transfer Using Convolutional Neural Networks, by Gatys](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf) in PyTorch.
#
# In this paper, style transfer uses the features found in the 19-layer VGG Network, which is composed of a series of convolutional and pooling layers, and a few fully-connected layers.
#
# ### Separating Style and Content
#
# Style transfer relies on separating the content and style of an image. Given one content image and one style image, we aim to create a new _target_ image which should contain our desired content and style components:
# * objects and their arrangement are similar to that of the **content image**
# * style, colors, and textures are similar to that of the **style image**
#
# In this notebook, I'll use a pre-trained VGG19 Net to extract content or style features from an image.
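# In the Gatys et al. formulation, the "style" of an image is captured by the Gram
# (channel-correlation) matrices of feature maps from several VGG layers. A minimal
# NumPy sketch of that computation, assuming a (channels, height, width) layout
# (illustrative only; this notebook's actual style computation lives in the helpers
# imported from `transfer`):

```python
import numpy as np

def gram_matrix(feature_map):
    # feature_map: (channels, height, width) activations from one VGG layer
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)
    # (c, c) matrix of correlations between channel activations:
    # similar Gram matrices mean similar textures/colors, i.e. similar style
    return flat @ flat.T

g = gram_matrix(np.random.rand(8, 4, 4).astype(np.float32))
```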
# +
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
from os import path
import pickle
import cv2
import torch
from torchvision import models
from helper import *
from transfer import *
# %matplotlib inline
# -
# ### Set up whether I am using Google Colab or not
# +
##### Change the next variable to False if you are not using Google Colab
using_colab = False
#########################################################################
if using_colab:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
drive_path = '/content/drive/My Drive/Colab Notebooks/'
# -
# ### Pick input video and style to be transferred
video_file_name = 'mendelson.mp4'
style_file_name = 'starry_night.jpg'
# ### I will use the VGG19 pretrained model
# +
from ipywidgets import IntProgress
# get the "features" portion of VGG19 (we will not need the "classifier" portion)
vgg = models.vgg19(pretrained=True).features
# freeze all VGG parameters since I'm only optimizing the target image
for param in vgg.parameters():
param.requires_grad_(False)
# -
# ### The following cell checks if there is a GPU available
# +
# move the model to GPU, if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f'===== Using {device} =====')
vgg = vgg.to(device);
if torch.cuda.is_available():
print(f'GPU in use: {torch.cuda.get_device_name(0)}')
# -
# ### Let's see the style picture
# +
# Style to be applied
style_file_name = 'styles/' + style_file_name
if using_colab:
style_file_name = drive_path + 'frames/' + style_file_name
style = load_image(style_file_name).to(device)
plt.imshow(im_convert(style))
plt.show()
# -
# ### Only run the following cell if you don't have the video frames
#
# This cell extracts the frames from the input video and saves them to disk.
# +
img_array = []
# Opens the Video file
cap = cv2.VideoCapture('input videos/' + video_file_name)
# https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html
# https://stackoverflow.com/questions/39953263/get-video-dimension-in-python-opencv/39953739
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = round(cap.get(cv2.CAP_PROP_FPS))
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# Saving info to file
with open('properties.pkl', 'wb') as f:
pickle.dump([width, height, fps, total_frames], f)
current_frame_number = 0
while(cap.isOpened()):
ret, frame = cap.read()
    if not ret:
break
img_array.append(frame)
file_name = 'frames/input frames/input_frame_{:08d}.jpg'.format(current_frame_number)
matplotlib.image.imsave(file_name, img_array[-1])
current_frame_number += 1
cap.release()
cv2.destroyAllWindows()
img_array.clear()
# -
# ### Reading frames saved to disk from the input video.
# +
# Loading frames from files
with open('properties.pkl', 'rb') as f:
width, height, fps, total_frames = pickle.load(f)
# height, width, fps, total_frames = pickle.load(f)
input_frames = []
current_frame_number = 0
file_name = 'frames/input frames/input_frame_{:08d}.jpg'.format(current_frame_number)
if using_colab:
file_name = drive_path + file_name
while path.exists(file_name):
input_frames.append(load_image(file_name).to(device))
current_frame_number += 1
file_name = 'frames/input frames/input_frame_{:08d}.jpg'.format(current_frame_number)
if using_colab:
file_name = drive_path + file_name
# -
# ### If you already have the stylized frames, skip the next cell
#
# This cell performs the style transfer on each frame and saves it to disk.
# Transfering style and saving to file
for idx, image in enumerate(input_frames):
print(f'Currently evaluating frame {idx + 1} of {total_frames}')
frame = transfer_to_frame(image, style, vgg, device)
file_name = 'frames/style frames/stylized_frame_{:08d}.jpg'.format(idx)
if using_colab:
file_name = drive_path + file_name
current = frame.to('cpu').detach()
matplotlib.image.imsave(file_name, im_convert(current))
# ### Reading stylized frames from the disk
# +
stylized_frames = []
# Load stylized frames from file
current_frame_number = 0
file_name = 'frames/style frames/stylized_frame_{:08d}.jpg'.format(current_frame_number)
if using_colab:
file_name = drive_path + file_name
while path.exists(file_name):
stylized_frames.append(load_image(file_name).to(device))
current_frame_number += 1
file_name = 'frames/style frames/stylized_frame_{:08d}.jpg'.format(current_frame_number)
if using_colab:
file_name = drive_path + file_name
# -
# ### Prepare the frames and join them into a sequence of frames
# +
final_frames = []
# Convert tensors back to numpy arrays
for tensor_frame in stylized_frames:
temp_frame = (im_convert(tensor_frame)*255).astype(np.uint8)
final_frames.append(cv2.resize(temp_frame, dsize=(width, height), interpolation=cv2.INTER_CUBIC))
# -
# ### Write the final stylized video to disk
# +
out_name = 'stylized_video.mp4'
if using_colab:
out_name = drive_path + 'frames/' + out_name
out = cv2.VideoWriter(out_name, cv2.VideoWriter_fourcc(*'X264'), fps, (width, height))
for i in range(len(final_frames)):
out.write(final_frames[i])
out.release()
| video_style.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
data = pd.read_csv("./thegurus-opendata-renfe-trips.csv",nrows=3579770)
data.to_csv("renfe_trimmed.csv",index=False)
data.head()
data = data.drop(columns=['id','seats','meta','company','duration'])
data.head()
cols = list(data.columns)
cols = [cols[-1]] + cols[:-1]
data = data[cols]
data.head()
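# The reindexing above moves the last column to the front while keeping the
# rest in order. The list trick, illustrated on its own:

```python
cols = ['a', 'b', 'c', 'd']
# take the last element, then everything except the last
rotated = [cols[-1]] + cols[:-1]
# -> ['d', 'a', 'b', 'c']
```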
column_dict = {
"departure":"start_date",
"arrival":"end_date",
"vehicle_type":"train_type",
"vehicle_class":"train_class"
}
data = data.rename(columns = column_dict)
data.head()
data.to_csv("renfe.csv",index=False)
| Spanish Renfe Data Preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regression
#
# A regression is a predictive model that looks for a functional relationship between a set of variables (X) and a continuous outcome variable (y).
#
# In other words, given an input array we try to predict a numerical value.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Weight - Height dataset
df = pd.read_csv('../data/weight-height.csv')
df.head()
# ### Visualize the dataset
plt.figure(figsize=(15,10))
plt.scatter(df['Height'], df['Weight'], alpha = 0.2)
plt.title('Humans', size=20)
plt.xlabel('Height (in)', size=20)
plt.ylabel('Weight (lbs)', size=20)
# ## Visualize male and female populations
#
# This could be done in many ways, below are two examples.
# +
# males = df[df['Gender'] == 'Male']
# females = df[df['Gender'] == 'Female']
males = df.query('Gender == "Male"')
females = df.query('Gender == "Female"')
plt.figure(figsize=(15,10))
plt.scatter(males['Height'], males['Weight'], alpha = 0.3, label = 'males', c = 'c')
plt.scatter(females['Height'], females['Weight'], alpha = 0.3, label = 'females', c = 'pink')
plt.title('Humans', size = 20)
plt.xlabel('Height (in)', size = 20)
plt.ylabel('Weight (lbs)', size = 20)
plt.legend()
# -
# ## Linear regression
from sklearn.linear_model import LinearRegression
# +
# create instance of linear regression class
regr = LinearRegression()
# what's the purpose of the double brackets on the next line?
# try to print out df['Height'].values and x
# to figure it out (hint: scikit-learn expects X to be a 2-D feature matrix)
x = df[['Height']].values
y = df['Weight']
# split data in 2 parts (20% test / 80% train)
n_data = len(y)
ind = np.arange(n_data)
np.random.shuffle(ind)
split_point = n_data // 5
test_ind = ind[:split_point]
train_ind = ind[split_point:]
x_train = x[train_ind]
x_test = x[test_ind]
y_train = y[train_ind]
y_test = y[test_ind]
regr.fit(x_train, y_train)
# -
# The coefficients
print("Slope: %.2f" % regr.coef_[0])
print("Intercept: %.2f" % regr.intercept_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr.predict(x_test) - y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(x_test, y_test))
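# `regr.score` returns the R² coefficient of determination. A quick sketch of
# what that number means (a hypothetical helper for intuition, not scikit-learn's
# implementation):

```python
import numpy as np

def r2_manual(y_true, y_pred):
    # R² = 1 - SS_res / SS_tot: the fraction of variance explained by the model
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
perfect = r2_manual(y, y)                        # 1.0: perfect prediction
baseline = r2_manual(y, np.full(4, y.mean()))    # 0.0: always predicting the mean
```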
plt.scatter(x_test, y_test)
plt.plot(x_test, regr.predict(x_test), color = 'red')
plt.title('Humans')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
# ## Exercise 1
#
# In this exercise we extend what we have learned about linear regression to a dataset with more than one feature. Here are the steps to complete it:
# - Load the dataset ../data/housing-data.csv
# - plot the histograms for each feature using `pandas.plotting.scatter_matrix`
# - create 2 variables called X and y: X shall be a matrix with 3 columns (sqft,bdrms,age) and y shall be a vector with 1 column (price)
# - create a linear regression model
# - split the data into train and test with a 20% test size
# - train the model on the training set and check its R2 coefficient on training and test set
# - how's your model doing?
# This dataset contains multiple columns:
# - sqft
# - bdrms
# - age
# - price
#
# + tags=["solution", "empty"]
df = pd.read_csv('../data/housing-data.csv')
# + tags=["solution"]
df.head()
# + tags=["solution"]
from pandas.plotting import scatter_matrix
# + tags=["solution"]
_ = scatter_matrix(df, alpha=0.5, figsize=(8, 8))
# + tags=["solution"]
regr = LinearRegression()
# + tags=["solution"]
X = df[['sqft', 'bdrms', 'age']]
y = df['price']
# + tags=["solution"]
regr.fit(X, y)
# + tags=["solution"]
regr.coef_
# + tags=["solution"]
regr.intercept_
# + tags=["solution"]
regr.score(X, y)
# + tags=["solution"]
regr.predict([[2000, 3, 20]])
# -
# ## Exercise 2
#
# - split your housing dataset into training and test sets using [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) with a test size of 30% and a random_state=42
# - Train the previous model on the training set and check the R2 score on the test set
# - Train a regularized regression model like [`Ridge`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) or [`Lasso`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) on the training dataset and test the score on the test set
# - does regularization improve the score?
# - Try changing the regularization strength alpha
# + tags=["solution", "empty"]
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split
# + tags=["solution"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
for model in [LinearRegression(), Ridge(), Lasso()]:
model.fit(X_train, y_train)
print(model)
score = model.score(X_train, y_train)
print("Train Score: {:0.3f}".format(score))
score = model.score(X_test, y_test)
print("Test Score: {:0.3f}".format(score))
print()
# -
# *Copyright © 2017 CATALIT LLC. All rights reserved.*
| solutions_do_not_open/Lab_03_ML Regression_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/PacktPublishing/Modern-Computer-Vision-with-PyTorch/blob/master/Chapter05/Resnet_block_architecture.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="cuS9OIFRPtEW"
import torch
from torch import nn
# + id="S339gpWBP06u"
class ResLayer(nn.Module):
def __init__(self,ni,no,kernel_size,stride=1):
super(ResLayer, self).__init__()
padding = kernel_size - 2
self.conv = nn.Sequential(
nn.Conv2d(ni, no, kernel_size, stride,
padding=padding),
nn.ReLU()
)
def forward(self, x):
return self.conv(x) + x
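# Note that `self.conv(x) + x` only works when the conv output matches `x` exactly,
# so the layer implicitly assumes `ni == no` and that the padding preserves the
# spatial size. The `padding = kernel_size - 2` choice does that only for 3x3
# kernels at stride 1, as the standard output-size formula shows (a quick check,
# not part of the book's code):

```python
def conv_out_size(size, kernel_size, stride=1, padding=0):
    # standard formula for nn.Conv2d output size along one spatial dimension
    return (size + 2 * padding - kernel_size) // stride + 1

# kernel_size - 2 preserves the size for k=3 ...
same = conv_out_size(32, 3, padding=3 - 2)
# ... but not for k=5, where "same" padding would be (k - 1) // 2 = 2
grown = conv_out_size(32, 5, padding=5 - 2)
```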
| Chapter05/41_Resnet_block_architecture.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# Import libraries
## Data Science libraries
import numpy as np
import pandas as pd
# +
# Load csv files into DataFrames
df_accidents = pd.read_csv("./dataset_raw/accidents_2017.csv")
# df_air_quality = pd.read_csv("./dataset/air_quality_Nov2017.csv")
# df_air_stations = pd.read_csv("./dataset/air_stations_Nov2017.csv")
# df_births = pd.read_csv("./dataset/births.csv")
# df_bus_stops = pd.read_csv("./dataset/bus_stops.csv")
# df_deaths = pd.read_csv("./dataset/deaths.csv")
# df_immigrants_by_nationality = pd.read_csv("./dataset/immigrants_by_nationality.csv")
# df_immigrants_emigrants_by_age = pd.read_csv("./dataset/immigrants_emigrants_by_age.csv")
# df_immigrants_emigrants_by_destination = pd.read_csv("./dataset/immigrants_emigrants_by_destination.csv")
# df_immigrants_emigrants_by_destination2 = pd.read_csv("./dataset/immigrants_emigrants_by_destination2.csv")
# df_immigrants_emigrants_by_sex = pd.read_csv("./dataset/immigrants_emigrants_by_sex.csv")
# df_life_expectancy = pd.read_csv("./dataset/life_expectancy.csv")
# df_most_frequent_baby_names = pd.read_csv("./dataset/most_frequent_baby_names.csv")
# df_most_frequent_names = pd.read_csv("./dataset/most_frequent_names.csv")
# df_population = pd.read_csv("./dataset/population.csv")
# df_transports = pd.read_csv("./dataset/transports.csv")
# df_unemployment = pd.read_csv("./dataset/unemployment.csv")
# +
# List of dataframes for EDA
df_list = [
df_accidents,
# df_air_quality,
# df_air_stations,
# df_births,
# df_bus_stops,
# df_deaths,
# df_immigrants_by_nationality,
# df_immigrants_emigrants_by_age,
# df_immigrants_emigrants_by_destination,
# df_immigrants_emigrants_by_destination2,
# df_immigrants_emigrants_by_sex,
# df_life_expectancy,
# df_most_frequent_baby_names,
# df_most_frequent_names,
# df_population,
# df_transports,
# df_unemployment
]
# +
# Join immigrants emigrants by destination
# print(df_immigrants_emigrants_by_destination.columns == df_immigrants_emigrants_by_destination2.columns)
# print(df_immigrants_emigrants_by_destination.shape)
# print(df_immigrants_emigrants_by_destination2.shape)
# +
# df_immigrants_emigrants_by_destination = df_immigrants_emigrants_by_destination.merge(
# df_immigrants_emigrants_by_destination2,
# left_on=['from', 'to', 'weight'],
# right_on=['from', 'to', 'weight'],
# how='outer', sort=False)
# +
# Accidents
# + tags=[]
print("Shape")
df_accidents.shape
# -
df_accidents.info(verbose=False, memory_usage="deep")
print("Head")
df_accidents.head()
# +
# Column names have spaces
print("Columns")
df_accidents.columns
# +
# Data types can be improved to reduce memory usage
print("Data Types")
df_accidents.dtypes
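# Downcasting helps because narrower integer types use proportionally fewer bytes,
# as long as every value fits the target range. A toy NumPy illustration (not the
# notebook's data):

```python
import numpy as np

counts = np.arange(100, dtype=np.int64)   # default: 8 bytes per value
small = counts.astype(np.int8)            # 1 byte; safe because 0..99 fits in int8
# the values are identical; only the storage width changed
```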
# +
# We don't have null values, but there are some columns with an "Unknown" string
print("Null values")
df_accidents.isna().sum()
# +
# We can deduce when to use the category data type by looking at the number of uniques
print("Number of uniques")
df_accidents.nunique(axis=0)
# +
# Fix column names
df_accidents.columns = [column \
.strip() \
.lower() \
.replace(' ', '_') \
.replace('(', '') \
.replace(')', '') for column in df_accidents.columns.values.tolist()]
df_accidents.columns
# +
# Clean the ID column, which has spaces
print(df_accidents.loc[0, ["id"]].values)
df_obj = df_accidents.select_dtypes(['object'])
df_accidents[df_obj.columns] = df_obj.apply(lambda x: x.str.strip())
print(df_accidents.loc[0, ["id"]].values[0])
# +
# Deal with duplicated rows
print(df_accidents.duplicated().sum())
df_accidents = df_accidents[~df_accidents.duplicated()]
print(df_accidents.duplicated().sum())
# + tags=[]
# Build a list of Timestamps from the month/day/hour columns and attach it as a 'date' column
date = []
for index, row in df_accidents.iterrows():
time = pd.Timestamp(f'2017-{row["month"]}-{row["day"]} {row["hour"]}:00:00')
date.append(time)
df_accidents = df_accidents.assign(date=date)
# df_accidents.insert(loc=1, column="dates", value=dates)
# df_accidents.loc[:, "dates"] = dates
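# The iterrows loop above works, but pd.to_datetime can assemble the same column
# vectorially from year/month/day/hour component columns. A toy sketch (illustration
# only; not applied to df_accidents here):

```python
import pandas as pd

toy = pd.DataFrame({"month": [1, 12], "day": [5, 31], "hour": [9, 23]})
toy = toy.assign(year=2017)
# pd.to_datetime accepts a frame whose columns are named year/month/day/hour
toy["date"] = pd.to_datetime(toy[["year", "month", "day", "hour"]])
```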
# + tags=[]
# The district column has "Unknown" values (effectively nulls), and given that we have longitude + latitude,
# this column will not be kept in the final DataFrame
df_accidents.query('district_name == "Unknown"')
# +
# Prepare geopy and test it
import re
import geopy
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="name-of-your-user-agent")
location = geolocator.reverse(f'{str(df_accidents.loc[0, "latitude"])}, {str(df_accidents.loc[0, "longitude"])}')
print(location.address)
print(dir(location))
geolocator = Nominatim(user_agent="name-of-your-user-agent")
location = geolocator.geocode("Puerta del Sol, Madrid")
print(type(location))
print(location.address)
print(location.raw)
location = geolocator.geocode("Pul Sol, Madrid")
# Invalid location returns None
print(location)
# +
# As we can see in the above example, querying geopy is very time consuming.
# In fact, there's a limit of 10 requests per second on the server side.
# To make this a little bit quicker, we are going to use multiple threads
# to spread the requests over two concurrent workers.
# We have 10335 rows, so we could use
#
# CPU times: user 2min 27s, sys: 4.94 s, total: 2min 32s
# Wall time: 1h 50min 9s
# +
# Set the id column as index, but without losing it
#df_accidents.set_index(df_accidents["id"], inplace=True)
# Make a copy
# Check if is copy or view from the original
df_copy = df_accidents[ ["id", "longitude", "latitude"] ].copy()
print(df_copy.values.base is df_accidents[ ["id", "longitude", "latitude"] ].values.base)
# Print a sample row
#df_copy.iloc[df_accidents["id"][0], ["id", "latitude", "longitude"] ].values
# Sanity check: a placeholder array with one entry per row
dummy_list = np.arange(df_accidents.shape[0], dtype='object')
print(len(dummy_list) == df_accidents.shape[0])
# Initialize the zipcode column with a placeholder value of 0
df_accidents['zipcode'] = 0
df_accidents.head(3)
# +
# There are rows containing invalid entries, like
# 2017S008429
# Baardheere بااردىرآ, Gedo جدو, Soomaaliya الصومال
# So we need to calculate this locations and return None
# +
# Simple function that processes a single row
def process_location(index):
"""process a single row"""
# fetch the data
#r = requests.get(url_t % id)
# parse the JSON reply
#data = r.json()
# and update some data with PUT
#requests.put(url_t % id, data=data)
_id = df_accidents.iloc[index]["id"]
latitude = df_accidents.iloc[index]["latitude"]
longitude = df_accidents.iloc[index]["longitude"]
data = geolocator.reverse(f'{latitude}, {longitude}')
if data is None:
df_accidents.iloc[index, df_accidents.columns.get_loc('zipcode')] = None
return None
#zipcodes.append(None)
    else:
        matches = re.findall(r"(?<=(?:Barcelona,\sCatalunya,\s))\d{5}(?=(?:,\sEspaña))", data.address)
        if matches:
            # findall returns a list of strings; keep the first (and only) match
            df_accidents.iloc[index, df_accidents.columns.get_loc('zipcode')] = matches[0]
        else:
            df_accidents.iloc[index, df_accidents.columns.get_loc('zipcode')] = None
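# The lookbehind/lookahead pattern extracts the five-digit postal code only when
# it sits between the city/region prefix and the country suffix. A quick check on
# a made-up Nominatim-style address (simplified form of the pattern above):

```python
import re

pattern = r"(?<=Barcelona,\sCatalunya,\s)\d{5}(?=,\sEspaña)"
sample = "Carrer de Mallorca, Eixample, Barcelona, Catalunya, 08013, España"
matches = re.findall(pattern, sample)
```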
# +
# Simple function that processes a range of rows:
def process_range(row_range):
    """Process every row index in row_range."""
    for row_index in row_range:
        process_location(row_index)
# -
process_range(range(0,20))
# +
# We need to pass a list of ranges to the process_range function, which can be generated this way
nthreads = 3
id_range = range(100)
for i in range(nthreads):
print(id_range[i::nthreads])
# +
# map sub-ranges onto threads to allow some number of requests to be concurrent
from threading import Thread
def threaded_process_range(nthreads, id_range):
"""process the id range in a specified number of threads"""
threads = []
# create the threads
for i in range(nthreads):
ids = id_range[i::nthreads]
t = Thread(target=process_range, args=(ids,))
print(t)
threads.append(t)
# start the threads
[ t.start() for t in threads ]
# wait for the threads to finish
[ t.join() for t in threads ]
# +
# The optimal combination is 2 threads and a range of 15
# # %%time
# threaded_process_range(nthreads=2, id_range=range(15))
# CPU times: user 170 ms, sys: 12.3 ms, total: 182 ms
# Wall time: 9.71 s
# # %%time
# threaded_process_range(nthreads=2, id_range=range(53))
# <Thread(Thread-53, initial)>
# <Thread(Thread-54, initial)>
# CPU times: user 568 ms, sys: 42.4 ms, total: 610 ms
# Wall time: 39.1 s
# # %%time
# threaded_process_range(nthreads=2, id_range=range(39))
# <Thread(Thread-51, initial)>
# <Thread(Thread-52, initial)>
# CPU times: user 478 ms, sys: 21.7 ms, total: 499 ms
# Wall time: 29.2 s
# threaded_process_range(nthreads=2, id_range=range(50))
# CPU times: user 447 ms, sys: 31.7 ms, total: 479 ms
# Wall time: 30.2 s
# + tags=[]
# %%time
for i in range(0, df_accidents.shape[0], 15):
threaded_process_range(nthreads=2, id_range=range(i, i+15))
# CPU times: user 2min 27s, sys: 4.94 s, total: 2min 32s
# Wall time: 1h 50min 9s
print("Done!")
df_accidents.head(3)
# +
# Check that a random row has the zipcode inserted
df_accidents.iloc[6876]
# +
# Drop unused columns
df_accidents.drop(columns=["id",
"district_name",
"neighborhood_name",
"street",
"weekday",
"month",
"day",
"hour",
"part_of_the_day",
"mild_injuries",
"serious_injuries"
],
axis="columns", inplace=True)
# -
df_accidents.head(3)
# +
# Reducing Pandas memory usage
df_accidents.memory_usage()
# -
df_accidents.info(verbose=False, memory_usage="deep")
df_accidents.dtypes
df_accidents["victims"] = df_accidents["victims"].astype("int8")
df_accidents["vehicles_involved"] = df_accidents["vehicles_involved"].astype("int8")
df_accidents["longitude"] = df_accidents["longitude"].astype("float32")
df_accidents["latitude"] = df_accidents["latitude"].astype("float32")
df_accidents["zipcode"] = df_accidents["zipcode"].astype("object")
df_accidents.info(verbose=False, memory_usage="deep")
# +
# Insert data in MongoDB
# -
import pymongo
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure, ConfigurationError
# +
# Prepare environment
## OS and Clients
import os
from dotenv import load_dotenv
load_dotenv()
DATABASE_URL = os.getenv("DATABASE_URL")
DATABASE_URL_DEV = os.getenv("DATABASE_URL_DEV")
# +
# Generate Client
# Change this variable to feed one or another DB
connection = "development"
# connection = "production"
if connection == "development":
url_connection = DATABASE_URL_DEV
else:
url_connection = DATABASE_URL
#url_connection = DATABASE_URL
#url_connection_dev = DATABASE_URL_DEV
try:
client = MongoClient(url_connection)
db = client.get_database("open_data_bcn")
except ConnectionFailure:
    print("Connection cannot be established")
except ConfigurationError as error:
    print("There's a configuration error. Check your environment variables")
print(error)
accidents_collection = db["accidents"]
# air_quality_collection = db["air_quality"]
# air_stations_collection = db["air_stations"]
# births_collection = db["births"]
# bus_stops_collection = db["bus_stops"]
# deaths_collection = db["deaths"]
# inmigrants_by_nationality_collection = db["immigrants_by_nationality"]
# inmigrants_emigrants_by_age_collection = db["immigrants_emigrants_by_age"]
# inmigrants_emigrants_by_destination_collection = db["immigrants_emigrants_by_destination"]
# inmigrants_emigrants_by_sex_collection = db["immigrants_emigrants_by_sex"]
# life_expectancy_collection = db["life_expectancy"]
# most_frequent_baby_names_collection = db["most_frequent_baby_names"]
# most_frequent_names_collection = db["most_frequent_names"]
# population_collection = db["population"]
# transports_collection = db["transports"]
# unemployment_collection = db["unemployment"]
# -
print(db)
print(accidents_collection)
# +
# Test insertion
# accidents_collection.insert_one({"hello":"world"})
accidents_collection.drop()
# +
# Create 2dsphere index to allow geospatial queries
accidents_collection.create_index([("location", pymongo.GEOSPHERE)])
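# A hypothetical query that the 2dsphere index enables: find accidents within 500 m of a
# point. The field name `location` matches the documents inserted below; the coordinates
# and radius here are invented. Note GeoJSON order is [longitude, latitude].

```python
# Build the $near query document (self-contained; no live connection required)
near_query = {
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [2.1700, 41.3870]},
            "$maxDistance": 500,  # metres
        }
    }
}
# With a live connection: accidents_collection.find(near_query)
print(sorted(near_query["location"]["$near"]))  # ['$geometry', '$maxDistance']
```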
# +
accident_documents = []
for _, accident in df_accidents.iterrows():
document = {
"victims": accident["victims"],
"vehicles_involved": accident["vehicles_involved"],
"date": accident["date"],
"location": {
"type": "Point",
"coordinates": [accident["longitude"], accident["latitude"]]
}
}
accident_documents.append(document)
# -
accidents_collection.insert_many(accident_documents)
db.get_collection("accidents")
db.accidents.find_one()
# +
# The last step is to export the DataFrame
# +
def location_fixer(document):
longitude = document["longitude"]
latitude = document["latitude"]
location = {
"coordinates": [
longitude,
latitude
],
"type": "Point"
}
del document["longitude"]
del document["latitude"]
document["location"] = location
return document
document_raw = {"victims":2,"vehicles_involved":2,"longitude":2.1256244183,"latitude":41.340045929,"date":1507881600000}
document_fixed = location_fixer(document_raw)
document_fixed
# -
df_accidents.to_json(r'./accidents.json', orient='records', indent=None)
# + tags=[]
import json
with open('accidents.json', 'r') as datafile:
data = json.load(datafile)
with open('../database/accidents_fixed.json', 'w+') as datafile_fixed:
data_fixed = map(location_fixer, data)
for obj in data_fixed:
print(obj)
datafile_fixed.write(json.dumps(obj) + "\n")
# -
| data/eda_accidents.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
import glob, re
import numpy as np
import pandas as pd
from datetime import datetime
from tqdm import tqdm
from sklearn import *
# +
data = {
'tra': pd.read_csv('../data/air_visit_data.csv'),
'as': pd.read_csv('../data/air_store_info.csv'),
'hs': pd.read_csv('../data/hpg_store_info.csv'),
'ar': pd.read_csv('../data/air_reserve.csv'),
'hr': pd.read_csv('../data/hpg_reserve.csv'),
'id': pd.read_csv('../data/store_id_relation.csv'),
'tes': pd.read_csv('../data/sample_submission.csv'),
'hol': pd.read_csv('../data/date_info.csv').rename(columns={'calendar_date':'visit_date'})
}
data['hr'] = pd.merge(data['hr'], data['id'], how='inner', on=['hpg_store_id'])
# -
for df in tqdm(['ar','hr']):
data[df]['visit_datetime'] = pd.to_datetime(data[df]['visit_datetime'])
data[df]['visit_datetime'] = data[df]['visit_datetime'].dt.date
data[df]['reserve_datetime'] = pd.to_datetime(data[df]['reserve_datetime'])
data[df]['reserve_datetime'] = data[df]['reserve_datetime'].dt.date
data[df]['reserve_datetime_diff'] = data[df].apply(lambda r: (r['visit_datetime'] - r['reserve_datetime']).days, axis=1)
tmp1 = data[df].groupby(['air_store_id','visit_datetime'], as_index=False)[['reserve_datetime_diff', 'reserve_visitors']].sum().rename(columns={'visit_datetime':'visit_date', 'reserve_datetime_diff': 'rs1', 'reserve_visitors':'rv1'})
tmp2 = data[df].groupby(['air_store_id','visit_datetime'], as_index=False)[['reserve_datetime_diff', 'reserve_visitors']].mean().rename(columns={'visit_datetime':'visit_date', 'reserve_datetime_diff': 'rs2', 'reserve_visitors':'rv2'})
data[df] = pd.merge(tmp1, tmp2, how='inner', on=['air_store_id','visit_date'])
# +
data['tra']['visit_date'] = pd.to_datetime(data['tra']['visit_date'])
data['tra']['dow'] = data['tra']['visit_date'].dt.dayofweek
data['tra']['year'] = data['tra']['visit_date'].dt.year
data['tra']['month'] = data['tra']['visit_date'].dt.month
data['tra']['visit_date'] = data['tra']['visit_date'].dt.date
data['tes']['visit_date'] = data['tes']['id'].map(lambda x: str(x).split('_')[2])
data['tes']['air_store_id'] = data['tes']['id'].map(lambda x: '_'.join(x.split('_')[:2]))
data['tes']['visit_date'] = pd.to_datetime(data['tes']['visit_date'])
data['tes']['dow'] = data['tes']['visit_date'].dt.dayofweek
data['tes']['year'] = data['tes']['visit_date'].dt.year
data['tes']['month'] = data['tes']['visit_date'].dt.month
data['tes']['visit_date'] = data['tes']['visit_date'].dt.date
unique_stores = data['tes']['air_store_id'].unique()
stores = pd.concat([pd.DataFrame({'air_store_id': unique_stores, 'dow': [i]*len(unique_stores)}) for i in range(7)], axis=0, ignore_index=True).reset_index(drop=True)
# Day-of-week visitor statistics per store, merged one aggregate at a time
for agg, col in [("min", "min_visitors"), ("mean", "mean_visitors"),
                 ("median", "median_visitors"), ("max", "max_visitors"),
                 ("count", "count_observations")]:
    tmp = data['tra'].groupby(['air_store_id', 'dow'], as_index=False)['visitors'].agg(agg).rename(columns={'visitors': col})
    stores = pd.merge(stores, tmp, how='left', on=['air_store_id', 'dow'])
stores = pd.merge(stores, data['as'], how='left', on=['air_store_id'])
# -
stores.head()
def Major_Cit(name):
    # Map an area name to its major city/prefecture; unmatched areas fall back to "Other".
    # (The original if/elif chain had a duplicated Niigata-ken branch and returned
    # "Niigata-ken" for Shizuoka-ken; both are fixed here.)
    prefixes = {
        "Tōkyō-to": "Tokyo",
        "Fukuoka-ken": "Fukuoka-ken",
        "Ōsaka-fu": "Ōsaka-fu",
        "Hokkaidō": "Hokkaidō",
        "Hyōgo-ken": "Hyōgo-ken",
        "Niigata-ken": "Niigata-ken",
        "Hiroshima-ken": "Hiroshima-ken",
        "Shizuoka-ken": "Shizuoka-ken",
    }
    for prefix, city in prefixes.items():
        if name.startswith(prefix):
            return city
    return "Other"
stores['city_name'] = stores['air_area_name'].apply(Major_Cit)
# NEW FEATURES FROM <NAME>
stores['air_genre_name'] = stores['air_genre_name'].map(lambda x: str(str(x).replace('/',' ')))
stores['air_area_name'] = stores['air_area_name'].map(lambda x: str(str(x).replace('-',' ')))
lbl = preprocessing.LabelEncoder()
for i in range(10):
stores['air_genre_name'+str(i)] = lbl.fit_transform(stores['air_genre_name'].map(lambda x: str(str(x).split(' ')[i]) if len(str(x).split(' '))>i else ''))
stores['air_area_name'+str(i)] = lbl.fit_transform(stores['air_area_name'].map(lambda x: str(str(x).split(' ')[i]) if len(str(x).split(' '))>i else ''))
stores['air_genre_name'] = lbl.fit_transform(stores['air_genre_name'])
stores['air_area_name'] = lbl.fit_transform(stores['air_area_name'])
# +
lbl = preprocessing.LabelEncoder()
stores['city_name'] = lbl.fit_transform(stores['city_name'])
data['hol']['visit_date'] = pd.to_datetime(data['hol']['visit_date'])
data['hol']['day_of_week'] = lbl.fit_transform(data['hol']['day_of_week'])
data['hol']['visit_date'] = data['hol']['visit_date'].dt.date
train = pd.merge(data['tra'], data['hol'], how='left', on=['visit_date'])
test = pd.merge(data['tes'], data['hol'], how='left', on=['visit_date'])
train = pd.merge(train, stores, how='left', on=['air_store_id','dow'])
test = pd.merge(test, stores, how='left', on=['air_store_id','dow'])
# -
train.head()
plt.figure(figsize=(16,9))
sns.heatmap(train.corr())
alldata = pd.concat([train,test])
begin_date = alldata.visit_date.min()
# visit_date holds datetime.date objects, so convert before using the .dt accessor
train['Absolute_days'] = (pd.to_datetime(train.visit_date) - pd.to_datetime(begin_date)).dt.days
test['Absolute_days'] = (pd.to_datetime(test.visit_date) - pd.to_datetime(begin_date)).dt.days
train.head()
| Recruit Restaurant Visitor Forecasting/Notebooks/.ipynb_checkpoints/WELCOME BACK-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
#import time
import os, sys, shutil, importlib, glob, subprocess
from tqdm.notebook import tqdm
# %config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = (15,7)
plt.rcParams["savefig.dpi"] = 200
# +
tmp = "./tmp"
output = "./output"
os.makedirs(tmp, exist_ok=True)
os.makedirs(output, exist_ok=True)
# -
# Please set reference genome name
ref_genome = "sacCer3"
# # 1. Download and unzip annotation data
# URLs for genome annotation data
url_dictionary = {"mm10": "http://homer.ucsd.edu/homer/data/genomes/mm10.v6.0.zip",
"mm9": "http://homer.ucsd.edu/homer/data/genomes/mm9.v6.0.zip",
"hg19": "http://homer.ucsd.edu/homer/data/genomes/hg19.v6.0.zip",
"hg38": "http://homer.ucsd.edu/homer/data/genomes/hg38.v6.0.zip",
"sacCer2": "http://homer.ucsd.edu/homer/data/genomes/sacCer2.v6.4.zip",
"sacCer3": "http://homer.ucsd.edu/homer/data/genomes/sacCer3.v6.4.zip"#S.cerevisiae
}
url_dictionary[ref_genome]
# download data
# ! wget http://homer.ucsd.edu/homer/data/genomes/sacCer3.v6.4.zip
# Unzip data
# ! unzip sacCer3.v6.4.zip
# # 2. Make tss bed file
# +
def make_tss_bed_file(ref_genome):
tss = pd.read_csv(f"data/genomes/{ref_genome}/{ref_genome}.basic.annotation",
header=None, delimiter="\t")
tss = tss[tss[5] == "P"]
print("1. raw_tss_data")
print(tss.head())
print("2. save tss info as a bed file")
tss = tss.reset_index(drop=False)
tss[[1, 2, 3, "index", 5, 4]].to_csv(os.path.join(tmp, f"{ref_genome}_tss.bed"),
sep='\t', header=False, index=False)
print(" tss bed file was saved as " + os.path.join(tmp, f"{ref_genome}_tss.bed"))
make_tss_bed_file(ref_genome=ref_genome)
# -
# # 3. Process peaks with homer
# +
# command
input_bed = os.path.join(tmp, f"{ref_genome}_tss.bed")
out_bed = os.path.join(tmp, f"{ref_genome}_tss_with_annot.bed")
command = f'annotatePeaks.pl {input_bed} {ref_genome} >{out_bed}'
print(command)
# +
# Install genome data
# -
# ! perl /home/k/anaconda3/envs/pandas1/share/homer-4.11-1/.//configureHomer.pl -install sacCer3
# process tss file with homer
# !annotatePeaks.pl ./tmp/sacCer3_tss.bed sacCer3 >./tmp/sacCer3_tss_with_annot.bed
# # 4. Load and process
out_bed
def process_tss_info():
# load file
tss_with_annot = pd.read_csv(out_bed, delimiter="\t", index_col=0)
# process
tss_with_annot.Start = tss_with_annot.Start - 1
tss_with_annot.index.name = None
tss_with_annot = tss_with_annot.reset_index(drop=False)
# select info
tss_with_annot = tss_with_annot[["Chr", "Start", "End", "Gene Name", 'Distance to TSS', "Strand"]]
return tss_with_annot
tss_ref = process_tss_info()
tss_ref.head()
tss_ref
# +
tss_ref.to_csv(os.path.join(output, f"{ref_genome}_tss_info.bed"), sep='\t', header=False, index=False)
# -
| docs/notebooks/01_ATAC-seq_data_processing/option1_scATAC-seq_data_analysis_with_cicero/misc/make_tss_referenece_from_homer_data-S cerevisiae.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create a template background model
#
# ## Context
#
# DL3 data is usually shipped with a background IRF. However, sometimes it is necessary to be able to build background templates from scratch.
#
# In this notebook, we show a very basic example of how this can be done using off runs supplied within the HESS data release.
#
# Real life implementations can be found [here](https://www.aanda.org/articles/aa/abs/2019/12/aa36452-19/aa36452-19.html) and a slightly different approach [here](https://www.aanda.org/articles/aa/full_html/2019/12/aa36010-19/aa36010-19.html).
#
# ## Proposed approach
#
#
# We will use the "off observations", i.e. those without significant gamma-ray emission sources in the field of view from the [H.E.S.S. first public test data release](https://www.mpi-hd.mpg.de/hfm/HESS/pages/dl3-dr1/). This model could then be used in the analysis of sources from that dataset (not done here).
#
# We will make a background model that is radially symmetric in the field of view, i.e. only depends on field of view offset angle and energy. At the end, we will save the model in the `BKG_2D` as defined in the [spec](https://gamma-astro-data-formats.readthedocs.io/en/latest/irfs/full_enclosure/bkg/index.html).
#
# Note that this is just a very simplified example. Actual background model production is done with more sophistication usually using 100s or 1000s of off runs, e.g. concerning non-radial symmetries, binning and smoothing of the distributions, and treating other dependencies such as zenith angle, telescope configuration or optical efficiency. Another aspect not shown here is how to use AGN observations to make background models, by cutting out the part of the field of view that contains gamma-rays from the AGN.
#
# We will mainly be using the following classes:
#
# * `~gammapy.data.DataStore` to load the runs to use to build the bkg model.
# * `~gammapy.irf.Background2D` to represent and write the background model.
# ## Setup
#
# As always, we start the notebook with some setup and imports.
# %matplotlib inline
import matplotlib.pyplot as plt
from copy import deepcopy
import numpy as np
import astropy.units as u
from astropy.io import fits
from astropy.table import Table, vstack
from pathlib import Path
from gammapy.maps import MapAxis
from gammapy.data import DataStore
from gammapy.irf import Background2D
# ## Select off data
#
# We start by selecting the observations used to estimate the background model.
#
# In this case, we just take all "off runs" as defined in the observation table.
data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")
# Select just the off data runs
obs_table = data_store.obs_table
obs_table = obs_table[obs_table["TARGET_NAME"] == "Off data"]
observations = data_store.get_observations(obs_table["OBS_ID"])
print("Number of observations:", len(observations))
# ## Background model
#
# The background model we will estimate is a differential background rate model in unit `s-1 MeV-1 sr-1` as a function of reconstructed energy and field of view offset.
#
# We estimate it by histogramming off data events and then smoothing a bit (not using a good method) to get a less noisy estimate. To get the differential rate, we divide by observation time and also take bin sizes into account to get the rate per energy and solid angle. So overall we fill two arrays called `counts` and `exposure` with `exposure` filled so that `background_rate = counts / exposure` will give the final background rate we're interested in.
#
# The processing can be done either one observation at a time, or first for counts and then for exposure. Either way is fine. Here we do one observation at a time, starting with empty histograms and then accumulating counts and exposure. Since this is a multi-step algorithm, we put the code to do this computation in a `BackgroundModelEstimator` class.
#
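# The counts/exposure bookkeeping described above can be sketched with plain NumPy,
# independent of Gammapy (all numbers below are invented; `bin_volume` stands in for the
# per-bin energy width times solid angle):

```python
import numpy as np

# 3 energy bins x 2 offset bins, accumulated over two fake observations
counts = np.zeros((3, 2))
exposure = np.zeros((3, 2))
bin_volume = np.array([[0.9, 2.7], [9.0, 27.0], [90.0, 270.0]])  # dE * dOmega per bin
for obs_time, obs_counts in [
    (1680.0, np.array([[40.0, 12.0], [20.0, 6.0], [8.0, 2.0]])),
    (1800.0, np.array([[44.0, 10.0], [22.0, 7.0], [9.0, 3.0]])),
]:
    counts += obs_counts               # histogrammed events per bin
    exposure += obs_time * bin_volume  # livetime times bin size
rate = counts / exposure  # differential background rate per bin
print(rate.shape)  # (3, 2)
```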
class BackgroundModelEstimator:
    """Accumulate counts and exposure over off runs to build a 2D background rate model."""
    def __init__(self, energy, offset):
        self.counts = self._make_bkg2d(energy, offset, unit="")
        self.exposure = self._make_bkg2d(energy, offset, unit="s MeV sr")
    @staticmethod
    def _make_bkg2d(energy, offset, unit):
        return Background2D(axes=[energy, offset], unit=unit)
def run(self, observations):
for obs in observations:
self.fill_counts(obs)
self.fill_exposure(obs)
def fill_counts(self, obs):
events = obs.events
energy_bins = self.counts.axes["energy"].edges
offset_bins = self.counts.axes["offset"].edges
counts = np.histogram2d(
x=events.energy.to("MeV"),
y=events.offset.to("deg"),
bins=(energy_bins, offset_bins),
)[0]
self.counts.data += counts
def fill_exposure(self, obs):
axes = self.exposure.axes
offset = axes["offset"].center
time = obs.observation_time_duration
exposure = 2 * np.pi * offset * time * axes.bin_volume()
self.exposure.quantity += exposure
@property
def background_rate(self):
rate = deepcopy(self.counts)
rate.quantity /= self.exposure.quantity
return rate
# %%time
energy = MapAxis.from_energy_bounds(0.1, 100, 20, name="energy", unit="TeV")
offset = MapAxis.from_bounds(
0, 3, nbin=9, interp="sqrt", unit="deg", name="offset"
)
estimator = BackgroundModelEstimator(energy, offset)
estimator.run(observations)
# Let's have a quick look at what we did ...
estimator.background_rate.plot()
# +
# You could save the background model to a file like this
# estimator.background_rate.to_fits().writeto('background_model.fits', overwrite=True)
# -
# ## Zenith dependence
#
# The background models used in H.E.S.S. usually depend on the zenith angle of the observation. That makes sense: the energy threshold increases with zenith angle, and since the background is related to (but not given by) the charged cosmic-ray spectrum, which is a steeply falling power law, we also expect the background rate to change.
#
# Let's have a look at the dependence we get for this configuration used here (Hillas reconstruction, standard cuts, see H.E.S.S. release notes for more information).
x = obs_table["ZEN_PNT"]
y = obs_table["SAFE_ENERGY_LO"]
plt.plot(x, y, "o")
plt.xlabel("Zenith (deg)")
plt.ylabel("Energy threshold (TeV)");
x = obs_table["ZEN_PNT"]
y = obs_table["EVENT_COUNT"] / obs_table["ONTIME"]
plt.plot(x, y, "o")
plt.xlabel("Zenith (deg)")
plt.ylabel("Rate (events / sec)")
plt.ylim(0, 10);
# The energy threshold increases, as expected. It's a bit surprising that the total background rate doesn't decrease with increasing zenith angle. That's a bit of luck for this configuration, and because we're looking at the rate of background events in the whole field of view. As shown below, the energy threshold increases (reducing the total rate), but the rate at a given energy increases with zenith angle (increasing the total rate). Overall the background does change with zenith angle and that dependency should be taken into account.
#
# The remaining scatter you see in the plots above (in energy threshold and rate) is due to dependence on telescope optical efficiency, atmospheric changes from run to run and other effects. If you're interested in this, [2014APh....54...25H](https://ui.adsabs.harvard.edu/abs/2014APh....54...25H) has some more information. We won't consider this further.
#
# When faced with the question whether and how to model the zenith angle dependence, we're faced with a complex optimisation problem: the closer we require off runs to be in zenith angle, the fewer off runs and thus the less event statistics we have available, which will lead to noise in the background model. The choice of zenith angle binning or "on-off observation matching" strategy isn't the only thing that needs to be optimised; there are also energy and offset binnings and smoothing scales. And of course good settings will depend on the way you plan to use the background model, i.e. the science measurement you plan to do. Some say background modeling is the hardest part of IACT data analysis.
#
# Here we'll just code up something simple: make three background models, one from the off runs with zenith angle 0 to 20 deg, one from 20 to 40 deg, and one from 40 to 90 deg.
# +
zenith_bins = [
{"min": 0, "max": 20},
{"min": 20, "max": 40},
{"min": 40, "max": 90},
]
def make_model(observations):
energy = MapAxis.from_energy_bounds(
0.1, 100, 20, name="energy", unit="TeV"
)
offset = MapAxis.from_bounds(
0, 3, nbin=9, interp="sqrt", unit="deg", name="offset"
)
estimator = BackgroundModelEstimator(energy, offset)
estimator.run(observations)
return estimator.background_rate
def make_models():
for zenith in zenith_bins:
mask = zenith["min"] <= obs_table["ZEN_PNT"]
mask &= obs_table["ZEN_PNT"] < zenith["max"]
obs_ids = obs_table["OBS_ID"][mask]
observations = data_store.get_observations(obs_ids)
yield make_model(observations)
# -
# %%time
models = list(make_models())
models[0].plot()
models[2].plot()
# + nbsphinx-thumbnail={"tooltip": "Build a background template using off runs supplied within the first H.E.S.S. data release."}
y = models[0].evaluate(energy=energy.center, offset="0.5 deg")
plt.plot(energy.center, y, label="0 < zen < 20")
y = models[1].evaluate(energy=energy.center, offset="0.5 deg")
plt.plot(energy.center, y, label="20 < zen < 40")
y = models[2].evaluate(energy=energy.center, offset="0.5 deg")
plt.plot(energy.center, y, label="40 < zen < 90")
plt.loglog()
plt.xlabel("Energy (TeV)")
plt.ylabel("Bkg rate (s-1 sr-1 MeV-1)")
plt.legend();
# -
# ## Index tables
#
# So now we have radially symmetric background models for three zenith angle bins. To be able to use it from the high-level Gammapy classes like e.g. the MapMaker though, we also have to create a [HDU index table](https://gamma-astro-data-formats.readthedocs.io/en/latest/data_storage/hdu_index/index.html) that declares which background model to use for each observation.
#
# It sounds harder than it actually is. Basically you have to write some code to make a new `astropy.table.Table`. The most tricky part is that before you can make the HDU index table, you have to decide where to store the data, because the HDU index table is a reference to the data location. Let's decide in this example that we want to re-use all existing files in `$GAMMAPY_DATA/hess-dl3-dr1` and put all the new HDUs (for background models and new index files) bundled in a single FITS file called `hess-dl3-dr3-with-background.fits.gz`, which we will put in `$GAMMAPY_DATA/hess-dl3-dr1`.
# +
filename = "hess-dl3-dr3-with-background.fits.gz"
# Make a new table with one row for each observation
# pointing to the background model HDU
rows = []
for obs_row in data_store.obs_table:
row = {
"OBS_ID": obs_row["OBS_ID"],
"HDU_TYPE": "bkg",
"HDU_CLASS": "bkg_2d",
"FILE_DIR": "",
"FILE_NAME": filename,
"HDU_NAME": "BKG0",
}
rows.append(row)
hdu_table_bkg = Table(rows=rows)
# +
# Make a copy of the original HDU index table
hdu_table = data_store.hdu_table.copy()
hdu_table.meta.pop("BASE_DIR")
# Add the rows for the background HDUs
hdu_table = vstack([hdu_table, hdu_table_bkg])
hdu_table.sort("OBS_ID")
# -
hdu_table[8:14]
# +
# Put index tables and background models in a FITS file
hdu_list = fits.HDUList()
hdu = fits.BinTableHDU(hdu_table)
hdu.name = "HDU_INDEX"
hdu_list.append(hdu)
hdu = fits.BinTableHDU(data_store.obs_table)
hdu_list.append(hdu)
for idx, model in enumerate(models):
hdu = model.to_table_hdu()
hdu.name = f"BKG{idx}"
hdu_list.append(hdu)
print([_.name for _ in hdu_list])
import os
path = (
Path(os.environ["GAMMAPY_DATA"])
/ "hess-dl3-dr1/hess-dl3-dr3-with-background.fits.gz"
)
hdu_list.writeto(path, overwrite=True)
# -
# Let's see if it's possible to access the data
ds2 = DataStore.from_file(path)
ds2.info()
obs = ds2.obs(20137)
# the events
obs.events.select_offset([0, 3] * u.deg).peek()
# the effective area
obs.aeff.peek()
# the background
obs.bkg.peek()
# ## Exercises
#
# - Play with the parameters here (energy binning, offset binning, zenith binning)
# - Try to figure out why there are outliers on the zenith vs energy threshold curve.
# - Does azimuth angle or optical efficiency have an effect on background rate?
# - Use the background models for a 3D analysis (see "hess" notebook).
| docs/tutorials/background_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# # reading dataset------------------
df = pd.read_csv('data.csv', encoding = "ISO-8859-1")
df
df.isnull().sum()
df.dropna(inplace=True)
df
# # preprocessing------------
# Removing punctuations
data=df.iloc[:,2:27]
data.replace("[^a-zA-Z]"," ",regex=True, inplace=True)
data['Top1']
data.index, data.shape, data.columns
for col in data.columns:
data[col]=data[col].str.lower()
data.head(1)
headlines = []
for row in range(0,len(data.index)):
headlines.append(' '.join(str(x) for x in data.iloc[row,0:25]))
headlines[0]
# # bag of words model creation---------------
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# +
## implement BAG OF WORDS
#countvector=CountVectorizer(ngram_range=(2,2), max_features=10000)
#traindata=countvector.fit_transform(headlines)
# -
## implement TFidf
tfidfvector = TfidfVectorizer()
traindata = tfidfvector.fit_transform(headlines)
traindata.shape, df['Label'].shape
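# On a toy corpus (invented headlines), `fit_transform` yields one sparse TF-IDF row per
# document and one column per unique token:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

toy_headlines = ["stocks rally on strong earnings",
                 "stocks fall on weak earnings"]
toy_vec = TfidfVectorizer()
toy_matrix = toy_vec.fit_transform(toy_headlines)  # sparse (n_docs, n_tokens) matrix
print(toy_matrix.shape)  # (2, 7)
```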
#train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(traindata, df['Label'], test_size = 0.20, random_state = 0)
X_train.shape, X_test.shape
# # model creation--------------RandomForest Classifier------------
from sklearn.ensemble import RandomForestClassifier
randomclassifier=RandomForestClassifier(n_estimators=200,criterion='entropy')
randomclassifier.fit(X_train, y_train)
predictions = randomclassifier.predict(X_test)
## Import library to check accuracy
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
matrix=confusion_matrix(y_test,predictions)
score=accuracy_score(y_test,predictions)
report=classification_report(y_test,predictions)
print(matrix, score)
print(report)
| stock_up_down_prediction_based_on_news.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="c6C3Kfcex_b2" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1591825753179, "user_tz": -180, "elapsed": 1780, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="bf8c6ada-7dd3-4480-ebef-3a92dbcb540f"
# %tensorflow_version 1.x
# + [markdown] id="Inwd6YKKqQGY"
# # Classifying people's age categories
# <b><i>The categories are split as follows:</i></b>
# <ol>
# <li><b>label 0:</b> 04 - 06 years (early childhood)</li>
# <li><b>label 1:</b> 07 - 08 years (middle childhood)</li>
# <li><b>label 2:</b> 09 - 11 years (late childhood)</li>
# <li><b>label 3:</b> 12 - 19 years (adolescence)</li>
# <li><b>label 4:</b> 20 - 27 years (early adulthood)</li>
# <li><b>label 5:</b> 28 - 35 years (middle adulthood)</li>
# <li><b>label 6:</b> 36 - 45 years (midlife)</li>
# <li><b>label 7:</b> 46 - 60 years (mature adulthood)</li>
# <li><b>label 8:</b> 61 - 75 years (late adulthood)</li>
# </ol>
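# The binning above can be written as a small helper (a sketch for illustration only; the
# notebook itself assigns labels from folder names, not from this function):

```python
# (low, high, label) bounds for the nine age classes listed above
AGE_BINS = [(4, 6, 0), (7, 8, 1), (9, 11, 2), (12, 19, 3), (20, 27, 4),
            (28, 35, 5), (36, 45, 6), (46, 60, 7), (61, 75, 8)]

def age_to_label(age):
    # Return the class label for a numeric age, or None outside the 4-75 range
    for lo, hi, label in AGE_BINS:
        if lo <= age <= hi:
            return label
    return None

print(age_to_label(25), age_to_label(75))  # 4 8
```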
# + [markdown] id="qeREEmHnqQGa"
# ### Save the image matrices and labels to train_images, train_labels, test_images, test_labels (locally).
# + id="fbL04HwWqQGb" outputId="868b8a3b-3df2-4594-cbd5-1ae1a57b5d78"
import os
import cv2
import numpy as np
import random
import matplotlib.pyplot as plt
path_imagini_main = "./imagini_VGG_cu_appa"
lista_foldere = ["04-06", "07-08", "09-11", "12-19", "20-27", "28-35", "36-45", "46-60", "61-75"]
lista_subfoldere = ['f', 'm']
histo = [0] * len(lista_foldere)
lista_imag_label = [[],[],[],[],[],[],[],[],[]]
# lista_imag_label will be a three-level nested list covering the 9 classes,
# each class holding entries of the form [image, label, real_age]:
# lista_imag_label[i][j][0/1/2] where
# i = class index [0 through 8]
# j = index of the image within the class [0...1500]
# 0 = the image matrix, 1 = the label for that image, 2 = the real age
print("Loading", end="")
for idx, nume_folder in enumerate(lista_foldere):
    for nume_subfolder in lista_subfoldere:
        path_imagini = "{}/{}/{}".format(path_imagini_main, nume_folder, nume_subfolder)
        for dirname, subdirnames, filenames in os.walk(path_imagini):
            for filename in filenames:
                image = cv2.imread("{}/{}".format(dirname, filename))
                image = cv2.resize(image, (224, 224))
                histo[idx] += 1
                # single-digit ages are stored like "4_...", so drop the underscore
                age = filename[:2].replace('_', '')
                lista_imag_label[idx].append([image, idx, age])
        # progress indicator
        print(".", end="")
print()
# random.shuffle(lista_imag_label)
# len_total = (len(lista_imag_label))
# len_train = int(0.8 * len_total)
len_train = []
for i in range(len(lista_imag_label)):
len_train.append(int(0.8 * len(lista_imag_label[i])))
plt.figure()
plt.bar(lista_foldere, histo)
len_total = sum(len(clasa) for clasa in lista_imag_label)
print("Found a total of {} images.".format(len_total))
# train_images = [lista_imag_label[i][0] for i in range(0, len_train)]
# train_labels = [lista_imag_label[i][1] for i in range(0, len_train)]
# test_images = [lista_imag_label[i][0] for i in range(len_total - 1, len_train, -1)]
# test_labels = [lista_imag_label[i][1] for i in range(len_total - 1, len_train, -1)]
train_images = []
train_labels = []
for i in range(len(lista_imag_label)):
for j in range(0, len_train[i]):
train_images.append(lista_imag_label[i][j][0])
train_labels.append(lista_imag_label[i][j][1])
test_images = []
test_labels = []
for i in range(len(lista_imag_label)):
for j in range(len_train[i], len(lista_imag_label[i])):
test_images.append(lista_imag_label[i][j][0])
test_labels.append(lista_imag_label[i][j][1])
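The per-class 80/20 split above keeps the class ratio identical in the train and test sets. A minimal sketch of the same idea on toy data (the list-of-lists layout mirrors `lista_imag_label`):

```python
# Toy stand-in for lista_imag_label: one inner list per class,
# each element being [image, class_index, age].
classes = [
    [["img", 0, "05"] for _ in range(10)],  # class 0: 10 samples
    [["img", 1, "07"] for _ in range(5)],   # class 1: 5 samples
]

# 80% of each class goes to training, the rest to testing,
# so every class keeps the same train/test ratio.
len_train = [int(0.8 * len(c)) for c in classes]

train = [item for c, n in zip(classes, len_train) for item in c[:n]]
test = [item for c, n in zip(classes, len_train) for item in c[n:]]

print(len(train), len(test))  # 12 train (8 + 4), 3 test (2 + 1)
```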
# optional, for reference and for our own accuracy metric
# (split the ages the same way as the images/labels, so the arrays stay aligned)
train_ages = [lista_imag_label[i][j][2] for i in range(len(lista_imag_label)) for j in range(0, len_train[i])]
test_ages = [lista_imag_label[i][j][2] for i in range(len(lista_imag_label)) for j in range(len_train[i], len(lista_imag_label[i]))]
print("Au fost incarcate {} pentru antrenare.".format(len(train_images)))
print("Au fost incarcate {} pentru testare.".format(len(test_images)))
# + id="8i9bUA0QqQGh" outputId="053cf9c3-1ef0-45a4-ac4d-17bf365dd4ac"
print(len_train)
print(len(train_images)+len(test_images))
# print(train_ages)
# + [markdown] id="ca_8b3KZqQGl"
# ##### Save the image lists "train_images", "test_images" and the label lists "train_labels", "test_labels" as NPZ files (Save our dataset as NPZ files)
# - We will use numpy's savez function for this:
# + id="3a7C1RtRqQGm" outputId="6bb0ad20-1727-4db5-f4ff-d2abda169993"
np.savez('AgeClass_train_data_224.npz', np.array(train_images))
np.savez('AgeClass_train_labels_224.npz', np.array(train_labels))
np.savez('AgeClass_test_data_224.npz', np.array(test_images))
np.savez('AgeClass_test_labels_224.npz', np.array(test_labels))
np.savez('AgeClass_train_age_pt_accproprie.npz', np.array(train_ages))
np.savez('AgeClass_test_age_pt_accproprie.npz', np.array(test_ages))
print("Done!")
# + [markdown] id="pJQeoxeRqQGr"
# ---
# ---
# ##### Loader Function:
# + id="ePQL6WaDqQGs"
import numpy as np
def load_data_training_and_test(datasetname):
npzfile = np.load(datasetname + "_train_data_224.npz")
train = npzfile['arr_0']
npzfile = np.load(datasetname + "_train_labels_224.npz")
train_labels = npzfile['arr_0']
npzfile = np.load(datasetname + "_test_data_224.npz")
test = npzfile['arr_0']
npzfile = np.load(datasetname + "_test_labels_224.npz")
test_labels = npzfile['arr_0']
return (train, train_labels), (test, test_labels)
# + [markdown] id="hsKPKlvNqQGw"
# ##### Loader Function (for ages arrays only) -- Make sure the train_images, test_images, train_labels, etc. arrays correspond to train_ages, test_ages !!!
# + id="Nw2HmyaAqQGx"
import numpy as np
def load_age_training_and_test(datasetname):
npzfile = np.load(datasetname + "_train_age_pt_accproprie.npz")
train_ages = npzfile['arr_0']
npzfile = np.load(datasetname + "_test_age_pt_accproprie.npz")
test_ages = npzfile['arr_0']
return (train_ages, test_ages)
# + [markdown] id="81GsrALmq6h1"
# #### In Google Colab:
# - MOUNT GOOGLE DRIVE
# - IMPORT NPZ FILES FROM GOOGLE DRIVE (takes about 30 seconds)
# <b> [ This code block must also be run before testing the model: model.predict_classes(...) ] </b> <br>
#
# + id="aNDrDOpnumQy" colab={"base_uri": "https://localhost:8080/", "height": 125} executionInfo={"status": "ok", "timestamp": 1591826028778, "user_tz": -180, "elapsed": 37369, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="dca1d02b-e059-41d5-aa8e-95ed597b3fea"
from google.colab import drive
drive.mount('/content/drive')
# !sudo rm -r sample_data
# + id="FJF3R-nQze0l" colab={"base_uri": "https://localhost:8080/", "height": 105} executionInfo={"status": "ok", "timestamp": 1591826097300, "user_tz": -180, "elapsed": 104019, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="75fcce5d-0268-4039-a828-f3440867a8fc"
# Load the .npz files from Google Drive into Colab
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_train_age_pt_accproprie.npz' 'AgeClass_train_age_pt_accproprie.npz'
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_test_age_pt_accproprie.npz' 'AgeClass_test_age_pt_accproprie.npz'
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_train_data_224.npz' 'AgeClass_train_data_224.npz'
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_train_labels_224.npz' 'AgeClass_train_labels_224.npz'
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_test_data_224.npz' 'AgeClass_test_data_224.npz'
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_test_labels_224.npz' 'AgeClass_test_labels_224.npz'
print("Fisiere incarcate!")
# %ls
# + [markdown] id="AIVUFUvwqQG0"
# ##### Let's get our data ready in the format expected by Keras
#
# - x_train = training images in format expected by keras
# - y_train = labels of training images in format expected by keras
# - x_test = test images in format expected by keras
# - y_test = labels of test images in format expected by keras
#
# ### ONE-HOT ENCODING
# <b> [ This code block must be run before testing the model: model.predict_classes(...) ] </b>
# + id="5u__RMe_qQG1" colab={"base_uri": "https://localhost:8080/", "height": 123} executionInfo={"status": "ok", "timestamp": 1591826888637, "user_tz": -180, "elapsed": 46613, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="2cc574f5-0e64-468a-8ff3-11455eb9fc04"
from keras.utils import to_categorical # One-hot encoding (for non-binary classification)
# https://stackoverflow.com/questions/49392972/error-when-checking-target-expected-dense-3-to-have-shape-3-but-got-array-wi
from datetime import datetime
x = datetime.now()
(x_train, y_train), (x_test, y_test) = load_data_training_and_test("AgeClass")
# Reshaping our label data with one-hot encoding: e.g. from (600,) to (600, 9)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Change our image type to float32 data type
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalize our data by changing the range from (0 to 255) to (0 to 1)
x_train /= 255
x_test /= 255
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
y = datetime.now()
print("Durata totala de incarcare: {}".format(abs(y-x)))
# + [markdown] id="PkoxMmqdqQHB"
# ---
# ### BUILDING THE CONVOLUTIONAL NETWORK (CNN) - Model architecture
# + id="S5up4tMwqQHC"
from keras import applications
mobile = applications.MobileNetV2(
input_shape=None,
alpha=1.0,
include_top=False,
weights=None,
input_tensor=None,
pooling=None,
)
# + id="rnZSysyfqQHG"
mobile.summary()
# + id="bUZ0YAV5qQHL" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1591827130474, "user_tz": -180, "elapsed": 1408, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="72b89b84-fd57-49b9-818a-2fcc80f33d71"
from keras.layers import Dense, GlobalAveragePooling2D, Flatten, Dropout
from keras.models import Model
x = mobile.layers[-1].output # -6 instead of -63 to use the whole network.
x = GlobalAveragePooling2D(data_format=None)(x)
# x = Dense(27)(x)
x = Dropout(0.5)(x)
predictions = Dense(9, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=predictions)
model.summary()
# + id="k57_z2raqQHP" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1591843140318, "user_tz": -180, "elapsed": 1403, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="1f1cdb36-5e9b-43fa-b42f-e7a74ba17444"
from keras.optimizers import SGD
optim = SGD(learning_rate=0.001, momentum=0.9, nesterov=True) # use lr for the tf_lite conversion, learning_rate so the model can be loaded back
model.compile(optimizer=optim,
loss='categorical_crossentropy',
metrics=['accuracy'])
print("ok!")
# + [markdown] id="4t1LpzK8qQHT"
# ### TRAINING THE MODEL:
# + id="UqUW25eCqQHV" colab={"base_uri": "https://localhost:8080/", "height": 425} executionInfo={"status": "ok", "timestamp": 1591844348961, "user_tz": -180, "elapsed": 115545, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="76fe3c40-18b3-4eb3-a132-12c5e74e0f96"
from datetime import datetime
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
x = datetime.now()
print(x)
###### Checkpoint - Save the model with the lowest val_loss ! ######
callbacks_list = [
EarlyStopping(monitor = 'val_loss', patience = 20),
ModelCheckpoint(filepath='AgeClass_best.h5', verbose=1, monitor='val_loss', save_best_only=True),
ReduceLROnPlateau(monitor = 'val_loss', factor = 0.1, patience = 10, min_delta=1E-7, verbose=1)
]
epochs = 30
batch_size = 2
try:
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True,
callbacks=callbacks_list)
except KeyboardInterrupt:
    model.save("AgeClass.h5")
    print("\nInterrupted... But don't worry, the model was still saved!")
# Saving the model: includes the architecture, weights, training configuration (loss, optimizer) and the optimizer state
model.save("AgeClass.h5")
# Evaluate the performance of the trained model
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss: {}'.format(scores[0]))
print('Test accuracy: {} \n'.format(scores[1]))
y = datetime.now()
print(y)
print("Durata totala de antrenare: {}".format(abs(y-x)))
# + [markdown] id="nZnMjQp94XbF"
# Save the model to Drive
# + id="hwZeoA4i4UJO" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1591841693772, "user_tz": -180, "elapsed": 6448, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="e6b460c0-20b3-4af8-b39d-bdbb8b9847d3"
# %cp 'AgeClass_best.h5' './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_best.h5'
print("Saved!")
# + [markdown] id="PPijzcGmS_2N"
# Save the model locally:
# + [markdown] id="wJcX9dP0qQHy"
# ### Plot LOSS and ACCURACY (immediately after training):
# + id="IOx4RSk0qQH0" colab={"base_uri": "https://localhost:8080/", "height": 542} executionInfo={"status": "ok", "timestamp": 1591844363781, "user_tz": -180, "elapsed": 1459, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="e1fca630-7bd6-4970-dbc8-6e55f7ccb6d7"
import matplotlib.pyplot as plt
# %matplotlib inline
history_dict = history.history
### LOSS plot:
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
line1 = plt.plot(epochs, val_loss_values, label='Validation/Test Loss')
line2 = plt.plot(epochs, loss_values, label='Training Loss')
plt.setp(line1, linewidth=2.0, marker = '+', markersize=10.0)
plt.setp(line2, linewidth=2.0, marker = '4', markersize=10.0)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.grid(True)
plt.legend()
plt.show()
### ACCURACY plot:
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
epochs = range(1, len(loss_values) + 1)
line1 = plt.plot(epochs, val_acc_values, label='Validation/Test Accuracy')
line2 = plt.plot(epochs, acc_values, label='Training Accuracy')
plt.setp(line1, linewidth=2.0, marker = '+', markersize=10.0)
plt.setp(line2, linewidth=2.0, marker = '4', markersize=10.0)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.grid(True)
plt.legend()
plt.show()
# + [markdown] id="gFUgEL9gqQIg"
# ---
# # Computing our own accuracy (with tolerance) !!!
# + id="mQU5EYYuqQIi" colab={"base_uri": "https://localhost:8080/", "height": 194} executionInfo={"status": "ok", "timestamp": 1591842866761, "user_tz": -180, "elapsed": 41790, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="c4016467-3f36-4022-ffc3-3cabb425f6aa"
import numpy as np
from keras.models import load_model
from datetime import datetime
x = datetime.now()
nr_cazuri_corecte = 0
nr_cazuri_corecte_cu_limite = 0
nr_cazuri_totale = len(x_test)
classifier = load_model("AgeClass.h5")
(train_ages, test_ages) = load_age_training_and_test("AgeClass")
for i in range(0, nr_cazuri_totale):
input_im = x_test[i]
input_im = input_im.reshape(1,224,224,3)
real_label = int(np.argmax(y_test[i]))
real_age = int(test_ages[i])
### Get Prediction
res = list(classifier.predict(input_im, 1, verbose = 0)[0])
predict_label = int(np.argmax(res))
if predict_label == real_label:
nr_cazuri_corecte += 1
nr_cazuri_corecte_cu_limite += 1
    elif predict_label == 3:  # ages 12-19
        if real_age in [10, 11, 20, 21]:
            nr_cazuri_corecte_cu_limite += 1
    elif predict_label == 4:  # ages 20-27
        if real_age in [18, 19, 28, 29]:
            nr_cazuri_corecte_cu_limite += 1
    elif predict_label == 5:  # ages 28-35
        if real_age in [26, 27, 36, 37]:
            nr_cazuri_corecte_cu_limite += 1
    elif predict_label == 6:  # ages 36-45
        if real_age in [34, 35, 46, 47]:
            nr_cazuri_corecte_cu_limite += 1
    elif predict_label == 7:  # ages 46-60
        if real_age in [44, 45, 61, 62]:
            nr_cazuri_corecte_cu_limite += 1
    elif predict_label == 8:  # ages 61-75
        if real_age in [58, 59, 60]:
            nr_cazuri_corecte_cu_limite += 1
print()
print("{} corecte din {} in total".format(nr_cazuri_corecte, nr_cazuri_totale))
print(nr_cazuri_corecte / nr_cazuri_totale)
print()
print("{} corecte (cu limite) din {} in total".format(nr_cazuri_corecte_cu_limite, nr_cazuri_totale))
print(nr_cazuri_corecte_cu_limite / nr_cazuri_totale)
print()
print("{} poze clasificate in plus in mod corect (cu limita)".format(nr_cazuri_corecte_cu_limite-nr_cazuri_corecte))
print()
# for i in range(len(test_ages)):
# print(int(np.argmax(y_test[i])), test_ages[i])
y = datetime.now()
print("Durata totala pentru a determina acuratetea: {}".format(abs(y-x)))
# + [markdown] id="ZyBIfYZiqQIm"
# ---
# ---
# ## Confusion Matrix
# + id="jC5pulUeqQIp" colab={"base_uri": "https://localhost:8080/", "height": 501} executionInfo={"status": "ok", "timestamp": 1591841963129, "user_tz": -180, "elapsed": 24544, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="dfc5a850-40ed-47f5-8e21-d7ea1356261d"
from sklearn.metrics import confusion_matrix
from keras.models import load_model
import seaborn as sn
import matplotlib.pyplot as plt
# %matplotlib inline
most_recent = "AgeClass.h5"
most_recent2 = "AgeClass_best.h5"
model = load_model(most_recent)
y_pred = np.int32([np.argmax(r) for r in model.predict(x_test)])
plt.figure(figsize=(9,8))
cm = confusion_matrix(y_true = y_test.argmax(axis=1),
y_pred = y_pred)
# cm = cm / cm.sum(axis=1, keepdims=True) # Show as percentages
sn.heatmap(cm, annot=True, cmap="Blues")
# + id="eN-7ASQdqQIw" colab={"base_uri": "https://localhost:8080/", "height": 502} executionInfo={"status": "ok", "timestamp": 1591841964064, "user_tz": -180, "elapsed": 20554, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="5931b881-aa07-472a-fb54-5ad0225174bb"
### Display the confusion matrix as percentages:
cm = cm / cm.sum(axis=1, keepdims=True)  # normalize each row so it sums to 1
plt.figure(figsize=(9,8))
sn.heatmap(cm, annot=True, cmap="Blues")
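Row-normalizing a confusion matrix requires keeping the row sums as a column vector so the division broadcasts along rows; `cm / cm.sum(axis=1)` without `keepdims=True` divides each entry by the sum of the wrong (column-indexed) row. A NumPy check:

```python
import numpy as np

cm = np.array([[8, 2],
               [1, 9]])

row_norm = cm / cm.sum(axis=1, keepdims=True)   # each row now sums to 1
print(row_norm)
# [[0.8 0.2]
#  [0.1 0.9]]
```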
# + [markdown] id="8g6SZ_l5qQIA"
# ---
# ---
# ---
# ### (1) TESTING THE CLASSIFIER (visualizing the x_test images)
# ---
# <li><b>label 0:</b> 4 - 6 years (early childhood)</li>
# <li><b>label 1:</b> 7 - 8 years (middle childhood)</li>
# <li><b>label 2:</b> 9 - 11 years (late childhood)</li>
# <li><b>label 3:</b> 12 - 19 years (adolescence)</li>
# <li><b>label 4:</b> 20 - 27 years (early adulthood)</li>
# <li><b>label 5:</b> 28 - 35 years (middle adulthood)</li>
# <li><b>label 6:</b> 36 - 45 years (midlife)</li>
# <li><b>label 7:</b> 46 - 60 years (mature adulthood)</li>
# <li><b>label 8:</b> 61 - 75 years (late adulthood)</li>
# + id="RaZsDIB6qQIC"
import cv2
import numpy as np
from keras.models import load_model
simple = "AgeClass-2019-08-24-21-00.h5"
mobile = "AgeClass-2019-08-23-22-00_MobileDropout.h5"
google = "AgeClass-google-2019-08-29-23-00.h5"
most_recent = "AgeClass.h5"
most_recent2 = "AgeClass_best4762.h5"
classifier = load_model(most_recent2)
(train_ages, test_ages) = load_age_training_and_test("AgeClass")
def string_prezicere(pred):
    # Map each class index to its display string; unknown indices pass through unchanged.
    labels = {
        0: "04 - 06 ani", 1: "07 - 08 ani", 2: "09 - 11 ani",
        3: "12 - 19 ani", 4: "20 - 27 ani", 5: "28 - 35 ani",
        6: "36 - 45 ani", 7: "46 - 60 ani", 8: "61 - 75 ani",
    }
    return labels.get(pred, pred)
def draw_test(name, pred, pred_real, input_age, input_im):
black_color = [0,0,0]
pred = string_prezicere(pred)
pred_real = string_prezicere(pred_real)
expanded_image = cv2.copyMakeBorder(input_im, 0, 0, 0, imageL.shape[0], cv2.BORDER_CONSTANT,value=black_color)
cv2.putText(expanded_image, "prediction: {}".format(pred), (500, 30) , cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (255,255,255), 1)
cv2.putText(expanded_image, "real_class: {}".format(pred_real), (500, 60) , cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (255,255,255), 1)
cv2.putText(expanded_image, "real_age: {}".format(input_age), (500, 90) , cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (255,255,255), 1)
cv2.imshow(name, expanded_image)
for i in range(0,18):
rand = np.random.randint(0,len(x_test))
input_im = x_test[rand]
input_label = y_test[rand]
input_label = int(np.argmax(input_label))
input_age = test_ages[rand]
imageL = cv2.resize(input_im, None, fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
input_im = input_im.reshape(1,224,224,3)
### Get Prediction
res = list(classifier.predict(input_im, 1, verbose = 0)[0])
index = int(np.argmax(res))
#print(res)
draw_test("Prediction", index, input_label, input_age, imageL)
cv2.waitKey(0)
cv2.destroyAllWindows()
# + [markdown] id="_Or8lhuVqQII"
# ### (2) TESTING THE CLASSIFIER (on a single image)
# + id="iSwxe88pqQIJ"
import cv2
import numpy as np
from keras.models import load_model
simple = "AgeClass-2019-08-24-21-00.h5"
mobile = "AgeClass-2019-08-23-22-00_MobileDropout.h5"
google = "AgeClass-google-2019-08-29-23-00.h5"
most_recent = "AgeClass.h5"
most_recent2 = "AgeClass_best4762.h5"
classifier = load_model(most_recent2)
def draw_test(name, pred, input_im):
black_color = [0,0,0]
    # Map each class index to its display string; unknown indices pass through unchanged.
    labels = {
        0: "04 - 06 ani", 1: "07 - 08 ani", 2: "09 - 11 ani",
        3: "12 - 19 ani", 4: "20 - 27 ani", 5: "28 - 35 ani",
        6: "36 - 45 ani", 7: "46 - 60 ani", 8: "61 - 75 ani",
    }
    pred = labels.get(pred, pred)
expanded_image = cv2.copyMakeBorder(input_im, 0, 0, 0, imageL.shape[0], cv2.BORDER_CONSTANT,value=black_color)
cv2.putText(expanded_image, str(pred), (500, 50) , cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, (233,233,233), 2)
cv2.imshow(name, expanded_image)
input_im = cv2.imread("poza3.jpg")
input_im = cv2.resize(input_im, (224,224))
imageL = cv2.resize(input_im, None, fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
input_im = input_im.reshape(1,224,224,3)
## Get Prediction
res = list(classifier.predict(input_im, verbose = 0)[0])
print(res)
index = int(np.argmax(res))
draw_test("Prediction", index, imageL)
cv2.waitKey(0)
cv2.destroyAllWindows()
# + [markdown] id="JuMRPKqYqQIQ"
# ---
# ## (3) TESTING THE CLASSIFIER (USING A PHOTO CAPTURED BY THE WEBCAM)
# + id="-jxTyNPlqQIR"
from keras.models import load_model
simple = "AgeClass-2019-08-24-21-00.h5"
mobile = "AgeClass-2019-08-23-22-00_MobileDropout.h5"
google = "AgeClass-google-2019-08-29-23-00.h5"
most_recent = "AgeClass.h5"
most_recent2 = "AgeClass_best4762.h5"
# classifier = load_model("AgeClass-google-2019-08-29-16-00_full_mobile.h5")
classifier = load_model(most_recent2)
# + id="ygmV2e8yqQIW" outputId="107fa2f9-0175-4345-ce6b-5ac17591a353"
import cv2
import numpy as np
def draw_test(name, pred, input_im):
black_color = [0,0,0]
    # Map each class index to its display string; unknown indices pass through unchanged.
    labels = {
        0: "04 - 06 ani", 1: "07 - 08 ani", 2: "09 - 11 ani",
        3: "12 - 19 ani", 4: "20 - 27 ani", 5: "28 - 35 ani",
        6: "36 - 45 ani", 7: "46 - 60 ani", 8: "61 - 75 ani",
    }
    pred = labels.get(pred, pred)
expanded_image = cv2.copyMakeBorder(input_im, 0, 0, 0, imageL.shape[0], cv2.BORDER_CONSTANT,value=black_color)
#expanded_image = cv2.cvtColor(expanded_image, cv2.COLOR_GRAY2BGR)
cv2.putText(expanded_image, str(pred), (500, 50) , cv2.FONT_HERSHEY_COMPLEX_SMALL, 2, (233,233,233), 2)
cv2.imshow(name, expanded_image)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
webcam = cv2.VideoCapture(0)
webcam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
webcam.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
while True:
_, frame = webcam.read()
gray_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray_img, scaleFactor = 1.2, minNeighbors=5)
for x,y,w,h in faces:
cv2.rectangle(frame, (x,y), (x+w,y+h), (255,255,255), 3)
cv2.imshow("Capturing", frame)
key = cv2.waitKey(1)
if key == ord('s') or key == ord('S'):
saved_image = frame
faces = face_cascade.detectMultiScale(saved_image, scaleFactor = 1.2, minNeighbors=5)
for x,y,w,h in faces:
cv2.rectangle(saved_image, (x,y), (x+w,y+h), (255,255,255), 3)
input_im = saved_image[y:y+h, x:x+w]
webcam.release()
cv2.destroyAllWindows()
break
elif key == ord('q') or key == ord('Q'):
webcam.release()
cv2.destroyAllWindows()
break
input_im = cv2.resize(input_im, (224,224))
cv2.waitKey(0)
cv2.destroyAllWindows()
imageL = cv2.resize(input_im, None, fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
input_im = input_im.reshape(1,224,224,3)
## Get Prediction
res = list(classifier.predict(input_im, verbose = 0)[0])
print(res)
index = int(np.argmax(res))
draw_test("Prediction", index, imageL)
cv2.waitKey(0)
cv2.destroyAllWindows()
# + id="PNVSW5A3qQId"
cv2.destroyAllWindows()
# + [markdown] id="yS7GWSElqQI0"
# ---
# ---
# ## Convert .h5 model to tensorflow lite!
# + id="bT0ag6ktgTRk"
# %tensorflow_version 2.x
# + id="5LtkDH-jSxVK" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1590904525123, "user_tz": -180, "elapsed": 481, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="d433b1b5-ea0c-48e8-da7d-7d83a6dfdcf3"
from google.colab import drive
drive.mount('/content/drive')
# + id="nstFY5gmSks1" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1590904464917, "user_tz": -180, "elapsed": 2226, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="3e390b13-3a65-42e9-aefd-701b7d6633b4"
# %cp './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_best.h5' 'AgeClass_best.h5'
print("Imported!")
# + id="g8GBniJAqQI1"
import tensorflow as tf
# converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file("AgeClass_best.h5") # module 'tensorflow' has no attribute 'contrib'
# converter = tf.lite.TFLiteConverter.from_keras_model_file("AgeClass_best.h5") # 'TFLiteConverterV2' has no attribute 'from_keras_model_file'
model = tf.keras.models.load_model('AgeClass_best_06_11-05-25.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model) # only for tensorflow 2.x
tflite_model = converter.convert()
with open('AgeClass_best_06_11-05-25.tflite', 'wb') as f:
f.write(tflite_model)
print("Done!")
# + id="qksioKn8qQI5" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1590845154590, "user_tz": -180, "elapsed": 4420, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="8d9290f5-8a72-4542-c95c-db6ee913e858"
# %cp 'AgeClass_best.tflite' './drive/My Drive/Colab_Notebooks/Licenta/AgeClass_best.tflite'
print("Saved!")
# + id="zhiCZrb_gDCR" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592827822699, "user_tz": -180, "elapsed": 561, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08045879380098281335"}} outputId="18a81561-1b01-4d4f-b6c9-a9691922bb4d"
import tensorflow as tf
print(tf.__version__)
| AgeClass_MobileNetV2_SGD_v2_final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0
# ---
# # Chapter 3: Data preparation at scale using Amazon SageMaker Data Wrangler and Amazon SageMaker Processing
#
# In this notebook we'll perform the following steps:
#
# * Create a table in the Glue catalog for our data set
# * Run a SageMaker Processing job to prepare the full data set
#
# You need to define the following variables:
#
# * `s3_bucket`: Bucket with the data set
# * `glue_db_name`: Glue database name
# * `glue_tbl_name`: Glue table name
# * `s3_prefix_parquet`: Location of the Parquet tables in the S3 bucket
# * `s3_output_prefix`: Location to store the prepared data in the S3 bucket
# * `s3_prefix`: Location of the JSON data in the S3 bucket
#
# ## Glue Catalog
# +
s3_bucket = 'MyBucket'
glue_db_name = 'MyDatabase'
glue_tbl_name = 'openaq'
s3_prefix = 'openaq/realtime'
s3_prefix_parquet = 'openaq/realtime-parquet-gzipped/tables'
s3_output_prefix = 'prepared'
import boto3
s3 = boto3.client('s3')
# -
glue = boto3.client('glue')
response = glue.create_database(
DatabaseInput={
'Name': glue_db_name,
}
)
response = glue.create_table(
DatabaseName=glue_db_name,
TableInput={
'Name': glue_tbl_name,
'StorageDescriptor': {
'Columns': [
{
"Name": "date",
"Type": "struct<utc:string,local:string>"
},
{
"Name": "parameter",
"Type": "string"
},
{
"Name": "location",
"Type": "string"
},
{
"Name": "value",
"Type": "double"
},
{
"Name": "unit",
"Type": "string"
},
{
"Name": "city",
"Type": "string"
},
{
"Name": "attribution",
"Type": "array<struct<name:string,url:string>>"
},
{
"Name": "averagingperiod",
"Type": "struct<value:double,unit:string>"
},
{
"Name": "coordinates",
"Type": "struct<latitude:double,longitude:double>"
},
{
"Name": "country",
"Type": "string"
},
{
"Name": "sourcename",
"Type": "string"
},
{
"Name": "sourcetype",
"Type": "string"
},
{
"Name": "mobile",
"Type": "boolean"
}
],
'Location': 's3://' + s3_bucket + '/' + s3_prefix + '/',
'InputFormat': 'org.apache.hadoop.mapred.TextInputFormat',
'OutputFormat': 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat',
'Compressed': False,
'SerdeInfo': {
'SerializationLibrary': 'org.openx.data.jsonserde.JsonSerDe',
"Parameters": {
"paths": "attribution,averagingPeriod,city,coordinates,country,date,location,mobile,parameter,sourceName,sourceType,unit,value"
}
},
'Parameters': {
"classification": "json",
"compressionType": "none",
},
'StoredAsSubDirectories': False,
},
'PartitionKeys': [
{
"Name": "aggdate",
"Type": "string"
},
],
'TableType': 'EXTERNAL_TABLE',
'Parameters': {
"classification": "json",
"compressionType": "none",
}
}
)
partitions_to_add = []
response = s3.list_objects_v2(
Bucket=s3_bucket,
Prefix=s3_prefix + '/'
)
for r in response['Contents']:
partitions_to_add.append(r['Key'])
while response['IsTruncated']:
token = response['NextContinuationToken']
response = s3.list_objects_v2(
Bucket=s3_bucket,
        Prefix=s3_prefix + '/',
ContinuationToken=token
)
for r in response['Contents']:
partitions_to_add.append(r['Key'])
    if response['IsTruncated']:
        print("Getting next batch")
print(f"Need to add {len(partitions_to_add)} partitions")
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
def get_part_def(p):
part_value = p.split('/')[-2]
return {
'Values': [
part_value
],
'StorageDescriptor': {
'Columns': [
{
"Name": "date",
"Type": "struct<utc:string,local:string>"
},
{
"Name": "parameter",
"Type": "string"
},
{
"Name": "location",
"Type": "string"
},
{
"Name": "value",
"Type": "double"
},
{
"Name": "unit",
"Type": "string"
},
{
"Name": "city",
"Type": "string"
},
{
"Name": "attribution",
"Type": "array<struct<name:string,url:string>>"
},
{
"Name": "averagingperiod",
"Type": "struct<value:double,unit:string>"
},
{
"Name": "coordinates",
"Type": "struct<latitude:double,longitude:double>"
},
{
"Name": "country",
"Type": "string"
},
{
"Name": "sourcename",
"Type": "string"
},
{
"Name": "sourcetype",
"Type": "string"
},
{
"Name": "mobile",
"Type": "boolean"
}
],
'Location': f"s3://{s3_bucket}/{s3_prefix}/{part_value}/",
'InputFormat': 'org.apache.hadoop.mapred.TextInputFormat',
'OutputFormat': 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat',
'Compressed': False,
'SerdeInfo': {
'SerializationLibrary': 'org.openx.data.jsonserde.JsonSerDe',
"Parameters": {
"paths": "attribution,averagingPeriod,city,coordinates,country,date,location,mobile,parameter,sourceName,sourceType,unit,value"
}
},
'StoredAsSubDirectories': False
},
'Parameters': {
"classification": "json",
"compressionType": "none",
},
}
for batch in chunks(partitions_to_add, 100):
response = glue.batch_create_partition(
DatabaseName=glue_db_name,
TableName=glue_tbl_name,
PartitionInputList=[get_part_def(p) for p in batch]
)
# ## Processing Job
# +
import logging
import sagemaker
from time import gmtime, strftime
sagemaker_logger = logging.getLogger("sagemaker")
sagemaker_logger.setLevel(logging.INFO)
sagemaker_logger.addHandler(logging.StreamHandler())
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
# +
from sagemaker.spark.processing import PySparkProcessor
spark_processor = PySparkProcessor(
base_job_name="spark-preprocessor",
framework_version="3.0",
role=role,
instance_count=15,
instance_type="ml.m5.4xlarge",
max_runtime_in_seconds=7200,
)
configuration = [
{
"Classification": "spark-defaults",
"Properties": {"spark.executor.memory": "18g",
"spark.yarn.executor.memoryOverhead": "3g",
"spark.driver.memory": "18g",
"spark.yarn.driver.memoryOverhead": "3g",
"spark.executor.cores": "5",
"spark.driver.cores": "5",
"spark.executor.instances": "44",
"spark.default.parallelism": "440",
"spark.dynamicAllocation.enabled": "false"
},
},
{
"Classification": "yarn-site",
"Properties": {"yarn.nodemanager.vmem-check-enabled": "false",
"yarn.nodemanager.mmem-check-enabled": "false"},
}
]
spark_processor.run(
submit_app="scripts/preprocess.py",
submit_jars=["s3://crawler-public/json/serde/json-serde.jar"],
arguments=['--s3_input_bucket', s3_bucket,
'--s3_input_key_prefix', s3_prefix_parquet,
'--s3_output_bucket', s3_bucket,
'--s3_output_key_prefix', s3_output_prefix],
spark_event_logs_s3_uri="s3://{}/{}/spark_event_logs".format(s3_bucket, 'sparklogs'),
logs=True,
configuration=configuration
)
| Chapter04/PrepareData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TensorFlow-GPU
# language: python
# name: tf-gpu
# ---
# + colab={} colab_type="code" executionInfo={"elapsed": 354, "status": "ok", "timestamp": 1593797695491, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "18095948978091425523"}, "user_tz": 240} id="_qj_MMBTt4AS"
#Importing libraries
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from os import getcwd
from tensorflow import keras
from tensorflow.keras import layers, Sequential
from tensorflow.keras.layers import LSTM, Dense, Bidirectional, Embedding, Dropout
from sklearn import preprocessing
# + colab={"base_uri": "https://localhost:8080/", "height": 402} colab_type="code" executionInfo={"elapsed": 2192, "status": "ok", "timestamp": 1593797697653, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="fonCKTyjxx9f" outputId="c44216c4-f774-4da2-cd25-ae21e725adf1"
#Reading data
url = "https://hub.mph.in.gov/dataset/bd08cdd3-9ab1-4d70-b933-41f9ef7b809d/resource/afaa225d-ac4e-4e80-9190-f6800c366b58/download/covid_report_county_date.xlsx?raw=true"
cases_data = pd.read_excel(url)
population_data = pd.read_csv(f"{getcwd()}/drive/My Drive/Colab Notebooks/COVID forecasting/indiana countywise population.csv")
#Data preprocessing
population_data["County"] = population_data["County"].str.upper()
population_data["County"] = population_data["County"].str.replace('.', '')
population_data["County"] = population_data["County"].str.replace(' ', '')
cases_data["COUNTY_NAME"] = cases_data["COUNTY_NAME"].str.replace(' ','')
data = pd.merge(cases_data, population_data, how = 'inner', left_on = 'COUNTY_NAME', right_on = 'County')
data = data[["COUNTY_NAME", "DATE", "COVID_COUNT", "COVID_DEATHS", "COVID_TEST", "Population"]]
data
# + colab={"base_uri": "https://localhost:8080/", "height": 284} colab_type="code" executionInfo={"elapsed": 1869, "status": "ok", "timestamp": 1593797697654, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="QUZtkvvyx7Ox" outputId="d9d86e64-7f47-45e2-ce30-8201b3f4fb5f"
data.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 1199, "status": "ok", "timestamp": 1593797697655, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="TFfFV182zKmO" outputId="d95d7f60-de97-44ba-8a93-b2b7ebf373ba"
county_name = data["COUNTY_NAME"].unique()
county_name
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 746, "status": "ok", "timestamp": 1593797697655, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="fMHklST5CO1Q" outputId="64392bb0-544b-4bcf-d5f3-602ad6e52c8e"
print('Length of the sample: ', len(data))
# +
def create_dataset(X, y, time_steps=1):
Xs, ys = [], []
for i in range(len(X) - time_steps):
v = X.iloc[i:(i + time_steps)].values
Xs.append(v)
ys.append(y.iloc[i + time_steps])
return np.array(Xs), np.array(ys)
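`create_dataset` builds overlapping windows of length `time_steps`, with the target taken from the row that follows each window. A small standalone check on toy data (the function is repeated here so the snippet runs on its own):

```python
import numpy as np
import pandas as pd

def create_dataset(X, y, time_steps=1):
    # same windowing logic as above: each sample is `time_steps` consecutive
    # rows of X, and the target is the row of y that follows the window
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i:(i + time_steps)].values)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

df = pd.DataFrame({'a': range(10), 'b': range(10, 20)})
Xs, ys = create_dataset(df[['a', 'b']], df['a'], time_steps=3)
print(Xs.shape, ys.shape)  # (7, 3, 2) (7,)
```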
# + colab={"base_uri": "https://localhost:8080/", "height": 402} colab_type="code" executionInfo={"elapsed": 386, "status": "ok", "timestamp": 1593797701620, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="voojcsL8lQnR" outputId="1f50f470-e033-4b48-f763-d1a99d0caf2b"
#Scaling the data
scaler = preprocessing.StandardScaler()
scaled_data = scaler.fit_transform(data[['COVID_COUNT', 'COVID_DEATHS', 'COVID_TEST', 'Population']])
data["COVID_COUNT"] = scaled_data[:, 0]
data["COVID_DEATHS"] = scaled_data[:, 1]
data["COVID_TEST"] = scaled_data[:, 2]
data["Population"] = scaled_data[:, 3]
data
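Since the fitted `scaler` is reused later to invert predictions, note that `inverse_transform` expects the same four columns seen at fit time, which is why the forecasting loop pads predictions with zero columns. A toy round-trip check (toy data, not the COVID dataset):

```python
import numpy as np
from sklearn import preprocessing

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = preprocessing.StandardScaler()
Z = scaler.fit_transform(X)

# every column is now zero-mean, unit-variance
print(np.allclose(Z.mean(axis=0), 0.0))  # True

# inverse_transform expects the same number of columns seen at fit time,
# which is why the forecasting loop later pads predictions with zero columns
X_back = scaler.inverse_transform(Z)
print(np.allclose(X_back, X))  # True
```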
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 4167, "status": "ok", "timestamp": 1593797708436, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="lB_adjDk7rUN" outputId="dbdd589d-1dc6-4198-e87b-48e29f257bc2"
# reshape to [samples, time_steps, n_features]
time_steps = 21
X_train, y_train = [], []
for county in county_name:
filtered_data = data.loc[data["COUNTY_NAME"] == county]
train_size = len(filtered_data)
filtered_train = filtered_data.iloc[0:train_size]
filtered_X_train, filtered_y_train = create_dataset(filtered_train[['COVID_COUNT', 'COVID_DEATHS', 'COVID_TEST', 'Population']],
filtered_train[['COVID_COUNT', 'COVID_DEATHS']], time_steps)
if(len(X_train) == 0):
X_train, y_train = filtered_X_train, filtered_y_train
else:
X_train = np.vstack((X_train, filtered_X_train))
y_train = np.vstack((y_train, filtered_y_train))
print(X_train.shape, y_train.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 487} colab_type="code" executionInfo={"elapsed": 17033, "status": "ok", "timestamp": 1593796967696, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="jSAsML7ZR5if" outputId="449f09ea-2d22-4d49-fad1-09aa4453cdeb"
# Loading model
model = keras.models.load_model(f"{getcwd()}/drive/My Drive/Colab Notebooks/COVID forecasting/checkpoint.h5")
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 487} colab_type="code" executionInfo={"elapsed": 2499, "status": "ok", "timestamp": 1593797716372, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="Oc5Cnw_S9Mrn" outputId="a39dfc59-3d02-4805-ffa8-274668b377cf"
# #Defining Model
# model = Sequential()
# model.add(LSTM(1024, return_sequences=True, input_shape=(X_train.shape[1], X_train.shape[2])))
# model.add(Dropout(0.1))
# model.add(LSTM(512, return_sequences=True))
# model.add(Dropout(0.1))
# model.add(LSTM(256, return_sequences=True))
# model.add(Dropout(0.1))
# model.add(LSTM(64, return_sequences=False))
# model.add(Dropout(0.1))
# model.add(Dense(32))
# model.add(Dense(2))
# model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 3141414, "status": "ok", "timestamp": 1593802474365, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhJbKrDE2AXAF9Ho64tx_d-Q_xri8XcE8jiHAAjWQ=s64", "userId": "18095948978091425523"}, "user_tz": 240} id="nfswIV3MsWTe" outputId="88648d58-78c5-428f-bf94-9d57db629c9a"
model.compile(loss = 'mean_squared_error', optimizer = 'rmsprop')
history = model.fit(
X_train, y_train,
epochs = 500
)
# +
checkpoint_path = f"{getcwd()}/drive/My Drive/Colab Notebooks/COVID forecasting/checkpoint.h5"
model.save(checkpoint_path)
# +
import datetime as dt
for county in county_name:
demo_data = data[data['COUNTY_NAME'] == county]
demo_data = demo_data[['COVID_COUNT', 'COVID_DEATHS', 'COVID_TEST', 'Population']]
demo_data = demo_data.tail(time_steps + 1)
demo_X_test, demo_y_test = create_dataset(demo_data[['COVID_COUNT', 'COVID_DEATHS', 'COVID_TEST', 'Population']],
demo_data[['COVID_COUNT', 'COVID_DEATHS']], time_steps)
predictions = np.concatenate((model.predict(demo_X_test), np.zeros((1, 2))), axis = 1)
predictions = np.intc(scaler.inverse_transform(predictions))
# print(county, ':', predictions[:, 0:2])
transformed_demo_data = np.intc(scaler.inverse_transform(demo_data))
demo_data['COVID_COUNT'] = transformed_demo_data[:, 0]
demo_data['COVID_DEATHS'] = transformed_demo_data[:, 1]
demo_data = demo_data[['COVID_COUNT', 'COVID_DEATHS']]
demo_data = np.concatenate((demo_data, predictions[:, 0:2]), axis = 0)
date_data = data[['DATE']]
date_data = date_data.tail(time_steps + 2)
    date_data['DATE'] = pd.to_datetime(date_data['DATE']) + pd.DateOffset(days=1)
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.suptitle('COVID FORECASTING for ' + county + ' County')
fig.set_size_inches(15, 6)
plt.setp(ax1.xaxis.get_majorticklabels(), rotation=90)
plt.setp(ax2.xaxis.get_majorticklabels(), rotation=90)
ax1.plot(date_data, demo_data[:, 0], linestyle='-', marker='o', color = 'red', label = 'COVID CASES')
ax1.grid(True)
ax1.set_xlabel('Time')
ax1.set_ylabel('Count')
ax1.legend()
ax2.plot(date_data, demo_data[:, 1], linestyle='-', marker='o', color = 'blue', label = 'COVID DEATHS')
ax2.grid(True)
ax2.set_xlabel('Time')
ax2.set_ylabel('Count')
ax2.legend()
# plt.show()
fig.savefig(f"{getcwd()}/drive/My Drive/Colab Notebooks/COVID forecasting/output images/"+county+".png", bbox_inches='tight')
plt.close()
# +
| starter-template/COVID Forecasting/.ipynb_checkpoints/COVID Forecasting-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp models.MLP
# -
# # MLP
#
# > This is an unofficial PyTorch implementation by <NAME> (<EMAIL>) based on **<NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2019). Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, 33(4), 917-963.** Official MLP TensorFlow implementation in https://github.com/hfawaz/dl-4-tsc/blob/master/classifiers/mlp.py
#export
from tsai.imports import *
from tsai.models.layers import *
#export
class MLP(Module):
def __init__(self, c_in, c_out, seq_len, layers=[500,500,500], ps=[0.1, 0.2, 0.2], act_cls=nn.ReLU(inplace=True),
use_bn=False, bn_final=False, lin_first=False, fc_dropout=0.5, y_range=None):
layers, ps = L(layers), L(ps)
if len(ps) <= 1: ps = ps * len(layers)
assert len(layers) == len(ps), '#layers and #ps must match'
self.flatten = Reshape(-1)
nf = [c_in * seq_len] + layers
self.mlp = nn.ModuleList()
for i in range(len(layers)): self.mlp.append(LinBnDrop(nf[i], nf[i+1], bn=use_bn, p=ps[i], act=act_cls, lin_first=lin_first))
_head = [LinBnDrop(nf[-1], c_out, bn=bn_final, p=fc_dropout)]
if y_range is not None: _head.append(SigmoidRange(*y_range))
self.head = nn.Sequential(*_head)
def forward(self, x):
x = self.flatten(x)
for mlp in self.mlp: x = mlp(x)
return self.head(x)
bs = 16
nvars = 3
seq_len = 128
c_out = 2
xb = torch.rand(bs, nvars, seq_len)
model = MLP(nvars, c_out, seq_len)
test_eq(model(xb).shape, (bs, c_out))
model
#hide
out = create_scripts()
beep(out)
| nbs/103_models.MLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.environ['THEANO_FLAGS'] = 'optimizer=fast_compile'  # must be set before theano is imported to take effect

from scipy.integrate import odeint
import numpy as np
import theano
import theano.tensor as tt  # makes theano.tensor available for the custom Ops below
import matplotlib.pyplot as plt
import pymc3 as pm
# -
# # Lotka-Volterra with manual gradients
#
# by [<NAME>](https://www.mrc-bsu.cam.ac.uk/people/in-alphabetical-order/a-to-g/sanmitra-ghosh/)
# Mathematical models are used ubiquitously in a variety of science and engineering domains to model the time evolution of physical variables. These mathematical models are often described as ODEs that are characterised by model structure - the functions of the dynamical variables - and model parameters. However, for the vast majority of systems of practical interest it is necessary to infer both the model parameters and an appropriate model structure from experimental observations. This experimental data is often scarce and incomplete. Furthermore, a large variety of models described as dynamical systems show traits of sloppiness (see [Gutenkunst et al., 2007](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0030189)) and have unidentifiable parameter combinations. The task of inferring model parameters and structure from experimental data is of paramount importance to reliably analyse the behaviour of dynamical systems and draw faithful predictions in light of the difficulties posed by their complexities. Moreover, any future model prediction should encompass and propagate variability and uncertainty in model parameters and/or structure. Thus, it is also important that the inference methods are equipped to quantify and propagate the aforementioned uncertainties from the model descriptions to model predictions. As a natural choice to handle uncertainty, at least in the parameters, Bayesian inference is increasingly used to fit ODE models to experimental data ([<NAME>, 2008](https://www.sciencedirect.com/science/article/pii/S030439750800501X)). However, due to some of the difficulties that I pointed out above, fitting an ODE model using Bayesian inference is a challenging task. In this tutorial I am going to take up that challenge and will show how PyMC3 could potentially be used for this purpose.
#
# I must point out that model fitting (inference of the unknown parameters) is just one of many crucial tasks that a modeller has to complete in order to gain a deeper understanding of a physical process. However, success in this task is crucial and this is where PyMC3, and probabilistic programming (ppl) in general, is extremely useful. The modeller can take full advantage of the variety of samplers and distributions provided by PyMC3 to automate inference.
#
# In this tutorial I will focus on the fitting exercise, that is estimating the posterior distribution of the parameters given some noisy experimental time series.
# ## Bayesian inference of the parameters of an ODE
#
# I begin by first introducing the Bayesian framework for inference in a coupled non-linear ODE defined as
# $$
# \frac{d X(t)}{dt}=\boldsymbol{f}\big(X(t),\boldsymbol{\theta}\big),
# $$
# where $X(t)\in\mathbb{R}^K$ is the solution, at each time point, of the system composed of $K$ coupled ODEs - the state vector - and $\boldsymbol{\theta}\in\mathbb{R}^D$ is the parameter vector that we wish to infer. $\boldsymbol{f}(\cdot)$ is a non-linear function that describes the governing dynamics. Also, in case of an initial value problem, let the matrix $\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$ denote the solution of the above system of equations at some specified time points for the parameters $\boldsymbol{\theta}$ and initial conditions $\mathbf{x_0}$.
#
# Consider a set of noisy experimental observations $\boldsymbol{Y} \in \mathbb{R}^{T\times K}$ observed at $T$ experimental time points for the $K$ states. We can obtain the likelihood $p(\boldsymbol{Y}|\boldsymbol{X})$, where I use the symbol $\boldsymbol{X}:=\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$, and combine that with a prior distribution $p(\boldsymbol{\theta})$ on the parameters, using the Bayes theorem, to obtain the posterior distribution as
# $$
# p(\boldsymbol{\theta}|\boldsymbol{Y})=\frac{1}{Z}p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}),
# $$
# where $Z=\int p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}) d\boldsymbol{\theta} $ is the intractable marginal likelihood. Due to this intractability we resort to approximate inference and apply MCMC.
#
# For this tutorial I have chosen two ODEs:
# 1. The [__Lotka-Volterra predator prey model__ ](http://www.scholarpedia.org/article/Predator-prey_model)
# 2. The [__Fitzhugh-Nagumo action potential model__](http://www.scholarpedia.org/article/FitzHugh-Nagumo_model)
#
# I will showcase two distinctive approaches (__NUTS__ and __SMC__ step methods), supported by PyMC3, for the estimation of unknown parameters in these models.
# ## Lotka-Volterra predator prey model
#
# The Lotka Volterra model depicts an ecological system that is used to describe the interaction between a predator and prey species. This ODE given by
# $$
# \begin{aligned}
# \frac{d x}{dt} &=\alpha x -\beta xy \\
# \frac{d y}{dt} &=-\gamma y + \delta xy,
# \end{aligned}
# $$
# shows limit cycle behaviour and has often been used for benchmarking Bayesian inference methods. $\boldsymbol{\theta}=(\alpha,\beta,\gamma,\delta, x(0),y(0))$ is the set of unknown parameters that we wish to infer from experimental observations of the state vector $X(t)=(x(t),y(t))$ comprising the concentrations of the prey and the predator species respectively. $x(0), y(0)$ are the initial values of the states needed to solve the ODE, which are also treated as unknown quantities. The predator prey model was recently used to demonstrate the applicability of the NUTS sampler, and the Stan ppl in general, for inference in ODE models. I will closely follow [this](https://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html) Stan tutorial and thus I will set up this model and the associated inference problem (including the data) exactly as was done for the Stan tutorial. Let me first write down the code to solve this ODE using SciPy's `odeint`. Note that the methods in this tutorial are not limited or tied to `odeint`. Here I have chosen `odeint` simply to stay within PyMC3's dependencies (SciPy in this case).
class LotkaVolterraModel(object):
def __init__(self, y0=None):
self._y0 = y0
def simulate(self, parameters, times):
alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]
def rhs(y, t, p):
X, Y = y
dX_dt = alpha*X - beta*X*Y
dY_dt = -gamma*Y + delta*X*Y
return dX_dt, dY_dt
values = odeint(rhs, [Xt0, Yt0], times, (parameters,))
return values
ode_model = LotkaVolterraModel()
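As a quick sanity check of the wrapper above, the same right-hand side can be integrated directly with `odeint` (the parameter values below are illustrative, not the inferred ones):

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative Lotka-Volterra parameters; the real values are inferred later.
def rhs(y, t, theta):
    alpha, beta, gamma, delta = theta
    X, Y = y
    return [alpha*X - beta*X*Y, -gamma*Y + delta*X*Y]

times = np.linspace(0, 20, 100)
sol = odeint(rhs, [10.0, 5.0], times, args=((0.5, 0.05, 0.8, 0.025),))
print(sol.shape)  # (100, 2)
```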
# ## Handling ODE gradients
#
# NUTS requires the gradient of the log of the target density w.r.t. the unknown parameters, $\nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y})$, which can be evaluated using the chain rule of differentiation as
# $$ \nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y}) = \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}.$$
#
# The gradient of an ODE w.r.t. its parameters, the term $\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, can be obtained using local sensitivity analysis, although this is not the only method to obtain gradients. However, just like solving an ODE (a non-linear one to be precise) evaluation of the gradients can only be carried out using some sort of numerical method, say for example the famous Runge-Kutta method for non-stiff ODEs. PyMC3 uses Theano as the automatic differentiation engine and thus all models are implemented by stitching together available primitive operations (Ops) supported by Theano. Even to extend PyMC3 we need to compose models that can be expressed as symbolic combinations of Theano's Ops. However, if we take a step back and think about Theano then it is apparent that neither the ODE solution nor its gradient w.r.t. the parameters can be expressed symbolically as combinations of Theano’s primitive Ops. Hence, from Theano’s perspective an ODE (and for that matter any other form of a non-linear differential equation) is a non-differentiable black-box function. However, one might argue that if a numerical method is coded up in Theano (using say the `scan` Op), then it is possible to symbolically express the numerical method that evaluates the ODE states, and then we can easily use Theano’s automatic differentiation engine to obtain the gradients as well by differentiating through the numerical solver itself. I would like to point out that the former, obtaining the solution, is indeed possible this way but the obtained gradient would be error-prone. Additionally, this amounts to a complete ‘re-inventing the wheel’ as one would have to implement decades-old sophisticated numerical algorithms again from scratch in Theano.
#
# Thus, in this tutorial I am going to present the alternative approach which consists of defining new [custom Theano Ops](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html), extending Theano, that will wrap both the numerical solution and the vector-Matrix product, $ \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, often known as the _**vector-Jacobian product**_ (VJP) in automatic differentiation literature. I like to point out here that in the context of non-linear ODEs the term Jacobian is used to denote gradients of the ODE dynamics $\boldsymbol{f}$ w.r.t. the ODE states $X(t)$. Thus, to avoid confusion, from now on I will use the term _**vector-sensitivity product**_ (VSP) to denote the same quantity that the term VJP denotes.
#
# I will start by introducing the forward sensitivity analysis.
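At the level of array shapes, the VSP described above is just one matrix-vector product. A NumPy sketch with assumed toy dimensions ($T=5$ time points, $K=2$ states, $D=6$ parameters):

```python
import numpy as np

# Toy dimensions (assumptions for illustration): T time points, K states, D parameters
T, K, D = 5, 2, 6
rng = np.random.default_rng(0)

sens = rng.standard_normal((T, K, D))   # sensitivities dX/dtheta at each time point
g = rng.standard_normal(T * K)          # gradient of the log-target w.r.t. vec(X)

# VSP: flatten the sensitivities to (T*K, D) and left-multiply by g
J = sens.reshape(T * K, D)
vsp = J.T.dot(g)
print(vsp.shape)  # (6,)
```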
#
# ## ODE sensitivity analysis
#
# For a coupled ODE system $\frac{d X(t)}{dt} = \boldsymbol{f}(X(t),\boldsymbol{\theta})$, the local sensitivity of the solution to a parameter is defined by how much the solution would change with changes in the parameter, i.e. the sensitivity of the $k$-th state is, simply put, the time evolution of its gradient w.r.t. the $d$-th parameter. This quantity, denoted as $Z_{kd}(t)$, is given by
# $$Z_{kd}(t)=\frac{d }{d t} \left\{\frac{\partial X_k (t)}{\partial \theta_d}\right\} = \sum_{i=1}^K \frac{\partial f_k}{\partial X_i (t)}\frac{\partial X_i (t)}{\partial \theta_d} + \frac{\partial f_k}{\partial \theta_d}.$$
#
# Using forward sensitivity analysis we can obtain both the state $X(t)$ and its derivative w.r.t. the parameters, at each time point, as the solution to an initial value problem by augmenting the original ODE system with the sensitivity equations $Z_{kd}$. The augmented ODE system $\big(X(t), Z(t)\big)$ can then be solved together using a chosen numerical method. The augmented ODE system needs the initial values for the sensitivity equations. All of these should be set to zero except the ones where the sensitivity of a state w.r.t. its own initial value is sought, that is $ \frac{\partial X_k(t)}{\partial X_k (0)} =1 $. Note that in order to solve this augmented system we have to embark on the tedious process of deriving the $ \frac{\partial f_k}{\partial X_i (t)}$ terms, also known as the Jacobian of the ODE, and the $\frac{\partial f_k}{\partial \theta_d}$ terms. Thankfully, many ODE solvers calculate these terms and solve the augmented system when asked for by the user. An example would be the [SUNDIALS CVODES solver suite](https://computation.llnl.gov/projects/sundials/cvodes). A Python wrapper for CVODES can be found [here](https://jmodelica.org/assimulo/).
#
# However, for this tutorial I will go ahead and derive the terms mentioned above manually, and solve the Lotka-Volterra ODEs along with the sensitivities in the following code block. The functions `jac` and `dfdp` below calculate $ \frac{\partial f_k}{\partial X_i (t)}$ and $\frac{\partial f_k}{\partial \theta_d}$ respectively for the Lotka-Volterra model. For convenience I have transformed the sensitivity equations into matrix form. Here I have extended the solver code snippet above to include the sensitivities when asked for.
# +
n_states = 2
n_odeparams = 4
n_ivs = 2
class LotkaVolterraModel(object):
def __init__(self, n_states, n_odeparams, n_ivs, y0=None):
self._n_states = n_states
self._n_odeparams = n_odeparams
self._n_ivs = n_ivs
self._y0 = y0
def simulate(self, parameters, times):
return self._simulate(parameters, times, False)
def simulate_with_sensitivities(self, parameters, times):
return self._simulate(parameters, times, True)
def _simulate(self, parameters, times, sensitivities):
alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]
def r(y, t, p):
X, Y = y
dX_dt = alpha*X - beta*X*Y
dY_dt = -gamma*Y + delta*X*Y
return dX_dt, dY_dt
if sensitivities:
def jac(y):
X, Y = y
ret = np.zeros((self._n_states, self._n_states))
ret[0, 0] = alpha - beta*Y
ret[0, 1] = - beta*X
ret[1, 0] = delta*Y
ret[1, 1] = -gamma + delta*X
return ret
def dfdp(y):
X, Y = y
ret = np.zeros((self._n_states,
self._n_odeparams + self._n_ivs)) # except the following entries
ret[0, 0] = X # \frac{\partial [\alpha X - \beta XY]}{\partial \alpha}, and so on...
ret[0, 1] = - X*Y
ret[1, 2] = -Y
ret[1, 3] = X*Y
return ret
def rhs(y_and_dydp, t, p):
y = y_and_dydp[0:self._n_states]
dydp = y_and_dydp[self._n_states:].reshape((self._n_states,
self._n_odeparams + self._n_ivs))
dydt = r(y, t, p)
d_dydp_dt = np.matmul(jac(y), dydp) + dfdp(y)
return np.concatenate((dydt, d_dydp_dt.reshape(-1)))
y0 = np.zeros( (2*(n_odeparams+n_ivs)) + n_states )
y0[6] = 1. #\frac{\partial [X]}{\partial Xt0} at t==0, and same below for Y
y0[13] = 1.
y0[0:n_states] = [Xt0, Yt0]
result = odeint(rhs, y0, times, (parameters,),rtol=1e-6,atol=1e-5)
values = result[:, 0:self._n_states]
dvalues_dp = result[:, self._n_states:].reshape((len(times),
self._n_states,
self._n_odeparams + self._n_ivs))
return values, dvalues_dp
else:
values = odeint(r, [Xt0, Yt0], times, (parameters,),rtol=1e-6,atol=1e-5)
return values
ode_model = LotkaVolterraModel(n_states, n_odeparams, n_ivs)
# -
# For this model I have set the relative and absolute tolerances to $10^{-6}$ and $10^{-5}$ respectively, as was suggested in the Stan tutorial. This will produce sufficiently accurate solutions. Further reducing the tolerances will increase accuracy but at the cost of increasing the computational time. A thorough discussion on the choice and use of a numerical method for solving the ODE is out of the scope of this tutorial. However, I must point out that the inaccuracies of the ODE solver do affect the likelihood and as a result the inference. This is more so the case for stiff systems. I would point interested readers to this nice blog article where this effect is discussed thoroughly for a [cardiac ODE model](https://mirams.wordpress.com/2018/10/17/ode-errors-and-optimisation/). There is also an emerging area of uncertainty quantification that attacks the problem of noise arising from the imprecision of numerical algorithms, [probabilistic numerics](http://probabilistic-numerics.org/). This is indeed an elegant framework to carry out inference while taking into account the errors coming from the numerical ODE solvers.
#
# ## Custom ODE Op
#
# In order to define the custom `Op` I have written down two `theano.Op` classes: `ODEGradop` and `ODEop`. `ODEop` essentially wraps the ODE solution and will be called by PyMC3. `ODEGradop` wraps the numerical VSP and this op is in turn used inside the `grad` method of `ODEop` to return the VSP. Note that we pass in two functions, `state` and `numpy_vsp`, as arguments to the respective Ops. I will define these functions later. These functions act as shims through which we connect the Python code for the numerical solution of the state and the VSP to Theano and thus to PyMC3.
# +
class ODEGradop(theano.Op):
def __init__(self, numpy_vsp):
self._numpy_vsp = numpy_vsp
def make_node(self, x, g):
x = theano.tensor.as_tensor_variable(x)
g = theano.tensor.as_tensor_variable(g)
node = theano.Apply(self, [x, g], [g.type()])
return node
def perform(self, node, inputs_storage, output_storage):
x = inputs_storage[0]
g = inputs_storage[1]
out = output_storage[0]
out[0] = self._numpy_vsp(x, g) # get the numerical VSP
class ODEop(theano.Op):
def __init__(self, state, numpy_vsp):
self._state = state
self._numpy_vsp = numpy_vsp
def make_node(self, x):
x = theano.tensor.as_tensor_variable(x)
return theano.Apply(self, [x], [x.type()])
def perform(self, node, inputs_storage, output_storage):
x = inputs_storage[0]
out = output_storage[0]
out[0] = self._state(x) # get the numerical solution of ODE states
def grad(self, inputs, output_grads):
x = inputs[0]
g = output_grads[0]
grad_op = ODEGradop(self._numpy_vsp) # pass the VSP when asked for gradient
grad_op_apply = grad_op(x, g)
return [grad_op_apply]
# -
# I must point out that, the way I have defined the custom ODE Ops above, there is the possibility that the ODE is solved twice for the same parameter values: once for the states and another time for the VSP. To avoid this behaviour I have written a helper class which stops this double evaluation.
class solveCached(object):
def __init__(self, times, n_params, n_outputs):
self._times = times
self._n_params = n_params
self._n_outputs = n_outputs
self._cachedParam = np.zeros(n_params)
self._cachedSens = np.zeros((len(times), n_outputs, n_params))
self._cachedState = np.zeros((len(times),n_outputs))
def __call__(self, x):
if np.all(x==self._cachedParam):
state, sens = self._cachedState, self._cachedSens
else:
state, sens = ode_model.simulate_with_sensitivities(x, times)
return state, sens
times = np.arange(0, 21) # number of measurement points (see below)
cached_solver=solveCached(times, n_odeparams + n_ivs, n_states)
# ### The ODE state & VSP evaluation
#
# Most ODE systems of practical interest will have multiple states and thus the output of the solver, which I have denoted so far as $\boldsymbol{X}$, for a system with $K$ states solved on $T$ time points, would be a $T \times K$-dimensional matrix. For the Lotka-Volterra model the columns of this matrix represent the time evolution of the individual species concentrations. I flatten this matrix to a $TK$-dimensional vector $vec(\boldsymbol{X})$, and also rearrange the sensitivities accordingly to obtain the desired vector-matrix product. It is beneficial at this point to test the custom Op as described [here](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html#how-to-test-it).
# +
def state(x):
State, Sens = cached_solver(np.array(x,dtype=np.float64))
cached_solver._cachedState, cached_solver._cachedSens, cached_solver._cachedParam = State, Sens, x
return State.reshape((2*len(State),))
def numpy_vsp(x, g):
numpy_sens = cached_solver(np.array(x,dtype=np.float64))[1].reshape((n_states*len(times),len(x)))
return numpy_sens.T.dot(g)
# -
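Following the testing advice linked above, a hand-coded VSP can be checked against central finite differences. The toy function below (not the ODE itself) keeps the check self-contained:

```python
import numpy as np

# Toy check, not the ODE: for f(x) = [x0*x1, x1**2] the Jacobian is
# [[x1, x0], [0, 2*x1]], so the VJP g -> J^T g can be written by hand and
# compared against central finite differences of g . f(x).
def f(x):
    return np.array([x[0] * x[1], x[1] ** 2])

def vjp(x, g):
    J = np.array([[x[1], x[0]], [0.0, 2.0 * x[1]]])
    return J.T.dot(g)

x = np.array([1.5, -0.7])
g = np.array([0.3, 2.0])
eps = 1e-6
fd = np.array([g.dot(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(2)])
print(np.allclose(vjp(x, g), fd, atol=1e-5))  # True
```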
# ## The Hudson's Bay Company data
#
# The Lotka-Volterra predator prey model has been used previously to successfully explain the dynamics of natural populations of predators and prey, such as the lynx and snowshoe hare data of the Hudson's Bay Company. This is the same data (that was shared [here](https://github.com/stan-dev/example-models/tree/master/knitr/lotka-volterra)) used in the Stan example and thus I will use this data-set as the experimental observations $\boldsymbol{Y}(t)$ to infer the parameters.
Year = np.arange(1900,1921,1)
Lynx = np.array([4.0, 6.1, 9.8, 35.2, 59.4, 41.7, 19.0, 13.0, 8.3, 9.1, 7.4,
8.0, 12.3, 19.5, 45.7, 51.1, 29.7, 15.8, 9.7, 10.1, 8.6])
Hare = np.array([30.0, 47.2, 70.2, 77.4, 36.3, 20.6, 18.1, 21.4, 22.0, 25.4,
27.1, 40.3, 57.0, 76.6, 52.3, 19.5, 11.2, 7.6, 14.6, 16.2, 24.7])
plt.figure(figsize=(15, 7.5))
plt.plot(Year,Lynx,color='b', lw=4, label='Lynx')
plt.plot(Year,Hare,color='g', lw=4, label='Hare')
plt.legend(fontsize=15)
plt.xlim([1900,1920])
plt.xlabel('Year', fontsize=15)
plt.ylabel('Concentrations', fontsize=15)
plt.xticks(Year,rotation=45)
plt.title('Lynx (predator) - Hare (prey): oscillatory dynamics', fontsize=25);
# ## The probabilistic model
#
# I have now got all the ingredients needed in order to define the probabilistic model in PyMC3. As I have mentioned previously I will set up the probabilistic model with the exact same likelihood and priors used in the Stan example. The observed data is defined as follows:
#
# $$\log (\boldsymbol{Y(t)}) = \log (\boldsymbol{X(t)}) + \eta(t),$$
#
# where $\eta(t)$ is assumed to be zero mean i.i.d Gaussian noise with an unknown standard deviation $\sigma$, that needs to be estimated. The above multiplicative (on the natural scale) noise model encodes a lognormal distribution as the likelihood:
#
# $$\boldsymbol{Y(t)} \sim \mathcal{L}\mathcal{N}(\log (\boldsymbol{X(t)}), \sigma^2).$$
#
# The following priors are then placed on the parameters:
#
# $$
# \begin{aligned}
# x(0), y(0) &\sim \mathcal{L}\mathcal{N}(\log(10),1),\\
# \alpha, \gamma &\sim \mathcal{N}(1,0.5),\\
# \beta, \delta &\sim \mathcal{N}(0.05,0.05),\\
# \sigma &\sim \mathcal{L}\mathcal{N}(-1,1).
# \end{aligned}
# $$
#
# For an intuitive explanation of the choice of priors and the likelihood model, which I omit here for brevity, I recommend the Stan example mentioned above. The probabilistic model is defined in PyMC3 below. Note that the flattened state vector is reshaped to match the data dimensionality.
#
# Finally, I use the `pm.sample` method to run NUTS by default and obtain $1500$ post warm-up samples from the posterior.
# +
theano.config.exception_verbosity= 'high'
theano.config.floatX = 'float64'
# Define the data matrix
Y = np.vstack((Hare,Lynx)).T
# Now instantiate the theano custom ODE op
my_ODEop = ODEop(state,numpy_vsp)
# The probabilistic model
with pm.Model() as LV_model:
# Priors for unknown model parameters
alpha = pm.Normal('alpha', mu=1, sd=0.5)
beta = pm.Normal('beta', mu=0.05, sd=0.05)
gamma = pm.Normal('gamma', mu=1, sd=0.5)
delta = pm.Normal('delta', mu=0.05, sd=0.05)
xt0 = pm.Lognormal('xt0', mu=np.log(10), sd=1)
yt0 = pm.Lognormal('yt0', mu=np.log(10), sd=1)
sigma = pm.Lognormal('sigma', mu=-1, sd=1, shape=2)
# Forward model
all_params = pm.math.stack([alpha,beta,gamma,delta,xt0,yt0],axis=0)
ode_sol = my_ODEop(all_params)
forward = ode_sol.reshape(Y.shape)
# Likelihood
Y_obs = pm.Lognormal('Y_obs', mu=pm.math.log(forward), sd=sigma, observed=Y)
trace = pm.sample(1500, tune=1000, init='adapt_diag')
trace['diverging'].sum()
# -
with LV_model:
pm.traceplot(trace);
import pandas as pd
summary = pm.summary(trace)
STAN_mus = [0.549, 0.028, 0.797, 0.024, 33.960, 5.949, 0.248, 0.252]
STAN_sds = [0.065, 0.004, 0.091, 0.004, 2.909, 0.533, 0.045, 0.044]
summary['STAN_mus'] = pd.Series(np.array(STAN_mus), index=summary.index)
summary['STAN_sds'] = pd.Series(np.array(STAN_sds), index=summary.index)
summary
# These estimates are almost identical to those obtained in the Stan tutorial (see the last two columns above), as we would expect. Posterior predictive samples can be drawn as below.
ppc_samples = pm.sample_posterior_predictive(trace, samples=1000, model=LV_model)['Y_obs']
mean_ppc = ppc_samples.mean(axis=0)
CriL_ppc = np.percentile(ppc_samples,q=2.5,axis=0)
CriU_ppc = np.percentile(ppc_samples,q=97.5,axis=0)
plt.figure(figsize=(15, 2*(5)))
plt.subplot(2,1,1)
plt.plot(Year,Lynx,'o', color='b', lw=4, ms=10.5)
plt.plot(Year,mean_ppc[:,1], color='b', lw=4)
plt.plot(Year,CriL_ppc[:,1], '--', color='b', lw=2)
plt.plot(Year,CriU_ppc[:,1], '--', color='b', lw=2)
plt.xlim([1900,1920])
plt.ylabel('Lynx conc', fontsize=15)
plt.xticks(Year,rotation=45);
plt.subplot(2,1,2)
plt.plot(Year,Hare,'o', color='g', lw=4, ms=10.5, label='Observed')
plt.plot(Year,mean_ppc[:,0], color='g', lw=4, label='mean of ppc')
plt.plot(Year,CriL_ppc[:,0], '--', color='g', lw=2, label='credible intervals')
plt.plot(Year,CriU_ppc[:,0], '--', color='g', lw=2)
plt.legend(fontsize=15)
plt.xlim([1900,1920])
plt.xlabel('Year', fontsize=15)
plt.ylabel('Hare conc', fontsize=15)
plt.xticks(Year,rotation=45);
# # Efficient exploration of the posterior landscape with SMC
#
# It has been pointed out in several papers that the complex non-linear dynamics of an ODE result in a posterior landscape that is extremely difficult for many MCMC samplers to navigate efficiently. Thus, the curvature information of the posterior surface has recently been used to construct powerful geometry-aware samplers ([<NAME> and <NAME>, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x)) that perform extremely well in ODE inference problems. Another set of ideas suggests breaking a complex inference task down into a sequence of simpler ones. In essence, the idea is to use sequential importance sampling to sample from an artificial sequence of increasingly complex distributions, where the first in the sequence is a distribution that is easy to sample from, the prior, and the last is the actual complex target distribution. The associated importance distribution is constructed by moving the set of particles sampled at the previous step using a Markov kernel, for example the MH kernel.
#
# A simple way of building the sequence of distributions is to use a temperature $\beta$ that is raised slowly from $0$ to $1$. Using this temperature variable $\beta$ we can write the annealed intermediate distribution as
#
# $$p_{\beta}(\boldsymbol{\theta}|\boldsymbol{y})\propto p(\boldsymbol{y}|\boldsymbol{\theta})^{\beta} p(\boldsymbol{\theta}).$$
#
# Samplers that carry out sequential importance sampling over this artificial sequence of distributions, to avoid the difficult task of sampling directly from $p(\boldsymbol{\theta}|\boldsymbol{y})$, are known as Sequential Monte Carlo (SMC) samplers ([P Del Moral et al., 2006](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9868.2006.00553.x)). The performance of these samplers is sensitive to the choice of temperature schedule, that is, the set of user-defined increasing values of $\beta$ between $0$ and $1$. Fortunately, PyMC3 provides a version of the SMC sampler ([<NAME> and <NAME>, 2007](https://ascelibrary.org/doi/10.1061/%28ASCE%290733-9399%282007%29133%3A7%28816%29)) that works out this temperature schedule automatically. Moreover, PyMC3's SMC sampler does not require the gradient of the log target density, which makes it extremely easy to use for inference in ODE models. In the next example I will apply this SMC sampler to estimate the parameters of the Fitzhugh-Nagumo model.
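# The tempering idea can be sketched in a few lines on a toy problem where the posterior is known analytically. This is only an illustration of the mechanics (a fixed ladder and a hand-tuned random-walk kernel), not PyMC3's implementation, which adapts the schedule automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model: theta ~ N(0, tau0^2) prior, y_i ~ N(theta, 1) likelihood
tau0 = 5.0
y = rng.normal(2.0, 1.0, size=20)

def loglik(theta):
    return -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)

n = 5000
theta = rng.normal(0.0, tau0, size=n)   # beta = 0: particles from the prior
betas = np.linspace(0.0, 1.0, 11)       # fixed temperature ladder

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Incremental importance weights p_b / p_{b_prev} = likelihood^(b - b_prev)
    ll = loglik(theta)
    w = np.exp((b - b_prev) * (ll - ll.max()))
    theta = theta[rng.choice(n, size=n, p=w / w.sum())]  # resample
    # Move particles with a few random-walk MH steps targeting p_b
    for _ in range(5):
        prop = theta + rng.normal(0.0, 0.3, size=n)
        log_acc = (b * (loglik(prop) - loglik(theta))
                   - 0.5 * (prop**2 - theta**2) / tau0**2)
        accept = np.log(rng.uniform(size=n)) < log_acc
        theta = np.where(accept, prop, theta)

post_mean = y.sum() / (len(y) + 1.0 / tau0**2)  # analytic posterior mean
print(theta.mean(), post_mean)
```

# The particle mean should land close to the analytic posterior mean, even though no direct sampling from the posterior was ever performed.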
# ## The Fitzhugh-Nagumo model
#
# The Fitzhugh-Nagumo model given by
# $$
# \begin{aligned}
# \frac{dV}{dt}&=(V - \frac{V^3}{3} + R)c\\
# \frac{dR}{dt}&=\frac{-(V-a+bR)}{c},
# \end{aligned}
# $$
# consisting of a membrane voltage variable $V(t)$ and a recovery variable $R(t)$, is a two-dimensional simplification of the [Hodgkin-Huxley](http://www.scholarpedia.org/article/Conductance-based_models) model of spike (action potential) generation in squid giant axons, where $a$, $b$, $c$ are the model parameters. This model produces rich dynamics and, as a result, a complex posterior geometry that often leads to poor performance of many MCMC samplers. It was therefore used to test the efficacy of the geometric MCMC scheme discussed above and has since been used to benchmark other novel MCMC methods. Following [<NAME> and <NAME>, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will also use artificially generated data from this model to set up the inference task of estimating $\boldsymbol{\theta}=(a,b,c)$.
class FitzhughNagumoModel(object):
def __init__(self, times, y0=None):
self._y0 = np.array([-1, 1], dtype=np.float64)
self._times = times
def _simulate(self, parameters, times):
a, b, c = [float(x) for x in parameters]
def rhs(y, t, p):
V, R = y
dV_dt = (V - V**3 / 3 + R) * c
dR_dt = (V - a + b * R) / -c
return dV_dt, dR_dt
values = odeint(rhs, self._y0, times, (parameters,),rtol=1e-6,atol=1e-6)
return values
def simulate(self, x):
return self._simulate(x, self._times)
# ## Simulated Data
#
# For this example I am going to use simulated data: I will generate noisy traces from the forward model defined above with the parameters $\theta$ set to $(0.2,0.2,3)$, corrupted by i.i.d Gaussian noise with standard deviation $\sigma=0.5$. The initial values are set to $V(0)=-1$ and $R(0)=1$. Again following [<NAME> and <NAME>, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will assume that the initial values are known. These parameter values push the model into the oscillatory regime.
n_states = 2
n_times = 200
true_params = [0.2,0.2,3.]
noise_sigma = 0.5
FN_solver_times = np.linspace(0, 20, n_times)
ode_model = FitzhughNagumoModel(FN_solver_times)
sim_data = ode_model.simulate(true_params)
np.random.seed(42)
Y_sim = sim_data + np.random.randn(n_times,n_states)*noise_sigma
plt.figure(figsize=(15, 7.5))
plt.plot(FN_solver_times, sim_data[:,0], color='darkblue', lw=4, label=r'$V(t)$')
plt.plot(FN_solver_times, sim_data[:,1], color='darkgreen', lw=4, label=r'$R(t)$')
plt.plot(FN_solver_times, Y_sim[:,0], 'o', color='darkblue', ms=4.5, label='Noisy traces')
plt.plot(FN_solver_times, Y_sim[:,1], 'o', color='darkgreen', ms=4.5)
plt.legend(fontsize=15)
plt.xlabel('Time',fontsize=15)
plt.ylabel('Values',fontsize=15)
plt.title('Fitzhugh-Nagumo Action Potential Model', fontsize=25);
# ## Define a non-differentiable black-box op using Theano @as_op
#
# Recall that the SMC sampler does not require gradients; the same is true of other samplers supported in PyMC3, such as Metropolis-Hastings and the Slice sampler. For all these gradient-free samplers I will show a simple and quick way of wrapping the forward model, i.e. the ODE solution, in Theano. All we have to do is use the decorator `as_op`, which converts a Python function into a basic Theano Op. Using `as_op` we also tell Theano that we have three parameters, each a Theano scalar. The output is then a Theano matrix whose columns are the state vectors.
# +
import theano.tensor as tt
from theano.compile.ops import as_op
@as_op(itypes=[tt.dscalar,tt.dscalar,tt.dscalar], otypes=[tt.dmatrix])
def th_forward_model(param1,param2,param3):
param = [param1,param2,param3]
th_states = ode_model.simulate(param)
return th_states
# -
# ## Generative model
#
# Since I have corrupted the original traces with i.i.d Gaussian noise, the likelihood is given by
# $$p(\boldsymbol{Y}|\boldsymbol{\theta}) = \prod_{i=1}^T \mathcal{N}(\boldsymbol{Y}(t_i)\,|\,\boldsymbol{X}(t_i), \sigma^2\mathbb{I}),$$
# where $\mathbb{I}\in \mathbb{R}^{K \times K}$ is the identity matrix. We place Gamma, Normal, and Uniform priors on $(a,b,c)$ and a HalfNormal prior on $\sigma$ as follows:
# $$
# \begin{aligned}
# a & \sim \mathcal{Gamma}(2,1),\\
# b & \sim \mathcal{N}(0,1),\\
# c & \sim \mathcal{U}(0.1,10),\\
# \sigma & \sim \mathcal{H}(1).
# \end{aligned}
# $$
#
# Notice how I have used the `start` argument for this example. Just like `pm.sample`, `pm.sample_smc` has a number of settings, but I found the defaults good enough for simple models such as this one.
draws = 1000
with pm.Model() as FN_model:
a = pm.Gamma('a', alpha=2, beta=1)
b = pm.Normal('b', mu=0, sd=1)
c = pm.Uniform('c', lower=0.1, upper=10)
sigma = pm.HalfNormal('sigma', sd=1)
forward = th_forward_model(a,b,c)
cov=np.eye(2)*sigma**2
Y_obs = pm.MvNormal('Y_obs', mu=forward, cov=cov, observed=Y_sim)
startsmc = {v.name:np.random.uniform(1e-3,2, size=draws) for v in FN_model.free_RVs}
trace_FN = pm.sample_smc(draws, start=startsmc)
pm.plot_posterior(trace_FN, kind='hist', bins=30, color='seagreen');
# ## Inference summary
#
# With `pm.sample_smc`, do I get performance similar to geometric MCMC samplers (see [<NAME> and <NAME>, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x))? I think so!
results=[pm.summary(trace_FN, ['a']),pm.summary(trace_FN, ['b']),pm.summary(trace_FN, ['c'])\
,pm.summary(trace_FN, ['sigma'])]
results=pd.concat(results)
true_params.append(noise_sigma)
results['True values'] = pd.Series(np.array(true_params), index=results.index)
true_params.pop();
results
# ## Reconstruction of the phase portrait
#
# It is good to check that we can reconstruct the (famous) phase portrait for this model from the obtained samples.
# +
params=np.array([trace_FN.get_values('a'),trace_FN.get_values('b'),trace_FN.get_values('c')]).T
params.shape
new_values = []
for ind in range(len(params)):
ppc_sol= ode_model.simulate(params[ind])
new_values.append(ppc_sol)
new_values = np.array(new_values)
mean_values = np.mean(new_values, axis=0)
plt.figure(figsize=(15, 7.5))
plt.plot(mean_values[:,0], mean_values[:,1], color='black', lw=4, label='Inferred (mean of sampled) phase portrait')
plt.plot(sim_data[:,0], sim_data[:,1], '--', color='#ff7f0e', lw=4, ms=6, label='True phase portrait')
plt.legend(fontsize=15)
plt.xlabel(r'$V(t)$',fontsize=15)
plt.ylabel(r'$R(t)$',fontsize=15);
# -
# # Perspectives
#
# ### Using some other ODE models
#
# I have tried to keep everything as general as possible: my custom ODE Op, the state and VSP evaluators, and the cached solver are not tied to a specific ODE model. Thus, to use any other ODE model, one only needs to implement a `simulate_with_sensitivities` method for that model.
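# To make that interface concrete, here is a minimal sketch for a scalar linear ODE $dx/dt = -\theta x$, for which the sensitivity $z = \partial x/\partial \theta$ obeys $dz/dt = \frac{\partial F}{\partial x} z + \frac{\partial F}{\partial \theta} = -\theta z - x$. Only the method name `simulate_with_sensitivities` comes from the text above; the model and return convention here are illustrative.

```python
import numpy as np
from scipy.integrate import odeint

class LinearDecayModel:
    """Toy model dx/dt = -theta * x, solved jointly with its sensitivity."""

    def __init__(self, times, x0=1.0):
        self._times = times
        self._x0 = x0

    def simulate_with_sensitivities(self, parameters):
        theta = float(parameters[0])

        def rhs(state, t):
            x, z = state                 # z carries dx/dtheta
            return [-theta * x,          # the ODE itself
                    -x - theta * z]      # its forward sensitivity equation

        sol = odeint(rhs, [self._x0, 0.0], self._times, rtol=1e-8, atol=1e-8)
        return sol[:, 0], sol[:, 1]

times = np.linspace(0.0, 2.0, 5)
x, dx_dtheta = LinearDecayModel(times).simulate_with_sensitivities([0.7])
# Analytic solution for comparison: x = exp(-theta*t), dx/dtheta = -t*exp(-theta*t)
```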
#
# ### Other forms of differential equation (DDE, DAE, PDE)
#
# I hope the two examples have elucidated the applicability of PyMC3 to fitting ODE models. Although ODEs are the most fundamental constituent of a mathematical model, there are other forms of dynamical system, such as delay differential equations (DDEs), differential algebraic equations (DAEs) and partial differential equations (PDEs), whose parameter estimation is equally important. The SMC sampler, and for that matter any other gradient-free sampler supported by PyMC3, can be used to fit all these forms of differential equation, of course using `as_op`. However, just as for an ODE, we can solve augmented DDE/DAE systems along with their sensitivity equations. The sensitivity equations for DDEs and DAEs can be found in a recent paper, [C Rackauckas et al., 2018](https://arxiv.org/abs/1812.01892) (Equations 9 and 10). Thus we can easily apply the NUTS sampler to these models as well.
#
# ### Stan already supports ODEs
#
# Well, there are many problems where I believe the SMC sampler would be more suitable than NUTS, so it is good to have that option.
#
# ### Model selection
#
# Most of the ODE inference literature since [<NAME> and <NAME>, 2008](https://academic.oup.com/bioinformatics/article/24/6/833/192524) recommends the use of Bayes factors for model selection/comparison. This involves calculating the marginal likelihood, a much more nuanced topic that I will refrain from discussing here. Fortunately, the SMC sampler calculates the marginal likelihood as a by-product, so it can be used to obtain Bayes factors. See PyMC3's other tutorials for how to obtain the marginal likelihood after running the SMC sampler.
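# The arithmetic of turning two log marginal likelihoods into a Bayes factor is simple; the numbers below are made up purely for illustration, not taken from any real run.

```python
import math

# Hypothetical log marginal likelihoods for two competing ODE models
log_ml_m1 = -184.3
log_ml_m2 = -189.1

log_bf = log_ml_m1 - log_ml_m2   # log Bayes factor of M1 over M2
bf = math.exp(log_bf)
print('log BF = {:.1f}, BF = {:.1f}'.format(log_bf, bf))
```

# On common interpretation scales a log Bayes factor of this size would count as strong evidence for the first model, though such thresholds should be applied with care.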
#
# Since we generally frame ODE inference as a regression problem (along with an i.i.d measurement noise assumption in most cases), we can directly use any of the supported information criteria, such as the widely applicable information criterion (WAIC), irrespective of which sampler is used for inference. See PyMC3's API for further information regarding WAIC.
#
# ### Other AD packages
#
# Although this is a slight digression, I would still like to share my observations on this issue. The approach presented here for embedding an ODE (it also extends to DDEs/DAEs) as a custom Op carries over trivially to other AD packages such as TensorFlow and PyTorch. I have been able to use TensorFlow's [py_func](https://www.tensorflow.org/api_docs/python/tf/py_func) to build a custom TensorFlow ODE Op and then use it in the [Edward](http://edwardlib.org/) PPL. For those interested in the [Pyro](http://pyro.ai/) PPL, I recommend [this](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html) tutorial on writing PyTorch extensions.
#
#
#
| docs/source/notebooks/ODE_with_manual_gradients.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.1
# language: julia
# name: julia-1.5
# ---
# # Arrays
# Let us start with `Array`s. They are very similar to lists in Python, though they can have more than one dimension. An `Array` is defined as follows:
A = [] # empty array
X = [1, 3, -5, 7] # array of integers
# ## Indexing and slicing
# Let's start by eating the frog. Julia uses 1-based indexing...
names = ["Arthur", "Ford", "Zaphod", "Marvin", "Trillian", "Eddie"]
names[0] # this does not work, sorry Pythonista's
names[1] # hell yeah!
names[end] # last element
names[end-1] # second to last element
# Slicing arrays is intuitive,
names[3:6]
# and slicing with assignment too.
names[end-1:end] = ["Slartibartfast","The Whale and the Bowl of Petunias"]
names
# ## Types
# Julia arrays can be of mixed type.
Y = [42, "Universe", []]
# The type of the array changes depending on the elements that make up the array.
typeof(A)
typeof(X)
typeof(Y)
# When the elements of the arrays are mixed, the type is promoted to the closest common ancestor. For `Y` this is `Any`. But an array of an integer and a float becomes an...
B = [1.1, 1]
typeof(B)
eltype(B) # gives the type of the elements
# ... array of floats.
#
# Julia allows the flexibility of having mixed types, though this will hinder performance, as the compiler can no longer optimize for the type. If you process an array of `Any`s, your code will be as slow as Python.
#
# To create an array of a particular type, just use `Type[]`.
Float64[1, 2, 3]
# ## Initialisation
# Arrays can be initialized in all the classic, very Pythonesque ways.
# +
C = [] # empty
zeros(5) # vector of 5 zeros
# -
ones(3,3) # 3×3 matrix of ones, will be discussed later on
fill(0.5, 10) # in case you want to fill an array with a specific value
rand(2) # vector of 2 random floats in [0,1]
randn(2) # same, but normally-distributed random numbers
# Sometimes it is better to provide a specific type for initialization. `Float64` is often the default.
zeros(Int8, 5)
# ## Comprehensions and list-like operations
#
# Comprehensions are a concise and powerful way to construct arrays and are much loved by the Python community.
Y = [1, 2, 3, 4, 5, 6, 8, 9, 8, 6, 5, 4, 3, 2, 1]
t = 0.1
dY = [Y[i-1] - 2*Y[i] + Y[i+1] for i=2:length(Y)-1] # central difference
# General $N$-dimensional arrays can be constructed using the following syntax:
#
# ```
# [ F(x,y,...) for x=rx, y=ry, ... ]
# ```
#
# Note that this is similar to using set notation. For example:
[i * j for i in 1:4, j in 1:5]
# ## Pushing, appending and popping
#
# Arrays behave like a stack. Elements can be added to the back of the array,
push!(names, "Eddie") # add a single element
append!(names, ["Sam", "Gerard"]) # add an array
# "Eddie" was appended as the final element of the array, along with "Sam" and "Gerard". Remember, a "!" is used to indicate an in-place function. `pop!()` is used to return and remove the final element of an array.
pop!(names)
# # Matrices
#
# Let's add a dimension and go to 2D Arrays, matrices. It is all quite straightforward,
Z = [0 1 1; 2 3 5; 8 13 21; 34 55 89]
Z[3,2] # indexing
Z[1,:] # slicing
# It is important to know that arrays and other collections are copied by reference: assigning `Z` to `R` makes both names point to the same data.
R = Z
Z
R
R[1,1] = 42
Z
# `deepcopy()` can be used to make a fully independent copy.
#
# ## Concatenation
# Arrays can be constructed and also concatenated using the following functions,
Z = [0 1 1; 2 3 5; 8 13 21; 34 55 89]
Y = rand(4,3)
cat(Z, Y, dims=2) # concatenation along a specified dimension
cat(Z, Y, dims=1) == vcat(Z,Y) == [Z;Y] # vertical concatenation
cat(Z,Y,dims=2) == hcat(Z,Y) == [Z Y] # horizontal concatenation
# Note that `;` acts as a `vcat` operator, e.g.
[zeros(2, 2) ones(2, 1); ones(1, 3)]
# This simplified syntax can lead to strange behaviour. Explain the following difference.
[1 2-3]
[1 2 -3]
# Sometimes, `vcat` and `hcat` are better used to make the code unambiguous.
#
# ## Vector operations
#
# By default, the `*` operator is used for matrix-matrix multiplication
# +
A = [2 4 3; 3 1 5]
B = [ 3 10; 4 1 ;7 1]
A * B
# -
# This is the Julian way, since functions act on whole objects; element-wise operations are done with "dot" operations. For every function or binary operation like `^` there is a corresponding "dot" operation `.^` to perform element-by-element exponentiation on arrays.
Y = [10 10 10; 20 20 20]
Y.^2
# Under the hood, Julia is looping over the elements of `Y`. So a sequence of dot-operations is fused into a single loop.
Y.^2 .+ cos.(Y)
# Did you notice that dot-operations are also applicable to functions, even user-defined ones? As programmers, we are lazy by definition, and all these dots are a lot of work. The `@.` macro adds them for us.
Y.^2 .+ cos.(Y) == @. Y^2 + cos(Y)
# # Higher dimensional arrays
#
# Matrices can be generalized to multiple dimensions.
X = rand(3, 3, 3)
# # Ranges
#
# The colon operator `:` can be used to construct unit ranges, e.g., from 1 to 20:
ur = 1:20
# Or by increasing in steps:
str = 1:3:20
# Similar to the `range` function in Python, the object that is created is not an array but an iterator (this is actually the term used in Python). Julia has many different types and structs, each of which behaves in a particular way. Types like `UnitRange` only store the beginning and end values (and the step size in the case of `StepRange`), but their functions are overloaded so that they act like arrays.
# +
for i in ur
println(i)
end
str[3]
length(str)
# -
# All values can be obtained using `collect`:
collect(str)
# Such implicit objects can be processed much more efficiently than naive structures. Compare!
@time sum((i for i in 1:100_000_000))
@time sum(1:100_000_000)
# `StepRange` and `UnitRange` also work with floats.
0:0.1:10
# # Other collections
#
# Some of the other collections include tuples, dictionaries, and others.
tupleware = ("tuple", "ware") # tuples
scores = Dict("humans" => 2, "vogons" => 1) # dictionaries
scores["humans"]
# # Exercises
#
# ## Vandermonde matrix
#
# Write a function to generate an $n \times m$ [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix) for a given vector $\alpha=[\alpha_1,\alpha_2,\ldots,\alpha_m]^T$. This matrix is defined as follows
#
# $$
# {\displaystyle V={\begin{bmatrix}1&\alpha _{1}&\alpha _{1}^{2}&\dots &\alpha _{1}^{n-1}\\1&\alpha _{2}&\alpha _{2}^{2}&\dots &\alpha _{2}^{n-1}\\1&\alpha _{3}&\alpha _{3}^{2}&\dots &\alpha _{3}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha _{m}&\alpha _{m}^{2}&\dots &\alpha _{m}^{n-1}\end{bmatrix}},}
# $$
#
# or
#
# $$
# V = [\alpha_i^{j-1}] .
# $$
#
# Write a one-liner function `vandermonde` to generate this matrix. This function takes a vector `α` and `m`, the number of powers to compute.
vandermonde(α, m) = ...
# ## Determinant
#
# Write a function `mydet` to compute the determinant of a square matrix. Remember, for a $2 \times 2$ matrix, the determinant is computed as
#
# $$
# {\displaystyle|A|={\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc.}
# $$
#
#
# For larger matrices, there is a recursive way of computing the determinant based on the minors, i.e. the determinants of the submatrices. See [http://mathworld.wolfram.com/Determinant.html](http://mathworld.wolfram.com/Determinant.html).
#
# Write a function to compute the determinant of a general square matrix.
function mydet(A)
size(A,1) != size(A,2) && throw(DimensionMismatch)
...
end
# ## Ridge regression
#
# Ridge regression can be seen as an extension of ordinary least squares regression,
#
# $$\beta X = y\, ,$$
#
# where a coefficient vector $\beta$ is sought which minimizes the sum of squared residuals between the model and the observations,
#
# $$SSE(\beta) = (y - \beta X)^T (y - \beta X)$$
#
# In some cases it is advisable to add a regularisation term to this objective function,
#
# $$SSE(\beta) = (y - \beta X)^T (y - \beta X) + \lambda \left\lVert \beta \right\rVert^2_2 \, , $$
#
# which is known as ridge regression. The vector $\beta$ that minimises the objective function can be computed analytically:
#
# $$\beta = \left(X^T X + \lambda I \right)^{-1}X^T y$$
#
# Let us look at an example. We found some data on the evolution of human and dolphin intelligence.
# +
using Plots
blue = "#8DC0FF"
red = "#FFAEA6"
t = collect(0:10:3040)
ϵ₁ = randn(length(t))*15 # noise on Dolphin IQ
ϵ₂ = randn(length(t))*20 # noise on Human IQ
Y₁ = dolphinsIQ = t/12 + ϵ₁
Y₂ = humanIQ = t/20 + ϵ₂
scatter(t,Y₁; label="Dolphins", color=blue,
ylabel="IQ (-)", xlabel ="Time (year BC)", legend=:topleft)
scatter!(t,Y₂; label="Humans", color=red)
# -
# > "For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much - the wheel, New York, wars and so on - whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man - for precisely the same reasons."
# >
# > *The Hitchhiker's Guide to the Galaxy*
#
# **Assignment:** Plot the trend of human vs. dolphin intelligence by implementing the analytical solution for ridge regression. For this, you need the uniform scaling operator `I`, found in the `LinearAlgebra` package. Use $\lambda=0.01$.
# +
using LinearAlgebra
β₁ = #...
β₂ = #...
Y₁ = β₁*t
Y₂ = β₂*t
# -
# # References
# - [Julia Documentation](https://juliadocs.github.io/Julia-Cheat-Sheet/)
# - [Introduction to Julia UCI data science initiative](http://ucidatascienceinitiative.github.io/IntroToJulia/)
# - [Month of Julia](https://github.com/DataWookie/MonthOfJulia)
# - [Why I love Julia, Next Journal](https://nextjournal.com/kolia/why-i-love-julia)
| chapters/00.Introduction/.ipynb_checkpoints/02-collections-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import rospy
import numpy as np
from cookiecutter.main import cookiecutter
import collections
from pathlib import Path
import json
import shutil
# +
def get_param(name, default):
try:
value = rospy.get_param(name, default)
except ConnectionRefusedError:
value = default
print('param "{}" = "{}"'.format(name, value))
return value
SEED = int(get_param('~seed', '178'))
WORLD_NAME = get_param('~world_name', 'tomato_field')
MODEL_NAME_PREFIX = get_param('~model_name_prefix', 'tomato')
OUT_PATH = Path(get_param('~out_path', Path.cwd() / '../generated/test01')).resolve()
MODEL_TEMPLATE = Path(get_param('~model_template', Path.cwd() / '../templates/tomato_model')).resolve()
WORLD_TEMPLATE = Path(get_param('~world_template', Path.cwd() / '../templates/tomato_world')).resolve()
ROW_COUNT = int(get_param('~row_count', '3'))
ROW_LENGTH = int(get_param('~row_length', '6'))
ROW_DIST = float(get_param('~row_dist', '2.0'))
CROP_DIST = float(get_param('~crop_dist', '0.9'))
shutil.rmtree(OUT_PATH, ignore_errors=True)
np.random.seed(SEED)
# -
# helper class to build the markers.json
class Markers:
markers = []
last_id = 0
@staticmethod
def next_id():
Markers.last_id += 1
return Markers.last_id
@staticmethod
def reset():
Markers.markers = []
@staticmethod
def add_plant(x, y, z):
id = Markers.next_id()
Markers.markers.append({
'marker_type': 'PLANT',
'id': id,
'translation': [x, y, z]
})
return id
@staticmethod
def add_fruit(x, y, z, plant_id):
id = Markers.next_id()
Markers.markers.append({
'marker_type': 'FRUIT',
'id': id,
'translation': [x, y, z],
'plant_id': plant_id
})
return id
@staticmethod
def dumps():
return json.dumps(Markers.markers, indent=4)
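# A quick standalone sanity check of the ID/JSON scheme used above (the logic is re-implemented minimally here so the snippet runs on its own, without the class):

```python
import json

markers = []
last_id = 0

def next_id():
    global last_id
    last_id += 1
    return last_id

# One plant with one fruit attached to it, mirroring add_plant/add_fruit above
plant_id = next_id()
markers.append({'marker_type': 'PLANT', 'id': plant_id, 'translation': [1.0, 2.0, 0.0]})
fruit_id = next_id()
markers.append({'marker_type': 'FRUIT', 'id': fruit_id,
                'translation': [1.1, 2.0, 0.4], 'plant_id': plant_id})

doc = json.dumps(markers, indent=4)
print(doc)
```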
# + tags=["outputPrepend"]
models = {'list': []}
Markers.reset()
for x in range(ROW_COUNT):
for y in range(ROW_LENGTH):
model_name = 'tomato_{}'.format(x * ROW_LENGTH + y)
cookiecutter(str(MODEL_TEMPLATE),
output_dir=str(OUT_PATH),
overwrite_if_exists=True,
no_input=True,
extra_context={'world_name': WORLD_NAME, 'model_name': model_name})
x_pos, y_pos, z_pos = x * ROW_DIST, y * CROP_DIST, 0
models['list'].append({
'model': model_name,
'name': model_name,
'pose': '{} {} 0 0 0 0'.format(x_pos, y_pos)
})
x_pos += np.random.uniform(-0.1, 0.1)
y_pos += np.random.uniform(-0.1, 0.1)
seed = np.random.randint(10000)
dir = (OUT_PATH / WORLD_NAME / model_name).resolve()
dir_blender = (Path.cwd() / '../blender').resolve()
blend = str(dir_blender / 'tomato_gen.blend')
script = str(dir_blender / 'tomato_gen.py')
# ! blender $blend --background --python $script -- --model_dir $dir --seed $seed
plant_id = Markers.add_plant(x_pos, y_pos, z_pos)
with open(dir / 'markers.json') as markers_file:
plant_markers = json.load(markers_file)
for marker in plant_markers:
if marker['marker_type'] == 'FRUIT':
Markers.add_fruit(
marker['translation'][0] + x_pos,
marker['translation'][1] + y_pos,
marker['translation'][2] + z_pos,
plant_id
)
cookiecutter(str(WORLD_TEMPLATE),
output_dir=str(OUT_PATH),
overwrite_if_exists=True,
no_input=True,
extra_context={'world_name': WORLD_NAME, 'models': models})
with open(OUT_PATH / WORLD_NAME / 'markers.json', 'w') as outfile:
json.dump(Markers.markers, outfile, indent=4, sort_keys=True)
| fields_ignition/scripts/tomato_gen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (myenv)
# language: python
# name: myenv
# ---
# NOT WORKING. This file is just to show our experiments. Pick one of the other 2.
# wvpy2: not sure about the environment.
# +
# Check if the imports work.
import pandas as pd
import numpy as np
import string
import re
import sys
import os
import jellyfish
from sklearn.model_selection import train_test_split
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
from tqdm import tqdm_pandas, tqdm_notebook as tqdm
from tqdm import tqdm as tqorig
tqorig.pandas(tqdm)
# -
PATH_TO_GLOVE = './glove.840B.300d.txt'
PATH_TO_FAISS = './faiss'  # assumed path to a local faiss build; adjust for your setup
try:
    sys.path.append(os.path.expanduser(PATH_TO_FAISS))
    import faiss  # needs the wvpy2 env; won't work elsewhere
    FAISS_AVAILABLE = True
except ImportError:
    FAISS_AVAILABLE = False
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
def get_all_glove(glove_path):
glove_dict = {}
with open(glove_path) as f:
for line in tqdm(f):
word, vec = line.split(' ', 1)
glove_dict[word] = np.array(list(map(float, vec.split())), dtype=np.float32)
return glove_dict
# +
glove = get_all_glove(PATH_TO_GLOVE)
# +
# Convert the dictionary to an embedding matrix, a dictionary mapping from word to id, and a list which will map from id to word
emb = np.zeros((len(glove), 300), dtype=np.float32)
w2id = {}
id2w=[]
for cc, word in enumerate(glove.keys()):
emb[cc]=glove[word]
w2id[word]=cc
id2w.append(word)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
# -
if FAISS_AVAILABLE:
d = emb.shape[1]
# index = faiss.IndexFlatL2(d)
index = faiss.IndexFlatIP(d)
index.add(emb)
print(index.ntotal, 'words now in index')
# +
def getNeighbours(word_list, transform_vector=0, c=1.0, neighbours=10, metric='cosine', use_faiss=FAISS_AVAILABLE):
word_embeds = np.vstack([emb[w2id[x]] for x in word_list]) # create a numpy array of word embeddings
if use_faiss:
distances, indices = index.search(
(word_embeds - transform_vector*c).astype(np.float32), neighbours)
else:
dist_matrix = cdist((word_embeds - transform_vector*c).astype(np.float32), emb, metric=metric)
indices = np.argsort(dist_matrix)[:, :neighbours]
return indices
def toWords(index_list, n=10):
res = []
for ind in index_list:
if n==1:
res.append(id2w[ind[0]])
else:
res.append([id2w[x] for x in ind[:n]])
return res
# -
# %time toWords(getNeighbours(['reliable', 'relieable']), n=10)
print(toWords(getNeighbours(['woman', 'girl', 'boy'], emb[w2id['man']] - emb[w2id['king']], c=1), n=5))
print(toWords(getNeighbours(['woman', 'girl', 'boy'], emb[w2id['man']] - .7*emb[w2id['king']], c=1), n=5))
print(toWords(getNeighbours(['foriegn'], emb[w2id['relieable']] - emb[w2id['reliable']], c=1), n=5))
print(toWords(getNeighbours(['made'], emb[w2id['took']] - emb[w2id['take']], c=1.5), n=5))
print(toWords(getNeighbours(['dog'], emb[w2id['man']] - emb[w2id['boy']], c=1.5), n=5))
print(toWords(getNeighbours(['amd'], 0, c=-1.5), n=10))
mistakes = pd.read_csv('Oxford_common_spellings.csv')
mistakes.head()
# +
# This is just to help me reproduce the same chart. Comment out this line and uncomment the next line if you want to look at a new random selection word pairs
samp = mistakes.loc[[76, 31, 90, 14, 9, 6, 36, 91, 84]]
# samp = mistakes.sample(9)
fig, ax = plt.subplots(3, 3, sharey=True)
fig.set_size_inches(28, 10, forward=True)
i = 0
j = 0
for row in samp.itertuples():
ax[i][j].set_title(row.incorrect + '-' + row.correct)
ax[i][j].xaxis.set_visible(False)
# ax[i][j].yaxis.set_visible(False)
vec = emb[w2id[row.incorrect]]-emb[w2id[row.correct]]
ax[i][j].plot(vec)
# ax[i][j].bar(x=list(range(300)), height=vec, color=cmap(np.abs(vec)), width=1)
i += 1
if i == 3:
i = 0
j += 1
# +
train, test = train_test_split(mistakes, train_size=0.85, random_state=42)
spell_transform = np.zeros((300,))
for row in train.itertuples():
spell_transform += emb[w2id[row.incorrect]] - emb[w2id[row.correct]]
spell_transform /= len(train)
plt.plot(spell_transform)
print(len(test))
# -
# See how the transformation performs on the test set
test.loc[:, 'fixed'] = toWords(getNeighbours(test.incorrect, transform_vector=spell_transform, c=1), n=1)
print('{} correct out of {}'.format((test.fixed==test.correct).sum(), len(test)))
print('Accuracy on test set: {:.2f}%'.format(1.0*(test.fixed==test.correct).sum()/len(test)*100.0))
# See how the transformation performs on the test set
test.loc[:, 'fixed'] = toWords(getNeighbours(test.incorrect, transform_vector=spell_transform, c=1.5), n=1)
print('{} correct out of {}'.format((test.fixed==test.correct).sum(), len(test)))
print('Accuracy on test set: {:.2f}%'.format(1.0*(test.fixed==test.correct).sum()/len(test)*100))
print(test[test.fixed!=test.correct])
# plt.plot(emb[w2id['Farenheit']]-emb[w2id['Fahrenheit']])
plt.plot(emb[w2id['chauffer']]-emb[w2id['chauffeur']])
plt.plot(spell_transform)
mistakes['lev_score'] = mistakes.apply(lambda x: jellyfish.levenshtein_distance(x.correct, x.incorrect) / max(len(x.correct), len(x.incorrect)), axis=1)
mistakes['lev_distance'] = mistakes.apply(lambda x: jellyfish.levenshtein_distance(x.correct, x.incorrect), axis=1)
mistakes.sort_values('lev_distance').tail()
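For reference, `jellyfish.levenshtein_distance` computes the classic edit distance. A minimal pure-Python dynamic-programming sketch of the same metric (this is an illustration, not the jellyfish implementation):

```python
def levenshtein(s, t):
    """Minimum number of insertions, deletions, and substitutions turning s into t."""
    # prev[j] holds the distance between s[:i-1] and t[:j] as we sweep over i
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,                # delete from s
                            curr[j - 1] + 1,            # insert into s
                            prev[j - 1] + (cs != ct)))  # substitute (free if chars match)
        prev = curr
    return prev[-1]

levenshtein("relieable", "reliable")  # the one-letter misspelling has distance 1
```

The `lev_score` column above divides this distance by the longer word's length, so it is comparable across words of different lengths.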
| wordvec/WordVec_SpellCheck_py2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:deeprl]
# language: python
# name: conda-env-deeprl-py
# ---
# ## PyTorch Implementation of Curiosity-Driven Exploration by Self-Supervised Prediction
# ### Trained to play Super Mario Bros. with and without game-generated explicit rewards.
# #### Successfully learns to progress through game with just intrinsic (curiosity) rewards.
# - Paper: "Curiosity-driven Exploration by Self-supervised Prediction" Pathak et al 2017
#
# This implementation follows the paper almost exactly; however, for simplicity we use a Deep Q-network as the agent rather than A3C. We also omit the LSTM layer in the Q-network, which is unnecessary for Super Mario Bros.
#
# You can train this model on a modern laptop for a few thousand iterations (which will take 30+ minutes) and already see interesting results (i.e. the agent will be noticeably better than the random agent, making relatively consistent forward progress and jumping over/on enemies and over obstacles). To match the reference paper's results you will need to train much longer using a GPU.
#
# If you like this, please check out our book, [Deep Reinforcement Learning in Action](https://www.manning.com/books/deep-reinforcement-learning-in-action)
# +
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage.transform import resize
import numpy as np
from random import shuffle
from collections import deque
from IPython import display
import gym
from nes_py.wrappers import BinarySpaceToDiscreteSpaceEnv
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT, COMPLEX_MOVEMENT
env = gym_super_mario_bros.make('SuperMarioBros-v0')
env = BinarySpaceToDiscreteSpaceEnv(env, COMPLEX_MOVEMENT)
# %matplotlib inline
# -
# ### Test the environment by playing a random agent
#Random agent. Just for testing
done = True
for step in range(5000):
if done:
state = env.reset()
state, reward, done, info = env.step(env.action_space.sample())
env.render()
#env.close()
def downscale_obs(obs, new_size=(42,42), to_gray=True):
"""
downscale_obs: rescales RGB image to lower dimensions with option to change to grayscale
obs: Numpy array or PyTorch Tensor of dimensions Ht x Wt x 3 (channels)
    to_gray: if True, collapse the channel dimension by taking the max (for greatest contrast)
"""
if to_gray:
return resize(obs, new_size, anti_aliasing=True).max(axis=2)
else:
return resize(obs, new_size, anti_aliasing=True)
# # Define the Q-network module
#
# - Input: x (Tensor dims: Batch x (3) Channels x (42) Ht x (42) Wt)
# - Output: Batch x 12 (Q values per action)
class Qnetwork(nn.Module):
def __init__(self):
super(Qnetwork, self).__init__()
#in_channels, out_channels, kernel_size, stride=1, padding=0
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=(3,3), stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.linear1 = nn.Linear(288,100)
self.linear2 = nn.Linear(100,12)
def forward(self,x):
x = F.normalize(x)
y = F.elu(self.conv1(x))
y = F.elu(self.conv2(y))
y = F.elu(self.conv3(y))
y = F.elu(self.conv4(y))
y = y.flatten(start_dim=2)
y = y.view(y.shape[0], -1, 32)
y = y.flatten(start_dim=1)
y = F.elu(self.linear1(y))
y = self.linear2(y) #size N, 12
return y
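The `nn.Linear(288, 100)` input size follows from the conv stack: each 3x3, stride-2, padding-1 layer roughly halves the 42x42 spatial input, ending at 32 channels of 3x3. A quick sanity check of that arithmetic using the standard conv output-size formula, assuming the layer settings above:

```python
def conv_out(n, kernel=3, stride=2, padding=1):
    # Standard formula for a conv layer's output spatial size
    return (n + 2 * padding - kernel) // stride + 1

size = 42
for _ in range(4):          # four conv layers
    size = conv_out(size)   # 42 -> 21 -> 11 -> 6 -> 3
flat_features = 32 * size * size
print(flat_features)        # 288, matching nn.Linear(288, 100)
```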
# # Define the Phi (encoder) network
# - Input: A state of dimensions Batch x (3) Channels x 42 (Ht) x 42 (Wt)
# - Output: Encoded (lower-dimensional) state of dimension Batch x 288
class Phi(nn.Module): # (raw state) -> low dim state
def __init__(self):
super(Phi, self).__init__()
#in_channels, out_channels, kernel_size, stride=1, padding=0
self.conv1 = nn.Conv2d(3, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=2, padding=1)
def forward(self,x):
x = F.normalize(x)
y = F.elu(self.conv1(x))
y = F.elu(self.conv2(y))
y = F.elu(self.conv3(y))
y = F.elu(self.conv4(y)) #size [1, 32, 3, 3] batch, channels, 3 x 3
y = y.flatten(start_dim=1) #size N, 288
return y
# # Define Inverse model: $g(\phi(S_t), \phi(S_{t+1}))$
# - Input 1: Encoded state1 $\phi(\text{State}_t)$ of dimension Batch x 288
# - Input 2: Encoded state2 $\phi(\text{State}_{t+1})$
# - Output: Predicted action that was taken to get from $S_t$ to $S_{t+1}$. That is, a softmax over actions, dimensions Batch x 12 (for 12 discrete actions)
class Gnet(nn.Module): #Inverse model: (phi_state1, phi_state2) -> action
def __init__(self):
super(Gnet, self).__init__()
#in_channels, out_channels, kernel_size, stride=1, padding=0
self.linear1 = nn.Linear(576,256)
self.linear2 = nn.Linear(256,12)
def forward(self, state1,state2):
x = torch.cat( (state1, state2) ,dim=1)
y = F.relu(self.linear1(x))
y = self.linear2(y)
y = F.softmax(y,dim=1)
return y
# # Define Forward Model: $f(\phi(s_t), a_t)$
# - Input 1: Encoded state $\phi(s_t)$ of dimension Batch x 288
# - Input 2: Action as an integer (0-11 for 12 discrete actions), one-hot encoded internally to dimension Batch x 12
# - Output: Predicted encoded next state $\phi(s_{t+1})$
class Fnet(nn.Module):
def __init__(self):
super(Fnet, self).__init__()
#in_channels, out_channels, kernel_size, stride=1, padding=0
self.linear1 = nn.Linear(300,256)
self.linear2 = nn.Linear(256,288)
def forward(self,state,action):
action_ = torch.zeros(action.shape[0],12)
indices = torch.stack( (torch.arange(action.shape[0]), action.squeeze()), dim=0)
indices = indices.tolist()
action_[indices] = 1.
x = torch.cat( (state,action_) ,dim=1)
y = F.relu(self.linear1(x))
y = self.linear2(y)
return y
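The one-hot construction in `Fnet.forward` uses paired (row, column) index lists to set exactly one entry per batch row. The same indexing trick expressed in NumPy (a toy sketch, not the torch code itself):

```python
import numpy as np

actions = np.array([2, 0, 5])            # one integer action per batch row
one_hot = np.zeros((len(actions), 12))   # Batch x 12, as in Fnet
one_hot[np.arange(len(actions)), actions] = 1.0  # set one entry per row

print(one_hot[0])  # row 0 has its single 1 at column 2
```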
def policy(qvalues, eps=None): #Epsilon greedy
"""
    policy(qvalues, eps=None) takes Q-values and produces an integer representing an action.
    The function takes a vector of dimension (12,) representing Q-values for each of the 12 discrete actions
    and returns an integer. If `eps` is supplied, it follows an epsilon-greedy policy: with probability `eps`
    a random action is taken, otherwise the greedy action. If `eps` is not supplied, a softmax policy is used.
"""
if eps is not None:
if torch.rand(1) < eps:
            return torch.randint(low=0, high=12, size=(1,)) #sample uniformly over all 12 discrete actions
else:
return torch.argmax(qvalues)
else:
        return torch.multinomial(F.softmax(F.normalize(qvalues), dim=1), num_samples=1)
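The epsilon-greedy branch can be illustrated without torch. A NumPy sketch of the same decision rule, with toy Q-values and a hypothetical helper name:

```python
import numpy as np

def eps_greedy(qvalues, eps, rng):
    # With probability eps take a uniformly random action, otherwise the greedy one
    if rng.random() < eps:
        return int(rng.integers(len(qvalues)))
    return int(np.argmax(qvalues))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.9, 0.3])
actions = [eps_greedy(q, eps=0.1, rng=rng) for _ in range(1000)]
print(actions.count(1) / len(actions))  # mostly the greedy action (index 1)
```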
# ## Experience replay memory
#
# The experience replay memory stores a list of tuples where each tuple is a single experience from an initial state 1, an action taken, the resulting state 2 and reward: $(S_t, a_t, R_{t+1}, S_{t+1})$
#
#
# The `add_memory(state1, action, reward, state2)` method adds a memory to the memory list. The memory list has a fixed maximum length; once it is full, new memories randomly overwrite old ones.
#
# The `get_batch()` method returns a random subset from the memory list for training.
class ExperienceReplay:
def __init__(self, N=500, batch_size=100):
self.N = N #total memory size
self.batch_size = batch_size
self.memory = [] #list of tuples of tensors (S_t, a_t, R_{t+1}, S_{t+1})
self.counter = 0
#S_t should be size B x Channel x Ht x Wt. R_t : B x 1
def add_memory(self, state1, action, reward, state2):
self.counter +=1
if self.counter % 500 == 0:
self.shuffle_memory()
if len(self.memory) < self.N:
self.memory.append( (state1, action, reward, state2) )
else:
            rand_index = np.random.randint(0, self.N) #np.random.randint excludes the high value, so this covers every index
self.memory[rand_index] = (state1, action, reward, state2) #replace random memory
def shuffle_memory(self):
shuffle(self.memory)
def get_batch(self):
if len(self.memory) < self.batch_size:
batch_size = len(self.memory)
else:
batch_size = self.batch_size
if len(self.memory) < 1:
print("Error: No data in memory.")
return None
ind = np.random.choice(np.arange(len(self.memory)),batch_size,replace=False)
batch = [self.memory[i] for i in ind] #batch is a list of tuples
state1_batch = torch.stack([x[0].squeeze(dim=0) for x in batch],dim=0)
action_batch = torch.Tensor([x[1] for x in batch]).long()
reward_batch = torch.Tensor([x[2] for x in batch])
state2_batch = torch.stack([x[3].squeeze(dim=0) for x in batch],dim=0)
return state1_batch, action_batch, reward_batch, state2_batch
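A pure-Python sketch of the buffer's overwrite behavior, using toy integers in place of tensor tuples and a hypothetical class name:

```python
import random

class TinyReplay:
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []

    def add(self, item):
        if len(self.memory) < self.capacity:
            self.memory.append(item)
        else:
            # Buffer full: overwrite a random old entry, as ExperienceReplay does
            self.memory[random.randrange(self.capacity)] = item

buf = TinyReplay(capacity=3)
for i in range(10):
    buf.add(i)
print(len(buf.memory))  # never exceeds 3
```

Random overwrite keeps the buffer a roughly uniform sample of past experience, rather than a strict sliding window of only the newest transitions.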
# ### Define basic hyperparameters. See reference paper for details.
params = {
'batch_size':150,
'beta':0.2,
'lambda':0.1,
'eta': 1.0,
'gamma':0.2,
'max_episode_len':100,
'min_progress':15,
'action_repeats':6,
'frames_per_state':3
}
# ### Instantiate the 4 modules and the experience replay buffer. Setup the optimizer.
replay = ExperienceReplay(N=1000, batch_size=params['batch_size'])
Qmodel = Qnetwork()
encoder = Phi()
forward_model = Fnet()
inverse_model = Gnet()
forward_loss = nn.MSELoss(reduction='none')
inverse_loss = nn.CrossEntropyLoss(reduction='none')
qloss = nn.MSELoss()
# We can add the model parameters from each model to a list and pass that to a single optimizer
all_model_params = list(Qmodel.parameters()) + list(encoder.parameters())
all_model_params += list(forward_model.parameters()) + list(inverse_model.parameters())
opt = optim.Adam(lr=0.001, params=all_model_params)
def ICM(state1, action, state2, forward_scale=1., inverse_scale=1e4): #action is an integer [0:11]
"""
Intrinsic Curiosity Module (ICM): Calculates prediction error for forward and inverse dynamics
The ICM takes a state1, the action that was taken, and the resulting state2 as inputs
(from experience replay memory) and uses the forward and inverse models to calculate the prediction error
    and train the encoder to pay attention only to details in the environment that are controllable (i.e. it should
    learn to ignore useless stochasticity in the environment and not encode it).
"""
state1_hat = encoder(state1)
state2_hat = encoder(state2)
#Forward model prediction error
state2_hat_pred = forward_model(state1_hat.detach(), action.detach())
forward_pred_err = forward_scale * forward_loss(state2_hat_pred, \
state2_hat.detach()).sum(dim=1).unsqueeze(dim=1)
#Inverse model prediction error
pred_action = inverse_model(state1_hat, state2_hat) #returns softmax over actions
inverse_pred_err = inverse_scale * inverse_loss(pred_action, \
action.detach().flatten()).unsqueeze(dim=1)
return forward_pred_err, inverse_pred_err
def loss_fn(q_loss, inverse_loss, forward_loss):
"""
Overall loss function to optimize for all 4 modules
Loss function based on calculation in paper
"""
loss_ = (1 - params['beta']) * inverse_loss
loss_ += params['beta'] * forward_loss
loss_ = loss_.sum() / loss_.flatten().shape[0]
loss = loss_ + params['lambda'] * q_loss
return loss
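The `loss_fn` above mirrors the combined objective of Pathak et al. 2017, with the paper's policy-gradient term replaced by the Q-learning loss used in this notebook:

```latex
\min_{\theta}\; \lambda\, L_Q \;+\; (1-\beta)\, L_I \;+\; \beta\, L_F
```

Here $L_Q$ is the Q-learning loss, $L_I$ the inverse-model loss, $L_F$ the forward-model loss, and $\lambda$ and $\beta$ correspond to `params['lambda']` and `params['beta']`.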
# +
def prepare_state(state):
"""
First downscale state, convert to grayscale, convert to torch tensor and add batch dimension
"""
return torch.from_numpy(downscale_obs(state, to_gray=True)).float().unsqueeze(dim=0)
def prepare_multi_state(state1, state2):
"""
    Prepare a 3-channel state (for use in inference, not training).
    The Q-model and encoder/Phi model expect the input state to have 3 channels. Following the reference paper,
    these models are fed 3 consecutive state frames to give the model access to motion information
    (i.e. velocity information rather than just positional information).
"""
#prev is 1x3x42x42
state1 = state1.clone()
tmp = torch.from_numpy(downscale_obs(state2, to_gray=True)).float()
    #shift data along tensor to accommodate newest observation (we could have used deque w/ maxlen 3)
state1[0][0] = state1[0][1]
state1[0][1] = state1[0][2]
state1[0][2] = tmp #replace last frame
return state1
def prepare_initial_state(state,N=3):
"""
Prepares the initial state which is just a tensor of 1 (Batch) x 3 x 42 x 42
The channel dimension is just a copy of the input state 3 times
"""
#state should be 42x42 array
state_ = torch.from_numpy(downscale_obs(state, to_gray=True)).float()
tmp = state_.repeat((N,1,1)) #now 3x42x42
return tmp.unsqueeze(dim=0) #now 1x3x42x42
# -
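The frame bookkeeping in `prepare_multi_state` is a fixed-length rolling buffer: drop the oldest frame, shift the rest, append the newest. A NumPy sketch of the same shift, using toy 2x2 "frames" instead of 42x42 observations:

```python
import numpy as np

state = np.stack([np.full((2, 2), i) for i in (0, 1, 2)])  # 3 stacked frames
new_frame = np.full((2, 2), 9)

# Shift the two newest frames down and append the incoming one,
# mirroring the assignments in prepare_multi_state
state[0] = state[1]
state[1] = state[2]
state[2] = new_frame

print(state[:, 0, 0])  # frame "ages" are now [1 2 9]
```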
def reset_env():
"""
Reset the environment and return a new initial state
"""
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
return state1
# ### Part of main training loop.
# This code extracts a minibatch from the experience replay memory and runs the 4 modules forward and calculates the prediction errors for each, returning them as output.
#
# If `use_extrinsic` is set to `True`, the reward will include the game-generated extrinsic reward. If set to `False`, the agent will learn only from the intrinsic (curiosity) prediction-error reward.
def minibatch_train(use_extrinsic=True):
state1_batch, action_batch, reward_batch, state2_batch = replay.get_batch()
action_batch = action_batch.view(action_batch.shape[0],1)
reward_batch = reward_batch.view(reward_batch.shape[0],1)
#replay.get_batch returns tuple (state1, action, reward, state2) where each tensor has batch dimension
forward_pred_err, inverse_pred_err = ICM(state1_batch, action_batch, state2_batch) #internal curiosity module
i_reward = (1. / params['eta']) * forward_pred_err
reward = i_reward.detach()
if use_extrinsic:
reward += reward_batch
qvals = Qmodel(state2_batch)
    reward += params['gamma'] * torch.max(qvals, dim=1, keepdim=True)[0] #bootstrap each sample with its own max Q-value rather than the batch-wide max
reward_pred = Qmodel(state1_batch)
reward_target = reward_pred.clone()
indices = torch.stack( (torch.arange(action_batch.shape[0]), action_batch.squeeze()), dim=0)
indices = indices.tolist()
reward_target[indices] = reward.squeeze()
q_loss = 1e5 * qloss(F.normalize(reward_pred), F.normalize(reward_target.detach()))
return forward_pred_err, inverse_pred_err, q_loss
# # Main Training Loop
#
# Training details to note:
# - Training starts with softmax action policy, and then after 1000 steps (or whatever you set it to) it will switch to
# epsilon-greedy policy. Empirically, the softmax policy seems to help with exploration in the beginning, and epsilon-greedy then increases exploitation. You could instead start with a softmax policy and slowly decrease its temperature during training to get a continuous/smooth version of this approach.
#
# - We use a deque (a list-like data structure where we can specify a maximum length, append items and old items will
# automatically get pushed out once the max len is hit) to store the last 3 frames and then package it into a tensor for use as the state.
#
# - We keep track of the agent's `last_x_pos`. Every `max_episode_len` steps we check forward progress; if (x_now - last_x_pos) is less than `min_progress`, we assume the agent is stuck and reset the environment.
#
# - Following the reference paper, we use deterministic sticky actions: if the policy says take action 0, we repeat that action 6 times (only during training; during inference we take each action once). This helps because each action moves the agent only a small amount, so compounding actions during training makes it faster to learn what each action does.
#
# - If you train with intrinsic reward only, this implementation is not very stable. Sometimes the agent just repeatedly does something stupid and will not learn, and other times it will do very well.
epochs = 2500
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
eps=0.15
losses = []
ep_lengths = []
episode_length = 0
switch_to_eps_greedy = 1000
state_deque = deque(maxlen=params['frames_per_state'])
e_reward = 0.
last_x_pos = env.env.env._x_position
for i in range(epochs):
opt.zero_grad()
episode_length += 1
q_val_pred = Qmodel(state1)
if i > switch_to_eps_greedy:
action = int(policy(q_val_pred,eps))
else:
action = int(policy(q_val_pred))
for j in range(params['action_repeats']):
state2, e_reward_, done, info = env.step(action)
if done:
state1 = reset_env()
break
e_reward += e_reward_
state_deque.append(prepare_state(state2))
state2 = torch.stack(list(state_deque),dim=1)
replay.add_memory(state1, action, e_reward, state2)
e_reward = 0
if i % params['max_episode_len'] == 0 and i != 0:
if (info['x_pos'] - last_x_pos) < params['min_progress']:
done = True
else:
last_x_pos = info['x_pos']
if done:
print("Episode over.")
ep_lengths.append(info['x_pos'])
state1 = reset_env()
last_x_pos = env.env.env._x_position
episode_length = 0
else:
state1 = state2
#Enter mini-batch training
if len(replay.memory) < params['batch_size']:
continue
forward_pred_err, inverse_pred_err, q_loss = minibatch_train(use_extrinsic=False)
    loss = loss_fn(q_loss, inverse_pred_err, forward_pred_err) #loss_fn expects (q_loss, inverse_loss, forward_loss)
loss_list = (q_loss.mean(), forward_pred_err.flatten().mean(), inverse_pred_err.flatten().mean(), episode_length)
if i % 250 == 0:
print("Epoch {}, Loss: {}".format(i,loss))
print("Forward loss: {} \n Inverse loss: {} \n Qloss: {}".format(\
forward_pred_err.mean(),inverse_pred_err.mean(),q_loss.mean()))
print(info)
losses.append(loss_list)
loss.backward()
opt.step()
# ## Plot losses for each module
#
# Loss plots will look much cleaner if you also train with extrinsic rewards.
# Forward loss will generally decrease steadily. The Q-loss will also decrease, but more erratically. The inverse model loss decreases rapidly at first and then plateaus. Note that the encoder/Phi model is trained via the inverse model (both are trained together); it does not have its own loss.
#
# Note! These are log-transformed plots.
losses_ = np.array(losses)
ep_lengths_ = np.array(ep_lengths)
plt.figure(figsize=(8,6))
plt.plot(np.log(losses_[:,0]),label='Q loss')
plt.plot(np.log(losses_[:,1]),label='Forward loss')
plt.plot(np.log(losses_[:,2]),label='Inverse loss')
#plt.plot(ep_lengths_, label='Episode Length')
plt.legend()
plt.show()
# ## Test trained model
#Test model
eps=0.15
done = True
state_deque = deque(maxlen=params['frames_per_state'])
for step in range(5000):
if done:
env.reset()
state1 = prepare_initial_state(env.render('rgb_array'))
q_val_pred = Qmodel(state1)
action = int(policy(q_val_pred,eps))
state2, reward, done, info = env.step(action)
state2 = prepare_multi_state(state1,state2)
state1=state2
env.render()
#env.close()
# # Miscellaneous / Unused
def softmax(q,tau=1.4): #q is vector
q = F.normalize(q)
return torch.exp(q/tau) / torch.sum(torch.exp(q/tau))
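The `tau` parameter controls how flat the softmax distribution is: a higher temperature spreads probability mass more evenly (more exploration), a lower one concentrates it on the best action, which is what the annealing idea in the training notes exploits. A NumPy sketch with toy Q-values:

```python
import numpy as np

def softmax_tau(q, tau):
    # Temperature-scaled softmax: divide logits by tau before exponentiating
    z = np.exp(q / tau)
    return z / z.sum()

q = np.array([1.0, 2.0, 3.0])
sharp = softmax_tau(q, tau=0.5)   # low temperature: nearly greedy
flat = softmax_tau(q, tau=10.0)   # high temperature: nearly uniform

print(sharp.max(), flat.max())    # sharp.max() is much closer to 1
```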
def test_encoder(from_replay=False):
"""
    Tests the encoder's ability to disentangle similar states.
    If the encoder is being properly trained, it should learn to encode similar states such that their
    Euclidean distance is relatively large, so that the forward network and inverse network can make
    better predictions. You'll notice that if you run the function before or early during training
    (make sure from_replay is False, since the replay will be empty before training) the Euclidean distance
    between two states will be small, but during training the encoder will learn to disentangle these and the
    distance will increase.
`from_replay=True` will test 2 consecutive states from the replay memory otherwise will just reset environment
and use initial two states after taking action
"""
if from_replay:
assert len(replay.memory) > 0
s1, a, r, s2 = replay.memory[np.random.randint(len(replay.memory))]
else:
env.reset()
s1 = prepare_initial_state(env.render('rgb_array'))
env.reset()
env.step(3)
s2 = prepare_multi_state(s1, env.render('rgb_array'))
return nn.MSELoss(reduction='mean')(encoder(s1),encoder(s2))
test_encoder(False)
# ### Code to save or load model parameters after training
#Save model parameters
torch.save(Qmodel.state_dict(),'Qmodel_')
torch.save(encoder.state_dict(),'encoder_')
torch.save(forward_model.state_dict(),'Fnet_')
torch.save(inverse_model.state_dict(),'Gnet_')
#Load model parameters from file
Qmodel.load_state_dict(torch.load('Qmodel_'))
encoder.load_state_dict(torch.load('encoder_'))
forward_model.load_state_dict(torch.load('Fnet_'))
inverse_model.load_state_dict(torch.load('Gnet_'))
# # References
# - "Curiosity-driven Exploration by Self-supervised Prediction" Pathak et al 2017
| Chapter 8/Curiosity-Driven Exploration Super Mario.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Gender Prediction with Logistic Regression in Apache Spark
#
# Authors: <NAME>, <NAME>, <NAME>
#
# This is an implementation of a logistic regression (LR) model in Apache Spark to classify gender in our IMDb data set. The primary dependencies are the skimage library's image filtering and feature extraction functions.
# # Start up Spark Cluster
# +
import os
import atexit
import sys
import pyspark
from pyspark.context import SparkContext
from pyspark.sql import SQLContext
import findspark
from sparkhpc import sparkjob
#Exit handler to clean up the Spark cluster if the script exits or crashes
def exitHandler(sj,sc):
try:
print('Trapped Exit cleaning up Spark Context')
sc.stop()
except:
pass
try:
print('Trapped Exit cleaning up Spark Job')
sj.stop()
except:
pass
findspark.init()
#Parameters for the Spark cluster
nodes=5
tasks_per_node=8
memory_per_task=1024 #1 gig per process, adjust accordingly
# Please estimate walltime carefully to keep unused Spark clusters from sitting
# idle so that others may use the resources.
walltime="2:00" #2 hour
os.environ['SBATCH_PARTITION']='cpu2019' #Set the appropriate ARC partition
sj = sparkjob.sparkjob(
ncores=nodes*tasks_per_node,
cores_per_executor=tasks_per_node,
memory_per_core=memory_per_task,
walltime=walltime
)
sj.wait_to_start()
sc = sj.start_spark()
#Register the exit handler
atexit.register(exitHandler,sj,sc)
#You need this line if you want to use SparkSQL
sqlCtx=SQLContext(sc)
# -
imgDir = "Images/100x100-10K/" # directory of images
labelsFile = "../Project/Images/genders_data_10k.json" # file that contains data on images
# +
# Creates a Spark Dataframe
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('df').getOrCreate()
df = spark.read.format("image").option("dropInvalid", "true").load(imgDir)
df.createOrReplaceTempView("Images")
# +
df.printSchema()
rdd = df.rdd.map(list)
# query = """SELECT image.* FROM Images WHERE image.height<>100"""
# sqlCtx.sql(query).show()
# +
# dependencies
import numpy as np
import skimage
from skimage.io import imread, imshow
from skimage.feature import canny, daisy, hog
from skimage.feature import peak_local_max
from skimage import img_as_float
from scipy import ndimage as ndi
from skimage.feature import shape_index
from mpl_toolkits.mplot3d import Axes3D
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte
from skimage.filters.rank import median
from skimage.filters.rank import mean
from skimage import exposure
from skimage.filters import rank
from skimage.filters import gaussian
import json
# -
# Read images in as grayscale rather than Spark's default color format
def image_reader(x):
img_path = x[0].origin
img = imread(img_path, as_gray = True)
return (img_path, img)
img_rdd = rdd.map(image_reader)
# +
def daisy_convert(x):
'''DAISY feature extraction'''
img_path = x[0]
try:
img = daisy(x[1], step=50, radius=45,
rings=2, histograms=8,orientations=8)
except:
return (None, None)
return (img_path, img)
def canny_convert(x):
'''Canny Feature Extraction'''
img_path = x[0]
try:
img = canny(x[1])
except:
return (None, None)
return (img_path, img)
def hog_convert(x):
'''Histogram of Oriented Gradients feature extraction'''
img_path = x[0]
try:
img = hog(x[1], orientations=8,
pixels_per_cell=(20, 20),cells_per_block=(1, 1))
except:
return (None, None)
return (img_path, img)
def peak_max_convert(x):
'''
TESTING ONLY NOT USED IN PROJECT
Peak Max Feature extraction.
'''
img_path = x[0]
try:
img_loc_max = img_as_float(x[1])
img = ndi.maximum_filter(img_loc_max, size=5, mode='constant')
coordinates = peak_local_max(img_loc_max, min_distance=8)
except:
return (None, None)
return (img_path, coordinates)
def shape_index_convert(x):
'''
TESTING ONLY NOT USED IN PROJECT
Shape Index Feature extraction.
'''
img_path = x[0]
try:
img = shape_index(x[1])
except:
return (None, None)
return (img_path, img)
def entropy_convert(x):
'''Entropy Feature Extraction'''
img_path = x[0]
try:
img = entropy(x[1],disk(1))
except:
return (None, None)
return (img_path, img)
def extract_features(x):
'''Extracts features and flattens array'''
daisy = daisy_convert(x)[1].flatten()
canny = canny_convert(x)[1].flatten()
hog = hog_convert(x)[1].flatten()
entropy = entropy_convert(x)[1].flatten()
peak_max = peak_max_convert(x)[1].flatten() # Not giving right amount of features
shape_index = shape_index_convert(x)[1].flatten() # Not giving right amount of features
# return (x[0], x[1]) # Original Images
return (x[0], daisy, canny) # Can change what features it returns to test different combinations
def flatten_array(x):
# Flattens Array from a NxM array to a list of length N*M
flattened = []
for arr in x[1:]:
for val in arr.flatten():
flattened.append(val)
return (x[0],len(flattened),flattened)
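`flatten_array` concatenates every per-feature array into one flat list so that each image becomes a single feature vector. A quick NumPy illustration of what it produces, using toy arrays in place of real feature maps:

```python
import numpy as np

def flatten_arrays(path, *feature_arrays):
    # Mirror of flatten_array: concatenate each flattened feature into one list
    flattened = [val for arr in feature_arrays for val in arr.flatten()]
    return (path, len(flattened), flattened)

daisy_like = np.ones((2, 3))   # stand-in for a DAISY descriptor block
canny_like = np.zeros((2, 2))  # stand-in for a Canny edge map

path, n, feats = flatten_arrays("img.jpg", daisy_like, canny_like)
print(n)  # 10 = 2*3 + 2*2
```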
def preprocess_images(x):
# Preprocessing Techniques used on images
img_path = x[0]
try:
img = median(exposure.equalize_hist(gaussian(x[1],1)),disk(1))
except:
return (None, None)
return (img_path, img)
def aws_label_features(x):
    '''UNUSED stub: appears intended to look up facial-landmark labels for an image; lbl_dict is not defined in this notebook'''
    lbl_dict[x[0].split("/")[-1]]["FaceDetails"][0]["Landmarks"][0]
# -
img_features = img_rdd.map(preprocess_images).map(extract_features).map(flatten_array)
with open(labelsFile, "r") as f:
gender_dict = json.load(f)
from pyspark.mllib.regression import LabeledPoint
classified_imgs = img_features.map(lambda x: LabeledPoint(gender_dict[x[0].split("/")[-1]]["Actual"], x[2]))
test, train = classified_imgs.randomSplit(weights=[0.25, 0.75], seed=1)
# ## SVM
# Not utilized in the study: the SVM is linear, and we found it does not do a sufficient job of classifying gender
# +
from pyspark.mllib.classification import SVMWithSGD, SVMModel
# Build the model
svmModel = SVMWithSGD.train(train, iterations=1)
print("Model built")
# -
# Evaluating the model on training data
trainLabelsAndPreds = train.map(lambda p: (p.label, svmModel.predict(p.features)))
trainErr = trainLabelsAndPreds.filter(lambda v: v[0] != v[1]).count() / float(train.count())
# Evaluating the model on test data
testLabelsAndPreds = test.map(lambda p: (p.label, svmModel.predict(p.features)))
testErr = testLabelsAndPreds.filter(lambda v: v[0] != v[1]).count() / float(test.count())
print("SVM")
print(f"Training Error = {trainErr*100}%")
print(f"Test Error = {testErr*100}%")
svmModel.save(sc, "svmModel-daisy-canny")
# sameModel = SVMModel.load(sc, "target/tmp/pythonSVMWithSGDModel")
# ## Logistic Regression
# +
from pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel
import time
start = time.time()
#Build Model
lrModel = LogisticRegressionWithLBFGS.train(train)
print("Model built")
# +
# Evaluating the model on training data
lrTrainLabelsAndPreds = train.map(lambda p: (p.label, lrModel.predict(p.features)))
lrTrainErr = lrTrainLabelsAndPreds.filter(lambda v: v[0] != v[1]).count() / float(train.count())
# Evaluating the model on testing data
lrTestLabelsAndPreds = test.map(lambda p: (p.label, lrModel.predict(p.features)))
lrTestErr = lrTestLabelsAndPreds.filter(lambda v: v[0] != v[1]).count() / float(test.count())
end = time.time()
# -
print("Logistic Regression")
print(f"Training Error = {lrTrainErr*100}%")
print(f"Test Error = {lrTestErr*100}%")
print(f"{end - start} seconds {nodes} nodes")
lrModel.save(sc, "lrModel-")
# sameModel = LogisticRegressionModel.load(sc, "lrModel-")
# ## Get Model Metrics
f_name = "LabelsAndPredicted/10K/lrResults-test-original.csv"
act_pred_list = lrTestLabelsAndPreds.collect()
# +
def write_labels_and_pred_to_file(f_name, act_pred_list):
with open(f_name, "w") as f:
f.write("actual,pred\n")
for x in act_pred_list:
f.write(str(x[0]) + "," + str(x[1]) + "\n")
write_labels_and_pred_to_file(f_name, act_pred_list)
print("Written to file.")
# -
# # Below is Testing Different Feature Extraction Techniques
test = img_rdd.take(100)[75][1]
import matplotlib.pyplot as plt
# %matplotlib inline
test
imshow(canny(test))
daisy1, daisy_vis = daisy(test,step=50, radius=45, rings=2, histograms=8,orientations=8,visualize = True)
imshow(daisy_vis)
imshow(test)
daisy1
hog_test, hog_vis = hog(test, orientations=8, pixels_per_cell=(20, 20),
cells_per_block=(1, 1), visualize= True)
imshow(hog_vis)
imshow(test)
img_loc_max = img_as_float(test)
img_max = ndi.maximum_filter(img_loc_max, size=5, mode='constant')
imshow(img_max)
coordinates = peak_local_max(img_loc_max, min_distance=8)
# +
fig, axes = plt.subplots(1, 3, figsize=(8, 3), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(img_loc_max, cmap=plt.cm.gray)
ax[0].axis('off')
ax[0].set_title('Original')
ax[1].imshow(img_max, cmap=plt.cm.gray)
ax[1].axis('off')
ax[1].set_title('Maximum filter')
ax[2].imshow(img_loc_max, cmap=plt.cm.gray)
ax[2].autoscale(False)
ax[2].plot(coordinates[:, 1], coordinates[:, 0], 'r.')
ax[2].axis('off')
ax[2].set_title('Peak local max')
# +
s = shape_index(test)
# In this example we want to detect 'spherical caps',
# so we threshold the shape index map to
# find points which are 'spherical caps' (~1)
target = 1
delta = 0.05
point_y, point_x = np.where(np.abs(s - target) < delta)
point_z = test[point_y, point_x]
s_smooth = ndi.gaussian_filter(s, sigma=0.5)
point_y_s, point_x_s = np.where(np.abs(s_smooth - target) < delta)
point_z_s = test[point_y_s, point_x_s]
# Vis ------
fig = plt.figure(figsize=(24, 8))
ax1 = fig.add_subplot(1, 3, 1)
ax1.imshow(test, cmap=plt.cm.gray)
ax1.axis('off')
ax1.set_title('Input image', fontsize=18)
scatter_settings = dict(alpha=0.75, s=10, linewidths=0)
ax1.scatter(point_x, point_y, color='blue', **scatter_settings)
ax1.scatter(point_x_s, point_y_s, color='green', **scatter_settings)
ax2 = fig.add_subplot(1, 3, 2, projection='3d', sharex=ax1, sharey=ax1)
x, y = np.meshgrid(
np.arange(0, test.shape[0], 1),
np.arange(0, test.shape[1], 1)
)
ax2.plot_surface(x, y, test, linewidth=0, alpha=0.5)
ax2.scatter(
point_x,
point_y,
point_z,
color='blue',
label='$|s - 1|<0.05$',
**scatter_settings
)
ax2.scatter(
point_x_s,
point_y_s,
point_z_s,
color='green',
label='$|s\' - 1|<0.05$',
**scatter_settings
)
ax2.legend()
ax2.axis('off')
ax2.set_title('3D visualization')
ax3 = fig.add_subplot(1, 3, 3, sharex=ax1, sharey=ax1)
ax3.imshow(s, cmap=plt.cm.gray)
ax3.axis('off')
ax3.set_title(r'Shape index, $\sigma=1$', fontsize=18)
fig.tight_layout()
# +
imshow(entropy(test, disk(3)))
# +
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(12, 4),
sharex=True, sharey=True)
img0 = ax0.imshow(test, cmap=plt.cm.gray)
ax0.set_title("Image")
ax0.axis("off")
fig.colorbar(img0, ax=ax0)
img1 = ax1.imshow(entropy(test, disk(5)), cmap='gray')
ax1.set_title("Entropy")
ax1.axis("off")
fig.colorbar(img1, ax=ax1)
fig.tight_layout()
plt.show()
# +
fig, axes = plt.subplots(2, 2, figsize=(10, 10), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(test, cmap=plt.cm.gray)
ax[0].set_title('Noisy image')
ax[1].imshow(median(test, disk(1)), vmin=0, vmax=255, cmap=plt.cm.gray)
ax[1].set_title('Median $r=1$')
ax[2].imshow(median(test, disk(5)), vmin=0, vmax=255, cmap=plt.cm.gray)
ax[2].set_title('Median $r=5$')
ax[3].imshow(median(test, disk(20)), vmin=0, vmax=255, cmap=plt.cm.gray)
ax[3].set_title('Median $r=20$')
for a in ax:
a.axis('off')
plt.tight_layout()
# +
loc_mean = mean(test, disk(10))
fig, ax = plt.subplots(ncols=2, figsize=(10, 5), sharex=True, sharey=True)
ax[0].imshow(test, cmap=plt.cm.gray)
ax[0].set_title('Original')
ax[1].imshow(mean(test, disk(1)), cmap=plt.cm.gray)
ax[1].set_title('Local mean $r=1$')
for a in ax:
a.axis('off')
plt.tight_layout()
# -
# +
noisy_image = img_as_ubyte(test)
# equalize globally and locally
glob = exposure.equalize_hist(test)
loc = rank.equalize(test, disk(20))
# extract histogram for each image
hist = np.histogram(noisy_image, bins=np.arange(0, 256))
glob_hist = np.histogram(glob, bins=np.arange(0, 256))
loc_hist = np.histogram(loc, bins=np.arange(0, 256))
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(12, 12))
ax = axes.ravel()
ax[0].imshow(test, interpolation='nearest', cmap=plt.cm.gray)
ax[0].axis('off')
ax[1].plot(hist[1][:-1], hist[0], lw=2)
ax[1].set_title('Histogram of gray values')
ax[2].imshow(glob, interpolation='nearest', cmap=plt.cm.gray)
ax[2].axis('off')
ax[3].plot(glob_hist[1][:-1], glob_hist[0], lw=2)
ax[3].set_title('Histogram of gray values')
ax[4].imshow(loc, interpolation='nearest', cmap=plt.cm.gray)
ax[4].axis('off')
ax[5].plot(loc_hist[1][:-1], loc_hist[0], lw=2)
ax[5].set_title('Histogram of gray values')
plt.tight_layout()
# -
test = img_rdd.take(100)[20][1]
# +
from skimage import data, exposure, img_as_float
import numpy as np
fig, ax = plt.subplots(ncols=2, figsize=(10, 5), sharex=True, sharey=True)
ax[0].imshow(test, cmap=plt.cm.gray)
ax[0].set_title('Original')
ax[1].imshow(median(exposure.equalize_hist(gaussian(test,1)),disk(1)), cmap=plt.cm.gray)
ax[1].set_title('Preprocessing')
fig.savefig(fname="og-vs-pp-3", dpi=300)
# +
from skimage import data, exposure, img_as_float
import numpy as np
img = median(exposure.equalize_hist(gaussian(test,1)),disk(1))
hist, hist_centers = exposure.histogram(img)
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].imshow(img, cmap=plt.cm.gray, interpolation='nearest')
axes[0].axis('off')
axes[1].plot(hist_centers, hist, lw=2)
axes[1].set_title('histogram of gray values')
# -
| LogisticRegression-Spark/LogisticRegression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
variable = 1
def function():
return
# note print is highlighted in Python but function is not
print(variable, function)
# + language="markdown"
# # header
# **bold**, *italic*
#
# ### Heading with wrong level
# + language="html"
# <html style="color: green">
# <!-- this is a comment -->
# <head>
# <title>HTML Example</title>
# </head>
# <body>
# The indentation tries to be <em>somewhat "do what
# I mean"</em>... but might not match your style.
# </body>
# </html>
# + language="javascript"
# // "print" is NOT highlighted in javascript, while "function" is
# function add_together(a, b) {
# return a + b
# }
#
# print('A')
# -
# It should work for the same language of virtual document with multiple occurrences:
# + language="javascript"
# function add_together(a, b) {
# return a + b
# }
#
# print('A')
| atest/examples/Syntax highlighting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ___
# <img style="float: right; margin: 0px 0px 15px 15px;" src="https://www.carrerasenlinea.mx/logos/original/logo-instituto-tecnologico-y-de-estudios-superiores-de-occidente.png" width="150px" height="100px" />
#
#
# # SECOND MIDTERM EXAM
# # FINANCIAL PROCESS SIMULATION
#
# ## Name:
#
# ## Date: October 19, 2018
#
# ## By: <NAME>
'''Packages'''
from scipy import stats as s
import numpy as n
from matplotlib import pyplot as plt
# # 1.
# The finance company "El cerdito feliz" handles savings-account openings at its head-office branch. The branch has 4 account executives serving investors, with a mean service time of 30 minutes per client. One of the executives is frequently required to attend credit-evaluation meetings, which reduces the team to 3 executives; under those circumstances the mean service time per client rises to 45 minutes. The mean time between client arrivals is 25 minutes. If the branch opens at 9:00 a.m. and stops serving the public at 14:00, run a simulation of at least 1000 scenarios and answer the following.
tiempo_expon = lambda media: s.expon.rvs(scale=media)
''' Visually confirm that the generated distribution is exponential'''
test =[tiempo_expon(25) for i in range(1000)]
plt.hist(test,bins =30)
plt.show()
print('E[x]',n.mean(test))
# <div style="text-align:justify">
# a) Justify which probability distributions you will use for the Monte Carlo simulation.
#
# The chosen distribution is the exponential, a particular case of the gamma distribution that models the time elapsed between two events. For client service, the interval between the start and the end of serving a client can be described as an exponentially distributed length of time: the service time is always greater than 0, concentrates near its mean, and the probability of a longer service time decays geometrically. The same reasoning applies to the interval between the arrivals of two consecutive clients at the branch. As a counterpart, the number of events occurring in a given unit of time follows a Poisson distribution. The branch is open for 5 hours, so the number of clients arriving in those 5 hours is Poisson distributed, and the branch must serve as many of them as possible.
# </div>
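# As a quick sanity check of the argument above (an illustrative snippet added by the editor, not part of the original solution; only `numpy` is assumed): if inter-arrival times are exponential with mean 25 minutes, then the number of arrivals in a 5-hour day should be approximately Poisson with mean 300/25 = 12.

```python
import numpy as np

rng = np.random.default_rng(0)

def arrivals_in_window(mean_gap, window, n_days):
    # Count how many exponential inter-arrival gaps fit in each simulated day.
    counts = []
    for _ in range(n_days):
        t, k = 0.0, 0
        while True:
            t += rng.exponential(mean_gap)
            if t >= window:
                break
            k += 1
        counts.append(k)
    return np.array(counts)

counts = arrivals_in_window(mean_gap=25, window=5 * 60, n_days=20_000)
# A Poisson(lam) variable has mean lam = 300 / 25 = 12.
print(counts.mean())  # close to 12
```

# The sample mean (and, for a Poisson count, also the sample variance) should sit near 12, matching the exponential/Poisson duality used in the justification.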
# b) How many people will be served by 14:00 if the team consists of only 3 executives? How many will remain in the queue?
# We define a local environment so the kernel accesses variables efficiently and names are not confused. We create a class for the simulation.
#
# Assumptions:
# - The client being served at 14:00 is served to completion.
# - At opening time there are no clients waiting.
class simulacion:
'''Constructor for our simulator object.
Initializes the quantities that stay fixed throughout the simulation,
such as the total time and the service-time and arrival-time generators.
'''
def __init__(self,tiempo:'minutos',atencion:'generador',llegada:'generador'):
self.time = tiempo
self.A = atencion
self.LL = llegada
'''Generator of the number of clients and of the time until the first one arrives'''
'''Generates the number of clients for the day and computes the service times'''
'''This method runs the simulation. The process is as follows:
A queue holds the clients due to arrive within the simulation time.
At each step one client is processed and the elapsed time is computed.
Clients are processed until the time runs out.
How the elapsed time is computed depends on the simulation state.
In general terms, when a client arrives we check whether they can be
served before the next client arrives; if so, the elapsed time advances
by the next client's inter-arrival time. Otherwise it advances by the
service time, and the relevant adjustments are made for this state,
such as adding the time the client had to wait to the service time of
the next client.
'''
def sim2(self):
done=0
sync = True
queue = self.nC()
overtime = 0
timeTotal = queue.pop(0)
while timeTotal < self.time:
try:
arrival = queue.pop(0)
except IndexError:
break
dispatch = self.A()
overtime += dispatch - arrival
# print('Done',done,'queued',len(queue),'time',timeTotal,'dispatch',dispatch,'arrival',arrival,'overtime',overtime)
if overtime > 0:
timeTotal += dispatch
sync = False
elif not sync:
timeTotal += dispatch - overtime
overtime=0
sync = True
else:
timeTotal += arrival
overtime=0
done +=1
# print('Done',done,'queued',len(queue),'time',timeTotal,'dispatch',dispatch,'arrival',arrival,'overtime',overtime)
return(done,len(queue))
'''This method builds a list of client inter-arrival times
within the object's time window.'''
def nC(self):
i = 0
t = list()
while(i<self.time):
gen = self.LL()
t.append(gen)
i += gen
return t
def ambiente2():
media = {4:30,3:45}
media_llegada = 25
horas_abierto = 5
N1 = 10000
ss=simulacion(horas_abierto*60,lambda: tiempo_expon(media[3]),lambda: tiempo_expon(media_llegada))
# ss.sim2()
x,y=zip(*[ss.sim2()for i in range(N1)])
plt.hist(x,alpha=.55, label = 'Served')
plt.hist(y,alpha=.55, label= 'Waiting')
plt.legend()
plt.show()
print('Mean clients served:',n.mean(x))
print('Mean clients waiting:',n.mean(y))
ambiente2()
# c) How many people will be served by 14:00 if the team consists of 4 executives? How many will remain in the queue?
def ambiente2():
media = {4:30,3:45}
media_llegada = 25
horas_abierto = 5
N1 = 10000
ss=simulacion(horas_abierto*60,lambda: tiempo_expon(media[4]),lambda: tiempo_expon(media_llegada))
# ss.sim2()
x,y=zip(*[ss.sim2()for i in range(N1)])
plt.hist(x,alpha=.55, label = 'Served')
plt.hist(y,alpha=.55, label= 'Waiting')
plt.legend()
plt.show()
print('Mean clients served:',n.mean(x))
print('Mean clients waiting:',n.mean(y))
ambiente2()
# # 2
# a) Show that the Poisson distribution satisfies the following recursive form of its probability
# $$ p(k+1)={\lambda \over k+1}p(k)$$
#
# Carry out the full mathematical derivation in markdown using LaTeX equations.
#
#
# For all $k > 0$:
#
# $P(0) = \frac{e^{-\lambda}}{0!},$
# $P(1) = \frac{\lambda^1 e^{-\lambda}}{1!} = \frac{\lambda e^{-\lambda}}{(1)\,0!},$
# $P(2) = \frac{\lambda^2 e^{-\lambda}}{2!} = \frac{\lambda\,\lambda\, e^{-\lambda}}{(2)(1)\,0!},$
# $P(3) = \frac{\lambda^3 e^{-\lambda}}{3!} = \frac{\lambda\,\lambda\,\lambda\, e^{-\lambda}}{(3)(2)(1)\,0!},$
# $P(k) = \frac{\lambda^k e^{-\lambda}}{k!} = \frac{\lambda}{k}P(k-1)$, which is equivalent to $P(k+1) = \frac{\lambda}{k+1}P(k)$.
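# A short numerical check of the recursion (illustrative only, assuming `scipy` is available): iterating $p(k+1) = \frac{\lambda}{k+1}p(k)$ from $p(0) = e^{-\lambda}$ reproduces scipy's Poisson pmf.

```python
import numpy as np
from scipy import stats

lam = 7.0
p = np.exp(-lam)           # p(0) = e^{-lambda}
recursive = [p]
for k in range(30):
    p = lam / (k + 1) * p  # p(k+1) = lambda/(k+1) * p(k)
    recursive.append(p)

direct = stats.poisson.pmf(np.arange(31), mu=lam)
print(np.max(np.abs(recursive - direct)))  # essentially 0
```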
# b) Using the recursion proved in part a), plot the probability mass function and its cumulative distribution for $\lambda = [4,10,30]$, and validate your results against the statistical package `scipy.stats`. Then generate Poisson-distributed random samples for the given set of lambda parameters and plot the corresponding histogram (in a separate figure for each lambda), validating it against the plotted probability mass function. **Note**: The Poisson distribution is a discrete probability distribution, so the histogram must be discrete, not continuous.
def poissonGen(mu):
'''Probability mass function generator'''
k = 0
prev = n.exp(-mu)
while True:
yield(prev)
k +=1
prev = mu*prev/k
def poissonRVS(N,mu):
'''Random variate generator'''
rs = n.random.rand(N)
def compare(x):
gen = CumPoisson(mu)
v = 0
while (x >= next(gen)):
v+=1
return v
return list(map(lambda x:compare(x),rs))
def CumPoisson(mu):
'''Cumulative distribution function generator'''
gen = poissonGen(mu)
current = next(gen)
while True:
yield current
current += next(gen)
mu = [4,10,30]
for i in mu:
plt.figure(figsize=(18,5))
p = s.poisson(mu=i)
x = n.arange(3*i)
y = p.pmf(x)
a = poissonGen(i)
plt.subplot(131)
gen = [next(a) for i in x]
plt.title(r'PMF $\lambda$= {}'.format(i))
plt.plot(x,gen,c='r',label='CAGS')
plt.stem(y,label='Scipy')
plt.legend()
plt.subplot(132)
plt.title(r'CDF $\lambda$= {}'.format(i))
plt.plot(x,n.cumsum(gen),c='r',label='CAGS')
plt.stem(n.cumsum(y),label='Scipy')
plt.legend()
plt.subplot(133)
plt.title(r'RVS $\lambda$= {}'.format(i))
sample = poissonRVS(1000,i)
plt.hist(sample,label='CAGS',width=.5,density=True,bins=len(n.unique(sample)))
plt.legend()
plt.show()
# # 3
# Show **theoretically**, using the method of maximum likelihood, that the estimators for the parameters $\mu$ and $\sigma$ of a normal distribution are given by:
#
# $$\hat \mu = {1\over n}\sum_{i=1}^n x_i,\quad \hat \sigma^2={1\over n}\sum_{i=1}^n (x_i-\hat \mu)^2$$
#
# **Recall that:** the normal distribution is
# $$f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}$$
#
# Report your answer using markdown-latex
# $$ l(x)= \prod_i^n f(x_i\mid \mu ,\sigma ^{2})$$
# It is convenient to work with the log-likelihood $\log l(x)$:
# $l(x)= (2\pi \sigma^2)^{-n/2}e^{-\frac{\sum_i^n(x_i-\mu)^2}{2\sigma^2}} \rightarrow \log l(x) = \frac{-n}{2}\ln(2\pi \sigma^2)-\frac{\sum_i^n (x_i-\mu)^2}{2\sigma^2}$
# $\frac{\partial \log l}{\partial \mu} = \frac{\sum x_i - \mu n}{\sigma ^2} = 0$, and solving for $\mu$:
# $\sum x_i = \mu n \rightarrow \hat\mu = \frac{\sum_{i=1}^n x_i}{n}$
# $\frac{\partial \log l}{\partial \sigma} = -\frac{n}{\sigma}+\frac{\sum(x_i-\mu)^2}{\sigma^{3}} = 0$, and solving for $\sigma^2$:
# $\frac{n}{\sigma}=\frac{\sum(x_i-\mu)^2}{\sigma^{3}} \rightarrow \hat\sigma^2 = \frac{\sum(x_i-\hat\mu)^2}{n}$
# $\hat\mu = \frac{\sum_{i=1}^n x_i}{n},\quad \hat\sigma^2 = \frac{\sum(x_i-\hat\mu)^2}{n}$
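# As an illustrative numerical check added by the editor (not required by the exam): the closed-form estimators above coincide with `np.mean` and the biased variance `np.var(ddof=0)`, and recover the true parameters on a large sample.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)  # true mu = 3, sigma^2 = 4

mu_hat = x.sum() / x.size                        # (1/n) * sum(x_i)
sigma2_hat = ((x - mu_hat) ** 2).sum() / x.size  # (1/n) * sum((x_i - mu_hat)^2)

print(mu_hat, sigma2_hat)  # close to 3.0 and 4.0
```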
# # 4
# By law, a bank may invest the profits earned from unsecured payroll loans in two different investment instruments. The first is a debt investment with an average return of 23.5% and a standard deviation of 1.1%. The second is a term investment whose expected return follows a uniform distribution between -2% and 9%, but never between -0.5% and 0.5%. Find the best possible combination if the investment is made in multiples of 25%, i.e. (Inv1 25%, Inv2 75%)-(Inv1 50%, Inv2 50%)-(Inv1 75%, Inv2 25%); simulate 1000 scenarios and report the average return per combination.
# a1: $N(.235,.011^2)$, a2: $U$
# Combinations, linear weighting function, number of samples, and sampling of the normal asset:
c = [(0,1),(.25,.75),(.5,.5),(.75,.25),(1,0)]
r = lambda x,y,w: x*w[0] + y*w[1]
smple = 1000
a1 = n.random.normal(.235,.011,smple)
# Cumulative distribution function for the asset with a constant (excluded) interval
def F(x):
a,b,c,d = (-.02,-.005,.005,.09)
if(a <= x and x <= b):
return (x-a)/(d-c+b-a)
elif(b < x and x < c):
return (b-a)/(d-c+b-a)
elif(c <= x and x <= d):
return (x-c+b-a)/(d-c+b-a)
# Shape of the cumulative distribution function.
def aux():
'''Self-contained environment'''
a,b,c,d = (-.02,-.005,.005,.09)
x=n.linspace(a,d,100)
plt.plot(x,[F(i) for i in x])
plt.show()
aux()
# Inverse function
def F_1(x):
a,b,c,d = (-.02,-.005,.005,.09)
den = (d-c+b-a)
if(0 <= x and x <= (b-a)/den):
return x*den+a
elif((b-a)/den < x and x <= 1):
return x*den+c-b+a
# Generate the random draws.
a2=list(map(lambda x:F_1(x),n.random.rand(10000)))
# The uniform asset's distribution was generated correctly.
plt.hist(a2,bins=30)
plt.show()
print('The best asset combination is:',c[n.argmax([r(n.mean(a1),n.mean(a2),i) for i in c])])
# The result is rather obvious if we consider that one asset is normal, centered at 23.5% with a 1.1% standard deviation, while the other is uniform and never even reaches 10% as its upper bound.
# # 5
# Consider the following probability density function
# $$ f(x)=\begin{cases}400e^{-400(x-1)},& \text{for }x\geq 1\\0,& \text{otherwise}\end{cases}$$
#
# a) As a baseline for comparison, derive the expected value analytically.
# With $u = -400(x-1)$, $du = -400\,dx$, the density integrates to $-\int_0^{-\infty}e^{u}du = 1$,
# and $E[X] = \int_1^{\infty} 400\,x\,e^{-400(x-1)}dx = 1 + \tfrac{1}{400} = 1.0025 \approx 1$
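# A quick numerical cross-check of the expectation (an editor-added sketch using `scipy.integrate.quad`; the upper limit 1.5 is chosen because the density beyond it is negligible): the integral evaluates to $1 + 1/400 = 1.0025$, essentially 1.

```python
import numpy as np
from scipy import integrate

f = lambda x: 400 * np.exp(-400 * (x - 1))
# E[X] = integral of x * f(x); mass beyond x = 1.5 is on the order of e^{-200}.
ex, err = integrate.quad(lambda x: x * f(x), 1, 1.5)
print(ex)  # 1.0025
```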
x = n.linspace(1,1.05,1000)
f = lambda x:400*n.exp(-400*(x-1))
plt.plot(x,f(x))
plt.show()
# b) Derive, theoretically, how to draw random samples from this distribution using the inverse-transform method. With the results obtained, generate 100 samples and plot the histogram of those samples together with the density f(x), to validate the results.
# With the transformation $u = g(x)$ above and the work in a):
# $F(x) = 1 - e^{-400(x-1)}$, so $F^{-1}(u) = 1 - \frac{\ln(1-u)}{400}$; since $1-u$ is also uniform on $(0,1)$, we may use $F^{-1}(u) = 1 - \frac{\ln u}{400}$.
F_1 = lambda x: -n.log(x)/400+1
f = lambda x: 400*n.exp(-400*(x-1))
'''Proposal function'''
f1 = lambda x: 400/x**300
# $c = \frac{-400}{299}x^{-299}\big|_1^\infty=\frac{400}{299}$
F1_1 = lambda x: (1/x)**(1./299)
c =400/299
# We also plot a proposal function to use with the acceptance-rejection method.
plt.hist(F_1(n.random.rand(100)),bins=20,density=True)
x=n.linspace(1,1.02,100)
plt.plot(x,f(x),label='Target')
plt.plot(x,f1(x),label='A/R proposal')
plt.legend()
plt.show()
# c) Repeat the previous part, but this time use the acceptance-rejection method to generate the random samples.
ns = 1000*400//299
u1 = n.random.rand(ns)
Y = F1_1(n.random.rand(ns))
zipped = zip(u1,Y)
aa=list(filter(lambda x: x[0]*f1(x[1])<f(x[1]),zipped))
_,u2=zip(*aa)
plt.hist(u2,bins=30,density=True)
x=n.linspace(1,1.02,100)
plt.plot(x,f(x),label='Target')
plt.legend()
plt.show()
#
#
# **We wish to estimate the mean of this distribution using the crude Monte Carlo method and its variance-reduction techniques, with samples of size 10, 100 and 1000. Use these sample sizes for each of the following parts:**
#
# d) Use the crude Monte Carlo method to estimate the mean.
# F_1(n.random.rand(100))
A = 1
seq = [100,1000,10000,1000000]
res = list(map(lambda x:n.mean(list(map(lambda x:F_1(x),n.random.rand(x)))),seq))
# e) Use stratified sampling with 5 strata $0\leq F(x)\leq0.3, 0.3\leq F(x)\leq0.5, 0.5\leq F(x)\leq0.7, 0.7\leq F(x)\leq0.9 $ and $0.9\leq F(x) \leq 1$. Allocate the total samples across strata 1, 2, 3, 4, 5 as 20%, 20%, 25%, 15% and 20% of the samples, respectively.
strat = []
for i in seq:
a,b,c = (int(.2*i),int(.25*i),int(.15*i))
r1 = n.random.uniform(0,.3,a)
r2 = n.random.uniform(.3,.5,a)
r3 = n.random.uniform(.5,.7,b)
r4 = n.random.uniform(.7,.9,c)
r5 = n.random.uniform(.9,1,a)
r = [r1,r2,r3,r4,r5]
m = range(len(r)) # Number of strata
w = [.3/a,.2/a,.2/b,.2/c,.1/a]
estrat1 = list(map(lambda r:n.array(list(map(F_1,r))),r))
muestras = list(map(lambda wi,xi:xi*wi,w,estrat1))
strat.append(n.concatenate(muestras).sum())
# f) Use the antithetic (complementary) variates method.
comp=[n.mean([(F_1(u)+F_1(1-u))/2 for u in n.random.rand(i//2)]) for i in seq]
# g) Finally, use the stratification method in which the interval is divided into N strata.
partition = lambda B:(n.random.rand(B)+n.arange(B))/B
Nstrat= [n.mean([F_1(x) for x in partition(i)]) for i in seq]
# h) Compare all the results obtained with each method in a table using the pandas library, showing, for each number of samples used, the approximated mean and its **relative error with respect to the value obtained in part a).**
import pandas as pd
df =pd.DataFrame(res,index=seq,columns=['Montecarlo'])
df['Error M'] = df['Montecarlo']/A -1
df['Estratificado'] = strat
df['Error E'] = df['Estratificado']/A -1
df['N Estratificado'] = Nstrat
df['Error NE'] = df['N Estratificado']/A -1
df['Complementarios'] = comp
df['Error C'] = df['Complementarios']/A -1
df['Analítico'] = A
df
# # 6
# 
#
# ## <font color = 'red'> Note: </font> Use the Monte Carlo integration method seen in class.
#
# a) Use the Monte Carlo method to approximate the value of pi for samples of size 100, 1000, 10000, 1000000 and compare it with its true value.
#
# b) Repeat the previous part using all the variance-reduction methods applied in exercise *5*, to contrast the results. Explain your results.
# <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/80a984ae034987174d331e67cecc1fbebe71cc27"></img>
# $v = \int_0^1dx = 1$
A = n.pi/4
f = lambda x:n.sqrt(1-x**2)
seq = [100,1000,10000,1000000]
res = list(map(lambda x:n.mean(list(map(lambda x:f(x),n.random.rand(x)))),seq))
# Use stratified sampling with 5 strata $0\leq F(x)\leq 0.3, 0.3\leq F(x)\leq 0.5, 0.5\leq F(x)\leq 0.7, 0.7\leq F(x)\leq 0.9$ and $0.9\leq F(x)\leq 1$. Allocate the total samples across strata 1, 2, 3, 4, 5 as 20%, 20%, 25%, 15% and 20% of the samples, respectively.
strat = []
for i in seq:
a,b,c = (int(.2*i),int(.25*i),int(.15*i))
r1 = n.random.uniform(0,.3,a)
r2 = n.random.uniform(.3,.5,a)
r3 = n.random.uniform(.5,.7,b)
r4 = n.random.uniform(.7,.9,c)
r5 = n.random.uniform(.9,1,a)
r = [r1,r2,r3,r4,r5]
m = range(len(r)) # Number of strata
w = [.3/a,.2/a,.2/b,.2/c,.1/a]
estrat1 = list(map(lambda r:n.array(list(map(f,r))),r))
muestras = list(map(lambda wi,xi:xi*wi,w,estrat1))
strat.append(n.concatenate(muestras).sum())
# Use the antithetic (complementary) variates method.
comp=[n.mean([(f(u)+f(1-u))/2 for u in n.random.rand(i//2)]) for i in seq]
# Finally, use the stratification method in which the interval is divided into N strata.
partition = lambda B:(n.random.rand(B)+n.arange(B))/B
Nstrat= [n.mean([f(x) for x in partition(i)]) for i in seq]
import pandas as pd
df =pd.DataFrame(res,index=seq,columns=['Montecarlo'])
df['Error M'] = df['Montecarlo']/A -1
df['Estratificado'] = strat
df['Error E'] = df['Estratificado']/A -1
df['N Estratificado'] = Nstrat
df['Error NE'] = df['N Estratificado']/A -1
df['Complementarios'] = comp
df['Error C'] = df['Complementarios']/A -1
df['Analítico'] = A
df
# **Points per exercise**
# - 1- 2 points
# - 2- 2 points
# - 3- 1 point
# - 4- 1 point
# - 5- 3 points
# - 6- 1 point
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by <NAME>.
# </footer>
| TEMA-2/Hwk/Examen1_Tema2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy
import toyplot
colormap = toyplot.color.LinearMap(toyplot.color.Palette(), domain_min=0, domain_max=8)
canvas = toyplot.Canvas(width=400, height=100)
axis = canvas.color_scale(colormap, label="Color Scale", scale="linear")
axis.axis.ticks.locator = toyplot.locator.Extended(format="{:.1f}")
| notebooks/tick-label-digits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''amt2'': conda)'
# name: python37664bitamt2conda07fade9a62164467aded83fb3c21478f
# ---
# # Reasonable Crowd Dataset Tutorial
# +
import json
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
from shapely.geometry import Polygon
# %matplotlib inline
# -
# ## Simple manipulations
# ### Plotting a footprint
# In this section, we show a simple code snippet for plotting a footprint
# +
# Enter the path to a trajectory
trajectory_path = "<INSERT_PATH_TO_DOWNLOADED_DATA>/reasonable_crowd_data/trajectories/U_27-a.json"
# load the trajectory
with open(trajectory_path, "r") as j_file:
traj_states = json.load(j_file)
# let's plot the footprint of the first state
state = traj_states[0]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_aspect("equal")
plt.axis("off")
footprint = Polygon(state["footprint"])
ax.plot(*footprint.exterior.xy)
plt.show()
# -
# ### Plotting a layer of the map
# Let's plot a layer of one of the maps.
# +
# Path to a gpkg file representing one of the layers
map_path = "<INSERT_PATH_TO_DOWNLOADED_DATA>/reasonable_crowd_data/maps/S_boundaries.gpkg"
# thanks to the wide support of the gpkg format, plotting is only a couple of lines
map_df = gpd.read_file(map_path)
fig, ax = plt.subplots(1, 1)
map_df.plot(ax = ax)
ax.set_aspect('equal')
plt.axis("off")
plt.show()
# -
# ## More advanced manipulations
# To get better acquainted with the data, we wrote a small script: `visualize_realization.py` that visualizes a realization by making use of the map files and the trajectory files. The output of this script is something like `movie.mp4` in this directory. (Legend for the movie: the black polygon is ego, blue polygons are other cars, red polygons are pedestrians.)
# We highly recommend that you go through the code to become really familiar with the data.
# +
from visualize_realization import visualize_realization
trajectory_path = "<INSERT_PATH_TO_DOWNLOADED_DATA>/reasonable_crowd_data/trajectories/S_1-a.json"
map_path = "<INSERT_PATH_TO_DOWNLOADED_DATA>/reasonable_crowd_data/maps/S_boundaries.gpkg"
# directory where to save the outputs of this script (a video and an image for each frame)
save_dir = "<INSERT_PATH_TO_DIRECTORY_WHERE_SAVE_PLOTS>"
# NOTE: This might take a couple of minutes to run.
visualize_realization(trajectory_path, map_path, save_dir)
# -
| tutorials/tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# language: python
# name: pyspark
# ---
import json
from datetime import datetime
import pandas as pd
from google.cloud import storage
bucket_name = "rjr-dados-abertos-scripts"
client = storage.Client()
bucket = client.get_bucket(bucket_name)
# +
def save_maps(maps, bucket):
with open('maps.json', 'w', encoding='utf-8') as file:
json.dump(maps, file, ensure_ascii=False)
blob = bucket.blob("censo_escolar/transform/maps1.json")
blob.upload_from_filename('maps.json')
def parse_csv(table, bucket, dtype=None):
return dict(pd.read_csv(f"gs://{bucket}/censo_escolar/transform/tables/{table}.csv", dtype=dtype).dropna().values.tolist())
# -
maps = {}
# # Schools (Escolas)
# +
maps["TP_SITUACAO_FUNCIONAMENTO"] = {
1: "Em Atividade",
2: "Paralisada",
3: "Extinta (ano do censo)",
4: "Extinta em anos anteriores"
}
maps["TP_DEPENDENCIA"] = {
1: "Federal",
2: "Estadual",
3: "Municipal",
4: "Privada"
}
maps["TP_LOCALIZACAO"] = {
1: "Urbana",
2: "Rural"
}
maps["TP_LOCALIZACAO_DIFERENCIADA"] = {
0: "A escola não está em área de localização diferenciada",
1: "Área de assentamento",
2: "Terra indígena",
3: "Área onde se localiza comunidade remanescente de quilombos",
4: "Unidade de uso sustentável",
5: "Unidade de uso sustentável em terra indígena",
6: "Unidade de uso sustentável em área remanescente de quilombos"
}
maps["TP_CATEGORIA_ESCOLA_PRIVADA"] = {
1: "Particular",
2: "Comunitária",
3: "Confessional",
4: "Filantrópica"
}
maps["TP_CONVENIO_PODER_PUBLICO"] = {
1: "Municipal",
2: "Estadual",
3: "Estadual e Municipal",
}
maps["TP_REGULAMENTACAO"] = {
0: "Não",
1: "Sim",
2: "Em tramitação"
}
maps["TP_RESPONSAVEL_REGULAMENTACAO"] = {
1: "Federal",
2: "Estadual",
3: "Municipal",
4: "Estadual e Municipal",
5: "Federal e Estadual",
6: "Federal, Estadual e Municipal"
}
maps["TP_OCUPACAO_PREDIO_ESCOLAR"] = {
1: "Próprio",
2: "Alugado",
3: "Cedido"
}
maps["TP_REDE_LOCAL"] = {
0: "Não há rede local interligando computadores",
1: "A cabo",
2: "Wireless",
3: "A cabo e Wireless"
}
maps["TP_INDIGENA_LINGUA"] = {
1: "Somente em Língua Indígena",
2: "Somente em Língua Portuguesa",
3: "Em Língua Indígena e em Língua Portuguesa"
}
maps["TP_PROPOSTA_PEDAGOGICA"] = {
0: "Não",
1: "Sim",
2: "A escola não possui projeto político pedagógico/proposta pedagógica"
}
maps["TP_AEE"] = {
0: "Não oferece",
1: "Não exclusivamente",
2: "Exclusivamente"
}
maps["TP_ATIVIDADE_COMPLEMENTAR"] = {
0: "Não oferece",
1: "Não exclusivamente",
2: "Exclusivamente"
}
maps["TP_OCUPACAO_GALPAO"] = {
1: "Próprio",
2: "Alugado",
3: "Cedido"
}
maps["CO_LINGUA_INDIGENA_1"] = parse_csv("CO_LINGUA_INDIGENA", bucket_name,
dtype={"Cód. Atual": str, "Língua de identificação": str})
maps["CO_LINGUA_INDIGENA_2"] = maps["CO_LINGUA_INDIGENA_3"] = maps["CO_LINGUA_INDIGENA_1"]
# -
# # Classes (Turmas)
# +
maps["TP_MEDIACAO_DIDATICO_PEDAGO"] = {
1: "Presencial",
2: "Semipresencial",
3: "Educação a Distância - EAD"
}
maps["TP_TIPO_ATENDIMENTO_TURMA"] = {
1: "Exclusivo Escolarização",
2: "Escolarização e Atividade complementar",
3: "Atividade complementar",
4: "Atendimento Educacional Especializado (AEE)"
}
maps["TP_TIPO_LOCAL_TURMA"] = {
0: "A turma não está em local de funcionamento diferenciado",
1: "Sala anexa",
2: "Unidade de atendimento socioeducativo",
3: "Unidade prisional"
}
maps["TP_UNIFICADA"] = {
0: "Não",
1: "Unificada",
2: "Multietapa",
3: "Multi",
4: "Correção de fluxo",
5: "Mista (Concomitante e Subsequente)"
}
maps["TP_ETAPA_ENSINO"] = {
1: "Educação Infantil - Creche",
2: "Educação Infantil - Pré-escola",
3: "Educação Infantil - Unificada",
56: "Educação Infantil e Ensino Fundamental (9 anos) Multietapa",
4: "Ensino Fundamental de 8 anos - 1ª Série",
5: "Ensino Fundamental de 8 anos - 2ª Série",
6: "Ensino Fundamental de 8 anos - 3ª Série",
7: "Ensino Fundamental de 8 anos - 4ª Série",
8: "Ensino Fundamental de 8 anos - 5ª Série",
9: "Ensino Fundamental de 8 anos - 6ª Série",
10: "Ensino Fundamental de 8 anos - 7ª Série",
11: "Ensino Fundamental de 8 anos - 8ª Série",
12: "Ensino Fundamental de 8 anos - Multi",
13: "Ensino Fundamental de 8 anos - Correção de Fluxo",
14: "Ensino Fundamental de 9 anos - 1º Ano",
15: "Ensino Fundamental de 9 anos - 2º Ano",
16: "Ensino Fundamental de 9 anos - 3º Ano",
17: "Ensino Fundamental de 9 anos - 4º Ano",
18: "Ensino Fundamental de 9 anos - 5º Ano",
19: "Ensino Fundamental de 9 anos - 6º Ano",
20: "Ensino Fundamental de 9 anos - 7º Ano",
21: "Ensino Fundamental de 9 anos - 8º Ano",
41: "Ensino Fundamental de 9 anos - 9º Ano",
22: "Ensino Fundamental de 9 anos - Multi",
23: "Ensino Fundamental de 9 anos - Correção de Fluxo",
24: "Ensino Fundamental de 8 e 9 anos - Multi 8 e 9 anos",
25: "Ensino Médio - 1º ano/1ª Série",
26: "Ensino Médio - 2º ano/2ª Série",
27: "Ensino Médio - 3ºano/3ª Série",
28: "Ensino Médio - 4º ano/4ª Série",
29: "Ensino Médio - Não Seriada",
30: "Curso Técnico Integrado (Ensino Médio Integrado) 1ª Série",
31: "Curso Técnico Integrado (Ensino Médio Integrado) 2ª Série",
32: "Curso Técnico Integrado (Ensino Médio Integrado) 3ª Série",
33: "Curso Técnico Integrado (Ensino Médio Integrado) 4ª Série",
34: "Curso Técnico Integrado (Ensino Médio Integrado) Não Seriada",
35: "Ensino Médio - Modalidade Normal/Magistério 1ª Série",
36: "Ensino Médio - Modalidade Normal/Magistério 2ª Série",
37: "Ensino Médio - Modalidade Normal/Magistério 3ª Série",
38: "Ensino Médio - Modalidade Normal/Magistério 4ª Série",
39: "Curso Técnico - Concomitante",
40: "Curso Técnico - Subsequente",
64: "Curso Técnico Misto (Concomitante e Subsequente)",
65: "EJA - Ensino Fundamental - Projovem Urbano",
67: "Curso FIC integrado na modalidade EJA - Nível Médio",
68: "Curso FIC Concomitante",
69: "EJA - Ensino Fundamental - Anos Iniciais",
70: "EJA - Ensino Fundamental - Anos Finais",
71: "EJA - Ensino Médio",
72: "EJA - Ensino Fundamental - Anos Iniciais e Anos Finais",
73: "Curso FIC integrado na modalidade EJA - Nível Fundamental (EJA integrada à Educação Profissional de Nível Fundamental)",
74: "Curso Técnico Integrado na Modalidade EJA (EJA integrada à Educação Profissional de Nível Médio)"
}
# -
# # Matriculas
# +
maps["TP_SEXO"] = {
1: "Masculino",
2: "Feminino"
}
maps["TP_COR_RACA"] = {
0: "Não declarada",
1: "Branca",
2: "Preta",
3: "Parda",
4: "Amarela",
5: "Indígena"
}
maps["TP_NACIONALIDADE"] = {
1: "Brasileira",
2: "Brasileira - nascido no exterior ou naturalizado",
3: "Estrangeira"
}
maps["TP_ZONA_RESIDENCIAL"] = maps["TP_LOCALIZACAO"]
maps["TP_LOCAL_RESID_DIFERENCIADA"] = maps["TP_LOCALIZACAO_DIFERENCIADA"]
maps["TP_OUTRO_LOCAL_AULA"] = {
1: "Em hospital",
2: "Em domicílio",
3: "Não recebe escolarização fora da escola"
}
maps["TP_RESPONSAVEL_TRANSPORTE"] = {
1: "Estadual",
2: "Municipal"
}
# -
# # Docente e gestor
# +
maps["TP_SITUACAO_CURSO_1"] = {
1: "Concluído",
2: "Em andamento"
}
maps["TP_SITUACAO_CURSO_3"] = maps["TP_SITUACAO_CURSO_2"] = maps["TP_SITUACAO_CURSO_1"]
maps["TP_TIPO_IES_1"] = {
1: "Pública",
2: "Privada"
}
maps["TP_TIPO_IES_3"] = maps["TP_TIPO_IES_2"] = maps["TP_TIPO_IES_1"]
maps["TP_ENSINO_MEDIO"] = {
1: "Formação Geral",
2: "Modalidade Normal (Magistério)",
3: "Curso Técnico",
4: "Magistério Indígena Modalidade Normal",
9: "Não informado"
}
maps["TP_ESCOLARIDADE"] = {
1: "Não concluiu o ensino fundamental (fundamental incompleto)",
2: "Ensino fundamental completo",
3: "Ensino médio completo",
4: "Ensino superior completo"
}
maps["TP_CARGO_GESTOR"] = {
1: "Diretor(a)",
2: "Outro Cargo"
}
maps["TP_TIPO_ACESSO_CARGO"] = {
1: "Ser proprietário ou sócio-proprietário da escola (apenas escolas privadas)",
2: "Exclusivamente por indicação/escolha da gestão (escolas públicas e privadas)",
3: "Processo seletivo qualificado e escolha/nomeação da gestão (escolas públicas e privadas)",
4: "Concurso público específico para o cargo de gestor escolar (apenas escolas públicas)",
5: "Exclusivamente por processo eleitoral com aparticipação da comunidade escolar (apenas escolas públicas)",
6: "Processo seletivo qualificado e eleição com participação da comunidade escolar (apenas escola pública)",
7: "Outro (escolas públicas e privadas)"
}
maps["TP_TIPO_CONTRATACAO"] = {
1: "Concursado/efetivo/estável",
2: "Contrato temporário",
3: "Contrato terceirizado",
4: "Contrato CLT"
}
# -
# # Geral
# +
df = pd.read_csv(f"gs://{bucket_name}/censo_escolar/transform/tables/ufs.csv",
engine="python", sep=',', quotechar='"', header=0, encoding="utf8")
names = ["CO_REGIAO", "CO_UF", "CO_MESORREGIAO",
"CO_MICRORREGIAO", "CO_MUNICIPIO"]
cols = [[0, 1], [2, 3], [2, 5, 6], [2, 7, 8], [9, 10]]
names_cols = zip(names, cols)
for name, cols_ in names_cols:
df_ = df.iloc[:, cols_].drop_duplicates()
if len(cols_) == 2:
map_ = dict(df_.values.tolist())
elif name == "CO_MESORREGIAO":
map_ = {f"{col1}{col2:02}": col3
for col1, col2, col3
in df_.values}
else:
map_ = {f"{col1}{col2:03}": col3
for col1, col2, col3
in df_.values}
maps[name] = map_
maps["CO_UF_NASC"] = maps["CO_UF"]
maps["CO_MUNICIPIO_NASC"] = maps["CO_MUNICIPIO"]
maps["CO_UF_END"] = maps["CO_UF"]
maps["CO_MUNICIPIO_END"] = maps["CO_MUNICIPIO"]
tables = ["CO_AREA_COMPL_PEDAGOGICA", "CO_AREA_CURSO", "CO_ORGAO_REGIONAL",
"CO_CURSO", "CO_PAIS_ORIGEM", "CO_CURSO_EDUC_PROFISSIONAL", "CO_TIPO_ATIVIDADE", "CO_IES"]
for table in tables:
table_ = parse_csv(table, bucket_name)
if table == "CO_TIPO_ATIVIDADE":
for i in range(1, 8):
maps[f"{table}_{i}"] = table_
elif table in ["CO_AREA_COMPL_PEDAGOGICA", "CO_AREA_CURSO", "CO_CURSO", "CO_IES"]:
maps[f"{table}_2"] = maps[f"{table}_3"] = maps[f"{table}_1"] = table_
else:
maps[f"{table}"] = table_
maps["CO_IES_OFERTANTE"] = maps["CO_IES_1"]
maps["CO_PAIS_RESIDENCIA"] = maps["CO_PAIS_ORIGEM"]
# -
save_maps(maps, bucket)
| etl/censo_escolar/transform/maps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:endnetGpu]
# language: python
# name: conda-env-endnetGpu-py
# ---
# # Which GPU to use
# +
multiGPU = False
whichGPU = 1
# Select which GPU to use
if(multiGPU):
from keras.utils.training_utils import multi_gpu_model
else:
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
# The GPU id to use, usually either "0" or "1"
os.environ["CUDA_VISIBLE_DEVICES"] = str(whichGPU)
# # Do other imports now...
# -
# # Load all the functions
# %run -i 'arena.py'
# # General Parameters
# +
import math
# What data to use
tableBase = '3PKk'
convertStates = False
# Interactive: whether to ask for confirmations; set to False when running unattended (e.g. overnight)
askForConfirmation = False
# NN parameters
filters = [16,32,32,64,128,128,128]
filterShape = [2,2,2,2,2,2,2]
batch_size = 256
optimizer = 'Adadelta'
useBatchNorm = False
num_classes = 3
input_shape = (4,8,8)
### DON'T MODIFY BELOW ###
# Generate dataset variables
fileName = tableBase + '.hdf5'
dataSetName = tableBase + '_onlyLegal'
if not convertStates:
dataSetName = tableBase + '_onlyLegal_fullStates'
dataSetWdlName = tableBase + '_Wdl_onlyLegal_3Values'
# Number of Pieces
nPi = int(dataSetName[0])
nPa = nPi - 2
nWPa = math.ceil(nPa/2)
# -
# # Experiment 1
# Bengio method 3n4 with freeze
# ### Exp 1 Parameters
# +
# %run -i 'arena.py'
# Parameters
sourceNet = '103' # trained on 3pc from scratch
# sourceNet = '107' # trained on 4pc from scratch
freeze = True
resSaveFile = '3n4freeze'
epochs = 1
averageOver = 1
expDescr = "Bengio 3n4 - freeze = {} - average over {} runs".format(str(freeze), averageOver)
saveEveryRun = False # save stuff in results dir
saveWeightsCheckpoints = False # save checkpoints in results dir
saveTensorboardLogs = False # save logs in ./logs dir
resID = '---NORESID---' # used when not saving data, but fitModel() still needs a resID
fractionOfDataToUse = 0.1
plotDuringTraining = True
loadWeights = False
askForConfirmation = False
saveDir = 'bengioResults'
resSaveFile = resSaveFile + '-{}runAverage'.format(averageOver)
resSaveFileFullPath = saveDir + '/' + str(resSaveFile) + '.pkl'
# -
# ### Create model and load data
# +
# prepare save file
# if not os.path.exists(resSaveFileFullPath):
# print("Save file doesn't exists, creating...\n")
# save_obj(saveDir, resSaveFile, [])
# else:
# print("Save file exists...\n")
# load data
X_train, X_test, y_train, y_test = loadData()
# create model
model, nnStr = createModel()
layersCount = len(model.layers)
# load old results (commented out; starting fresh instead)
# results = load_obj(saveDir, resSaveFile)
results = []  # accumulates the averaged score per transferred-layer count
# initialize variables wrt old results
# startTrainingAtLayer = len(results)
startTrainingAtLayer = 0
# print("\nStarting/restarting TL at {} transferred layers".format(startTrainingAtLayer))
# -
# ### Train
def loadNFirstLayers(model, sourceNet, copyFirstNLayers, freeze):
# Load weights
# weightsPath = 'Results/' + sourceNet + '/weights.hdf5'
# print("Loading first {} layers from results {}, ".format(copyFirstNLayers, weightsPath))
# model.load_weights(weightsPath)
# Randomize all but first n layers
session = K.get_session()
layers = model.layers
for i in range(len(layers)):
layer = layers[i]
if hasattr(layer, 'kernel_initializer'):
# freeze layer
if i < copyFirstNLayers:
if freeze:
print("- {}: Freezing layer {}".format(i+1,layer))
layer.trainable = False
# randomize layer
else:
print('- {}: Resetting layer {}'.format(i+1,layer))
layer.kernel.initializer.run(session=session)
else:
print('- {}: Skipping layer {}'.format(i+1,layer))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
return model
# # %run -i 'arena.py'
model, nnStr = createModel()
for layer in model.layers:
print(layer, layer.trainable)
compareResultsDuringTraining = False
plotDuringTraining = False
saveTensorboardLogs = False
epochs = 1
model = loadNFirstLayers(model, sourceNet, 9 , freeze)
print(model.summary())
for layer in model.layers:
print(layer, layer.trainable)
score = calcScore(model)
fitHistory, logDir = trainModel(resID, model, saveWeightsCheckpoints, saveTensorboardLogs)
score = calcScore(model)
# +
# %run -i 'arena.py'
for copyFirstNLayers in range(startTrainingAtLayer, layersCount):
print('\n\n')
print('==========================================================================================')
print('= =')
print('= Currently transfering first {} layers, out of {} ='.format(copyFirstNLayers, layersCount - 1))
print('= =')
print('==========================================================================================')
print()
if copyFirstNLayers == layersCount - 1:
copyFirstNLayers += 1
accumulatedScore = 0
for a in range(averageOver):
# save current averagePosition to tmp file
with open(saveDir + '/' + str(resSaveFile) + '_currentPosition.txt','w') as file:
file.write('Currently at transferedLayers = {} out of {} \nInner avg loop position: {} out of {}'.format(copyFirstNLayers, layersCount-1, a+1, averageOver))
# load Model layers
model = loadNFirstLayers(model, sourceNet, copyFirstNLayers , freeze)
# Prepare save dir
if saveEveryRun:
resID = genNextResultsDir(model)
# train
fitHistory, logDir = trainModel(resID, model, saveWeightsCheckpoints, saveTensorboardLogs)
# score and save results
score = calcScore(model)
if saveEveryRun:
saveTrainResults(resID, model, logDir, score, copyFirstNLayers)
# update Return
accumulatedScore += score[1]
# append averaged results for one set of layers
results.append(accumulatedScore/averageOver)
# save old results to checkpoints dir
dateTime = time.strftime('%Y-%m-%d-%H:%M:%S', time.localtime())
src = saveDir + '/' + str(resSaveFile) + '.txt'
dest = saveDir + '/checkpoints/' + str(resSaveFile) + dateTime + '.txt'
if os.path.exists(src):
shutil.move(src, dest)
# save results
save_obj(saveDir, resSaveFile, results)
with open(saveDir + '/' + str(resSaveFile) + '.txt','w') as file:
file.write(str(results))
# to load:
# results = load_obj('temp','3n4.txt')
print('\n Final Results: {}'.format(results))
| mainCode/1.old/testZone2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hypothyroid prediction using Light GBM classifier
import pandas as pd
import numpy as np
# +
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
file_handler = open("F:\\Thyroid final\\hypothyroid.csv", "r")
df = pd.read_csv(file_handler, sep = ",")
file_handler.close()
# -
df.loc[df['Age'] == '455', 'Age'] = '45'  # fix an obvious data-entry error
df.dropna(inplace=True)
# '?' marks missing values in this dataset; map it to NaN so dropna can remove it
df.replace(to_replace='?', value=np.nan, inplace=True)
df.dropna(inplace=True)
df = df.replace(to_replace={'f': 0, 't': 1, 'y': 1, 'n': 0, 'M': 0, 'F': 1})
df.dropna(inplace=True)
from sklearn.preprocessing import LabelEncoder
lb_make = LabelEncoder()
df["class"] = lb_make.fit_transform(df["class"])
df.head(5)
df.describe()
import seaborn as sns
sns.countplot(x="class",data=df)
# +
x=df.drop('class',axis=1)
y=df["class"]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y,test_size=0.4,random_state=1)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# -
import lightgbm as ltb
model = ltb.LGBMClassifier(boosting_type='dart', max_depth=10, num_leaves=90, learning_rate=0.199,
objective="cross_entropy", extra_trees=True, tree_learner="data", metric="binary_logloss")
model.fit(x_train, y_train)
from sklearn import metrics
expected_y = y_test
y_pred = model.predict(x_test)
# summarize the fit of the model
print(); print(metrics.classification_report(expected_y,y_pred))
print(); print(metrics.confusion_matrix(expected_y, y_pred))
print("Accuracy: ",metrics.accuracy_score(expected_y, y_pred))
# calculate accuracy
from sklearn import metrics
print("ACCURACY:")
print(metrics.accuracy_score(y_test, y_pred))
y_test.value_counts()
# calculate the percentage of ones
# because y_test only contains ones and zeros, we can simply calculate the mean = percentage of ones
print("Percentage of ones")
y_test.mean()
print("Percentage of zeros")
1 - y_test.mean()
# calculate null accuracy in a single line of code
# only for binary classification problems coded as 0/1
max(y_test.mean(), 1 - y_test.mean())
confusion = metrics.confusion_matrix(y_test, y_pred)
print(confusion)
#[row, column]
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
print("classification_error")
print(1 - metrics.accuracy_score(y_test, y_pred))
print("sensitivity")
print(metrics.recall_score(y_test, y_pred))
# +
print("Specificity (True Negative Rate)")
specificity = TN / (TN + FP)
print(specificity)
# -
false_positive_rate = FP / float(TN + FP)
print("false_positive_rate")
print(false_positive_rate)
print("precision")
print(metrics.precision_score(y_test, y_pred))
print("roc_auc_score")
print(metrics.roc_auc_score(y_test, y_pred))
# +
import matplotlib.pyplot as plt
# use predicted probabilities rather than hard labels so the ROC curve has more than one threshold
y_pred_prob = model.predict_proba(x_test)[:, 1]
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.rcParams['font.size'] = 12
plt.title('ROC curve Using Light GBM(25 attributes)')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
# -
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
cv_results = cross_val_score(model, x_train, y_train, cv=kfold, scoring='accuracy')
cv_results
import seaborn as sns
sns.countplot(x="class",data=df)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, model.predict(x_test))
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(cm)
ax.grid(False)
ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted Hypothyroid', 'Predicted Negative'))
ax.yaxis.set(ticks=(0, 1), ticklabels=('Actual Hypothyroid', 'Actual Negative'))
ax.set_ylim(1.5, -0.5)
for i in range(2):
for j in range(2):
ax.text(j, i, cm[i, j], ha='center', va='center', color='red')
plt.title("Confusion matrix using Light GBM (25 attributes)")
plt.show()
from sklearn.metrics import f1_score
score = f1_score(y_test, y_pred, average='binary')
print('F-Measure: %.3f' % score)
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(16,16))
sns.heatmap(df.iloc[:,0:].corr(),annot=True)
| Source Code/Light GBM classifier(25 features).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# -
import earth_model
# ## PREM and seismic wave velocities
#
# P- and S-wave velocities are parameterised in PREM in a similar way to density. For example
# $V_P$ can be written:
#
# $$
# V_P(r) = \left\{
# \begin{array}{ll}
# p_{0,0} + p_{0,1}r + p_{0,2}r^2 + p_{0,3}r^3 & r\leq 1221.5 \; \mathrm{km} \\
# p_{1,0} + p_{1,1}r + p_{1,2}r^2 + p_{1,3}r^3 & 1221.5\leq r\leq 3480.0 \; \mathrm{km}\\
# \vdots & \vdots \\
# p_{12,0} + p_{12,1}r + p_{12,2}r^2 + p_{12,3}r^3 & 6368.0\leq r\leq 6371.0 \; \mathrm{km} \\
# \end{array}
# \right.
# $$
#
# with $V_S$ written in the same way. However, PREM is both anisotropic (in the upper mantle, for
# $6151.0\leq r\leq 6346.6 \; \mathrm{km}$) and anelastic (at all depths). We will ignore the anisotropy
# (although adding that may be an interesting thing to do) but we do need to consider anelasticity.
#
# A single value is applied to the bulk, $Q_{\kappa}$ and shear, $Q_{\mu}$, quality factor for each layer. This
# is the inverse of the dissipation (e.g. $q_{\kappa} = Q^{-1}_{\kappa}$) and can be used to calculate the
# seismic velocities at periods other than 1 s (which is the reference period used in PREM). For a period $T$,
# velocities are given by:
#
# $$V_S(r,T) = V_S(r,1)\left(1-\frac{\ln T}{\pi} q_{\mu}(r)\right)$$
#
# and
#
# $$V_P(r,T) = V_P(r,1)\left(1-\frac{\ln T}{\pi}\left[
# \left(1 - E\right)q_{\kappa}(r)
# + Eq_{\mu}(r)\right]\right)$$
#
# where $E = \frac{4}{3}\left(\frac{V_S(r,1)}{V_P(r,1)}\right)^2$
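# As a standalone sketch of the dispersion correction above (independent of the
# `earth_model` module used below; the lower-mantle values in the example are
# only illustrative):

```python
import numpy as np

def correct_velocities(vp_1s, vs_1s, q_kappa, q_mu, period):
    """Shift 1 s reference velocities (km/s) to another period (s) using the
    PREM anelastic correction, with dissipation q = 1/Q for each quality factor."""
    ln_t_over_pi = np.log(period) / np.pi
    vs = vs_1s * (1.0 - ln_t_over_pi / q_mu)
    e = (4.0 / 3.0) * (vs_1s / vp_1s) ** 2
    vp = vp_1s * (1.0 - ln_t_over_pi * ((1.0 - e) / q_kappa + e / q_mu))
    return vp, vs

# Illustrative lower-mantle values: Vp = 13.7, Vs = 7.2 km/s, Q_kappa = 57823, Q_mu = 312
vp, vs = correct_velocities(13.7, 7.2, 57823.0, 312.0, period=200.0)
print(vp, vs)  # both slightly slower than the 1 s reference values
```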
# +
# This implements the PREM seismic velocity model using the polynomial coefficients below
r_earth = 6371 # km
# Note use of the isotropic approximation from the footnote in the paper
vp_params = np.array([[11.2622, 0.0000, -6.3640, 0.0000],
[11.0487, -4.0362, 4.8023, -13.5732],
[15.3891, -5.3181, 5.5242, -2.5514],
[24.9520, -40.4673, 51.4832, -26.6419],
[29.2766, -23.6027, 5.5242, -2.5514],
[19.0957, -9.8672, 0.0000, 0.0000],
[39.7027, -32.6166, 0.0000, 0.0000],
[20.3926, -12.2569, 0.0000, 0.0000],
[ 4.1875, 3.9382, 0.0000, 0.0000],
[ 4.1875, 3.9382, 0.0000, 0.0000],
[ 6.8000, 0.0000, 0.0000, 0.0000],
[ 5.8000, 0.0000, 0.0000, 0.0000]])
vs_params = np.array([[ 3.6678, 0.0000, -4.4475, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000],
[ 6.9254, 1.4672, -2.0834, 0.9783],
[11.1671, -13.7818, 17.4575, -9.2777],
[22.3459, -17.2473, -2.0834, 0.9783],
[ 9.9839, -4.9324, 0.0000, 0.0000],
[22.3512, -18.5856, 0.0000, 0.0000],
[ 8.9496, -4.4597, 0.0000, 0.0000],
[ 2.1519, 2.3481, 0.0000, 0.0000],
[ 2.1519, 2.3481, 0.0000, 0.0000],
[ 3.9000, 0.0000, 0.0000, 0.0000],
[ 3.2000, 0.0000, 0.0000, 0.0000]])
q_kappa_params = np.array([1327.7, 57823.0, 57823.0, 57823.0, 57823.0,
57823.0, 57823.0, 57823.0, 57823.0, 57823.0,
57823.0, 57823.0])
q_mu_params = np.array([84.6, np.inf, 312.0, 312.0, 312.0, 143.0, 143.0,
143.0, 80.0, 600.0, 600.0, 600.0])
# The 13 breakpoints (layer boundaries) of PREM, in km.
breakpoints = np.array([0.0, 1221.5, 3480.0, 3630.0, 5600.0, 5701.0, 5771.0,
5971.0, 6151.0, 6291.0, 6346.6, 6356.0, 6371.0])
# Turn range of polynomials from 0 - 1 to 0 - r_earth
vp_params[:,1] = vp_params[:,1] / r_earth
vp_params[:,2] = vp_params[:,2] / (r_earth**2)
vp_params[:,3] = vp_params[:,3] / (r_earth**3)
# Turn range of polynomials from 0 - 1 to 0 - r_earth
vs_params[:,1] = vs_params[:,1] / r_earth
vs_params[:,2] = vs_params[:,2] / (r_earth**2)
vs_params[:,3] = vs_params[:,3] / (r_earth**3)
prem = earth_model.Prem(breakpoints=breakpoints, r_earth=r_earth, vp_params=vp_params,
vs_params=vs_params, q_mu_params=q_mu_params,
q_kappa_params=q_kappa_params)
# +
# What does it look like?
fig, ax = plt.subplots(figsize=(10,6))
rs = np.arange(0, 6371, 0.5)
ax.plot(rs, prem.vp(rs), 'b', label='Vp')
ax.plot(rs, prem.vs(rs), 'r', label='Vs')
ax.set_xlabel('Radius (km)')
ax.set_ylabel('Seismic velocity (km/s)')
ax.legend()
ax.axvline(1221.5, ls=':', c='k')
ax.axvline(3480, ls='--', c='k')
ax.axvline(3630, ls=':', c='k')
ax.axvline(5701, ls=':', c='k')
ax.axvline(5971, ls=':', c='k')
secax = ax.secondary_xaxis('top', functions=(lambda x: 6371 - x, lambda x: 6371 - x))
secax.set_xlabel('Depth (km)')
plt.show()
# -
print(prem.vs(1000.0))
print(prem.vs(1000.0, t=10.0))
print(prem.vs(1000.0, t=1000.0))
print(prem.vs(1000.0, t=60*60*24))
print(prem.vp(1000.0))
print(prem.vp(1000.0, t=10.0))
print(prem.vp(1000.0, t=1000.0))
print(prem.vp(1000.0, t=60*60*24))
# +
# What does it look like?
fig, ax = plt.subplots(figsize=(10,6))
rs = np.arange(5500, 6300, 0.5)
ax.plot(rs, prem.vs(rs), 'r', label='T = 2 s')
ax.plot(rs, prem.vs(rs, t=20.0), 'r-.', label='T = 20 s')
ax.plot(rs, prem.vs(rs, t=200.0), 'r--', label='T = 200 s')
ax.plot(rs, prem.vs(rs, t=2000.0), 'r:', label='T = 2000 s')
ax.set_xlabel('Radius (km)')
ax.set_ylabel('Vs (km/s)')
ax.legend()
ax.axvline(5701, ls=':', c='k')
ax.axvline(5971, ls=':', c='k')
ax.annotate('LVZ', (6200, 4.5))
secax = ax.secondary_xaxis('top', functions=(lambda x: 6371 - x, lambda x: 6371 - x))
secax.set_xlabel('Depth (km)')
plt.show()
# -
# ## Bulk and shear modulus
#
# (I think I may have a slight error in the parameters; the bulk modulus at r=0
# is a bit off compared to the paper.)
#
# $$ Vp = \sqrt{\frac{\kappa + \frac{4}{3} \mu}{\rho}} $$
#
# and
#
# $$ Vs = \sqrt{\frac{\mu}{\rho}} $$
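# A minimal standalone sketch of inverting these relations for the moduli
# (with density in g/cm^3 and velocities in km/s the result comes out in GPa).
# The numbers below are the r = 0 polynomial constants from PREM
# (rho = 13.0885, Vp = 11.2622, Vs = 3.6678):

```python
def moduli_from_velocities(vp, vs, rho):
    """mu = rho * Vs^2 and kappa = rho * Vp^2 - (4/3) * mu."""
    mu = rho * vs ** 2
    kappa = rho * vp ** 2 - (4.0 / 3.0) * mu
    return kappa, mu

kappa, mu = moduli_from_velocities(11.2622, 3.6678, 13.0885)
print('mu =', mu, 'GPa')       # ~176 GPa
print('kappa =', kappa, 'GPa')  # ~1425 GPa
```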
print('mu = ', prem.shear_modulus(0), 'GPa')
print('kappa = ', prem.bulk_modulus(0), 'GPa')
# +
# What does it look like?
fig, ax = plt.subplots(figsize=(10,6))
rs = np.arange(0, 6371, 0.5)
ax.plot(rs, prem.bulk_modulus(rs), 'c', label=r'$\kappa$')
ax.plot(rs, prem.shear_modulus(rs), 'm', label=r'$\mu$')
ax.set_xlabel('Radius (km)')
ax.set_ylabel('Modulus (GPa)')
ax.legend()
ax.axvline(1221.5, ls=':', c='k')
ax.axvline(3480, ls='--', c='k')
ax.axvline(3630, ls=':', c='k')
ax.axvline(5701, ls=':', c='k')
ax.axvline(5971, ls=':', c='k')
secax = ax.secondary_xaxis('top', functions=(lambda x: 6371 - x, lambda x: 6371 - x))
secax.set_xlabel('Depth (km)')
plt.show()
# -
| PREM_velocity_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
# %matplotlib inline
catalog = pd.read_csv('catalogTest.csv')
# -
print(catalog.columns)
catalog.head(4)
histIncl = catalog.hist(column='inclination',bins=100)
histEcc = catalog.hist(column='eccentricity',bins=100)
histMM = catalog.hist(column='meanmotion',bins=100)
# Create a column for orbital period and one for revs/day
# Mean motion is the fraction of a radian traveled in 1 minute
# Revs per day is 1440*mean_motion/(2*pi)
# Orbital period is 1440/revs_per_day
minutesPerDay = 1440.
catalog['revsPerDay'] = catalog['meanmotion']*minutesPerDay/(2*np.pi)
catalog['period'] = minutesPerDay/catalog['revsPerDay']
histPeriod = catalog.hist(column='period',bins=100)
histRevsPerDay = catalog.hist(column='revsPerDay',bins=100)
# Plot the inclination and eccentricity of objects with ~1 rev/day
geosync = catalog[catalog['revsPerDay'] < 1.5]
histGeoSync = geosync.hist(column='revsPerDay',bins=100)
axGeo = geosync.plot.scatter(x='inclination',y='eccentricity')
geoHist, xedges, yedges = np.histogram2d(geosync['inclination'],geosync['eccentricity'])
print(geoHist)
geohist2 = plt.hist2d(geosync['inclination'],geosync['eccentricity'],bins=100,cmin=1)
# Get the objects in a polar orbit
polar = catalog[catalog['inclination'] > 1.6]
polarhist2 = plt.hist2d(polar['revsPerDay'],polar['eccentricity'],bins=100,cmin=1)
| Python/visualizeCatalog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ypR4NNY7oyEV" colab_type="text"
# #### We need to install the ktrain library. It's a lightweight wrapper for Keras that helps train neural networks. With only a few lines of code it allows you to build models, estimate an optimal learning rate, and load and preprocess text and image data from various sources. More about our approach can be found in [this](https://towardsdatascience.com/bert-text-classification-in-3-lines-of-code-using-keras-264db7e7a358) article.
# + id="58WB13Jx3rQm" colab_type="code" outputId="847a6286-f36b-4b32-a791-5ab6d625c701" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !pip3 install ktrain==0.2.2
# + id="KN6N85ah8VXf" colab_type="code" outputId="6e97b40b-ea0d-4eb1-9562-8862c525c0f3" colab={"base_uri": "https://localhost:8080/", "height": 82}
#Importing
import ktrain
from ktrain import text
# + id="Mr1YXudk8Vti" colab_type="code" outputId="8b08004e-329e-4eb8-a21e-e4dae409fc3e" colab={"base_uri": "https://localhost:8080/", "height": 52}
#obtain the dataset
import tensorflow as tf
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True,
)
# + id="2x46reXu9Kru" colab_type="code" outputId="f1ca7357-d740-492d-e482-76c8901eb04f" colab={"base_uri": "https://localhost:8080/", "height": 52}
# %cd /root/.keras/datasets/aclImdb
# !ls
# + id="qnXQ-lcL8d6O" colab_type="code" outputId="4a790d12-399d-415f-d643-75e55038ea69" colab={"base_uri": "https://localhost:8080/", "height": 35}
# set path to dataset
import os.path
dataset = '/root/.keras/datasets/aclImdb'
IMDB_DATADIR = os.path.join(os.path.dirname(dataset), 'aclImdb')
print(IMDB_DATADIR)
# + [markdown] id="ugopbOABrmne" colab_type="text"
# ### STEP 1: Preprocessing
# #### The texts_from_folder function will load the training and validation data from the specified folder and automatically preprocess it according to BERT's requirements. In doing so, the BERT model and vocabulary will be automatically downloaded.
# + id="jELdxonN9J8v" colab_type="code" outputId="d32f97a7-e69f-465b-d1e5-bfef9ee204f5" colab={"base_uri": "https://localhost:8080/", "height": 228}
(x_train, y_train), (x_test, y_test), preproc = text.texts_from_folder(IMDB_DATADIR,
maxlen=500,
preprocess_mode='bert',
train_test_names=['train',
'test'],
classes=['pos', 'neg'])
# + [markdown] id="a0SIaqHcslLZ" colab_type="text"
# ### STEP 2: Loading a pre trained BERT and wrapping it in a ktrain.learner object
# + id="90ftQ6MgAJy4" colab_type="code" outputId="a1c715b8-5d54-4405-c5e9-b7bcb042a131" colab={"base_uri": "https://localhost:8080/", "height": 606}
model = text.text_classifier('bert', (x_train, y_train), preproc=preproc)
learner = ktrain.get_learner(model,train_data=(x_train, y_train), val_data=(x_test, y_test), batch_size=6)
# + [markdown] id="nN6zWQgys0c_" colab_type="text"
# ### STEP 3: Training and Tuning the model's parameters
# + id="Fxdw88YjAfvF" colab_type="code" outputId="663b6e29-8bd0-4fed-cbf6-c8cc361b5244" colab={"base_uri": "https://localhost:8080/", "height": 392}
learner.fit_onecycle(2e-5, 4)
# + id="ihOn7ztsAnaL" colab_type="code" colab={}
# + id="mPVhsfj3TwHf" colab_type="code" colab={}
| Ch4/07_BERT_Sentiment_Classification_IMDB_ktrain.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# To open this notebook in Google Colab and start coding, click on the Colab icon below.
#
# <table style="border:2px solid orange" align="left">
# <td style="border:2px solid orange ">
# <a target="_blank" href="https://colab.research.google.com/github/neuefische/ds-welcome-package/blob/main/programming/1_Python_Variables_Types.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# </table>
# + [markdown] id="g01JrxKRM2vL"
# ---
# # Intro to Python
# + [markdown] id="g01JrxKRM2vL"
# Welcome to your first notebook on Python!
#
# In this notebook you will start your journey of becoming a Pythonista and Data Scientist. If you're completely new to programming, this is the right place to start. But even if you are already familiar with Python or some of its concepts, it will be a good opportunity to refresh your knowledge.
#
# At the end of the notebook you will...
# * know how to use Python as a calculator.
# * be familiar with Python's concept of variables.
# * have an overview of Python's data types.
# + [markdown] id="IJcGK6R8M2vM"
# ## A brief little introduction...
#
# Python was developed in the late 1980s by <NAME>. It is an interpreted, high-level, general-purpose programming language. In comparison to other programming languages, Python strives for a simpler, less-cluttered syntax and grammar. Nevertheless, it is one of the (if not the) most preferred and widely used languages when it comes to Data Science. Its versatility and flexibility, combined with its easy-to-learn, clean syntax, make it a perfect language for programming beginners.
#
# Enough said, let's directly dive into the first lesson.
# + [markdown] id="Uz23ukSCM2vR"
# ## Numeric Operations
#
# At its base level, Python is really just an awesome calculator that can do way more stuff than addition and subtraction. But, let's focus on that functionality for now.
#
# All of the simple operations that you think should be available are available. Addition, subtraction, multiplication, division and exponentiation are all accessible via + , - , * , / and ** , respectively.
# + colab={"base_uri": "https://localhost:8080/"} id="iE3ERx9pNhr-" outputId="48e68e5b-a4ee-4c42-f477-9d372e92ad1a"
7 + 8
# + colab={"base_uri": "https://localhost:8080/"} id="G_dY7ba6NoMm" outputId="c4221237-1567-4ffd-aa32-1d92b375706b"
7 - 8
# + colab={"base_uri": "https://localhost:8080/"} id="kuYct8w5NpbK" outputId="d9db79bd-fb76-4ee9-a036-7e7ec13ac239"
7 * 8
# + colab={"base_uri": "https://localhost:8080/"} id="aqEwvZ4CNqfp" outputId="beb6f535-6d2b-48ef-f254-5a155012b9b8"
7 / 8
# + colab={"base_uri": "https://localhost:8080/"} id="_5dDFIvZNr6g" outputId="dc1c6668-5b54-42c7-bbb3-6a95ae41bad1"
7 ** 8
# + [markdown] id="sB76f_UrOJOH"
# Perfect. All of these operations output exactly what we think they would.
#
# Besides those simple operators, which you can also find on any calculator, Python offers two more arithmetic operators that might be new to you: // and %.
# The double slash // is called floor division. All it does is perform division and truncate the result. So where 7 / 8 gave us 0.875, 7 // 8 cuts off everything after the decimal point, giving us 0.
# + colab={"base_uri": "https://localhost:8080/"} id="epd5O0jyObjA" outputId="fb075e22-f038-4164-b282-cb483c3a642a"
7 // 8
# + [markdown] id="urGZkhhZOSit"
# The last operation that we will go over is the modular division operator, % (also called modulo). This operation is the sibling of //. As you can see in the following example, // gives us the integer number of times that 7 goes into 71. But there is still a remainder of one. The way we get the remainder of integer division is with the % operator.
# + colab={"base_uri": "https://localhost:8080/"} id="MAAE_-HWOfm7" outputId="2618124c-e429-4228-b1ac-f14a487a9b49"
71 // 7 # floor division
# + colab={"base_uri": "https://localhost:8080/"} id="5u_Tn8DbOmKX" outputId="e16ec356-972a-4c73-fa39-2f1b6d0d2604"
71 % 7 # modulo
# + [markdown] id="Odr77fECOt0J"
# You can change the order in which operations get performed by using parentheses (), just as you can in algebra.
# + colab={"base_uri": "https://localhost:8080/"} id="ANgCbnEcOutU" outputId="7eedb534-5e12-42a8-8dbb-59006413953d"
4 + 5 * 3
# + colab={"base_uri": "https://localhost:8080/"} id="FSJz2eBCOwZR" outputId="ae99859a-983d-4e7f-dff4-0a4180e2be47"
(4 + 5) * 3
# + [markdown] id="oLsL77UovgvS"
# At this point we also want to introduce some simple functions which you will encounter during your daily work as a Data Scientist. (Don't worry if you don't know what a function is. We will cover this in another notebook.)
#
# `abs()`, `min()`, `max()` and `round()` are useful little functions which will make your life a lot easier. As their names already imply, `abs()` will return the absolute value of a number, `min()` and `max()` will return the minimum and maximum of a bunch of numbers, respectively, and `round()` will round a number to a given number of decimal places.
# + colab={"base_uri": "https://localhost:8080/"} id="32LPzJNUvu__" outputId="e615a593-ff59-4b0a-e345-529283844afd"
abs(3.14)
# + colab={"base_uri": "https://localhost:8080/"} id="w27ravqvugoq" outputId="9b756bb1-8d96-4d84-afb5-0ee79e1781fc"
abs(-3.41)
# + colab={"base_uri": "https://localhost:8080/"} id="R0-wIjfM_J7T" outputId="7dc624f8-852d-4e7f-d366-099f0255ad31"
min(3, 5, 1, 6, 8, 9)
# + colab={"base_uri": "https://localhost:8080/"} id="FqBtOsgv_M5-" outputId="54d87491-4106-4d6c-e907-bd1f0567798e"
max(3, 5, 1, 6, 8, 9)
# + colab={"base_uri": "https://localhost:8080/"} id="XRhuZMJj-9vt" outputId="9fba2a6c-50e6-45f9-aa5f-21a4fba091d5"
round(5.23412, 2) # you can specify the number of decimal places to keep
# + [markdown] id="L4n2RJ7aO1OC"
# **Questions:**
#
# What do you think the results of the following computations will be?
#
# 1. 8.0 - 7
# 2. 8 * 0.25
# 3. 5 ** 2
# 4. 17 // 5
# 5. (4 + 5) / 6
# 6. (46 / 8) % 2
# 7. abs(-7.9)
# 8. round(6.293764, 3)
# + [markdown] id="r5zfHBWDF50c"
# <details><summary>
# Click here for the answers.
# </summary>
#
# 1. 1.0
# 2. 2.0
# 3. 25
# 4. 3
# 5. 1.5
# 6. 1.75
# 7. 7.9
# 8. 6.294
#
# </details>
# + [markdown] id="8FsAtR0tQrhI"
# ## Variables
# + [markdown] id="Xi9hR-sgRPji"
# One of the most powerful constructs in programming is the ability to store arbitrary values in what we call variables. You can think of variable assignment as giving a name to something so that it can be accessed later by different parts of your program.
#
# In Python, variable assignment occurs with the `=` operator. To assign a value to a variable name (i.e. declare it), you simply put the variable name on the left side of the = and the value you want to associate with that name on the right side. Once this has happened, you can access the value simply by using the variable's name somewhere later in your code.
# + id="y2ImxFNdQmU8"
x = 5
y = 2
# + colab={"base_uri": "https://localhost:8080/"} id="af-trpuORRP-" outputId="e48ec73a-2053-42c0-97ef-f428f065db15"
x
# + colab={"base_uri": "https://localhost:8080/"} id="z63TMHsVRRiT" outputId="8ca9c591-9688-47f2-acd6-f235da4cc1af"
x - 4
# + colab={"base_uri": "https://localhost:8080/"} id="fqjPyjYvRRvX" outputId="25b007e3-6eef-4d2b-e344-013806b1d5ad"
x + y
# + [markdown] id="uPdw_CEZRuz4"
# A variable name can technically be almost any combination of letters, digits, and underscores (it may not start with a digit), but there are some conventions followed in Python and in programming in general. Python follows a variable naming convention called snake case: simply use a _ anywhere you would use a space, and make sure every word is lower case, for example `this_is_a_variable`. Giving variables good names makes your code more readable and therefore maintainable. There is a big difference between seeing a variable called `degrees` and one called `y`. You should strive to give your variables well-defined, succinct names.
#
# There are of course cases where using a less-than-descriptive variable name follows convention and is, therefore, just fine. A common example is the use of `i` to keep track of an index. Because of its prevalent use for indexing, it is usually easy to understand what is happening in that context when all you see is the variable name `i`. Here, the lack of descriptiveness is okay. The important thing is that the code is **understandable**.
#
# Note that we saw no output from either x = 5 or y = 2 above. This is because the value was bound to the variable instead of being echoed as output, which is why we had to view x and y on the following lines.
#
# A large part of variables' power is the fact that they can change (vary, if you will). This allows us to use a single variable name to keep track of a specific thing throughout the life of a program. Remember how we assigned the value 5 to x above? The exact same syntax can be used to change the value stored in the variable.
#
# Say we want to make the value of x five more than it currently is. All we need to do is have x be assigned the value that results from adding 5 to x.
# + colab={"base_uri": "https://localhost:8080/"} id="uN1KtyetSJ82" outputId="b6cd096b-3af9-4b4e-9a7a-a0aebb3b3ac1"
x = x + 5
x
# + [markdown] id="CRpsOOxkSmOK"
# Notice how the first line above is formatted. Python knows that the = means variable assignment, so when it sees the first line it evaluates the right side of the equals and then puts that value in x, even though x is part of the calculation on the right side. x is now connected with this new value and the old value is gone.
#
# Changing variables in this way occurs so commonly that there is a built-in shorthand for it. The result of the first line could have been achieved with x += 5. This syntactic sugar is available for all the simple operations +, -, *, /, //, **, and % that we covered earlier.
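The shorthand operators can be sketched in a few lines (the starting value 10 is just for illustration):

```python
x = 10
x += 5    # same as x = x + 5   -> 15
x -= 3    # same as x = x - 3   -> 12
x *= 2    # same as x = x * 2   -> 24
x /= 4    # same as x = x / 4   -> 6.0 (true division always gives a float)
x //= 2   # same as x = x // 2  -> 3.0
x **= 2   # same as x = x ** 2  -> 9.0
x %= 7    # same as x = x % 7   -> 2.0
print(x)
```

Note that as soon as `/=` is used, the value becomes a float and stays one through the remaining operations.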
# + [markdown] id="8FcX32_fSueI"
# **Questions:**
#
# Consider the following code:
# ```
# x = 5
# y = 8
# x += y
# y = x - 3
# x -= y - 5
# ```
#
# What are the values of x and y after each line?
# + [markdown] id="ZvlWqlKiTBar"
# <details><summary>
# Click here to see the values for x and y after the last line.
# </summary>
#
# x = 8,
# y = 10
#
# </details>
# + [markdown] id="yXYlR6fNaXBT"
# ## Types
#
# You successfully used Python as a simple calculator and learned how to use variables! The last topic we will cover in this notebook is data types.
#
# Data types are an important concept in programming. Variables can store data of different types, and different types support different operations.
#
# We will cover some of Python's built in data types in the following section.
#
# > Maybe you have heard or read that an important characteristic of Python is that it is a **duck typed** language. What does this mean? The name comes from the classic "If it walks like a duck, and quacks like a duck, then it must be a duck" adage. In practice it means that objects are judged by their behavior rather than by a declared type: you never declare a variable's type in Python, and any object that supports the required methods and operations can be used, no matter what type it actually is.
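A minimal sketch of the duck-typing idea: the function below never checks any types, it only relies on its argument having a `quack()` method (the class and function names here are made up for illustration).

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking like a duck!"

def make_it_quack(thing):
    # No type check at all: anything with a .quack() method will do.
    return thing.quack()

print(make_it_quack(Duck()))
print(make_it_quack(Person()))
```

A `Person` is not a `Duck`, but since it quacks like one, `make_it_quack` happily accepts it.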
# + [markdown] id="IPi_Wa9hM2vM"
# ### Numeric types
#
# Python has a lot of different data types. You've already used some of the base numeric types, which are built into Python. All of these represent a very simple idea: numbers. Numbers can be either `ints`, short for integers, `floats`, short for floating point/decimal numbers, or `complex`, which contain real and imaginary parts stored as floats.
# + colab={"base_uri": "https://localhost:8080/"} id="Fxd_uzIBM2vN" outputId="8b5da531-d5ee-45b1-a862-0793904749ff"
7 # int
# + colab={"base_uri": "https://localhost:8080/"} id="9TPXYuXZM2vO" outputId="f2f497a7-3410-4d2b-e452-5d491ddfa076"
3.5 # float
# + colab={"base_uri": "https://localhost:8080/"} id="JEh5Ee-1M2vO" outputId="b1fbdacc-9e57-4519-c7cb-3f199d2077b8"
complex(3, 6) # complex
# + [markdown] id="yGgfhspsd44a"
# To inspect what type Python thinks a number (or anything else) is, you can pass it to the type() function. Let's see what we get when we pass numbers of various types to this function.
# + colab={"base_uri": "https://localhost:8080/"} id="pOndrc6-M2vP" outputId="0ef6c166-68ac-4728-dfc7-7628594cd906"
type(7)
# + colab={"base_uri": "https://localhost:8080/"} id="MUNM_MS9M2vP" outputId="c29675ec-3102-4146-e590-ce1a368182b9"
type(3.5)
# + colab={"base_uri": "https://localhost:8080/"} id="YcXj8lzIM2vQ" outputId="c97a7788-fec1-4348-81e0-8f15bbb346ba"
type(complex(3, 6))
# + [markdown] id="9zL2XNDDM2vQ"
# As you can see, Python assumes that a number with no decimal point is an `int`, that one with a decimal point is a `float`, and (surprise!) that anything from the `complex()` constructor is `complex`.
#
# Frequently, these subtle differences won't matter much. However, there will be times when such an implementation detail makes you think something will work when it really won't. Knowing how to check the type of something will help you solve these problems.
# + [markdown] id="lYACCkuAd_U-"
# ### String
#
# Besides numeric types, Python has a special type for text. This data type, called `string` or `str` for short, represents sequences of characters. A string can contain as many characters as you want. While other programming languages have a special type called `char` for single characters, in Python single characters are also just strings. Strings can also be empty or contain digits.
# To make it clear for Python that something is a string you can use double `" "` or single `' '` quotes. All characters between an opening delimiter and a matching closing delimiter are part of the string.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="ir42IRTJgX3M" outputId="a65610d5-c845-4fb5-99a3-d7f0eed3d3f7"
"Welcome at neuefische!"
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="MoZ-rybkgcYN" outputId="798c32aa-0013-44ff-cf4a-85b97ead0273"
'We are looking forward to meeting you all soon :)'
# + colab={"base_uri": "https://localhost:8080/"} id="GgLacSa6gQaQ" outputId="9840f36f-31fb-48f0-fe00-498b6c63ed64"
type("We hope you are also excited.")
# + colab={"base_uri": "https://localhost:8080/"} id="DCCI7lNl5k_1" outputId="b361b6c0-f68b-451f-b0c9-dda4ef443711"
type('')
# + colab={"base_uri": "https://localhost:8080/"} id="-oqSpun2gwz_" outputId="6f9e20d0-eb1c-4066-a9e0-c5762f15f4e5"
type("A")
# + colab={"base_uri": "https://localhost:8080/"} id="hn-h1i-KgsIh" outputId="17ab5873-8683-4d3f-b1d6-68ed3744d17e"
type("8128")
# -
# You can also use arithmetic operators like `+` or `*` on strings: `+` concatenates them and `*` repeats them. There is a lot more you can do with strings; we could fill a whole notebook with string formatting and manipulation alone. But no worries, you will learn a lot about strings as you use them.
"neue" + "fische"
"Bam" * 3
# + [markdown] id="yyJ8OZ2U5z2h"
# ### Bool
#
#
# Python has a type of variable called `bool`. It has two possible values: `True` and `False`.
#
# > Note that the boolean values `True` and `False` begin with capital letters. Python is a case sensitive language. That means True and true are not the same!
# + colab={"base_uri": "https://localhost:8080/"} id="qwda3_Vb56Wp" outputId="b8fa1e08-1580-4064-ea2f-b33c3d40cf04"
True
# + colab={"base_uri": "https://localhost:8080/"} id="zThO9Cmd58Kn" outputId="0d3e67e2-9169-4561-e153-64de4b2ea752"
False
# + colab={"base_uri": "https://localhost:8080/"} id="OUwADLGQ58D9" outputId="2374b159-c2d5-4958-bed0-a6dd2b77384a"
type(True)
# + colab={"base_uri": "https://localhost:8080/"} id="hwmCWX6S5780" outputId="227b9425-2a36-4669-d99d-b8679379e839"
type(False)
# + [markdown] id="_ZzBYIPE6w55"
# Instead of putting `True` or `False` directly into our code, we usually get these boolean values from comparison operations, which evaluate to either `True` or `False`. The following table gives an overview of the comparison operations and the operators used in Python.
#
# | Operator | Operation | Description |
# |----|-----------|-------------|
# | == | a == b | a equal to b |
# | < | a < b | a less than b |
# | <= | a <= b | a less than or equal to b |
# | > | a > b | a greater than b |
# | >= | a >= b | a greater than or equal to b |
# | != | a != b | a not equal to b |
# + colab={"base_uri": "https://localhost:8080/"} id="9m2NgPSZu1rJ" outputId="195dd21a-ad70-4566-b898-d3d02f0103de"
1 == 1
# + colab={"base_uri": "https://localhost:8080/"} id="E138BHyM8ZUV" outputId="93fc120d-6467-43cf-8a63-74853c27eccf"
7 < 5
# + colab={"base_uri": "https://localhost:8080/"} id="56sydtXu8bp1" outputId="bbc0cf6f-5678-427a-cb90-972f491d7289"
3 >= 3
# + [markdown] id="cIHQ1uYX9VBo"
# You will have to deal a lot with booleans, and one characteristic is especially interesting from the Data Scientist's perspective.
#
# If you convert `True` or `False` to an integer using `int()` you will always get 1 for `True` and 0 for `False`. This will come in handy in some situations.
# + colab={"base_uri": "https://localhost:8080/"} id="BrdWZMMuDpPZ" outputId="80b4b762-f5a8-43af-d14a-71cee62098b4"
int(True)
# + colab={"base_uri": "https://localhost:8080/"} id="E28eFJ1R6GME" outputId="d9c8dd3a-161b-4a53-8a7f-1cc983076d33"
int(False)
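One situation where this comes in handy is counting: because `True` behaves like 1, summing the results of a comparison counts how many of them are `True` (the data below is made up for illustration).

```python
temperatures = [3.2, 5.1, 4.8, 2.9, 6.0]
# Each comparison yields True or False; sum() treats them as 1 and 0.
hot_days = sum(t > 4 for t in temperatures)
print(hot_days)  # 3 of the 5 values are greater than 4
```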
# + [markdown] id="OnhN68nHILl5"
# ### Further data types
#
# Other built in data types are:
#
# * sequence types like `list`, `tuple`, `range`
# * mapping type like `dict`
# * set types like `set` or `frozenset`
# * binary types like `bytes`, `bytearray`, `memoryview`
#
# We will not cover those types in detail in this notebook. But no worries, you will encounter most of them in the other notebooks or during the bootcamp.
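Just to give you a first impression, here are literals for a few of these types (the values are arbitrary):

```python
my_list = [1, 2, 3]           # ordered, mutable sequence
my_tuple = (1, 2, 3)          # ordered, immutable sequence
my_range = range(3)           # lazy sequence of the integers 0, 1, 2
my_dict = {"a": 1, "b": 2}    # mapping from keys to values
my_set = {1, 2, 2, 3}         # unordered collection of unique values

print(type(my_list), type(my_tuple), type(my_range))
print(type(my_dict), type(my_set))
print(my_set)  # duplicates are removed
```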
# + [markdown] id="yqTZg9wQM2vQ"
# **Questions:**
#
# Determine the type of the following:
#
# 1. " "
# 2. 231.54
# 3. True
# 4. 96
# 5. "Hello"
# 6. 4 + 3j
# 7. '203 + 45i'
#
# + [markdown] id="rXb6R6-2M2vQ"
# <details><summary>
# Click here for the answers.
# </summary>
#
# 1. string
# 2. float
# 3. bool
# 4. integer
# 5. string
# 6. complex
# 7. string
# </details>
#
# + [markdown] id="dtWb0GXCKOem"
# ## Summary
#
# Congratulations! You've reached the end of the first Python notebook.
#
# Here is what you should be familiar with by now:
# * how to use Python as a simple calculator
# * what variables are and how to assign them
# * what the data types `int`, `float`, `complex`, `str` and `bool` are
# + id="V7fhjK9gLIQ4"
| programming/1_Python_Variables_Types.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# !pip install --upgrade pip
# !pip install python-decouple
# !pip install geoalchemy2
# !pip install shapely
# !pip install scipy
# +
from sqlalchemy import create_engine, func, text
from sqlalchemy.orm import sessionmaker
from decouple import config
from geoalchemy2.shape import to_shape
import pandas as pd
import numpy as np
import json
from datetime import datetime, timedelta
import re
from matplotlib import pyplot as plt
import random
from keras.models import Sequential
from keras.layers import LSTM, Dense
from sklearn.model_selection import GridSearchCV
# +
"""Contains models for DB."""
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, BigInteger, Integer, String, DateTime, \
ForeignKey, Float, LargeBinary
from sqlalchemy.orm import relationship
from geoalchemy2 import Geometry
BASE = declarative_base()
class City(BASE):
"""City model for DB. Has information of cities."""
__tablename__ = 'city'
id = Column(BigInteger, primary_key=True)
city = Column(String, unique=False, nullable=False)
state = Column(String, unique=False, nullable=True)
country = Column(String, unique=False, nullable=False)
location = Column(Geometry(geometry_type='POINT'), nullable=False)
blocks = relationship("Blocks", back_populates="city")
zipcodes = relationship("ZipcodeGeom", back_populates="city")
incidents = relationship("Incident", back_populates="city")
class Blocks(BASE):
"""Block model for DB. Has information of city blocks for a related city
id."""
__tablename__ = 'block'
id = Column(BigInteger, primary_key=True)
cityid = Column(BigInteger, ForeignKey('city.id'), nullable=False)
shape = Column(Geometry(geometry_type='MULTIPOLYGON'), nullable=False)
population = Column(Integer, nullable=False)
prediction = Column(LargeBinary, nullable=True)
year = Column(Integer, nullable=True)
month = Column(Integer, nullable=True)
city = relationship("City", back_populates="blocks")
incidents = relationship("Incident", back_populates="block")
class ZipcodeGeom(BASE):
"""Zipcode geometry model for DB. Has information of zipcodes and related
city id."""
__tablename__ = 'zipcodegeom'
id = Column(BigInteger, primary_key=True)
cityid = Column(BigInteger, ForeignKey('city.id'), nullable=False)
zipcode = Column(String, nullable=False, unique=True)
shape = Column(Geometry(geometry_type='MULTIPOLYGON'), nullable=False)
city = relationship("City", back_populates="zipcodes")
class Incident(BASE):
"""Incident model for DB. Has information of a specific crime, including
where it took place, when it took place, and the type of crime that
occurred."""
__tablename__ = 'incident'
id = Column(BigInteger, primary_key=True)
crimetypeid = Column(BigInteger, ForeignKey('crimetype.id'), nullable=False)
locdescid = Column(BigInteger, ForeignKey('locdesctype.id'), nullable=False)
cityid = Column(BigInteger, ForeignKey('city.id'), nullable=False)
blockid = Column(BigInteger, ForeignKey('block.id'), nullable=False)
location = Column(Geometry(geometry_type='POINT'), nullable=False)
datetime = Column(DateTime, nullable=False)
hour = Column(Integer, nullable=False)
dow = Column(Integer, nullable=False)
month = Column(Integer, nullable=False)
year = Column(Integer, nullable=False)
city = relationship("City", back_populates="incidents")
block = relationship("Blocks", back_populates="incidents")
crimetype = relationship("CrimeType", back_populates="incidents")
locationdesc = relationship("LocationDescriptionType", back_populates="incidents")
class CrimeType(BASE):
"""CrimeType model for DB. Has information of the types of crime, including
a general description and the numerical severity of the crime."""
__tablename__ = 'crimetype'
id = Column(BigInteger, primary_key=True)
category = Column(String, unique=True, nullable=False)
severity = Column(Integer, nullable=False)
incidents = relationship("Incident", back_populates="crimetype")
class LocationDescriptionType(BASE):
"""Location description model for DB. Has information on the type of
location that the crime took place."""
__tablename__ = 'locdesctype'
id = Column(BigInteger, primary_key=True)
key1 = Column(String, nullable=False)
key2 = Column(String, nullable=False)
key3 = Column(String, nullable=False)
incidents = relationship("Incident", back_populates="locationdesc")
class Job(BASE):
"""Job model for DB and redis. Has information on the status and result of
redis queue job."""
__tablename__ = 'job'
id = Column(BigInteger, primary_key=True)
result = Column(String, nullable=False)
datetime = Column(DateTime, nullable=False)
# +
DAY_OF_WEEK = 0
DAY_OF_MONTH = 1
HOUR_OF_DAY = 2
data_type = DAY_OF_WEEK
do_gridsearch = False
if data_type == DAY_OF_WEEK:
SCALING_FACTOR = 1000.
OPERATOR = '*'
elif data_type in (DAY_OF_MONTH, HOUR_OF_DAY):
SCALING_FACTOR = 1000.
OPERATOR = '*'
else:
raise ValueError("Unexpected data type:", data_type)
def scale_data(X):
if data_type == DAY_OF_WEEK:
X *= SCALING_FACTOR
elif data_type in (DAY_OF_MONTH, HOUR_OF_DAY):
X *= SCALING_FACTOR # This we will use with the count(severity) SQL query
else:
raise ValueError('Unexpected data type:', data_type)
return X
def descale_data(X):
if data_type == DAY_OF_WEEK:
X /= SCALING_FACTOR
elif data_type in (DAY_OF_MONTH, HOUR_OF_DAY):
X /= SCALING_FACTOR # This we will use with the count(severity) SQL query
else:
raise ValueError('Unexpected data type:', data_type)
return X
class GetData(object):
def go(self, SESSION, start_year, end_year):
if data_type == DAY_OF_WEEK:
SQL_QUERY = \
f'''
SELECT
incident.blockid,
incident.datetime,
incident.year,
incident.month,
incident.dow,
crimetype.ppo,
crimetype.violence,
COUNT(*)/AVG(block.population) AS category
FROM incident
INNER JOIN block ON incident.blockid = block.id
INNER JOIN crimetype ON incident.crimetypeid = crimetype.id
AND block.population > 0
AND incident.cityid = 1
AND incident.year >= {start_year}
AND incident.year <= {end_year}
GROUP BY
incident.blockid,
incident.datetime,
incident.year,
incident.month,
incident.dow,
crimetype.ppo,
crimetype.violence
'''
elif data_type == HOUR_OF_DAY:
SQL_QUERY = \
f'''
SELECT
incident.blockid,
incident.datetime,
incident.year,
incident.month,
incident.hour,
crimetype.ppo,
crimetype.violence,
COUNT(*)/AVG(block.population) AS category
FROM incident
INNER JOIN block ON incident.blockid = block.id
INNER JOIN crimetype ON incident.crimetypeid = crimetype.id
AND block.population > 0
AND incident.cityid = 1
AND incident.year >= {start_year}
AND incident.year <= {end_year}
GROUP BY
incident.blockid,
incident.datetime,
incident.year,
incident.month,
incident.hour,
crimetype.ppo,
crimetype.violence
'''
return SESSION.execute(text(SQL_QUERY)).fetchall()
# -
def fill_data(X, y, r, start_year, end_year, blockid_dict, data_type):
def day_of_month(x):
return x.day - 1
def hour_of_day(x):
return x.hour
def encode(r):
ppo_dict = {'PERSONAL': 0,
'PROPERTY': 1,
'OTHER': 2}
viol_dict = {'VIOLENT': 1,
'NON_VIOLENT': 0}
res = []
res.extend([r[0], r[1], r[2], r[3], r[4], ppo_dict[r[5]], viol_dict[r[6]], r[7]])
return res
# Output: blockid month dow hour ppo violence = value
#
# 0 incident.blockid,
# 1 incident.datetime,
# 2 incident.year,
# 3 incident.month,
# 4 incident.dow OR incident.hour,
# 5 crimetype.ppo, 'PERSONAL', 'PROPERTY', 'OTHER' 3
# 6 crimetype.violence 'VIOLENT', 'NONVIOLENT' 2
# 7 value at this location
r = encode(r)
if r[2] == end_year:
if data_type == DAY_OF_WEEK:
# block id 0-based month
# vvvvvvvvvvvvvvvvv vvvvvv
y[blockid_dict[r[0]], r[3]-1, r[4], r[5], r[6]] = float(r[7])
# ^^^^ ^^^^
# dow risk
elif data_type == HOUR_OF_DAY:
# block id 0-based month
# vvvvvvvvvvvvvvvvv vvvvvv
y[blockid_dict[r[0]], r[3]-1, r[4], r[5], r[6]] = float(r[7])
# ^^^^^^^^^^^^^^^ ^^^^
# hour of day risk
else:
raise ValueError('Unsupported data type:', data_type)
else:
if data_type == DAY_OF_WEEK:
X[blockid_dict[r[0]], 12*(r[2]-start_year-1)+r[3]-1, r[4], r[5], r[6]] = float(r[7])
elif data_type == HOUR_OF_DAY:
X[blockid_dict[r[0]], 12*(r[2]-start_year-1)+r[3]-1, r[4], r[5], r[6]] = float(r[7])
else:
raise ValueError('Unsupported data type:', data_type)
def process_data(data, start_year, end_year, blockid_dict, data_type):
if data_type == DAY_OF_WEEK:
X = np.zeros((len(blockid_dict), 24, 7, 3, 2))
y = np.zeros((len(blockid_dict), 12, 7, 3, 2))
elif data_type == HOUR_OF_DAY:
X = np.zeros((len(blockid_dict), 24, 24, 3, 2))
y = np.zeros((len(blockid_dict), 12, 24, 3, 2))
else:
raise ValueError('data_type not supported:', data_type)
# data is the list of rows we get from the query in this order:
# blockid, datetime, year, month, dow (or hour), ppo, violence, risk
# month runs from 1 to 12
for r in data:
if r[0] in blockid_dict:
fill_data(X, y, r, start_year, end_year, blockid_dict, data_type)
# print('Data value counts:', pd.Series(y.flatten()).value_counts())
X = scale_data(X)
y = scale_data(y)
# for i in range(24):
# X[:, i, -1] = (start_year*12+i) / (2000 * 12)
return X, y
# +
from contextlib import contextmanager
@contextmanager
def session_scope():
"""Provide a transactional scope around a series of operations."""
DB_URI = config('DB_URI')
ENGINE = create_engine(DB_URI)
Session = sessionmaker(bind=ENGINE)
SESSION = Session()
try:
yield SESSION
SESSION.commit()
except:
SESSION.rollback()
raise
finally:
SESSION.close()
def ready_data(training_start_year, training_end_year, train_blockid_dict,
testing_start_year, testing_end_year, test_blockid_dict,
data_type):
with session_scope() as session:
training_data = GetData().go(session,
training_start_year,
training_end_year)
testing_data = GetData().go(session,
testing_start_year,
testing_end_year)
X_train, y_train = process_data(training_data,
training_start_year,
training_end_year,
train_blockid_dict,
data_type)
X_test, y_test = process_data(testing_data,
testing_start_year,
testing_end_year,
test_blockid_dict,
data_type)
return X_train, X_test, y_train, y_test
# -
# ## Day of week analysis for each month of each block id
# +
# start month = 3, end_month = 2 (months are 0-indexed)
# X: 4/2017 -> 3/2019 actual date
# y: 4/2019 -> 3/2020 actual date
#
X_test_start_month = 0
X_test_end_month = 0
X_test_start_year = 2016
X_test_end_year = 2018
TRAIN_NUM_BLOCKIDS = TEST_NUM_BLOCKIDS = 801
TRAIN_BLOCKIDS = random.sample(list(range(1,802)), k=TRAIN_NUM_BLOCKIDS)
train_blockid_dict = {}
for ind, blockid in enumerate(TRAIN_BLOCKIDS):
train_blockid_dict[blockid] = ind
TEST_BLOCKIDS = random.sample(list(range(1,802)), k=TEST_NUM_BLOCKIDS)
test_blockid_dict = {}
for ind, blockid in enumerate(TEST_BLOCKIDS):
test_blockid_dict[blockid] = ind
# -
def plot_output(y, y_pred, dataset_type, x_label, y_label):
fig = plt.figure(figsize=(10, 8))
plt.plot(np.arange(len(y.flatten())),
y.flatten(), color='blue');
plt.plot(np.arange(len(y_pred.flatten())),
y_pred.flatten(), color='red');
plt.xlabel(x_label, fontsize=16)
plt.ylabel(y_label, fontsize=18)
plt.title(dataset_type + ' dataset', fontsize=18)
plt.legend(labels=['data', 'prediction'], prop={'size': 20})
plt.show()
# +
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.metrics import mean_squared_error
random.seed(101)
def get_predictions(X_train, y_train, X_test, y_test,
x_label, y_label, model, do_gridsearch=False):
def print_data_info(data, data_name):
flat = data.flatten()
print('Number of data points:', len(flat))
print('Number of non-zero elements:', len(flat[flat > 0.0]))
print('Percentage of non-zero elements:', len(flat[flat > 0.0])/len(flat))
pd.Series(flat).hist(bins=[0.25, 0.5, 1.0, 1.5, 2.5, 5.0, 10, 15, 20]);
plt.title(f'Histogram of {data_name}')
plt.show()
print_data_info(y_test, 'y_test')
print('Correlation between y_train and y_test:\n',
np.corrcoef(y_train.flatten(), y_test.flatten()))
def mult_of_shapes(X):
return X.shape[1] * X.shape[2] * X.shape[3] * X.shape[4]
X_train = X_train.reshape((TRAIN_NUM_BLOCKIDS, mult_of_shapes(X_train)))
y_train = y_train.reshape((TRAIN_NUM_BLOCKIDS, mult_of_shapes(y_train)))
X_test = X_test.reshape((TEST_NUM_BLOCKIDS, mult_of_shapes(X_test)))
y_test = y_test.reshape((TEST_NUM_BLOCKIDS, mult_of_shapes(y_test)))
print('y_test shape after reshaping:', y_test.shape)
if do_gridsearch:
# For regressors:
param_grid = {
'estimator__n_estimators': [80, 100, 120],
'estimator__max_depth': [2, 3, 4, 5, 6],
}
gridsearch = GridSearchCV(model,
param_grid=param_grid,
scoring='neg_mean_squared_error',
cv=3, n_jobs=-1,
return_train_score=True, verbose=10)
model = gridsearch
model.fit(X_train, y_train)
best_training_score = model.score(X_train, y_train)
best_testing_score = model.score(X_test, y_test)
print('Best training score:', -best_training_score)
print('Best testing score: ', -best_testing_score)
if do_gridsearch:
best_model_params = model.cv_results_['params'][model.best_index_]
print('Best Grid Search model:', best_model_params)
y_pred = model.predict(X_test)
print('mean_squared_error:', mean_squared_error(y_test, y_pred))
plot_output(y_test, y_pred, 'Testing', x_label, y_label)
def relative_percent_difference(y_true, y_pred):
return 1 - np.absolute((y_true - y_pred) / (np.absolute(y_true) + np.absolute(y_pred)))
return y_test, y_pred, relative_percent_difference(y_test, y_pred), model
# -
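To make the weighting metric concrete, here is `relative_percent_difference` (repeated so the cell is self-contained) applied to a few hand-picked values. It is 1.0 for a perfect match, falls toward 0.0 as prediction and truth diverge, and is NaN when both are zero, which is why the NaN values are replaced before combining the predictions:

```python
import numpy as np

def relative_percent_difference(y_true, y_pred):
    # 1.0 when y_true == y_pred, 0.0 when exactly one of them is zero,
    # NaN when both are zero (0/0).
    return 1 - np.absolute((y_true - y_pred) / (np.absolute(y_true) + np.absolute(y_pred)))

y_true = np.array([10.0, 10.0, 10.0])
y_pred = np.array([10.0, 5.0, 0.0])
print(relative_percent_difference(y_true, y_pred))  # [1.0, 0.666..., 0.0]
```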
# ## Day of week analysis for each block ID
# +
# %%time
X_train_dow, X_test_dow, y_train_dow, y_test_dow = \
ready_data(2015, 2017, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
DAY_OF_WEEK)
print(X_train_dow.shape, y_train_dow.shape, X_test_dow.shape, y_test_dow.shape)
model = MultiOutputRegressor(RandomForestRegressor(max_depth=3, n_estimators=100))
y_test_dow, y_pred_dow, rpd_dow, model_dow = \
get_predictions(X_train_dow, y_train_dow, X_test_dow, y_test_dow,
'day of week for each month',
f'crime rate / population {OPERATOR} {SCALING_FACTOR}',
model, do_gridsearch=do_gridsearch)
# -
plot_output(y_test_dow.flatten()[:100], y_pred_dow.flatten()[:100], 'Test',
'day of week for each month', 'crime count')
# ## Hour of day analysis for each block ID
# +
# %%time
X_train_hod, X_test_hod, y_train_hod, y_test_hod = \
ready_data(2015, 2017, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
HOUR_OF_DAY)
print(X_train_hod.shape, y_train_hod.shape, X_test_hod.shape, y_test_hod.shape)
model = MultiOutputRegressor(RandomForestRegressor(max_depth=3, n_estimators=100))
y_test_hod, y_pred_hod, rpd_hod, model_hod = \
get_predictions(X_train_hod, y_train_hod, X_test_hod, y_test_hod,
'hour of day for each month',
f'crime rate / population {OPERATOR} {SCALING_FACTOR}',
model, do_gridsearch=do_gridsearch)
# -
# ## Weigh and combine predictions into one array
y_test_dow.shape, y_test_hod.shape
# +
NUM_BLOCKIDS = 801
NUM_MONTHS_IN_YEAR = 12
NUM_DAYS_IN_WEEK = 7
NUM_HOURS_IN_DAY = 24
crime_rate = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, NUM_HOURS_IN_DAY, 3, 2))
y_test_combo = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, NUM_HOURS_IN_DAY, 3, 2))
# Each of the relative percent difference arrays can contain NaN's.
# This is because if both y_pred and y_true were zeros, their
# addition is also zero, and we cannot divide by this zero.
# Here we replace NaN's to get around that issue.
# Since we want to give the highest weight to values where
# y_pred and y_true agree exactly, we replace these NaNs with 1.0,
# the largest value relative_percent_difference can produce.
rpd_dow = np.nan_to_num(rpd_dow, nan=1.0)
rpd_hod = np.nan_to_num(rpd_hod, nan=1.0)
print('Number of nans after replacement:', np.isnan(rpd_dow).sum())
y_test_dow = y_test_dow.reshape((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, 3, 2))
y_test_hod = y_test_hod.reshape((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_HOURS_IN_DAY, 3, 2))
y_pred_dow = y_pred_dow.reshape((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, 3, 2))
y_pred_hod = y_pred_hod.reshape((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_HOURS_IN_DAY, 3, 2))
rpd_dow = rpd_dow.reshape((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, 3, 2))
rpd_hod = rpd_hod.reshape((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_HOURS_IN_DAY, 3, 2))
# Returns number of days in a month
def days_in_month(year, month):
p = pd.Period(f'{year}-{month}-1')
return p.days_in_month
# Day of week returns 0-based day value
def day_of_week(dt):
return dt.weekday()
end_year = X_test_end_year
# for blockid in range(NUM_BLOCKIDS):
# count = np.zeros((NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, NUM_HOURS_IN_DAY, 3, 2))
# for month in range(1, NUM_MONTHS_IN_YEAR + 1):
# for day in range(1, days_in_month(end_year, month) + 1):
# for hour in range(24):
# dow = day_of_week(datetime(end_year, month, day))
# weight_dow = rpd_dow[blockid, (month - 1)*dow]
# weight_hod = rpd_hod[blockid, (month - 1)*hour]
# weight_sum = weight_dow + weight_hod
# crime_rate[blockid, month-1, dow * hour] += \
# (y_pred_dow[blockid, (month - 1)*dow] * weight_dow +
# y_pred_hod[blockid, (month - 1)*hour] * weight_hod) / weight_sum
# y_test_combo[blockid, month-1, dow*hour] += \
# (y_test_dow[blockid, (month - 1)*dow] * weight_dow +
# y_test_hod[blockid, (month - 1)*hour] * weight_hod) / weight_sum
# count[dow * hour] += 1
# crime_rate[blockid, month-1, dow * hour] /= count[dow * hour]
# y_test_combo[blockid, month-1, dow * hour] /= count[dow * hour]
for blockid in range(NUM_BLOCKIDS):
for month in range(NUM_MONTHS_IN_YEAR):
for day in range(NUM_DAYS_IN_WEEK):
for hour in range(NUM_HOURS_IN_DAY):
for ppo in range(3):
for violence in range(2):
crime_rate[blockid][month][day][hour][ppo][violence] = \
(y_pred_dow[blockid][month][day][ppo][violence] * rpd_dow[blockid][month][day][ppo][violence] + \
y_pred_hod[blockid][month][hour][ppo][violence] * rpd_hod[blockid][month][hour][ppo][violence]) / \
(rpd_dow[blockid][month][day][ppo][violence] + rpd_hod[blockid][month][hour][ppo][violence])
y_test_combo[blockid][month][day][hour][ppo][violence] = \
(y_test_dow[blockid][month][day][ppo][violence] * rpd_dow[blockid][month][day][ppo][violence] + \
y_test_hod[blockid][month][hour][ppo][violence] * rpd_hod[blockid][month][hour][ppo][violence]) / \
(rpd_dow[blockid][month][day][ppo][violence] + rpd_hod[blockid][month][hour][ppo][violence])
crime_rate_descaled = descale_data(crime_rate)
crime_rate_descaled = np.nan_to_num(crime_rate_descaled)
crime_rate = crime_rate_descaled.copy()
y_test_combo = descale_data(y_test_combo)
y_test_combo = np.nan_to_num(y_test_combo)
# -
y = y_test_combo.flatten()
r = crime_rate.flatten()
print('Number of zeros in y_test_combo:', len(y[y == 0.0]), 'out of:', len(y))
print('Number of zeros in risks:', len(r[r == 0.0]), 'out of:', len(r))
def plot_y_vs_ypred(y, y_pred):
fig = plt.figure(figsize=(10, 8))
plt.plot(np.arange(len(y.flatten())),
y.flatten(), color='blue', alpha=0.5);
plt.plot(np.arange(len(y_pred.flatten())),
y_pred.flatten(), color='red', alpha=0.5);
plt.xlabel('dow * hour', fontsize=16)
plt.ylabel('crime count / population * 1000', fontsize=18)
plt.title('Test dataset', fontsize=18)
if data_type == DAY_OF_WEEK:
plt.legend(labels=['count', 'predicted count'], prop={'size': 20})
elif data_type == HOUR_OF_DAY:
plt.legend(labels=['count', 'predicted count'], prop={'size': 20})
else:
plt.legend(labels=['risk', 'predicted risk'], prop={'size': 20})
plt.show()
plot_y_vs_ypred(y_test_combo, crime_rate)
print('Correlation between y_test and y_pred',
np.corrcoef(y_test_combo.flatten(), crime_rate.flatten()))
# ## Store predictions in DB
# +
from decouple import config
pred_blockid_dict = test_blockid_dict
def store_predictions_in_db(y_pred):
DB_URI_WRITE = config('DB_URI_WRITE')
# Put predictions into pandas DataFrame with corresponding block id
predictions = pd.DataFrame([[x] for x in pred_blockid_dict.keys()], columns=["id"])
predictions.loc[:, "prediction"] = predictions["id"].apply(lambda x: y_pred[pred_blockid_dict[x],:,:].astype(np.float64).tobytes().hex())
predictions.loc[:, "month"] = 0
predictions.loc[:, "year"] = 2018
predictions.to_csv("predictions.csv", index=False)
# Query SQL
query_commit_predictions = """
CREATE TEMPORARY TABLE temp_predictions (
id SERIAL PRIMARY KEY,
prediction TEXT,
month INTEGER,
year INTEGER
);
COPY temp_predictions (id, prediction, month, year) FROM STDIN DELIMITER ',' CSV HEADER;
UPDATE block
SET
prediction = DECODE(temp_predictions.prediction, 'hex'),
month = temp_predictions.month,
year = temp_predictions.year
FROM temp_predictions
WHERE block.id = temp_predictions.id;
DROP TABLE temp_predictions;
"""
# Open saved predictions and send to database using above query
with open("predictions.csv", "r") as f:
print("SENDING TO DB")
RAW_CONN = create_engine(DB_URI_WRITE).raw_connection()
cursor = RAW_CONN.cursor()
cursor.copy_expert(query_commit_predictions, f)
RAW_CONN.commit()
RAW_CONN.close()
for r in SESSION.execute("SELECT ENCODE(prediction::BYTEA, 'hex'), id FROM block WHERE prediction IS NOT NULL LIMIT 5;").fetchall():
print(np.frombuffer(bytes.fromhex(r[0]), dtype=np.float64).reshape((12,7,24,3,2)))
print(y_pred[pred_blockid_dict[int(r[1])], :].reshape((12,7,24,3,2)))
# -
with session_scope() as SESSION:
store_predictions_in_db(crime_rate)
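# The block above stores each prediction tensor in the database as hex text of the raw `float64` buffer (`tobytes().hex()` on write, `DECODE(..., 'hex')` / `ENCODE(..., 'hex')` in the SQL). A minimal round-trip sketch of that encoding, on a toy array rather than the real predictions:

```python
import numpy as np

# Encode: raw float64 bytes (native byte order) -> hex text for the CSV/DB column
arr = np.arange(12, dtype=np.float64).reshape(3, 4)
hex_text = arr.tobytes().hex()

# Decode: mirrors the read path, which hex-decodes and reshapes the buffer
restored = np.frombuffer(bytes.fromhex(hex_text), dtype=np.float64).reshape(3, 4)
assert np.array_equal(arr, restored)
```

# Note that the decode side must know the dtype and shape out of band, exactly as the `(12,7,24,3,2)` reshape above assumes.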
# ## Save predictions to file
# +
import pickle
with open("predictions_working_week5_wednesday_model_dow.pkl", "wb") as f:
pickle.dump(model_dow, f)
with open("predictions_working_week5_wednesday_model_hod.pkl", "wb") as f:
pickle.dump(model_hod, f)
with open('predictions_working_week5_wednesday_y_test_times_hour.pkl', 'wb') as f:
pickle.dump(y_test_combo, f)
with open('predictions_working_week5_wednesday_y_pred_dow.pkl', 'wb') as f:
pickle.dump(y_pred_dow, f)
with open('predictions_working_week5_wednesday_y_pred_hod.pkl', 'wb') as f:
pickle.dump(y_pred_hod, f)
with open('predictions_working_week5_wednesday_rpd_dow.pkl', 'wb') as f:
pickle.dump(rpd_dow, f)
with open('predictions_working_week5_wednesday_rpd_hod.pkl', 'wb') as f:
pickle.dump(rpd_hod, f)
with open("predictions_working_week5_wednesday_test_blockid_dict.pkl", "wb") as f:
pickle.dump(test_blockid_dict, f)
with open("predictions_working_week5_wednesday_crime_rate.pkl", "wb") as f:
pickle.dump(crime_rate, f)
# -
# ## Load predictions from file and write to database
# +
# with open("predictions_working_week5_wednesday_model_dow.pkl", "wb") as f:
# model_dow = pickle.load(f)
# with open("predictions_working_week5_wednesday_model_hod.pkl", "wb") as f:
# model_hod = pickle.dump(f)
# with open('predictions_working_week5_wednesday_y_test_times_hour.pkl', 'wb') as f:
# y_test_times_hour = pickle.dump(f)
# with open('predictions_working_week5_wednesday_y_pred_dow.pkl', 'wb') as f:
# y_pred_dow = pickle.dump(f)
# with open('predictions_working_week5_wednesday_y_pred_hod.pkl', 'wb') as f:
# y_pred_hod = pickle.dump(f)
# with open('predictions_working_week5_wednesday_rpd_dow.pkl', 'wb') as f:
# rpd_dow = pickle.dump(f)
# with open('predictions_working_week5_wednesday_rpd_hod.pkl', 'wb') as f:
# rpd_hod = pickle.dump(f)
# with open("predictions_working_week5_wednesday_test_blockid_dict.pkl", "wb") as f:
# test_blockid_dict = pickle.dump(f)
# with open("predictions_working_week5_wednesday_crime_rate.pkl", "wb") as f:
# crime_rate = pickle.dump(f)
# Write to Database
# with session_scope() as SESSION:
# store_predictions_in_db(crime_rate)
# -
# Source notebook: Chicago_predictions_combo_crime_types.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:biobombe]
# language: python
# name: conda-env-biobombe-py
# ---
# # Create a hetnet of genesets for automatic gene expression compression interpretation
#
# This script was modified from https://github.com/dhimmel/integrate.
#
# The script creates a hetnet as described in the eLife publication _"Systematic integration of biomedical knowledge prioritizes drugs for repurposing"_ by [Himmelstein et al. 2017](https://doi.org/10.7554/eLife.26726)
#
# ## Datasets
#
# 1. [MSigDb](https://doi.org/10.1073/pnas.0506580102 "Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles") - Curated genesets that represent various biological processes
# 2. [xCell](https://doi.org/10.1186/s13059-017-1349-1 "xCell: digitally portraying the tissue cellular heterogeneity landscape") - Curated genesets that describe profiles of different cell-types
# +
import os
import csv
import pandas as pd
import seaborn as sns
import hetio.hetnet
import hetio.readwrite
import hetio.stats
# -
# %matplotlib inline
# ## Define the metagraph and instantiate the graph
# +
kind_to_abbev = {
# metanodes
'Gene': 'G',
# MSigDB Nodes
'Cancer-Hallmarks': 'H',
'Positional-Gene-Sets': 'C1',
'Curated-Gene-Sets-CPG': 'C2CPG',
'Curated-Gene-Sets-REACTOME': 'C2CPREACTOME',
'Motif-Gene-Sets-MIR': 'C3MIR',
'Motif-Gene-Sets-TFT': 'C3TFT',
'Computational-Gene-Sets-CGN': 'C4CGN',
'Computational-Gene-Sets-CM': 'C4CM',
'GO-Gene-Sets-BP': 'C5BP',
'GO-Gene-Sets-CC': 'C5CC',
'GO-Gene-Sets-MF': 'C5MF',
'Oncogenic-Gene-Sets': 'C6',
'Immunologic-Gene-Sets': 'C7',
# xCell Nodes
'xCell-Cell-Type': 'XCELL',
# metaedges
'participates': 'p',
}
metaedge_tuples = [
# MSigDB metaedges
('Gene', 'Cancer-Hallmarks', 'participates', 'both'),
('Gene', 'Positional-Gene-Sets', 'participates', 'both'),
('Gene', 'Curated-Gene-Sets-CPG', 'participates', 'both'),
('Gene', 'Curated-Gene-Sets-REACTOME', 'participates', 'both'),
('Gene', 'Motif-Gene-Sets-MIR', 'participates', 'both'),
('Gene', 'Motif-Gene-Sets-TFT', 'participates', 'both'),
('Gene', 'Computational-Gene-Sets-CGN', 'participates', 'both'),
('Gene', 'Computational-Gene-Sets-CM', 'participates', 'both'),
('Gene', 'GO-Gene-Sets-BP', 'participates', 'both'),
('Gene', 'GO-Gene-Sets-CC', 'participates', 'both'),
('Gene', 'GO-Gene-Sets-MF', 'participates', 'both'),
('Gene', 'Oncogenic-Gene-Sets', 'participates', 'both'),
('Gene', 'Immunologic-Gene-Sets', 'participates', 'both'),
# xCell metaedges
('Gene', 'xCell-Cell-Type', 'participates', 'both'),
]
# -
# Initialize the graph
metagraph = hetio.hetnet.MetaGraph.from_edge_tuples(metaedge_tuples, kind_to_abbev)
graph = hetio.hetnet.Graph(metagraph)
# ## Gene Nodes
# +
# Load curated gene names from versioned resource
commit = '<PASSWORD>'
url = 'https://raw.githubusercontent.com/cognoma/genes/{}/data/genes.tsv'.format(commit)
gene_df = pd.read_table(url)
# Only consider protein-coding genes
gene_df = (
gene_df.query("gene_type == 'protein-coding'")
)
coding_genes = set(gene_df['entrez_gene_id'].astype(int))
print(gene_df.shape)
gene_df.head(2)
# -
# ## Add genes as nodes to the graph
#
# Use the gene-symbol identifier for easier interpretation
# %%time
for i, row in gene_df.iterrows():
# Build dictionary of descriptive elements for each gene
meta_data = {
'description': row['description'],
'source': 'Entrez Gene',
'url': 'http://identifiers.org/ncbigene/{}'.format(row['entrez_gene_id']),
'license': 'CC0 1.0',
}
if pd.notnull(row['chromosome']):
meta_data['chromosome'] = row['chromosome']
# Add genes to graph
graph.add_node(kind='Gene', identifier=int(row['entrez_gene_id']), name=row['symbol'],
data=meta_data)
# Load gene updater
url = 'https://raw.githubusercontent.com/cognoma/genes/{}/data/updater.tsv'.format(commit)
updater_df = pd.read_table(url)
old_to_new_entrez = dict(zip(updater_df.old_entrez_gene_id,
updater_df.new_entrez_gene_id))
# ## Add gene set nodes and associated genes as edges
#
# Add each MSigDB collection as distinct nodes with a `participates` edge for representative gene sets and corresponding membership.
def add_node_to_graph(current_graph, collection_file, collection_kind,
collection_source, gene_list, min_geneset_size=4,
max_geneset_size=1000, license='CC BY 4.0'):
"""
Add nodes and edges to the current graph based on gene set membership of a collection
Arguments:
current_graph - a hetnet object to add node-edge info to
collection_file - location of msigdb file
collection_kind - the kind of node already initialized in the graph
collection_source - alternative ID for collection
gene_list - a list of genes to consider when building the graph
min_geneset_size - filter out a given gene set if it has fewer genes
max_geneset_size - filter out a given gene set if it has more genes
license - given license associated with node
Output:
Adds to the current graph in place; returns the names of the filtered gene sets
"""
# Build meta data dictionary to store node info
meta_data = {'license': license, 'source': collection_source}
# Open the .gmt file and process each geneset
filtered_genesets = []
with open(collection_file, 'r') as collection_fh:
collection_reader = csv.reader(collection_fh, delimiter='\t')
for row in collection_reader:
# Get geneset name and metadata info
geneset_name = row[0]
meta_data['url'] = row[1]
# Process geneset membership
genes = row[2:]
# Update entrez_gene_id
genes = set(old_to_new_entrez[x] if x in old_to_new_entrez else x for x in genes)
# The genes must exist in curated resource
genes = [int(x) for x in genes if int(x) in gene_list]
# Filter out the gene set if it is too small or too large
if min_geneset_size > len(genes) or len(genes) > max_geneset_size:
filtered_genesets.append(geneset_name)
continue
# Add the genesetname as a node (based on collection) to the graph
current_graph.add_node(kind=collection_kind,
identifier=geneset_name,
data=meta_data)
# Loop through the member genes and add each participation edge to the graph
for gene in genes:
source_id = ('Gene', gene)
target_id = (collection_kind, geneset_name)
edge_data = meta_data.copy()
current_graph.add_edge(source_id, target_id, 'participates',
'both', edge_data)
return filtered_genesets
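# The function above assumes the `.gmt` layout: one gene set per tab-separated line — the set name, then a URL/description, then the member gene ids. A toy sketch of that parsing (hypothetical data, mirroring the `csv.reader` loop in the function):

```python
import csv
import io

# Two hypothetical gene sets in .gmt layout
gmt_text = "SET_A\thttp://example.org/a\t1\t2\t3\nSET_B\thttp://example.org/b\t4\t5\n"
genesets = {}
for row in csv.reader(io.StringIO(gmt_text), delimiter="\t"):
    genesets[row[0]] = [int(g) for g in row[2:]]  # row[1] is the url, skipped here
print(genesets)  # {'SET_A': [1, 2, 3], 'SET_B': [4, 5]}
```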
# +
hetnet_build = {
# Format: `Collection Source`: [`Collection File`, `Collection Kind`]
# MSigDB
'MSigDB-H': ['h.all.v6.1.entrez.gmt', 'Cancer-Hallmarks'],
'MSigDB-C1': ['c1.all.v6.1.entrez.gmt', 'Positional-Gene-Sets'],
'MSigDB-C2-CPG': ['c2.cgp.v6.1.entrez.gmt', 'Curated-Gene-Sets-CPG'],
'MSigDB-C2-Reactome': ['c2.cp.reactome.v6.1.entrez.gmt', 'Curated-Gene-Sets-REACTOME'],
'MSigDB-C3-MIR': ['c3.mir.v6.1.entrez.gmt', 'Motif-Gene-Sets-MIR'],
'MSigDB-C3-TFT': ['c3.tft.v6.1.entrez.gmt', 'Motif-Gene-Sets-TFT'],
'MSigDB-C4-CGN': ['c4.cgn.v6.1.entrez.gmt', 'Computational-Gene-Sets-CGN'],
'MSigDB-C4-CM': ['c4.cm.v6.1.entrez.gmt', 'Computational-Gene-Sets-CM'],
'MSigDB-C5-BP': ['c5.bp.v6.1.entrez.gmt', 'GO-Gene-Sets-BP'],
'MSigDB-C5-CC': ['c5.cc.v6.1.entrez.gmt', 'GO-Gene-Sets-CC'],
'MSigDB-C5-MF': ['c5.mf.v6.1.entrez.gmt', 'GO-Gene-Sets-MF'],
'MSigDB-C6': ['c6.all.v6.1.entrez.gmt', 'Oncogenic-Gene-Sets'],
'MSigDB-C7': ['c7.all.v6.1.entrez.gmt', 'Immunologic-Gene-Sets'],
# xCell
'xCell-X': ['xcell_all_entrez.gmt', 'xCell-Cell-Type'],
}
# +
# %%time
# Add all collections genesets to hetnet
filtered = {}
for collection_source, collection_info in hetnet_build.items():
path, collection_kind = collection_info
collection_file = os.path.join('data', path)
filtered[collection_kind] = add_node_to_graph(current_graph=graph,
collection_file=collection_file,
collection_kind=collection_kind,
collection_source=collection_source,
gene_list=coding_genes)
# -
# ## Network visualizations and stats
# Export node degree tables
node_degree_file = os.path.join('results', 'interpret_node_degrees.xlsx')
hetio.stats.degrees_to_excel(graph, node_degree_file)
# +
# Summary of metanodes and corresponding nodes
metanode_df = hetio.stats.get_metanode_df(graph)
metanode_file = os.path.join('results', 'interpret_metanode_summary.tsv')
metanode_df.to_csv(metanode_file, sep='\t', index=False)
metanode_df
# +
# Summary of metaedges and corresponding edges
metaedge_df = hetio.stats.get_metaedge_df(graph)
rows = list()
for metaedge, edges in graph.get_metaedge_to_edges(exclude_inverts=True).items():
rows.append({'metaedge': str(metaedge)})
metaedge_file = os.path.join('results', 'interpret_metaedges.tsv')
metaedge_df = metaedge_df.merge(pd.DataFrame(rows))
sum_total = metaedge_df.sum()
sum_total.metaedge = 'Total'
sum_total.abbreviation = ''
metaedge_df = (
pd.concat([metaedge_df.T, sum_total], axis='columns')
.transpose()
.reset_index(drop=True)
)
# Number of edges in the network
metaedge_df.edges.sum()
metaedge_df.to_csv(metaedge_file, sep='\t', index=False)
metaedge_df
# -
# Summary of different styles for representing each metaedge
metaedge_style_file = os.path.join('results', 'interpret_metaedge_styles.tsv')
metaedge_style_df = hetio.stats.get_metaedge_style_df(metagraph)
metaedge_style_df.to_csv(metaedge_style_file, sep='\t', index=False)
metaedge_style_df
# How many genesets were filtered per collection?
{x: len(y) for x, y in filtered.items()}
# ## Save graph
# +
# %%time
# Write nodes to a table
nodes_file = os.path.join('hetnets', 'interpret_nodes.tsv')
hetio.readwrite.write_nodetable(graph, nodes_file)
# Write edges to a table
edges_file = os.path.join('hetnets', 'interpret_edges.sif.gz')
hetio.readwrite.write_sif(graph, edges_file)
# -
# %%time
# Write metagraph as json
metagraph_file = os.path.join('hetnets', 'interpret_metagraph.json')
hetio.readwrite.write_metagraph(metagraph, metagraph_file)
# %%time
# Write graph as json
hetnet_json_path = os.path.join('hetnets', 'interpret_hetnet.json.bz2')
hetio.readwrite.write_graph(graph, hetnet_json_path)
# ! sha256sum 'hetnets/interpret_hetnet.json.bz2'
# ## Visualize hetnet node and edge counts
ax = sns.barplot(x='metanode', y='nodes', data=metanode_df.sort_values('nodes'))
for tick in ax.get_xticklabels():
tick.set_rotation(90)
ax.set_xlabel(''); ax.set_ylabel('nodes');
ax = sns.barplot(x='metaedge', y='edges', data=metaedge_df.sort_values('edges'))
for tick in ax.get_xticklabels():
tick.set_rotation(90)
ax.set_xlabel(''); ax.set_ylabel('edges');
# Source notebook: 3.build-hetnets/integrate-compression-hetnet.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to `pybids`
#
# [`pybids`](https://github.com/bids-standard/pybids) is a tool to query, summarize and manipulate data using the BIDS standard.
# In this tutorial we will use a `pybids` test dataset to illustrate some of the functionality of `pybids.layout`
from bids import BIDSLayout, BIDSValidator
from bids.tests import get_test_data_path
import os
# ## `BIDSLayout`
#
# One of the most fundamental tools offered by pybids is `BIDSLayout`. `BIDSLayout` is a lightweight class to represent a BIDS project file tree.
# Initialise a BIDSLayout of an example dataset
data_path = os.path.join(get_test_data_path(), '7t_trt')
layout = BIDSLayout(data_path)
layout
# ### Querying and working with `BIDSFile` objects
# A `BIDSLayout` object can be queried with its [`get()`](https://bids-standard.github.io/pybids/generated/bids.grabbids.BIDSLayout.html#bids.grabbids.BIDSLayout.get) method. The `BIDSLayout` object contains `BIDSFile` objects. We can see the whole list of these by calling `get()` with no arguments:
# Print a summary of the 10th BIDSFile in the list
layout.get()[10]
# A `BIDSFile` has various attributes we might be interested in:
# * `.path`: The full path of the associated file
# * `.filename`: The associated file's filename (without directory)
# * `.dirname`: The directory containing the file
# * `.image`: The file contents as a nibabel image, if the file is an image
# * `.metadata`: A dictionary of all metadata found in associated JSON files
# * `.entities`: A dictionary of BIDS entities (or keywords) extracted from the filename
#
# For example, here's the `dict` of entities for the 10th file in our list:
f = layout.get()[10]
f.entities
# And here's the metadata:
f.metadata
# The entity and metadata dictionaries aren't just there for our casual perusal once we've already retrieved a `BIDSFile`; we can directly filter files from the `BIDSLayout` by requesting only files that match specific values. Some examples:
# We query for any files with the suffix 'T1w', only for subject '01'
layout.get(suffix='T1w', subject='01')
# Retrieve all files where SamplingFrequency (a metadata key) = 100
# and acquisition = prefrontal, for the first two subjects
layout.get(subject=['01', '02'], SamplingFrequency=100, acquisition="prefrontal")
# By default, [`get()`](https://bids-standard.github.io/pybids/generated/bids.grabbids.BIDSLayout.html#bids.grabbids.BIDSLayout.get) returns a `BIDSFile` object, but we can also specify alternative return types using the `return_type` argument. Here, we return only the full filenames as strings:
# Ask get() to return the filenames of the matching files
layout.get(suffix='T1w', return_type='file')
# We can also ask `get()` to return unique values (or ids) of particular entities. For example, say we want to know which subjects have at least one `T1w` file. We can request that information by setting `return_type='id'` and `target='subject'`:
# Ask get() to return the ids of subjects that have T1w files
layout.get(return_type='id', target='subject', suffix='T1w')
# If our `target` is a BIDS entity that corresponds to a particular directory in the BIDS spec (e.g., `subject` or `session`) we can also use `return_type='dir'` to get all matching subdirectories:
layout.get(return_type='dir', target='subject')
# ## Other utilities
# Say you have a filename, and you want to manually extract BIDS entities from it. The `parse_file_entities` method provides the facility:
path = "/a/fake/path/to/a/BIDS/file/sub-01_run-1_T2w.nii.gz"
layout.parse_file_entities(path)
# You may want to create valid BIDS filenames for files that are new or hypothetical that would sit within your BIDS project. This is useful when you know what entity values you need to write out to, but don't want to deal with looking up the precise BIDS file-naming syntax. In the example below, imagine we've created a new file containing stimulus presentation information, and we want to save it to a `.tsv.gz` file, per the BIDS naming conventions. All we need to do is define a dictionary with the name components, and `build_path` takes care of the rest (including injecting sub-directories!):
# +
entities = {
'subject': '01',
'run': 2,
'task': 'nback',
'suffix': 'bold'
}
layout.build_path(entities)
# -
# You can also use `build_path` in more sophisticated ways—for example, by defining your own set of matching templates that cover cases not supported by BIDS out of the box. For example, suppose you want to create a template for naming a new z-stat file. You could do something like:
# +
# Define the pattern to build out of the components passed in the dictionary
pattern = "sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}][_rec-{reconstruction}][_run-{run}][_echo-{echo}]_{suffix<z>}.nii.gz"
entities = {
'subject': '01',
'run': 2,
'task': 'n-back',
'suffix': 'z'
}
# Notice we pass the new pattern as the second argument
layout.build_path(entities, pattern)
# -
# ### Loading derivatives
#
# By default, `BIDSLayout` objects are initialized without scanning contained `derivatives/` directories. But you can easily ensure that all derivatives files are loaded and endowed with the extra structure specified in the [derivatives config file](https://github.com/bids-standard/pybids/blob/master/bids/grabbids/config/derivatives.json):
# Define paths to root and derivatives folders
root = os.path.join(get_test_data_path(), 'synthetic')
layout2 = BIDSLayout(root, derivatives=True)
layout2
# The `domains` argument to `get()` specifies which part of the project to look in. By default, valid values are `'bids'` (for the "raw" BIDS project that excludes derivatives) and `'derivatives'` (for all BIDS-derivatives files). The following call returns the filenames of all derivatives files.
# Get all files in derivatives
layout2.get(domains='derivatives', return_type='file')
# ### `Dataframe` option
# The `BIDSLayout` class has built-in support for pandas `DataFrame` objects:
# Convert the layout to a pandas dataframe
df = layout.as_data_frame()
df.head()
# ## Retrieving BIDS variables
# BIDS variables are stored in .tsv files at the run, session, subject, or dataset level. You can retrieve these variables with `layout.get_collections()`. The resulting objects can be converted to dataframes and merged with the layout to associate the variables with corresponding scans.
#
# In the following example, we request all subject-level variable data available anywhere in the BIDS project, and merge the results into a single `DataFrame` (by default, we'll get back a single `BIDSVariableCollection` object for each subject).
# Get subject variables as a dataframe and merge them back in with the layout
subj_df = layout.get_collections(level='subject', merge=True).to_df()
subj_df.head()
# ## BIDSValidator
#
# `pybids` includes a BIDS validator. This can tell you if a filepath is a valid BIDS filepath as well as answering questions about what kind of data it should represent
# Note that when using the bids validator, the filepath MUST be relative to the top level bids directory
validator = BIDSValidator()
validator.is_bids('/sub-02/ses-01/anat/sub-02_ses-01_T2w.nii.gz')
# Can decide if a filepath represents a file that is part of the specification
validator.is_file('/sub-02/ses-01/anat/sub-02_ses-01_T2w.json')
# Can check if a file is at the top level of a dataset
validator.is_top_level('/dataset_description.json')
# or subject (or session) level
validator.is_subject_level('/dataset_description.json')
validator.is_session_level('/sub-02/ses-01/sub-02_ses-01_scans.json')
# Can decide if a filepath represents phenotypic data
validator.is_phenotypic('/sub-02/ses-01/anat/sub-02_ses-01_T2w.nii.gz')
# And so on. See the [docs](https://bids-standard.github.io/pybids/generated/bids.grabbids.BIDSValidator.html#bids-grabbids-bidsvalidator) for the full list of `BIDSValidator` options.
# Source notebook: examples/pybids tutorial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers.
# %run -i "assignment_helper.py"
# %matplotlib inline
# # The Ising model
#
# **Exercise 1** (1 point). The Ising model is a basic model of statistical mechanics that explains a lot about how quantum optimizers work. Its energy is described by its Hamiltonian:
#
# $$ H=-\sum_{<i,j>} J_{ij} \sigma_i \sigma_{j} - \sum_i h_i \sigma_i$$.
#
# Write a function that calculates this energy for a linear chain of spins. The function takes three arguments: `J`, `h`, and `σ`, corresponding to the coupling strengths, the onsite field at each site, and the specific spin configuration.
def calculate_energy(J, h, σ):
###
H = - np.sum([J[i]*σ[i]*σ[i+1] for i in range(len(J))])
H -= np.sum(np.asarray(h)*np.asarray(σ))
return H
###
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise1", "locked": true, "points": "1", "solution": false}
J = [1.0, -1.0]
σ = [+1, -1, +1]
h = [0.5, 0.5, 0.4]
assert abs(calculate_energy(J, h, σ)+0.4) < 0.01
J = [-1.0, 0.5, 0.9]
σ = [+1, -1, -1, -1]
h = [4, 0.2, 0.4, 0.7]
assert abs(calculate_energy(J, h, σ)+5.1) < 0.01
# -
# **Exercise 2** (2 points). The sign of the coupling defines the nature of the interaction, ferromagnetic or antiferromagnetic, corresponding to positive and negative $J$ values, respectively. Setting the couplings to zero, we have a non-interacting model. Create an arbitrary antiferromagnetic model on three sites with no external field. Define the model through variables `J` and `h`. Iterate over all solutions and write the optimal one in a variable called `σ`. If the optimum is degenerate, that is, you have more than one optimal configuration, keep one.
import itertools
###
J = [-1.0, -1.0]
h = [0.0, 0.0, 0.0]
min_H = 0.0
for sigma in itertools.product([-1,1], repeat=3):
#print(sigma)
H = calculate_energy(J, h, sigma)
if H < min_H:
min_H = H
σ = sigma
###
print(σ)
print(min_H)
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise2", "locked": true, "points": "2", "solution": false}
assert all([J_i < 0 for J_i in J])
assert all([h_i == 0 for h_i in h])
assert len(J) == 2
assert len(h) == 3
assert all([σ[i]*σ[i+1] == -1 for i, _ in enumerate(J)]), "The configuration is not the optimum of an antiferromagnetic system"
# -
# **Exercise 3** (1 point). Iterating over all solutions is clearly not efficient, since there are exponentially many configurations in the number of sites. From the perspective of computer science, this is a combinatorial optimization problem, and it is a known NP-hard problem. Many heuristic methods have been invented to tackle the problem. One of them is simulated annealing. It is implemented in dimod. Create the same antiferromagnetic model in dimod as above. Keep in mind that dimod uses a plus and not a minus sign in the Hamiltonian, so the sign of your couplings should be reversed. Store the model in an object called `model`, which should be a `BinaryQuadraticModel`.
# +
###
J = {(0,1): 1.0, (1,2): 1.0}
h = {0:0, 1:0, 2:0}
model = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)
###
# -
# The simulated annealing solver requires us to define the couplings as a dictionary between spins, and we must also pass the external field values as a dictionary. The latter is all zeros for us.
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise3", "locked": true, "points": "1", "solution": false}
assert isinstance(model, dimod.binary_quadratic_model.BinaryQuadraticModel), "Wrong model type"
assert model.vartype == dimod.SPIN, "Wrong variables: binary model instead of spin system"
assert all([J_i > 0 for J_i in J.values()]), "The model is not antiferromagnetic"
# -
# **Exercise 4** (1 point). Sample the solution space a hundred times and write the response in an object called `response`.
###
sampler = dimod.SimulatedAnnealingSampler()
response = sampler.sample(model, num_reads=100)
###
response
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise4", "locked": true, "points": "1", "solution": false}
assert len(response) == 100, "Not the correct number of samples"
sample = response.first.sample
assert all([sample[i]*sample[i+1] == -1 for i, _ in enumerate(J.values())]), "The optimal configuration is not antiferromagnetic"
# -
# # The transverse-field Ising model
# **Exercise 5** (1 point). Adiabatic quantum computation and quantum annealing rely on quantum variants of the classical Ising model, and so do some variational algorithms like the quantum approximate optimization algorithm. To understand the logic behind these simple quantum many-body systems, first let us take another look at the classical Ising model, but write the Hamiltonian of the system in the quantum mechanical formalism, that is, with operators:
#
# $$ H=-\sum_{<i,j>} J_{ij} \sigma^Z_i \sigma^Z_{j} - \sum_i h_i \sigma^Z_i$$.
#
# Assume that you only have two sites. Create the Hamiltonian $H=-\sigma^Z_1\sigma^Z_2$ as a $4\times 4$ numpy array called `H`. Recall that on a single site, $\sigma^Z$ is the Pauli-Z matrix $\begin{bmatrix}1 & 0\\ 0& -1\end{bmatrix}$.
###
PauliZ = np.array([[1,0],[0,-1]])
IZ = np.kron(np.eye(2), PauliZ)
ZI = np.kron(PauliZ, np.eye(2))
ZZ = np.kron(PauliZ, PauliZ)
H = - ZZ
###
# + deletable=false editable=false jupyter={"outputs_hidden": true} nbgrader={"grade": true, "grade_id": "exercise5", "locked": true, "points": "1", "solution": false}
###
### AUTOGRADER TEST - DO NOT REMOVE
###
# -
H
### BEGIN HIDDEN TESTS
ground_truth = np.eye(4)
ground_truth[0, 0] = -1
ground_truth[3, 3] = -1
assert np.alltrue(ground_truth == H)
### END HIDDEN TESTS
# Now take a look at the eigenvectors corresponding to the two smallest eigenvalues (both are -1):
_, eigenvectors = np.linalg.eigh(H)
print(eigenvectors[:, 0:1])
print(eigenvectors[:, 1:2])
# These are just the $|00\rangle$ and $|11\rangle$ states, confirming our classical intuition that in this ferromagnetic case ($J=1$), the two spins should be aligned to reach the minimum energy, the ground state energy.
#
# We copy the function that calculates the energy expectation value $\langle H\rangle$ of a Hamiltonian $H$ and check the expectation value in the $|00\rangle$ state:
# +
def calculate_energy_expectation(state, hamiltonian):
return float(np.dot(state.T.conj(), np.dot(hamiltonian, state)).real)
ψ = np.kron([[1], [0]], [[1], [0]])
calculate_energy_expectation(ψ, H)
# -
# It comes to -1.
#
# **Exercise 6** (1 point). If we add a term that does not commute with the Pauli-Z operator, the Hamiltonian will display non-classical effects. Add a Pauli-X term to both sites, so your total Hamiltonian will be $H=-\sigma^Z_1\sigma^Z_2-\sigma^X_1-\sigma^X_2$, in the object `H`.
###
PauliX = np.array([[0,1],[1,0]])
IX = np.kron(np.eye(2), PauliX)
XI = np.kron(PauliX, np.eye(2))
H = -ZZ-XI-IX
###
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise6", "locked": true, "points": "1", "solution": false}
###
### AUTOGRADER TEST - DO NOT REMOVE
###
# -
### BEGIN HIDDEN TESTS
assert np.allclose(H, np.array([[-1., -1., -1., 0.],
[-1., 1., 0., -1.],
[ -1., 0., 1., -1.],
[ 0., -1., -1., -1.]]))
### END HIDDEN TESTS
# If you take a look at the matrix of the Hamiltonian, it has off-diagonal terms:
H
# The energy expectation value in the $|00\rangle$ state is not affected; the transverse field only lowers the ground state energy:
ψ = np.kron([[1], [0]], [[1], [0]])
calculate_energy_expectation(ψ, H)
# **Exercise 7** (1 point). Is this the ground state energy? Use the eigenvector corresponding to the smallest eigenvalue and calculate the expectation value of it. Store the value in a variable called `energy_expectation_value`.
###
_, eigenvectors = np.linalg.eigh(H)
energy_expectation_value = calculate_energy_expectation(eigenvectors[:,0:1], H)
###
energy_expectation_value
eigenvectors[:,0:1]
# + deletable=false editable=false jupyter={"outputs_hidden": true} nbgrader={"grade": true, "grade_id": "exercise7", "locked": true, "points": "1", "solution": false}
###
### AUTOGRADER TEST - DO NOT REMOVE
###
# -
### BEGIN HIDDEN TESTS
assert np.isclose(energy_expectation_value, -2.23606797749979)
### END HIDDEN TESTS
# Naturally, this value also corresponds to the lowest eigenvalue, and indeed, this is the ground state energy. So by calculating the eigendecomposition of the typically non-diagonal Hamiltonian, we can extract both the ground state and its energy. The difficulty comes from the exponential scaling of the matrix representing the Hamiltonian as a function of the number of sites. This is the original motivation, going back to the early 1980s, for building a quantum computer: such a device would implement (or simulate) the Hamiltonian in hardware. Say, a couple of hundred spins would be beyond the computational capacity of supercomputers, but having the physical spins and being able to set a specific Hamiltonian, we can extract quantities of interest, such as the ground state.
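# To make the exponential scaling concrete: a dense Hamiltonian for $n$ spin-$1/2$ sites is a $2^n \times 2^n$ matrix. A back-of-the-envelope sketch (illustrative, not from the original notebook), assuming `complex128` entries at 16 bytes each:

```python
def dense_hamiltonian_bytes(n_sites, itemsize=16):
    """Memory needed for a dense 2^n x 2^n complex128 Hamiltonian, in bytes."""
    dim = 2 ** n_sites
    return dim * dim * itemsize

print(dense_hamiltonian_bytes(2))          # 256 bytes for the 4x4 case above
print(dense_hamiltonian_bytes(30) / 1e18)  # ~18.4 exabytes: far beyond any supercomputer
```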
| coding_assignments/04_Classical_and_Quantum_Many-Body_Physics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predicting Red Wine Quality with a Support Vector Machine
#
# ## Wine Data
# Data from http://archive.ics.uci.edu/ml/datasets/Wine+Quality
#
# ### Citations
# <pre>
# <NAME>. and <NAME>. (2017).
# UCI Machine Learning Repository [http://archive.ics.uci.edu/ml/index.php].
# Irvine, CA: University of California, School of Information and Computer Science.
# </pre>
#
# <pre>
# <NAME>, <NAME>, <NAME>, <NAME> and <NAME>.
# Modeling wine preferences by data mining from physicochemical properties.
# In Decision Support Systems, Elsevier, 47(4):547-553. ISSN: 0167-9236.
# </pre>
#
# Available at:
# - [@Elsevier](http://dx.doi.org/10.1016/j.dss.2009.05.016)
# - [Pre-press (pdf)](http://www3.dsi.uminho.pt/pcortez/winequality09.pdf)
# - [bib](http://www3.dsi.uminho.pt/pcortez/dss09.bib)
#
# ## Setup
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
red_wine = pd.read_csv('../../ch_09/data/winequality-red.csv')
# -
# ## EDA
red_wine.head()
red_wine.describe()
red_wine.info()
# +
def plot_quality_scores(df, kind):
ax = df.quality.value_counts().sort_index().plot.barh(
title=f'{kind.title()} Wine Quality Scores', figsize=(12, 3)
)
ax.axes.invert_yaxis()
for bar in ax.patches:
ax.text(
bar.get_width(),
bar.get_y() + bar.get_height()/2,
f'{bar.get_width()/df.shape[0]:.1%}',
verticalalignment='center'
)
plt.xlabel('count of wines')
plt.ylabel('quality score')
for spine in ['top', 'right']:
ax.spines[spine].set_visible(False)
return ax
plot_quality_scores(red_wine, 'red')
# -
# ## Making the `high_quality` column
red_wine['high_quality'] = pd.cut(red_wine.quality, bins=[0, 6, 10], labels=[0, 1])
red_wine.high_quality.value_counts(normalize=True)
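# As a quick sanity check of the binning logic, here is the same `pd.cut` call on a toy series (not the wine data): `bins=[0, 6, 10]` creates the half-open intervals (0, 6] and (6, 10], so only scores of 7 and above get the high-quality label 1.

```python
import pandas as pd

scores = pd.Series([3, 5, 6, 7, 8])
# A score of exactly 6 falls in (0, 6] and is therefore labelled 0
high = pd.cut(scores, bins=[0, 6, 10], labels=[0, 1])
print(high.tolist())  # [0, 0, 0, 1, 1]
```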
# ## Building your first Support Vector Machine
# +
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
y = red_wine.pop('high_quality')
X = red_wine.drop(columns=['quality'])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.1, random_state=0, stratify=y
)
pipeline = Pipeline([
('scale', StandardScaler()),
('svm', SVC(C=5, random_state=0, probability=True))
]).fit(X_train, y_train)
# -
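# Because high-quality wines are a small minority, `stratify=y` matters: it keeps the class ratio (nearly) identical in the train and test splits. A minimal sketch on synthetic imbalanced labels (not the wine data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic imbalanced labels (~14% positive, similar to high_quality)
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.14).astype(int)
X = rng.random((1000, 3))

_, _, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, random_state=0, stratify=y
)
# All three proportions should be very close to each other
print(y.mean(), y_tr.mean(), y_te.mean())
```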
# ### Evaluating the SVM
# Get the predictions:
quality_preds = pipeline.predict(X_test)
# Look at the classification report:
from sklearn.metrics import classification_report
print(classification_report(y_test, quality_preds))
# Review the confusion matrix:
# +
from ml_utils.classification import confusion_matrix_visual
confusion_matrix_visual(y_test, quality_preds, ['low', 'high'])
# -
# Examine the precision-recall curve:
# +
from ml_utils.classification import plot_pr_curve
plot_pr_curve(y_test, pipeline.predict_proba(X_test)[:,1])
# -
# <hr>
# <div>
# <a href="./exercise_4.ipynb">
# <button>← Previous Solution</button>
# </a>
# <a href="../../ch_10/red_wine.ipynb">
# <button style="float: right;">Chapter 10 →</button>
# </a>
# </div>
# <hr>
| solutions/ch_09/exercise_5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: IC-SF-PYTORCH
# language: python
# name: intentclassifier
# ---
import os
import pandas as pd
import matplotlib.pyplot as plt
import random
import numpy as np
import collections
# Let's create a dataframe
class dataframeprep:
def __init__(self, p1, p2, p3):
self.input_sequence_path = p1
self.output_slot_path = p2
self.labels_path = p3
self.vocab = {}
self.vocab_size = 0
self.slots = {}
self.slots_size = 0
self.intent = {}
self.intent_size = 0
@classmethod
    def _read_file(cls, input_file):
        """Reads a line-oriented text file."""
        with open(input_file, "r", encoding="utf-8") as file:
            return file.readlines()
def _sentence_vocab(self,lines):
"""prepare sentence vocabulary"""
for line in lines:
for word in line.split():
if word not in self.vocab:
self.vocab[word] = 1
self.vocab_size += 1
                else:
                    self.vocab[word] += 1
def _slot_vocab(self, lines):
"""prepare slot vocabulary"""
for line in lines:
for word in line.split():
if word not in self.slots:
self.slots[word] = 1
self.slots_size += 1
                else:
                    self.slots[word] += 1
def _intent_vocab(self, lines):
"""prepare intent vocabulary"""
for line in lines:
for word in line.split():
if word not in self.intent:
self.intent[word] = 1
self.intent_size += 1
                else:
                    self.intent[word] += 1
def createframes(self):
"""creates the dataframe"""
# save it in list
input_sent = self._read_file(self.input_sequence_path)
self._sentence_vocab(input_sent)
output_slots = self._read_file(self.output_slot_path)
self._slot_vocab(output_slots)
sentence_intent = self._read_file(self.labels_path)
self._intent_vocab(sentence_intent)
list_of_tuples = list(zip(input_sent, output_slots, sentence_intent))
# Create dataframes
df1 = pd.DataFrame(list_of_tuples, columns = ['Sentence', 'Slot', 'Intent'])
return df1, self.vocab, self.slots, self.intent
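# The three `_*_vocab` methods implement the same word-frequency count. For reference, the equivalent logic can be expressed with `collections.Counter`; `build_vocab` below is a hypothetical standalone helper, not part of the class:

```python
from collections import Counter

def build_vocab(lines):
    """Word -> frequency map, equivalent to the counting loops above."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return dict(counts)

vocab = build_vocab(["book a flight", "book a hotel"])
print(vocab)  # {'book': 2, 'a': 2, 'flight': 1, 'hotel': 1}
```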
# ## ATIS
# ### Train Data
# Explore the training data
t1 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\train\\seq.in"
t2 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\train\\seq.out"
t3 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\train\\label"
obj = dataframeprep(t1, t2, t3)
df1, df2, df3, df4 = obj.createframes()
# Plotting df2 directly would be unreadable: the vocabulary has too many entries
od = collections.OrderedDict(sorted(df2.items(),key=lambda t: t[1], reverse = True))
od
# - The word frequencies are highly imbalanced
od1 = collections.OrderedDict(sorted(df3.items(),key=lambda t: t[1], reverse = True))
od1
# - The slot labels are highly imbalanced
od2 = collections.OrderedDict(sorted(df4.items(),key=lambda t: t[1], reverse = True))
od2
# - The intents are also highly imbalanced
df1
# - 4,478 data points in the ATIS training set
# ### Validation Data
# Explore the validation data
t1 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\dev\\seq.in"
t2 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\dev\\seq.out"
t3 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\dev\\label"
obj = dataframeprep(t1, t2, t3)
df1, df2, df3, df4 = obj.createframes()
od = collections.OrderedDict(sorted(df2.items(),key=lambda t: t[1], reverse = True))
od
od1 = collections.OrderedDict(sorted(df3.items(),key=lambda t: t[1], reverse = True))
od1
od2 = collections.OrderedDict(sorted(df4.items(),key=lambda t: t[1], reverse = True))
od2
df1
# ### Test
# Explore the test data
t1 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\test\\seq.in"
t2 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\test\\seq.out"
t3 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\atis\\test\\label"
obj = dataframeprep(t1, t2, t3)
df1, df2, df3, df4 = obj.createframes()
od = collections.OrderedDict(sorted(df2.items(),key=lambda t: t[1], reverse = True))
od
od1 = collections.OrderedDict(sorted(df3.items(),key=lambda t: t[1], reverse = True))
od1
od2 = collections.OrderedDict(sorted(df4.items(),key=lambda t: t[1], reverse = True))
od2
# ## Observation
# 1. The train, validation and test distributions are nearly the same in terms of sentence, slot and intent vocabularies.
# ## Snips
# ### Train
# Explore the training data
t1 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\train\\seq.in"
t2 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\train\\seq.out"
t3 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\train\\label"
obj = dataframeprep(t1, t2, t3)
df1, df2, df3, df4 = obj.createframes()
# Plotting df2 directly would be unreadable: the vocabulary has too many entries
od = collections.OrderedDict(sorted(df2.items(),key=lambda t: t[1], reverse = True))
od
# - The word frequencies are imbalanced
od1 = collections.OrderedDict(sorted(df3.items(),key=lambda t: t[1], reverse = True))
od1
# - The slot labels are highly imbalanced
od2 = collections.OrderedDict(sorted(df4.items(),key=lambda t: t[1], reverse = True))
od2
# - The intents are well balanced
df1
# - 13,084 data points in the Snips training set
# ### Validation
# Explore the validation data
t1 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\dev\\seq.in"
t2 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\dev\\seq.out"
t3 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\dev\\label"
obj = dataframeprep(t1, t2, t3)
df1, df2, df3, df4 = obj.createframes()
od = collections.OrderedDict(sorted(df2.items(),key=lambda t: t[1], reverse = True))
od
od1 = collections.OrderedDict(sorted(df3.items(),key=lambda t: t[1], reverse = True))
od1
od2 = collections.OrderedDict(sorted(df4.items(),key=lambda t: t[1], reverse = True))
od2
# ### Test
t1 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\test\\seq.in"
t2 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\test\\seq.out"
t3 = "D:\\Machine Learning Projects\\CV PROJECTS\\ICANDSF\\data\\snips\\test\\label"
obj = dataframeprep(t1, t2, t3)
df1, df2, df3, df4 = obj.createframes()
od = collections.OrderedDict(sorted(df2.items(),key=lambda t: t[1], reverse = True))
od
od1 = collections.OrderedDict(sorted(df3.items(),key=lambda t: t[1], reverse = True))
od1
od2 = collections.OrderedDict(sorted(df4.items(),key=lambda t: t[1], reverse = True))
od2
# ## Observation
# 1. The train, validation and test distributions are nearly the same in terms of sentence, slot and intent vocabularies.
| notebooks/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#importing matplotlib
import matplotlib.pyplot as plt
x=range(10) # the data: integers 0 to 9
x
#we can create two lists
plt.xlabel("time") #x axis label
plt.ylabel("dist")
x1=[2,6,8,12]
y1=[3,7,12,20]
plt.plot(x1,y1) # draw the graph line
plt.grid(c='green')
plt.plot(x1,y1) # using matplotlib's plot
x2=[1,2,3,4]
y2=[4,5,6,7]
plt.xlabel("time") #x axis label
plt.ylabel("dist") #y
x1=[2,6,8,12]
y1=[3,7,12,20]
plt.plot(x1,y1,label="cars") # draw the cars line
plt.plot(x2,y2,label="bikes")
plt.grid(c='green')
plt.legend() # to show the labels of the plots
# # bar plots
# +
plt.xlabel('time')
plt.ylabel('price')
plt.bar(x1,y1,label="apple")
plt.plot(x1,y1,label="amazon",c="orange")
plt.bar(x2,y2,label="microsoft")
plt.grid(c='red')
plt.legend()
# -
# # cricket score
players=["virat","dhoni","sachin"]
runs=[234,900,901]
# plt.bar(players,runs)
# plt.xlabel("players")
# plt.ylabel("runs")
# plt.grid(c='r')
# # scatter/dots plot
plt.scatter(x1,y1,marker='*',s=100) # change the marker style (default is a dot); marker and s (size) are optional
plt.scatter(x2,y2,marker='^',s=100)
plt.plot(x1,y1)
plt.plot(x2,y2)
plt.grid(c='y')
#using numpy
import numpy as np
x=np.array([1,2,3,4,5,6])
x
x*2
y=x**2
y
plt.scatter(x,y)
#plt.scatter(x,x**4,marker='^')
plt.bar(x,y)
plt.plot(x,y)
| data_visulisation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
Nome = str(input("Student name"))
Bim1 = float(input("First quarter grade"))
Bim2 = float(input("Second quarter grade"))
Bim3 = float(input("Third quarter grade"))
Bim4 = float(input("Fourth quarter grade"))
Media = (Bim1 + Bim2 + Bim3 + Bim4) / 4
print(f"{Nome}'s average: {Media}")
# -
| Jupyter/Aula-008-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import logging
import importlib
importlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
# +
# %%capture
import os
import site
os.sys.path.insert(0, '/home/schirrmr/code/reversible/')
os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')
os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')
# %load_ext autoreload
# %autoreload 2
import numpy as np
import logging
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm
# %matplotlib inline
# %config InlineBackend.figure_format = 'png'
matplotlib.rcParams['figure.figsize'] = (12.0, 1.0)
matplotlib.rcParams['font.size'] = 14
import seaborn
seaborn.set_style('darkgrid')
from reversible2.sliced import sliced_from_samples
from numpy.random import RandomState
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import copy
import math
import itertools
import torch as th
from braindecode.torch_ext.util import np_to_var, var_to_np
from reversible2.splitter import SubsampleSplitter
from reversible2.view_as import ViewAs
from reversible2.affine import AdditiveBlock
from reversible2.plot import display_text, display_close
from reversible2.bhno import load_file, create_inputs
th.backends.cudnn.benchmark = True
# +
sensor_names = ['C3','C4']
# +
orig_train_cnt = load_file('/data/schirrmr/schirrmr/HGD-public/reduced/train/4.mat')
train_cnt = orig_train_cnt.reorder_channels(sensor_names)
train_inputs = create_inputs(train_cnt, final_hz=64, half_before=True)
n_split = len(train_inputs[0]) - 40
test_inputs = [t[-40:] for t in train_inputs]
train_inputs = [t[:-40] for t in train_inputs]
# -
cuda = True
if cuda:
train_inputs = [i.cuda() for i in train_inputs]
test_inputs = [i.cuda() for i in test_inputs]
from reversible2.graph import Node
from reversible2.branching import CatChans, ChunkChans, Select
# +
def invert(feature_model, out):
return feature_model.invert(out)
from copy import deepcopy
from reversible2.graph import Node
from reversible2.distribution import TwoClassDist
from reversible2.wrap_invertible import WrapInvertible
from reversible2.blocks import dense_add_const, conv_add_3x3_const
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
n_chans = train_inputs[0].shape[1]
n_time = train_inputs[0].shape[2]
base_model = nn.Sequential(
WrapInvertible(SubsampleSplitter(stride=[2,1],chunk_chans_first=False),
grad_is_inverse=True, keep_input=True),# 4 x 32
conv_add_3x3_const(2*n_chans,32),
conv_add_3x3_const(2*n_chans,32),
WrapInvertible(SubsampleSplitter(stride=[2,1],chunk_chans_first=True), grad_is_inverse=True), # 8 x 16
conv_add_3x3_const(4*n_chans,32),
conv_add_3x3_const(4*n_chans,32),
WrapInvertible(SubsampleSplitter(stride=[2,1],chunk_chans_first=True), grad_is_inverse=True), # 16 x 8
conv_add_3x3_const(8*n_chans,32),
conv_add_3x3_const(8*n_chans,32, keep_output=True))
base_model.cuda();
branch_1_a = nn.Sequential(
WrapInvertible(SubsampleSplitter(stride=[2,1],chunk_chans_first=False),
grad_is_inverse=True, keep_input=True), # 16 x 4
conv_add_3x3_const(8*n_chans,32),
conv_add_3x3_const(8*n_chans,32),
WrapInvertible(SubsampleSplitter(stride=[2,1],chunk_chans_first=True),
grad_is_inverse=True),# 32 x 2
conv_add_3x3_const(16*n_chans,32),
conv_add_3x3_const(16*n_chans,32),
WrapInvertible(SubsampleSplitter(stride=[2,1],chunk_chans_first=True),
grad_is_inverse=True,), # 64 x 1
WrapInvertible(ViewAs((-1,64,1,1), (-1,64)), keep_output=True),
)
branch_1_b = deepcopy(branch_1_a)
branch_1_a.cuda();
branch_1_b.cuda();
final_model = nn.Sequential(
dense_add_const(n_time*n_chans,256,keep_input=True),
dense_add_const(n_time*n_chans,256),
dense_add_const(n_time*n_chans,256),
dense_add_const(n_time*n_chans,256),
WrapInvertible(RFFT(), keep_output=True),
)
final_model.cuda();
o = Node(None, base_model)
o = Node(o, ChunkChans(2))
o1a = Node(o, Select(0))
o1b = Node(o, Select(1))
o1a = Node(o1a, branch_1_a)
o1b = Node(o1b, branch_1_b)
o = Node([o1a,o1b], CatChans())
o = Node(o, final_model)
feature_model = o
if cuda:
feature_model.cuda()
feature_model.eval();
# -
# +
from reversible2.constantmemory import clear_ctx_dicts
# Check that forward + inverse is really identical
t_out = feature_model(train_inputs[0][:2])
inverted = invert(feature_model, t_out)
clear_ctx_dicts(feature_model)
assert th.allclose(train_inputs[0][:2], inverted, rtol=1e-3,atol=1e-4)
device = list(feature_model.parameters())[0].device
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2, np.prod(train_inputs[0].size()[1:]) - 2, [0, 1])
class_dist.cuda()
optim_model = th.optim.Adam(feature_model.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# +
# %%writefile plot.py
import torch as th
import matplotlib.pyplot as plt
import numpy as np
from reversible2.util import var_to_np
from reversible2.plot import display_close
from matplotlib.patches import Ellipse
import seaborn
def plot_outs(feature_model, train_inputs, test_inputs, class_dist):
with th.no_grad():
# Compute dist for mean/std of encodings
data_cls_dists = []
for i_class in range(len(train_inputs)):
this_class_outs = feature_model(train_inputs[i_class])[:,:2]
data_cls_dists.append(
th.distributions.MultivariateNormal(th.mean(this_class_outs, dim=0),
covariance_matrix=th.diag(th.std(this_class_outs, dim=0) ** 2)))
for setname, set_inputs in (("Train", train_inputs), ("Test", test_inputs)):
outs = [feature_model(ins) for ins in set_inputs]
c_outs = [o[:,:2] for o in outs]
c_outs_all = th.cat(c_outs)
cls_dists = []
for i_class in range(len(c_outs)):
mean, std = class_dist.get_mean_std(i_class)
cls_dists.append(
th.distributions.MultivariateNormal(mean[:2],covariance_matrix=th.diag(std[:2] ** 2)))
preds_per_class = [th.stack([cls_dists[i_cls].log_prob(c_out)
for i_cls in range(len(cls_dists))],
dim=-1) for c_out in c_outs]
pred_labels_per_class = [np.argmax(var_to_np(preds), axis=1)
for preds in preds_per_class]
labels = np.concatenate([np.ones(len(set_inputs[i_cls])) * i_cls
for i_cls in range(len(train_inputs))])
acc = np.mean(labels == np.concatenate(pred_labels_per_class))
data_preds_per_class = [th.stack([data_cls_dists[i_cls].log_prob(c_out)
for i_cls in range(len(cls_dists))],
dim=-1) for c_out in c_outs]
data_pred_labels_per_class = [np.argmax(var_to_np(data_preds), axis=1)
for data_preds in data_preds_per_class]
data_acc = np.mean(labels == np.concatenate(data_pred_labels_per_class))
print("{:s} Accuracy: {:.1f}%".format(setname, acc * 100))
fig = plt.figure(figsize=(5,5))
ax = plt.gca()
for i_class in range(len(c_outs)):
#if i_class == 0:
# continue
o = var_to_np(c_outs[i_class]).squeeze()
incorrect_pred_mask = pred_labels_per_class[i_class] != i_class
plt.scatter(o[:,0], o[:,1], s=20, alpha=0.75, label=["Right", "Rest"][i_class])
assert len(incorrect_pred_mask) == len(o)
plt.scatter(o[incorrect_pred_mask,0], o[incorrect_pred_mask,1], marker='x', color='black',
alpha=1, s=5)
means, stds = class_dist.get_mean_std(i_class)
means = var_to_np(means)[:2]
stds = var_to_np(stds)[:2]
for sigma in [0.5,1,2,3]:
ellipse = Ellipse(means, stds[0]*sigma, stds[1]*sigma)
ax.add_artist(ellipse)
ellipse.set_edgecolor(seaborn.color_palette()[i_class])
ellipse.set_facecolor("None")
for i_class in range(len(c_outs)):
o = var_to_np(c_outs[i_class]).squeeze()
plt.scatter(np.mean(o[:,0]), np.mean(o[:,1]),
color=seaborn.color_palette()[i_class+2], s=80, marker="^",
label=["Right Mean", "Rest Mean"][i_class])
plt.title("{:6s} Accuracy: {:.1f}%\n"
"From data mean/std: {:.1f}%".format(setname, acc * 100, data_acc * 100))
plt.legend(bbox_to_anchor=(1,1,0,0))
display_close(fig)
return
# +
from reversible2.constantmemory import clear_ctx_dicts
from reversible2.timer import Timer
from plot import plot_outs
i_start_epoch_out = 401
n_epochs = 1001
for i_epoch in range(n_epochs):
with Timer(name='EpochLoop', verbose=True) as loop_time:
optim_model.zero_grad()
optim_dist.zero_grad()
for i_class in range(len(train_inputs)):
with Timer(name='invert'):
class_ins = train_inputs[i_class]
samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 2)
inverted = feature_model.invert(samples)
with Timer(name='ot_in'):
ot_loss_in = ot_euclidean_loss_for_samples(class_ins.view(class_ins.shape[0], -1),
inverted.view(inverted.shape[0], -1))
del inverted
with Timer(name='outs'):
outs = feature_model(class_ins)
if i_epoch < i_start_epoch_out:
ot_loss_out = th.zeros(1, device=class_ins.device)
else:
ot_loss_out = ot_euclidean_loss_for_samples(outs[:,:2].squeeze(), samples[:,:2].squeeze())
del samples
with Timer(name='invertother'):
other_class_ins = train_inputs[1-i_class]
changed_to_other_class = class_dist.change_to_other_class(outs, i_class_from=i_class, i_class_to=1-i_class)
other_inverted = feature_model.invert(changed_to_other_class)
with Timer(name='ottin'):
ot_transformed_in = ot_euclidean_loss_for_samples(other_class_ins.view(other_class_ins.shape[0], -1),
other_inverted.view(other_inverted.shape[0], -1))
with Timer(name='ottout'):
if i_epoch < i_start_epoch_out:
ot_transformed_out = th.zeros(1, device=class_ins.device)
else:
other_samples = class_dist.get_samples(1-i_class, len(train_inputs[i_class]) * 2)
ot_transformed_out = ot_euclidean_loss_for_samples(changed_to_other_class[:,:2].squeeze(),
other_samples[:,:2].squeeze(),)
with Timer(name='backward'):
loss = ot_loss_out + ot_transformed_in + ot_transformed_out + ot_loss_in
loss.backward()
del outs, other_class_ins, other_inverted
clear_ctx_dicts(feature_model)
optim_model.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
print("Epoch {:d} of {:d}".format(i_epoch, n_epochs))
print("Loss: {:.2E}".format(loss.item()))
print("OT Loss In: {:.2E}".format(ot_loss_in.item()))
print("OT Loss Out: {:.2E}".format(ot_loss_out.item()))
print("Transformed OT Loss In: {:.2E}".format(ot_transformed_in.item()))
print("Transformed OT Loss Out: {:.2E}".format(ot_transformed_out.item()))
print("Loop Time: {:.0f} ms".format(loop_time.elapsed_secs * 1000))
plot_outs(feature_model, train_inputs, test_inputs,
class_dist)
fig = plt.figure(figsize=(8,2))
plt.plot(var_to_np(th.cat((th.exp(class_dist.class_log_stds),
th.exp(class_dist.non_class_log_stds)))),
marker='o')
display_close(fig)
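# `ot_euclidean_loss_for_samples` comes from the project's `reversible2` package. As a rough illustration of what an exact optimal-transport loss between two empirical samples can look like (an assumption for intuition, not the package's actual implementation, which also has to handle unequal sample sizes), one can solve the assignment problem on pairwise Euclidean distances:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_ot_euclidean(a, b):
    """Mean Euclidean transport cost under the optimal 1:1 matching (illustrative sketch)."""
    # Pairwise Euclidean distances between the two sample sets
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[1.0, 0.0], [0.0, 0.0]])
print(exact_ot_euclidean(a, b))  # 0.0: identical point sets match perfectly
print(exact_ot_euclidean(np.array([[0.0, 0.0]]), np.array([[3.0, 4.0]])))  # 5.0
```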
# +
tight_bcic_4_2a_positions = [
['','','','Fz','','',''],
['','FC3','FC1','FCz','FC2','FC4',''],
['C5','C3','C1','Cz','C2','C4','C6'],
['','CP3','CP1','CPz','CP2','CP4',''],
['','','P1','Pz','P2','',''],
['','','','POz','','','']]
def get_sensor_pos(sensor_name, sensor_map=tight_bcic_4_2a_positions):
sensor_pos = np.where(np.char.lower(np.char.array(sensor_map)) == sensor_name.lower())
# unpack them: they are 1-dimensional arrays before
assert len(sensor_pos[0]) == 1, ("there should be a position for the sensor "
"{:s}".format(sensor_name))
return sensor_pos[0][0], sensor_pos[1][0]
def plot_head_signals_tight(signals, sensor_names=None, figsize=(12, 7),
plot_args=None, hspace=0.35, sensor_map=tight_bcic_4_2a_positions,
tsplot=False, sharex=True, sharey=True):
assert sensor_names is None or len(signals) == len(sensor_names), ("need "
"sensor names for all sensor matrices")
assert sensor_names is not None
if plot_args is None:
plot_args = dict()
figure = plt.figure(figsize=figsize)
sensor_positions = [get_sensor_pos(name, sensor_map) for name in sensor_names]
sensor_positions = np.array(sensor_positions) # sensors x 2(row and col)
maxima = np.max(sensor_positions, axis=0)
minima = np.min(sensor_positions, axis=0)
max_row = maxima[0]
max_col = maxima[1]
min_row = minima[0]
min_col = minima[1]
rows = max_row - min_row + 1
cols = max_col - min_col + 1
first_ax = None
for i in range(0, len(signals)):
sensor_name = sensor_names[i]
sensor_pos = sensor_positions[i]
assert np.all(sensor_pos == get_sensor_pos(sensor_name, sensor_map))
# Transform to flat sensor pos
row = sensor_pos[0]
col = sensor_pos[1]
        subplot_ind = (row - min_row) * cols + col - min_col + 1  # +1 as subplot indices are 1-based (MATLAB convention)
if first_ax is None:
ax = figure.add_subplot(rows, cols, subplot_ind)
first_ax = ax
elif sharex is True and sharey is True:
ax = figure.add_subplot(rows, cols, subplot_ind, sharey=first_ax,
sharex=first_ax)
elif sharex is True and sharey is False:
ax = figure.add_subplot(rows, cols, subplot_ind,
sharex=first_ax)
elif sharex is False and sharey is True:
ax = figure.add_subplot(rows, cols, subplot_ind, sharey=first_ax)
else:
ax = figure.add_subplot(rows, cols, subplot_ind)
signal = signals[i]
if tsplot is False:
ax.plot(signal, **plot_args)
else:
seaborn.tsplot(signal.T, ax=ax, **plot_args)
ax.set_title(sensor_name)
ax.set_yticks([])
if len(signal) == 600:
ax.set_xticks([150, 300, 450])
ax.set_xticklabels([])
else:
ax.set_xticks([])
ax.xaxis.grid(True)
# make line at zero
ax.axhline(y=0, ls=':', color="grey")
figure.subplots_adjust(hspace=hspace)
return figure
# -
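# The sensor lookup in `get_sensor_pos` above is a case-insensitive `np.where` over the layout grid. A self-contained sketch of the same idea, using a hypothetical two-row toy map so it runs on its own:

```python
import numpy as np

# A toy two-row layout standing in for tight_bcic_4_2a_positions
toy_map = [['', 'Fz', ''],
           ['C3', 'Cz', 'C4']]

def find_sensor(name, sensor_map):
    """Case-insensitive lookup of a sensor name in the layout grid, returns (row, col)."""
    pos = np.where(np.char.lower(np.char.array(sensor_map)) == name.lower())
    assert len(pos[0]) == 1, "sensor {:s} should appear exactly once".format(name)
    return int(pos[0][0]), int(pos[1][0])

print(find_sensor('cz', toy_map))  # (1, 1)
print(find_sensor('C4', toy_map))  # (1, 2)
```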
# ## Investigate Outliers
th.set_grad_enabled(False)
# +
test_outs = feature_model(test_inputs[1])
clear_ctx_dicts(feature_model)
max_val, i_max = th.max(th.max(test_outs, dim=1)[0], dim=0)
# -
plt.plot(var_to_np(th.max(test_outs, dim=1)[0]), marker='o')
plt.yscale('symlog')
fig = plot_head_signals_tight(var_to_np(test_inputs[1][28:36]).squeeze().transpose(1,2,0),
sensor_names=sensor_names,
figsize=(20,12));
plt.ylim(-3,3)
fig = plot_head_signals_tight(var_to_np(test_inputs[1][17:25]).squeeze().transpose(1,2,0),
sensor_names=sensor_names,
figsize=(20,12));
plt.ylim(-3,3)
fig = plot_head_signals_tight(var_to_np(test_inputs[1][i_max.item()]).squeeze(),
sensor_names=sensor_names,
figsize=(20,12));
plt.ylim(-3,3)
fig = plot_head_signals_tight(var_to_np(test_inputs[1][i_max.item() - 1]).squeeze(),
sensor_names=sensor_names,
figsize=(20,12));
fig = plot_head_signals_tight(var_to_np(train_inputs[1][i_max.item()]).squeeze(),
sensor_names=sensor_names,
figsize=(20,12));
plt.ylim(-3,3)
mean_bps_per_class = []
for i_class in range(len(train_inputs)):
samples = class_dist.get_samples(i_class, 400)
inverted = feature_model.invert(samples)
mean_bps_per_class.append(
np.mean(np.abs(np.fft.rfft(var_to_np(inverted.squeeze()))), axis=0))
fig = plot_head_signals_tight(np.stack(mean_bps_per_class, axis=-1), sensor_names=sensor_names,
figsize=(20,12));
fig = plot_head_signals_tight(np.log(mean_bps_per_class[0]/mean_bps_per_class[1]), sensor_names=sensor_names,
figsize=(20,12));
clear_ctx_dicts(feature_model)
plt.figure(figsize=(16,3))
plt.plot(np.fft.rfftfreq(inverted.shape[2],d=1/256.0),
np.mean(mean_bps_per_class[0], axis=0))
plt.plot(np.fft.rfftfreq(inverted.shape[2],d=1/256.0),
np.mean(mean_bps_per_class[1], axis=0))
for i_class in range(len(train_inputs)):
samples = class_dist.get_samples(i_class, 20)
inverted = feature_model.invert(samples)
fig, axes = plt.subplots(5,4, figsize=(16,12), sharex=True, sharey=True)
for ax, curve in zip(axes.flatten(), var_to_np(inverted).squeeze()):
ax.plot(curve.T, color=seaborn.color_palette()[i_class])
display_close(fig)
fig,axes = plt.subplots(1,2, figsize=(12,2), sharex=True, sharey=True)
for i_class in range(2):
cur_mean, cur_std = class_dist.get_mean_std(i_class, )
inverted = invert(feature_model, cur_mean.unsqueeze(0))
axes[0].plot(var_to_np(inverted.squeeze())[7], color=seaborn.color_palette()[i_class])
axes[1].plot(var_to_np(inverted.squeeze())[11], color=seaborn.color_palette()[i_class])
axes[0].set_title(sensor_names[7])
axes[1].set_title(sensor_names[11])
plt.legend(("Right Hand", "Rest"), bbox_to_anchor=(1,1,0,0))
display(fig)
plt.close(fig)
# +
inverted_per_class = []
for i_class in range(2):
cur_mean, cur_std = class_dist.get_mean_std(i_class, )
inverted = invert(feature_model, cur_mean.unsqueeze(0))
inverted_per_class.append(var_to_np(inverted).squeeze())
signals = np.stack(inverted_per_class, axis=-1)
fig = plot_head_signals_tight(signals, sensor_names, sensor_map=tight_bcic_4_2a_positions)
# -
fig = plot_head_signals_tight(signals[:,:,0], sensor_names, sensor_map=tight_bcic_4_2a_positions,)
plt.ylim(-3,3)
fig.suptitle("Right Hand")
fig = plot_head_signals_tight(signals[:,:,1], sensor_names, sensor_map=tight_bcic_4_2a_positions)
plt.ylim(-3,3)
fig.suptitle("Rest")
| notebooks/constant-memory/ConstantMemory2Chans64.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 4.4e-05, "end_time": "2018-06-12T14:57:03.995204", "exception": false, "start_time": "2018-06-12T14:57:03.995160", "status": "completed"} tags=[]
# # PSF Generation Validation Template
# + [markdown] papermill={"duration": 2e-05, "end_time": "2018-06-12T14:57:04.004165", "exception": false, "start_time": "2018-06-12T14:57:04.004145", "status": "completed"} tags=[]
# ### Parameters
# + papermill={"duration": 0.014332, "end_time": "2018-06-12T14:57:04.032453", "exception": false, "start_time": "2018-06-12T14:57:04.018121", "status": "completed"} tags=[]
# Debug
# psf_args = '{"pz": 0}'
# + papermill={"duration": 0.013751, "end_time": "2018-06-12T14:57:04.048157", "exception": false, "start_time": "2018-06-12T14:57:04.034406", "status": "completed"} tags=["default parameters"]
# Parameters
psf_args = None
# + papermill={"duration": 0.018548, "end_time": "2018-06-12T14:57:04.066980", "exception": false, "start_time": "2018-06-12T14:57:04.048432", "status": "completed"} tags=["parameters"]
# Parameters
psf_args = "{\"pz\": 0.0, \"size_x\": 512, \"size_y\": 400, \"size_z\": 16}"
# + papermill={"duration": 0.022263, "end_time": "2018-06-12T14:57:04.089324", "exception": false, "start_time": "2018-06-12T14:57:04.067061", "status": "completed"} tags=[]
# Parse parameters
import json
psf_args = json.loads(psf_args)
psf_args
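# papermill injects parameters as a JSON string, which the cell above decodes into a dict of keyword arguments. A toy round-trip with hypothetical values:

```python
import json

raw = "{\"pz\": 0.0, \"size_x\": 512}"
args = json.loads(raw)
print(args)  # {'pz': 0.0, 'size_x': 512}
# The dict can then be splatted into a constructor, e.g. GibsonLanni(**args)
```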
# + [markdown] papermill={"duration": 1.3e-05, "end_time": "2018-06-12T14:57:04.089913", "exception": false, "start_time": "2018-06-12T14:57:04.089900", "status": "completed"} tags=[]
# ### Initialize
# + papermill={"duration": 1.126863, "end_time": "2018-06-12T14:57:05.227825", "exception": false, "start_time": "2018-06-12T14:57:04.100962", "status": "completed"} tags=[]
# %run utils.py
import os
import os.path as osp
import shutil
import tempfile
import numpy as np
import scipy
import pandas as pd
import papermill as pm
from skimage.measure import compare_ssim, compare_psnr
from skimage.exposure import rescale_intensity
import matplotlib.pyplot as plt
from scipy.stats import describe
from skimage import io
from flowdec import psf as fd_psf
PSFGEN_JAR_PATH = osp.join(osp.expanduser('~/apps/psfgenerator'), 'PSFGenerator.jar')
# + papermill={"duration": 0.012577, "end_time": "2018-06-12T14:57:05.241008", "exception": false, "start_time": "2018-06-12T14:57:05.228431", "status": "completed"} tags=[]
psf = fd_psf.GibsonLanni(**psf_args)
psf.config
# + [markdown] papermill={"duration": 1.2e-05, "end_time": "2018-06-12T14:57:05.241446", "exception": false, "start_time": "2018-06-12T14:57:05.241434", "status": "completed"} tags=[]
# ### Compute PSFs
# + papermill={"duration": 0.013796, "end_time": "2018-06-12T14:57:05.266274", "exception": false, "start_time": "2018-06-12T14:57:05.252478", "status": "completed"} tags=[]
def run_psfgenerator(config, mode, jar_path, delete_working_dir=True, dtype='64-bits'):
working_dir = tempfile.mkdtemp()
print('Using working directory:', working_dir)
cwd = os.getcwd()
try:
os.chdir(working_dir)
# Convert the configuration for the given Flowdec PSF to a PSFGenerator config
psfg_config = flowdec_config_to_psfgenerator_config(config, mode=mode, dtype=dtype)
config_string = psfgenerator_config_to_string(psfg_config)
# Write the config to a file
config_path = osp.join(working_dir, 'config.txt')
with open(config_path, 'w') as fd:
fd.write(config_string)
# Run PSFGenerator and read the output from it
# !java -cp $jar_path PSFGenerator config.txt
output_path = osp.join(working_dir, 'PSF {}.tif'.format(mode))
res = io.imread(output_path)
# Delete the working directory if requested
if delete_working_dir:
shutil.rmtree(working_dir)
return res, psfg_config, working_dir
finally:
os.chdir(cwd)
# + papermill={"duration": 8.688561, "end_time": "2018-06-12T14:57:13.954899", "exception": false, "start_time": "2018-06-12T14:57:05.266338", "status": "completed"} tags=[]
pg_res, pg_conf, pg_dir = run_psfgenerator(psf.config, 'GL', PSFGEN_JAR_PATH)
# + papermill={"duration": 0.026785, "end_time": "2018-06-12T14:57:13.982552", "exception": false, "start_time": "2018-06-12T14:57:13.955767", "status": "completed"} tags=[]
pg_conf
# + papermill={"duration": 0.017523, "end_time": "2018-06-12T14:57:14.000154", "exception": false, "start_time": "2018-06-12T14:57:13.982631", "status": "completed"} tags=[]
pg_res.shape, pg_res.dtype
# + papermill={"duration": 0.202437, "end_time": "2018-06-12T14:57:14.202876", "exception": false, "start_time": "2018-06-12T14:57:14.000439", "status": "completed"} tags=[]
fd_res = psf.generate().astype(np.float32)
# + papermill={"duration": 0.02033, "end_time": "2018-06-12T14:57:14.223990", "exception": false, "start_time": "2018-06-12T14:57:14.203660", "status": "completed"} tags=[]
fd_res.shape, fd_res.dtype
# + papermill={"duration": 0.193246, "end_time": "2018-06-12T14:57:14.417472", "exception": false, "start_time": "2018-06-12T14:57:14.224226", "status": "completed"} tags=[]
describe(fd_res.ravel()), describe(pg_res.ravel())
# + [markdown] papermill={"duration": 1.5e-05, "end_time": "2018-06-12T14:57:14.418215", "exception": false, "start_time": "2018-06-12T14:57:14.418200", "status": "completed"} tags=[]
# ### Visualize
# + papermill={"duration": 0.015221, "end_time": "2018-06-12T14:57:14.449793", "exception": false, "start_time": "2018-06-12T14:57:14.434572", "status": "completed"} tags=[]
def compare_orthogonal_views(img_fd, img_pg, pct=None, figsize=(16, 16), log=True):
fig, ax = plt.subplots(3, 2)
fig.set_size_inches(figsize)
sh = img_fd.shape
crop_slice = [slice(None)] * 3
if pct:
m = np.array(sh) // 2
md = np.array(sh) // (1/pct)
crop_slice = [slice(int(m[i] - md[i]), int(m[i] + md[i])) for i in range(len(m))]
ax_map = ['Z', 'Y', 'X']
for i in range(3):
im1, im2 = img_fd.max(axis=i), img_pg.max(axis=i)
if log:
im1, im2 = np.log(im1), np.log(im2)
ax[i][0].imshow(im1[[cs for j, cs in enumerate(crop_slice) if j != i]])
ax[i][0].set_title('Max {} Projection (Flowdec)'.format(ax_map[i]))
ax[i][1].imshow(im2[[cs for j, cs in enumerate(crop_slice) if j != i]])
ax[i][1].set_title('Max {} Projection (PSFGenerator)'.format(ax_map[i]))
# + papermill={"duration": 0.889868, "end_time": "2018-06-12T14:57:15.341144", "exception": false, "start_time": "2018-06-12T14:57:14.451276", "status": "completed"} tags=[]
# Full PSF orthogonal views (no zoom)
compare_orthogonal_views(fd_res, pg_res, None)
# + papermill={"duration": 0.808662, "end_time": "2018-06-12T14:57:16.150338", "exception": false, "start_time": "2018-06-12T14:57:15.341676", "status": "completed"} tags=[]
# PSF orthogonal views at 50% zoom
compare_orthogonal_views(fd_res, pg_res, .25, log=True)
# + papermill={"duration": 0.817071, "end_time": "2018-06-12T14:57:16.968024", "exception": false, "start_time": "2018-06-12T14:57:16.150953", "status": "completed"} tags=[]
# PSF orthogonal views at 25% zoom
compare_orthogonal_views(fd_res, pg_res, .125, log=True)
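The `pct` argument keeps a centered window whose half-width is `pct` of each axis length, so `pct=.25` corresponds to the 50% zoom noted in the comment. A minimal standalone sketch of that slice arithmetic (no image data needed):

```python
import numpy as np

def center_crop_slices(shape, pct):
    # Half-width of the kept window is pct * axis length, centered at the
    # midpoint, mirroring the crop logic in compare_orthogonal_views.
    m = np.array(shape) // 2
    md = np.array(shape) // (1 / pct)
    return [slice(int(m[i] - md[i]), int(m[i] + md[i])) for i in range(len(shape))]

slices = center_crop_slices((16, 400, 512), 0.25)
# Each slice spans 2 * pct of the axis, i.e. 50% of its extent.
print([s.stop - s.start for s in slices])  # [8, 200, 256]
```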
# + [markdown] papermill={"duration": 1.3e-05, "end_time": "2018-06-12T14:57:16.968550", "exception": false, "start_time": "2018-06-12T14:57:16.968537", "status": "completed"} tags=[]
# ### Quantify
# + papermill={"duration": 1.113761, "end_time": "2018-06-12T14:57:18.101718", "exception": false, "start_time": "2018-06-12T14:57:16.987957", "status": "completed"} tags=[]
def get_summary_df(fd_res, pg_res):
return pd.concat([
pd.Series(fd_res.ravel()).describe().rename('Flowdec'),
        pd.Series(pg_res.ravel()).describe().rename('PSFGenerator'),
pd.Series((fd_res - pg_res).ravel()).describe().rename('Diff')
], axis=1)
df_orig = get_summary_df(fd_res, pg_res)
df_log = get_summary_df(np.log(fd_res), np.log(pg_res))
pm.record('df_original', df_orig.to_dict())
pm.record('df_log', df_log.to_dict())
# + papermill={"duration": 0.025813, "end_time": "2018-06-12T14:57:18.128432", "exception": false, "start_time": "2018-06-12T14:57:18.102619", "status": "completed"} tags=[]
df_orig
# + papermill={"duration": 0.018234, "end_time": "2018-06-12T14:57:18.147527", "exception": false, "start_time": "2018-06-12T14:57:18.129293", "status": "completed"} tags=[]
df_log
# + papermill={"duration": 1.407434, "end_time": "2018-06-12T14:57:19.560669", "exception": false, "start_time": "2018-06-12T14:57:18.153235", "status": "completed"} tags=[]
measures = {
'ssim_original': compare_ssim(fd_res, pg_res),
'psnr_original': compare_psnr(fd_res, pg_res),
'ssim_log': compare_ssim(
rescale_intensity(np.log(fd_res), out_range=(0, 1)),
rescale_intensity(np.log(pg_res), out_range=(0, 1))
),
'psnr_log': compare_psnr(
rescale_intensity(np.log(fd_res), out_range=(0, 1)),
rescale_intensity(np.log(pg_res), out_range=(0, 1))
)
}
pm.record('measures', measures)
measures
| python/validation/psfgeneration/results/large-xy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
class first:
    def fun():  # no self parameter, so call it on the class rather than an instance
        return print('first')
class second(first):
    first.fun()  # runs once, at class-definition time, and prints 'first'
class person:
    x = 'i am public'  # can be changed from outside the class
    _y = 'hi i am protected'  # protected by convention only; can still be changed from outside the class
    __z = 'hi i am private'  # name-mangled to _person__z; cannot be changed under this name from outside the class
def show(self):
print(person.x)
print(person._y)
print(person.__z)
def change( self,x,y,z):
person.x = x
person._y = y
person.__z = z
one = person()
two = person()
person.__z = 'i have hacked you'  # no mangling outside the class: this creates a new attribute literally named __z
one.show()
one.change(10,20,30)
one.show()
print()
two.show()
two.change(50,60,70)
two.show()
print(dir(person))
print(person.__z)
person._y = 'y is hacked'
print(dir(person))
print(person._y)  # single-underscore names are not mangled, so access them directly
print(person._person__z)
# +
# assignment through self always creates an instance attribute,
# whether or not the same name already exists on the class
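That shadowing behaviour can be demonstrated in a few lines; this sketch uses a throwaway class, not the `person` class above:

```python
class Demo:
    x = 'class value'

    def set_via_self(self, value):
        # Assignment through self creates an instance attribute,
        # whether or not a class attribute of the same name exists.
        self.x = value

a = Demo()
b = Demo()
a.set_via_self('instance value')
print(a.x)     # instance value (instance attribute shadows the class one)
print(b.x)     # class value (b has no instance attribute, so lookup falls back to the class)
print(Demo.x)  # class value (untouched)
```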
| Inheritance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Irreducible active flows: effect of plane boundaries
#
# The expression of fluid flow can be written in terms of modes of the force per unit area and surface slip. These include
#
# $$\boldsymbol{v}^{1s}(\boldsymbol{r})=-(1+\tfrac{b^{2}}{6}\nabla^{2})\,\mathbf{G}\cdot\mathbf{F}^{(1s)},$$
# $$\boldsymbol{v}^{2s}(\boldsymbol{r})=\tfrac{28\pi\eta b^{2}}{3}(1+\tfrac{b^{2}}{10}\nabla^{2})\,\boldsymbol{\nabla}\mathbf{G}\cdot\mathbf{V}^{(2s)},$$
# $$\boldsymbol{v}^{3t}(\boldsymbol{r})=\tfrac{2\pi\eta b^{3}}{5}\nabla^{2}\mathbf{G}\cdot\mathbf{V}^{(3t)}$$
#
# We emphasise that these expressions are valid for any Green's function of the Stokes equation, provided they satisfy the additional boundary conditions that may be imposed.
#
# Our second example illustrates how irreducible flows are modified by the proximity to plane boundaries. This is of relevance to experiments, where confinement by boundaries is commonplace [#goldstein2015green, #thutupalli2018FIPS]. This also illustrates the flexibility of our method, as the only quantity that needs to be changed is the Green's function. The Green's function for a no-slip wall is the Lorentz-Blake tensor
#
# $$
# G_{\alpha\beta}^{\text{w}}(\boldsymbol{R}_{i},\,\boldsymbol{R}_{j}) = G_{\alpha\beta}^{\text{o}}(\boldsymbol{r}_{ij})-G_{\alpha\beta}^{\text{o}}(\boldsymbol{r}_{ij}^{*})-2h\nabla_{{\scriptscriptstyle \boldsymbol{r}_{\gamma}^{*}}}G_{\alpha3}^{\text{o}}(\boldsymbol{r}_{ij}^{*})\mathcal{M}_{\beta\gamma}+h^{2}\nabla_{{\scriptscriptstyle \boldsymbol{r}^{*}}}^{2}G_{\alpha\gamma}^{\text{o}}(\boldsymbol{r}_{ij}^{*})\mathcal{M}_{\beta\gamma}.
# $$
#
# Here $\boldsymbol{r}_{ij}^{*}=\boldsymbol{R}_{i}-\boldsymbol{R}_{j}^{*}$, where $\boldsymbol{R}_{j}^{*}=\boldsymbol{\mathcal{M}}\cdot\boldsymbol{R}_{j}$ is the image of the $j$-th colloid at a distance $h$ from the plane boundary and $\boldsymbol{\mathcal{M}}=\boldsymbol{I}-2\mathbf{\hat{z}}\mathbf{\hat{z}}$ is the reflection operator. The Green's function for a no-shear plane air-water interface is
#
# $$
# G_{\alpha\beta}^{\text{i}}(\boldsymbol{R}_{i},\,\boldsymbol{R}_{j}) = G_{\alpha\beta}^{\text{o}}(\boldsymbol{r}_{ij})+(\delta_{\beta\rho}\delta_{\rho\gamma}-\delta_{\beta3}\delta_{3\gamma})G_{\alpha\gamma}^{\text{o}}(\boldsymbol{r}_{ij}^{*}).
# $$
# The plane boundary is placed at z=0 and the flows are plotted in the half-space z>0.
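As a quick numerical illustration of the image construction (plain linear algebra, not a PyStokes API call): the reflection operator $\boldsymbol{\mathcal{M}}=\boldsymbol{I}-2\mathbf{\hat{z}}\mathbf{\hat{z}}$ simply flips the z-component of a colloid position, placing the image below the wall at z = 0.

```python
def reflect(v):
    # M = I - 2 zz for a wall in the z = 0 plane: identity on x and y,
    # sign flip on z.
    M = [[1, 0, 0],
         [0, 1, 0],
         [0, 0, -1]]
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Colloid at height h = 3.4 above the wall; its image sits at z = -3.4.
print(reflect([0.0, 0.0, 3.4]))  # [0.0, 0.0, -3.4]
```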
# %%capture
## compile PyStokes for this notebook
import os
owd = os.getcwd()
os.chdir('../')
# %run setup.py install
os.chdir(owd)
# %matplotlib inline
import pystokes
import numpy as np, matplotlib.pyplot as plt
# +
# particle radius, fluid viscosity, and number of particles
b, eta, Np = 1.0, 1.0/6.0, 1
#initialise position, orientation and body force on the colloid
r, p, F = np.array([0.0, 0.0, 3.4]), np.array([0.0, 0.0, -1]), np.array([0.0, 0.0, 1])
# irreducible coefficients
V2s = pystokes.utils.irreducibleTensors(2, p)
V3t = pystokes.utils.irreducibleTensors(1, p)
# +
# space dimension , extent , discretization
dim, L, Ng = 3, 10, 128
# Instantiate the Flow class near a plane wall and a plane interface
wFlow = pystokes.wallBounded.Flow(radius=b, particles=Np, viscosity=eta, gridpoints=Ng*Ng)
iFlow = pystokes.interface.Flow(radius=b, particles=Np, viscosity=eta, gridpoints=Ng*Ng)
# +
plt.figure(figsize=(24, 8), edgecolor='gray', linewidth=4)
# create the grid
rr, vv = pystokes.utils.gridYZ(dim, L, Ng)
plt.subplot(231); vv=vv*0;
wFlow.flowField1s(vv, rr, r, F)
pystokes.utils.plotStreamlinesYZsurf(vv, rr, r, ms=44,offset=1e-1, title='lσ = 1s', density=2)
plt.subplot(232); vv=vv*0;
wFlow.flowField2s(vv, rr, r, V2s)
pystokes.utils.plotStreamlinesYZsurf(vv, rr, r, ms=44, offset=1e-1, title='lσ = 2s', density=2)
plt.subplot(233); vv=vv*0;
wFlow.flowField3t(vv, rr, r, V3t)
pystokes.utils.plotStreamlinesYZsurf(vv, rr, r, ms=44,offset=4e-2, title='lσ = 3t', density=2)
plt.subplot(234); vv=vv*0;
iFlow.flowField1s(vv, rr, r, F)
pystokes.utils.plotStreamlinesYZsurf(vv, rr, r, ms=44,offset=4e-1, title='None', density=2)
plt.subplot(235); vv=vv*0;
iFlow.flowField2s(vv, rr, r, V2s)
pystokes.utils.plotStreamlinesYZsurf(vv, rr, r, ms=44,offset=1e-1, title='None', density=2)
plt.subplot(236); vv=vv*0;
iFlow.flowField3t(vv, rr, r, V3t)
pystokes.utils.plotStreamlinesYZsurf(vv, rr, r, ms=44,offset=6e-2, title='None', density=2)
| examples/ex2-flowPlaneSurface.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How to download from SRA/NCBI
# This will be a quick tutorial on how to download datasets more easily from SRA or NCBI.
# We will take advantage of several open-source scripts and tools for ease of use.
# ## NCBI download
# For certain things, you can just go directly to [NCBI genomes](https://www.ncbi.nlm.nih.gov/guide/genomes-maps/), search for the genome of interest, open the page and download as a link. For something more programmatic, we utilize the two tools produced by <NAME> (also creator of antiSMASH):
# ### [NCBI Genomes downloader](https://github.com/kblin/ncbi-genome-download)
# Allows for download based upon NCBI genome search criteria
# ### [NCBI ACC downloader](https://github.com/kblin/ncbi-acc-download)
# Allows for download based upon NCBI direct accessions
#
# In the example below, we will use the same download item we would use for the upcoming WGS 101 session. Since we know the specific accession of our reference genome, we can use the NCBI ACC downloader, set the output format to fasta, and enter the accession number. It will then download this sequence.
# + language="bash"
# ncbi-acc-download --format fasta NZ_CP017669.1
# -
# ## SRA Download
# Alternatively, for SRA downloads I don't believe you can download directly anymore; you have to use their [SRA toolkit](https://github.com/ncbi/sra-tools). This can sometimes be a bit finicky: occasionally it will refuse to download reads that are present, for an unknown reason. In cases like that, usually the same SRA project is accessible through ENA, and then you can use a standard wget call as an alternative. We will review that a bit more in the unix sessions.
# ### Direct SRA toolkit download
# To find your specific SRA files for download, you must first find the relevant project in [GEO](https://www.ncbi.nlm.nih.gov/geo/)/[SRA](https://www.ncbi.nlm.nih.gov/sra). Once you have found a project of interest, you can identify the specific sample names you need to download. If using the SRA run downloader tool, you will be able to download a .txt file that contains all of the sample names for a specific project instead of manually going into each one.
#
# After installing the SRA toolkit, you can use the fastq-dump function. Adding "--split-files" allows it to natively split it into R1 and R2 (if applicable), "--gzip" compresses the output to .gz, and then "--outdir" defines the output directory- in this case "./" is the local directory, and then lastly the SRA accession information for the direct sample.
# + language="bash"
# fastq-dump --split-files --gzip --outdir ./ SAMN09914146
# -
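For many accessions, assembling the fastq-dump calls in a small script saves retyping. This sketch only builds the command lists (the second accession here is hypothetical); each list can then be passed to `subprocess.run` to actually download:

```python
def build_fastq_dump_cmds(accessions, outdir="./"):
    # One fastq-dump invocation per accession, mirroring the flags used above.
    return [
        ["fastq-dump", "--split-files", "--gzip", "--outdir", outdir, acc]
        for acc in accessions
    ]

cmds = build_fastq_dump_cmds(["SAMN09914146", "SAMN09914147"])
for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually download
```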
# ### Wget from ENA alternative
# As mentioned, you can directly download using wget (a standard unix download tool) from ENA if SRA is failing. This will require a little bit more manual digging to set up and moving through the full FTP site that ENA hosts.
# This breaks down into the call "wget", then followed by the FTP path of the data.
# + language="bash"
# wget ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR840/001/SRR8404401/SRR8404401_1.fastq.gz
# -
| SRA_download.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Analyse predictions by cTaG2
# This notebook is used to analyse predictions made by cTaG2 for COAD data.
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# ## Load the prediction file
# Variables to be updated for loading the prediction file.
# Either the absolute path is given to *filepath* or set *PATH* and the relative path is set based on cancer type, label method, number of features and model.
#
# Default file name is "cTaG2_predictions.tsv". For other filenames, update *fname*.
# +
PATH = "/data/malvika/cTaG2.0"
ctype = "COAD"
lab_type = "bailey"
feat_num = "some"
model = "BalBag"
folderpath = "/output/GDC_{}/predict/multiomic".format(ctype)
filepath = PATH + folderpath + "/{}_{}_{}".format(lab_type, feat_num, model)
## Uncomment line below to set absolute path
# filepath = "\path\to\prediction\file"
# -
os.chdir(filepath)
fname = "cTaG2_predictions.tsv"
data = pd.read_csv(fname, sep="\t", header=0, index_col=0)
data.head(5)
print("Number of total samples = {}".format(len(data["Tumor_Sample_Barcode"].unique())))
print("Total number of unique genes = {}".format(len(data["Hugo_Symbol"].unique())))
print("Total number of unique TSG = {}".format(list(data[["Hugo_Symbol", "Predicted label"]].drop_duplicates()["Predicted label"]).count("Tumor suppressor")))
print("Total number of unique OG = {}".format(list(data[["Hugo_Symbol", "Predicted label"]].drop_duplicates()["Predicted label"]).count("Oncogene")))
data_unigenes = data[["Hugo_Symbol", "Predicted label"]].drop_duplicates()
multi_genes = [gene for gene in data_unigenes["Hugo_Symbol"].unique() if data_unigenes[data_unigenes["Hugo_Symbol"]==gene].shape[0]>1]
len(multi_genes)
# ## Summary statistics for sample
# Calculates the total number of genes predicted for each sample along with other counts.
#
# Counts the number of genes with mutation, and CNV alterations.
# Counts number of TSGs and OGs predicted for the sample.
data_samp = pd.DataFrame()
data_samp["Tumor_Sample_Barcode"] = data["Tumor_Sample_Barcode"].unique()
data_samp["Num_genes"] = [data[data["Tumor_Sample_Barcode"] == samp].shape[0] for samp in data_samp["Tumor_Sample_Barcode"]]
data_samp["Num_mut"] = [data[(data["Tumor_Sample_Barcode"] == samp) & ~(data["Variant_Classification"].isna())].shape[0] for samp in data_samp["Tumor_Sample_Barcode"]]
data_samp["Num_cnv"] = [data[(data["Tumor_Sample_Barcode"] == samp) & ~(data["CNV"] == 0)].shape[0] for samp in data_samp["Tumor_Sample_Barcode"]]
data_samp["Num_TSG"] = [data[(data["Tumor_Sample_Barcode"] == samp) & (data["Predicted label"] == "Tumor suppressor")].shape[0] for samp in data_samp["Tumor_Sample_Barcode"]]
data_samp["Num_OG"] = [data[(data["Tumor_Sample_Barcode"] == samp) & (data["Predicted label"] == "Oncogene")].shape[0] for samp in data_samp["Tumor_Sample_Barcode"]]
data_samp.head()
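The boolean filters above rescan the full frame once per sample; for large tables the same per-sample tallies can be sketched in a single pass with `collections.Counter`. The records and sample names below are toy stand-ins for rows of `data`:

```python
from collections import Counter

# (sample, predicted_label) records standing in for rows of `data`
records = [
    ("S1", "Tumor suppressor"), ("S1", "Oncogene"), ("S1", "Oncogene"),
    ("S2", "Tumor suppressor"),
]

# One pass per tally instead of one frame scan per sample
genes_per_sample = Counter(samp for samp, _ in records)
tsg_per_sample = Counter(s for s, lab in records if lab == "Tumor suppressor")

print(genes_per_sample["S1"], tsg_per_sample["S1"])  # 3 1
```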
# ### Distribution of number of genes predicted
# The distribution of number of genes identified for each sample.
# +
ax = sns.distplot(data_samp["Num_genes"])
# plt.ylim(-0.05, 1)
ax.set_xlabel("Number of genes predicted",fontsize=15)
ax.set_ylabel("Density of samples",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
# os.chdir(filepath)
# fname = "{}_geneXsamp.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ### Distribution of number of mutated genes predicted
# The distribution of number of genes identified for each sample that were mutated.
# +
ax = sns.distplot(data_samp["Num_mut"])
ax.set_xlabel("Number of mutated genes",fontsize=15)
ax.set_ylabel("Density of samples",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
# os.chdir(filepath)
# fname = "{}_mutXsamp.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ### Distribution of number of genes predicted with CNVs
# The distribution of number of genes identified for each sample that showed CNVs.
# +
ax = sns.distplot(data_samp["Num_cnv"])
ax.set_xlabel("Number of CNV",fontsize=15)
ax.set_ylabel("Density of samples",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
# os.chdir(filepath)
# fname = "{}_cnvXsamp.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ### Distribution of number of TSGs predicted
# The distribution of number of genes labelled as TSG for each sample.
# +
ax = sns.distplot(data_samp["Num_TSG"])
ax.set_xlabel("Number of TSGs",fontsize=15)
ax.set_ylabel("Density of samples",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
# os.chdir(filepath)
# fname = "{}_tsgXsamp.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ### Distribution of number of OGs predicted
# The distribution of number of genes labelled as OG for each sample.
# +
ax = sns.distplot(data_samp["Num_OG"])
ax.set_xlabel("Number of OGs",fontsize=15)
ax.set_ylabel("Density of samples",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
# os.chdir(filepath)
# fname = "{}_ogXsamp.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ### Distribution of degree of genes predicted
# The distribution of degree of genes identified for each sample.
# +
ax = sns.distplot(data["Degree"])
ax.set_xlabel("Degree of a gene",fontsize=15)
ax.set_ylabel("Density of samples",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
# os.chdir(filepath)
# fname = "{}_degreeXsamp.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ## Gene based statistics
#
data_genes = pd.DataFrame(data.groupby(["Hugo_Symbol"])["Tumor_Sample_Barcode"].count())
data_genes["Degree"] = data.groupby(["Hugo_Symbol"])["Degree"].mean()
# ### Consensus with CGC
# Load CGC data and check for consensus
os.chdir(PATH + "/data/driver genes/CGC")
fname = "cancer_gene_census_9nov2021.csv"
data_cgc = pd.read_csv(fname, sep=",", header=0, index_col=0)
data_genes.loc[:, "CGC"] = [True if gene in list(data_cgc.index) else False for gene in list(data_genes.index)]
print("Total number of genes showing consensus with CGC = {}".format(data_genes["CGC"].sum()))
# ### Distribution of number of samples a gene is predicted as driver
# The distribution of number of samples a gene is identified as driver.
# +
ax = sns.distplot(data_genes["Tumor_Sample_Barcode"], label="All genes")
sns.distplot(data_genes[data_genes.CGC ==True]["Tumor_Sample_Barcode"], label="CGC genes")
sns.distplot(data_genes[data_genes.CGC ==False]["Tumor_Sample_Barcode"], label="novel genes")
ax.set_xlabel("Number of samples",fontsize=15)
ax.set_ylabel("Density of genes altered",fontsize=15)
ax.tick_params(labelsize=14)
plt.title(ctype, fontsize=16)
plt.tight_layout()
plt.legend()
# os.chdir(filepath)
# fname = "{}_sampXgenes.png".format(ctype)
# plt.savefig(fname, dpi=300)
# plt.close()
# -
# ### Top 10 genes frequently predicted
# The list of genes and the number of samples they are predicted as TSG or OG.
data_genes.sort_values(by=["Tumor_Sample_Barcode"], ascending=False)[:10]
# ### Top 10 genes rarely predicted as driver
# The list of genes and the number of samples in which they are predicted as TSG or OG. Ten of the most rarely predicted genes are listed below; since many genes are predicted in only one sample, the selection among ties is effectively random.
data_genes.sort_values(by=["Tumor_Sample_Barcode"], ascending=True)[:10]
| code/analyse_predictions_COAD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FCFS WITH ARRIVAL TIME
# +
def findWaitingTime(processes, n, bt, wt, at):
service_time = [0] * n
service_time[0] = 0
wt[0] = 0
for i in range(1, n):
service_time[i] = (service_time[i - 1] + bt[i - 1])
wt[i] = service_time[i] - at[i]
if (wt[i] < 0):
wt[i] = 0
def findTurnAroundTime(processes, n, bt, wt, tat):
for i in range(n):
tat[i] = bt[i] + wt[i]
def findavgTime(processes, n, bt, at):
wt = [0] * n
tat = [0] * n
findWaitingTime(processes, n, bt, wt, at)
findTurnAroundTime(processes, n, bt, wt, tat)
print("Processes Burst Time Arrival Time Waiting",
"Time Turn-Around Time Completion Time \n")
total_wt = 0
total_tat = 0
for i in range(n):
total_wt = total_wt + wt[i]
total_tat = total_tat + tat[i]
compl_time = tat[i] + at[i]
print(" ", i + 1, "\t\t", bt[i], "\t\t", at[i],
"\t\t", wt[i], "\t\t ", tat[i], "\t\t ", compl_time)
print("Average waiting time = %.5f "%(total_wt /n))
print("\nAverage turn around time = ", total_tat / n)
processes = [0,1, 2, 3]
n = 4
burst_time = [11, 10,7, 16]
arrival_time = [0, 5,2,9]
findavgTime(processes, n, burst_time,
arrival_time)
# -
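Tracing the input above by hand checks the logic: processes are served in list order, so each service start time is the running sum of the earlier burst times, and waiting time is the start time minus arrival (floored at zero). A standalone re-derivation for `bt = [11, 10, 7, 16]`, `at = [0, 5, 2, 9]`:

```python
bt = [11, 10, 7, 16]
at = [0, 5, 2, 9]

start = [0] * len(bt)
for i in range(1, len(bt)):
    start[i] = start[i - 1] + bt[i - 1]   # running sum of earlier bursts

wt = [max(0, start[i] - at[i]) for i in range(len(bt))]
tat = [bt[i] + wt[i] for i in range(len(bt))]

print(wt)                 # [0, 6, 19, 19]
print(sum(wt) / len(wt))  # 11.0 -- matches the average printed by findavgTime
print(sum(tat) / len(tat))  # 22.0
```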
# # FCFS WITHOUT ARRIVAL TIME
def findWaitingTime(processes, n, bt, wt):
wt[0] = 0
for i in range(1, n ):
wt[i] = bt[i - 1] + wt[i - 1]
def findTurnAroundTime(processes, n, bt, wt, tat):
for i in range(n):
tat[i] = bt[i] + wt[i]
def findavgTime( processes, n, bt):
wt = [0] * n
tat = [0] * n
total_wt = 0
total_tat = 0
findWaitingTime(processes, n, bt, wt)
findTurnAroundTime(processes, n,bt, wt, tat)
print( "Processes Burst time " +
" Waiting time " +
" Turn around time")
for i in range(n):
total_wt = total_wt + wt[i]
total_tat = total_tat + tat[i]
print(" " + str(i + 1) + "\t\t" +
str(bt[i]) + "\t " +
str(wt[i]) + "\t\t " +
str(tat[i]))
print( "Average waiting time = "+
str(total_wt / n))
print("Average turn around time = "+
str(total_tat / n))
processes = [0,1, 2, 3]
n = 4
burst_time = [11, 10,7, 16]
arrival_time = [0, 5,2,9]
findavgTime(processes, n, burst_time)
# # Priority Scheduling
print("Enter the number of processess: ")
n=int(input())
processes=[]
for i in range(0,n):
processes.insert(i,i+1)
print("\nEnter the burst time of the processes: \n")
bt=list(map(int, input().split()))
print("\nEnter the priority of the processes: \n")
priority=list(map(int, input().split()))
tat=[]
wt=[]
for i in range(0,len(priority)-1):
for j in range(0,len(priority)-i-1):
if(priority[j]>priority[j+1]):
swap=priority[j]
priority[j]=priority[j+1]
priority[j+1]=swap
swap=bt[j]
bt[j]=bt[j+1]
bt[j+1]=swap
swap=processes[j]
processes[j]=processes[j+1]
processes[j+1]=swap
wt.insert(0,0)
tat.insert(0,bt[0])
for i in range(1,len(processes)):
wt.insert(i,wt[i-1]+bt[i-1])
tat.insert(i,wt[i]+bt[i])
avgtat=0
avgwt=0
for i in range(0,len(processes)):
avgwt=avgwt+wt[i]
avgtat=avgtat+tat[i]
avgwt=float(avgwt)/n
avgtat=float(avgtat)/n
print("\n")
print("Process\t Burst Time\t Waiting Time\t Turn Around Time")
for i in range(0,n):
print(str(processes[i])+"\t\t"+str(bt[i])+"\t\t"+str(wt[i])+"\t\t"+str(tat[i]))
print("\n")
print("Average Waiting time is: "+str(avgwt))
print("Average Turn Around Time is: "+str(avgtat))
# # Round Robin Algo
def findWaitingTime(processes, n, bt, wt, quantum):
rem_bt = [0] * n
for i in range(n):
rem_bt[i] = bt[i]
t = 0
while(1):
done = True
for i in range(n):
if (rem_bt[i] > 0) :
done = False
if (rem_bt[i] > quantum) :
t = t+quantum
rem_bt[i] =rem_bt[i]- quantum
else:
t = t + rem_bt[i]
wt[i] = t - bt[i]
rem_bt[i] = 0
if (done == True):
break
def findTurnAroundTime(processes, n, bt, wt, tat):
for i in range(n):
tat[i] = bt[i] + wt[i]
def findavgTime(processes, n, bt, quantum):
wt = [0] * n
tat = [0] * n
findWaitingTime(processes, n, bt, wt, quantum)
findTurnAroundTime(processes, n, bt,wt, tat)
print("Processes Burst Time Waiting","Time Turn-Around Time")
total_wt = 0
total_tat = 0
for i in range(n):
total_wt = total_wt + wt[i]
total_tat = total_tat + tat[i]
print(" ", i + 1, "\t\t", bt[i],"\t\t", wt[i], "\t\t", tat[i])
print("\nAverage waiting time = %.5f "%(total_wt /n) )
print("Average turn around time = %.5f "% (total_tat / n))
proc = [1, 2, 3]
n = 3
burst_time = [5,3,2]
quantum = 2;
findavgTime(proc, n, burst_time, quantum)
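A hand trace of the run above (quantum 2, bursts 5, 3, 2) confirms the waiting times the function prints. This standalone re-check advances a clock round-robin style and records each process's completion time minus its burst (all processes arrive at time 0):

```python
bt = [5, 3, 2]
quantum = 2
rem = bt[:]
wt = [0] * len(bt)
t = 0
while any(r > 0 for r in rem):
    for i, r in enumerate(rem):
        if r == 0:
            continue
        run = min(r, quantum)   # run a full quantum, or less if the job finishes
        t += run
        rem[i] -= run
        if rem[i] == 0:
            wt[i] = t - bt[i]   # waiting = completion - burst (arrival is 0)

print(wt)                 # [5, 6, 4]
print(sum(wt) / len(wt))  # 5.0
```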
# # SJF Scheduling
def findWaitingTime(processes, n, wt):
rt = [0] * n
for i in range(n):
rt[i] = processes[i][1]
complete = 0
t = 0
minm = 999999999
short = 0
check = False
while (complete != n):
for j in range(n):
if ((processes[j][2] <= t) and
(rt[j] < minm) and rt[j] > 0):
minm = rt[j]
short = j
check = True
if (check == False):
t += 1
continue
rt[short] -= 1
minm = rt[short]
if (minm == 0):
minm = 999999999
if (rt[short] == 0):
complete += 1
check = False
fint = t + 1
            wt[short] = (fint - processes[short][1] - processes[short][2])
if (wt[short] < 0):
wt[short] = 0
t += 1
def findTurnAroundTime(processes, n, wt, tat):
for i in range(n):
tat[i] = processes[i][1] + wt[i]
def findavgTime(processes, n):
wt = [0] * n
tat = [0] * n
findWaitingTime(processes, n, wt)
findTurnAroundTime(processes, n, wt, tat)
print("Processes Burst Time Waiting",
"Time Turn-Around Time")
total_wt = 0
total_tat = 0
for i in range(n):
total_wt = total_wt + wt[i]
total_tat = total_tat + tat[i]
print(" ", processes[i][0], "\t\t",
processes[i][1], "\t\t",
wt[i], "\t\t", tat[i])
print("\nAverage waiting time = %.5f "%(total_wt /n) )
print("Average turn around time = ", total_tat / n)
if __name__ =="__main__":
proc=[]
num_proc = int(input("Enter number of processes"))
for i in range(num_proc):
pid=int(input("Enter process id for process {} :".format(i+1)))
bt=int(input("Enter Burst Time for process {} :".format(i+1)))
at=int(input("Enter arrival time for process {} :".format(i+1)))
proc.append([pid,bt,at])
n = len(proc)
findavgTime(proc, n)
| OS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import csv
import math
import random
def loadCsv(filename):
    lines = csv.reader(open(filename))
    dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]  # convert strings to float
    return dataset
def splitDataset(dataset,splitRatio):
trainSize = int(len(dataset)*splitRatio)
trainSet = []
copy = list(dataset)
while len(trainSet) < trainSize:
index = random.randrange(len(copy))
trainSet.append(copy.pop(index))
return[trainSet,copy]
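A quick sanity check of the split helper on a toy dataset; the helper is restated so the sketch is self-contained, and the seed only fixes which rows are drawn, since the split sizes depend solely on the ratio:

```python
import random

def splitDataset(dataset, splitRatio):
    # Same logic as above: pop random rows into the training set
    # until it holds splitRatio of the data.
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        index = random.randrange(len(copy))
        trainSet.append(copy.pop(index))
    return [trainSet, copy]

random.seed(0)
data = [[float(i)] for i in range(10)]
train, test = splitDataset(data, 0.67)
print(len(train), len(test))  # 6 4
```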
| NaiveBayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from numpy import *
from PIL import *
import pickle
from pylab import *
import knn
knn = reload(knn)
import imtools
imtools = reload(imtools)
with open('points_normal.pkl', 'r') as f:
class_1 = pickle.load(f)
class_2 = pickle.load(f)
labels = pickle.load(f)
model = knn.KnnClassifier(labels, vstack((class_1, class_2)))
with open('points_normal_test.pkl', 'r') as f:
class_1 = pickle.load(f)
class_2 = pickle.load(f)
labels = pickle.load(f)
print model.classify(class_1[0])
for k in arange(1, 10):
def classify(x, y, model=model, k=k):
return array([model.classify([xx, yy], k) for (xx, yy) in zip(x, y)])
imtools.plot_2D_boundary([-6, 6, -6, 6], [class_1, class_2], classify, [1, -1])
show()
# +
with open('points_ring.pkl', 'r') as f:
class_1 = pickle.load(f)
class_2 = pickle.load(f)
labels = pickle.load(f)
model = knn.KnnClassifier(labels, vstack((class_1, class_2)))
with open('points_ring_test.pkl', 'r') as f:
class_1 = pickle.load(f)
class_2 = pickle.load(f)
labels = pickle.load(f)
def classify2(x, y, model=model):
return array([model.classify([xx, yy]) for (xx, yy) in zip(x, y)])
imtools.plot_2D_boundary([-6, 6, -6, 6], [class_1, class_2], classify2, [1, -1])
show()
# -
| Chapter-8/CV Book Chapter 8 Exercise 1-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# SummarizeCFEL_Rebuild
# -
import pandas as pd
import numpy as np
import os
import json
import altair as alt
import numpy as np
#import datum
import glob
# +
DATA_DIR = r"..\results\BDNF\Recombinants"
CFEL_FILES = glob.glob(os.path.join(DATA_DIR, "*.CFEL.json"))
print("# Returned:", len(CFEL_FILES))
pvalue_threshold = 0.1
# +
def getCFEL_headers(json_file):
with open(json_file) as in_d:
json_data = json.load(in_d)
return json_data["MLE"]["headers"]
#end method
def getCFEL_fits(JSON):
with open(JSON) as in_d:
json_data = json.load(in_d)
return json_data["fits"]
#end method
def getCFEL_MLE(json_file):
with open(json_file) as in_d:
json_data = json.load(in_d)
return json_data["MLE"]["content"]["0"]
#end method
# -
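Each helper above reopens and reparses the JSON file; a hypothetical single-read variant pulls all three pieces in one pass. It is sketched here against a minimal in-memory document, since the real CFEL schema is much larger:

```python
import json

def parse_cfel(text):
    # One json.loads, returning (headers, fits, MLE content) together
    # instead of reopening the file per field.
    doc = json.loads(text)
    return (doc["MLE"]["headers"],
            doc["fits"],
            doc["MLE"]["content"]["0"])

# Minimal stand-in for a CFEL.json payload
minimal = json.dumps({
    "MLE": {"headers": [["alpha", "synonymous rate"]],
            "content": {"0": [[0.5]]}},
    "fits": {"model": "CFEL"},
})
headers, fits, mle = parse_cfel(minimal)
print(headers[0][0])  # alpha
```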
for n, i in enumerate(CFEL_FILES):
print(n, i)
# Trim
# (Artiodactyla, Carnivora, Chiroptera, Glires, Primates)
CFEL_FILES_TRIMMED = [CFEL_FILES[0],CFEL_FILES[1], CFEL_FILES[2], CFEL_FILES[9], CFEL_FILES[16]]
CFEL_FILES_TRIMMED
Label = CFEL_FILES_TRIMMED[1].split("\\")[-1].split(".")[2]
Label
# +
#TEST_FILE = CFEL_FILES[6]
TEST_FILE = CFEL_FILES_TRIMMED[4]
print("# Processing", TEST_FILE)
# MAC OSX
#Label = TEST_FILE.split("/")[-1].split(".")[2]
# Windows
Label = TEST_FILE.split("\\")[-1].split(".")[2]
columns = getCFEL_headers(TEST_FILE)
headers = [x[0] for x in columns]
df_Test = pd.DataFrame(getCFEL_MLE(TEST_FILE), columns=headers, dtype = float)
df_Test.index += 1
df_Test["Site"] = df_Test.index
df_Background = pd.DataFrame(getCFEL_MLE(TEST_FILE), columns=headers, dtype = float)
df_Background.index += 1
df_Background["Site"] = df_Background.index
header_group = "beta (" + Label + ")"
df_Test["omega"] = df_Test[header_group] / df_Test["alpha"]
df_Background["omega"] = df_Background["beta (background)"] / df_Background["alpha"]
df_Test["Color"] = Label
df_Background["Color"] = "Background"
# -
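`alpha` can be zero at individual sites, which makes `beta / alpha` infinite in the `omega` columns above. A guarded per-site sketch (toy numbers, not the real MLE values; `safe_omega` is a hypothetical helper):

```python
def safe_omega(beta, alpha):
    # dN/dS per site; None where alpha is zero (ratio undefined rather than infinite)
    return [b / a if a != 0 else None for b, a in zip(beta, alpha)]

safe_omega([0.5, 1.2, 0.0], [0.25, 0.0, 0.5])  # [2.0, None, 0.0]
```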
df_Test
df_Background
frames = [df_Test, df_Background]
result = pd.concat(frames)
result
# +
source = result
#print("BDNF_Recombinants_RELAX_" + Label + ".png")
order = ["Background", Label]
# Test set plot
line1 = alt.Chart(source).mark_bar(opacity=0.9).encode(
x='Site',
y=alt.Y('omega', scale=alt.Scale(domain=(0, 12), clamp=True)),
color=alt.Color('Color', sort=order),
).properties(
width=800,
height=600,
title=Label)
line1
# -
| notebooks/SummarizeCFEL_Rebuild.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Dependencies
import pandas as pd
from sqlalchemy import create_engine
from config import Password
# %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import scipy.stats as stats
# -
engine = create_engine(f'postgresql://postgres:{Password}@localhost/EmployeeSQL')
connection = engine.connect()
query = """
SELECT e.emp_no, t.title, s.salary
FROM employees AS e
LEFT JOIN titles AS t ON e.emp_title = t.title_id
LEFT JOIN salaries AS s ON e.emp_no = s.emp_no
"""
salary_df=pd.read_sql(query,connection)
salary_df.head()
#Create a histogram to visualize the most common salary ranges for employees.
plt.hist(salary_df["salary"],20,label="Salary")
plt.axvline(salary_df["salary"].mean(),color='k',linestyle="dashed",linewidth=1,label="Salary Mean")
plt.axvline(salary_df["salary"].median(),color='k',linestyle="solid",linewidth=1,label="Salary Median")
plt.xlabel("Salary")
plt.ylabel("Number of Employees")
plt.legend()
plt.title("Common Salary Ranges")
plt.savefig("salary_range.png")
plt.show()
avg_salary=round(salary_df.groupby('title')['salary'].mean().reset_index(),2)  # select salary before mean() so non-numeric columns are not averaged
avg_salary
#Create a bar chart of average salary by title.
plt.bar(avg_salary["title"],avg_salary["salary"])
plt.xticks(rotation=90)
plt.xlabel("Title")
plt.ylabel("Average Salary ($)")
plt.title("Average Salary By Title")
plt.grid(axis='y')
plt.savefig("sal_by_title.png")
plt.show()
| EmployeeSQL/Bonus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:general]
# language: python
# name: conda-env-general-py
# ---
# %matplotlib inline
import betterplotlib as bpl
import numpy as np
import matplotlib.pyplot as plt
# +
xs1 = np.random.normal(0, 1, 500)
ys1 = np.random.normal(0, 1, 500)
xs2 = np.random.normal(1, 1, 500)
ys2 = np.random.normal(1, 1, 500)
# -
fig, [ax0, ax1] = plt.subplots(figsize=[7, 10], nrows=2, tight_layout=True)
ax0.hist(xs1)
ax0.set_xlabel("Data Values")
ax0.set_ylabel("Number")
ax0.set_xlim(-4, 4)
ax1.scatter(xs1, ys1, label="Data 1")
ax1.scatter(xs2, ys2, label="Data 2")
ax1.set_xlabel("X Values")
ax1.set_ylabel("Y Values")
ax1.legend(loc=2)
ax1.set_xlim(-4, 6)
ax1.set_ylim(-4, 6)
ax0.set_title("matplotlib defaults", fontsize=24)
fig.savefig("plt.png", dpi=300)
bpl.presentation_style()
fig, [ax0, ax1] = bpl.subplots(figsize=[7, 10], nrows=2, tight_layout=True)
ax0.hist(xs1, bin_size=0.5)
ax0.set_xlabel("Data Values")
ax0.set_ylabel("Number")
ax0.set_xlim(-4, 4)
ax1.scatter(xs1, ys1, label="Data 1")
ax1.scatter(xs2, ys2, label="Data 2")
ax1.set_xlabel("X Values")
ax1.set_ylabel("Y Values")
ax1.legend(loc=2)
ax1.set_xlim(-4, 6)
ax1.set_ylim(-4, 6)
ax0.set_title("betterplotlib defaults", fontsize=24)
fig.savefig("bpl.png", dpi=300)
# +
x1 = np.random.normal(-1, 2, 5000)
x2 = np.random.normal(1, 2, 5000)
contour_xs = np.concatenate([np.random.normal(0, 1, 10000),
np.random.normal(3, 1, 10000),
np.random.normal(0, 1, 10000)])
contour_ys = np.concatenate([np.random.normal(0, 1, 10000),
np.random.normal(3, 1, 10000),
np.random.normal(3, 1, 10000)])
# +
x3 = np.random.normal(0, 1, 500)
y3 = np.random.normal(0, 1, 500)
x4 = np.random.normal(2, 1, 500)
y4 = np.random.normal(0, 1, 500)
x5 = np.random.normal(2, 1, 500)
y5 = np.random.normal(2, 1, 500)
# +
sc_x1 = np.random.uniform(-2, 3, 200)
sc_y1 = sc_x1 + np.random.normal(0, 0.5, 200)
sc_x2 = np.random.normal(1, 1, 200)
sc_y2 = np.random.normal(1, 1, 200)
# +
fig, [ax1, ax2] = bpl.subplots(ncols=2, nrows=1, figsize=[15, 7])
# [ax1, ax2], [ax3, ax4] = axs
ax1.hist(x1, rel_freq=True, histtype="step", bin_size=0.5, lw=3, hatch="\\", label="Data 1")
ax1.hist(x2, rel_freq=True, histtype="step", bin_size=0.5, lw=3, hatch= "/", label="Data 2")
ax1.remove_spines(["top", "right"])
ax1.add_labels("X Value", "Relative Frequency")
ax1.set_limits(-10, 10, 0, 0.12)
ax1.legend()
ax2.contour_scatter(contour_xs, contour_ys, bin_size=0.3, scatter_kwargs={"label":"Outliers"})
ax2.equal_scale()
ax2.make_ax_dark()
ax2.set_limits(-4, 8, -4, 8)
ax2.legend("light")
ax2.add_labels("X Value", "Y Value")
# ax3.scatter(x3, y3)
# ax3.scatter(x4, y4)
# ax3.scatter(x5, y5)
# ax3.add_labels("X Value", "Y Value")
# ax4.scatter(sc_x1, sc_y1, label="Data 1")
# ax4.scatter(sc_x2, sc_y2, label="Data 2")
# ax4.remove_spines(["top", "right"])
# ax4.legend(loc=2)
# ax4.add_labels("X Value", "Y Value")
fig.savefig("bpl_demo.png")
# +
fig, ax = bpl.subplots()
xs = [0, 1]
ax.fill_between(xs, 0, 1, color=bpl.almost_black)
ax.fill_between([0.75, 1.0], 0, 1, color="k")
ax.fill_between(xs, 1, 2, color=bpl.light_gray)
ax.fill_between(xs, 2, 3, color=bpl.steel_blue)
ax.fill_between(xs, 3, 4, color=bpl.parks_canada_heritage_green)
for idx, col in enumerate(bpl.color_cycle[::-1]):
ax.fill_between(xs, 4 + 2 * idx / len(bpl.color_cycle), 4 + 2 * (idx + 1) / len(bpl.color_cycle), color=col)
ax.remove_spines(["all"])
ax.remove_labels("both")
ax.add_text(0.5, 0.5, "almost_black", ha="center", va="center", color="w")
ax.add_text((1.0 + 0.75) / 2.0, 0.5, "black", ha="center", va="center", color="w")
ax.add_text(0.5, 1.5, "light_gray", ha="center", va="center", color=bpl.almost_black)
ax.add_text(0.5, 2.5, "steel_blue", ha="center", va="center", color="w")
ax.add_text(0.5, 3.5, "parks_canada_heritage_green", ha="center", va="center", color="w")
ax.text(0.25, 5.0, "Color Cycle", rotation=45, ha="center", va="center", color="w", fontsize=28)
# Number the color-cycle bands 9 down to 0, one label per band
for i in range(10):
    ax.add_text(0.75, 4 + (2 * i + 1.0) / len(bpl.color_cycle), str(9 - i), ha="center", va="center", color="w", fontsize=14)
fig.savefig("colors.png")
# -
| docs/images/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Labs
# language: python
# name: myenv
# ---
from selenium.webdriver import Chrome
from selenium.webdriver.support.ui import WebDriverWait
import re
from urllib.request import urlretrieve
driver = Chrome('/Users/ridleyleisy/Downloads/chromedriver 2')
driver.get('http://insideairbnb.com/get-the-data.html')
tables = driver.find_elements_by_tag_name('table')
us_list = []
for table in tables:
sub_table = table.find_elements_by_css_selector('a')
for sub in sub_table:
url = sub.get_attribute('href')
if ('united-states' in url) & ('visualisations' not in url):
us_list.append(url)
if ('united-states' in url) & ('visualisations/neighbourhoods.csv' in url):
us_list.append(url)
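The two filter conditions in the loop can be factored into a single predicate, which makes the intent easier to read (`is_wanted_url` is a hypothetical helper name):

```python
def is_wanted_url(url):
    # Mirror the loop above: keep US data files, excluding visualisations
    # except for the neighbourhoods.csv one
    if 'united-states' not in url:
        return False
    if 'visualisations' not in url:
        return True
    return 'visualisations/neighbourhoods.csv' in url

is_wanted_url('http://x/united-states/ny/data/listings.csv.gz')               # True
is_wanted_url('http://x/united-states/ny/visualisations/neighbourhoods.csv')  # True
is_wanted_url('http://x/united-states/ny/visualisations/listings.csv')        # False
```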
breakdown = [re.split('/',x) for x in us_list]
unique_cities = [inner[5:6] for inner in breakdown]
flat_list = []
for sublist in unique_cities:
for item in sublist:
flat_list.append(item)
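The nested flattening loop above is equivalent to `itertools.chain.from_iterable` (toy data for illustration):

```python
from itertools import chain

nested = [['boston'], ['cambridge'], ['chicago']]
flat = list(chain.from_iterable(nested))  # ['boston', 'cambridge', 'chicago']
```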
us_list = [x[:-3] if x.endswith('.gz') else x for x in us_list]  # strip('.gz') removes a character set, not the suffix
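Worth noting: `str.strip('.gz')` strips any run of the characters `.`, `g`, `z` from both ends of the string, not the literal suffix, so it only happens to work for these URLs. A quick demonstration:

```python
# strip removes any run of the given characters from both ends
assert 'archive.tar.gz'.strip('.gz') == 'archive.tar'
# ...but it can eat more than the intended suffix
assert 'gz_data.gz'.strip('.gz') == '_data'
```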
us_list[0:4]
lists = []
for x in us_list:
if 'calendar' in x:
lists.append(x)
lists
lists[10]
urlretrieve(lists[10],'cal.csv')
import pandas as pd
df = pd.read_csv('cal.csv')
df['date'].min()
df['price'].isna().sum()
df.drop(index=0)
| notebooks/webscrape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import json
path_results = "test_results/02_11_20A_20B_GPU_5EPO/" #20A_20C_06Sept_20EPO #20A_20B_17Sept_CPU
def read_json(file_path):
    with open(file_path) as file:
        return json.load(file)
f_dict = read_json(path_results + "f_word_dictionaries.json")
#f_dict
r_dict = read_json(path_results + "r_word_dictionaries.json")
#r_dict
def read_txt(file_path):
data = None
with open(file_path) as file:
data = file.read()
data = data.split("\n")
data = data[:len(data) - 1]
data = [float(i) for i in data]
return data
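`splitlines()` sidesteps the trailing-empty-entry special case handled manually above; an equivalent sketch (`parse_float_lines` is a hypothetical name):

```python
def parse_float_lines(text):
    # splitlines() drops the final newline cleanly, so no manual truncation is needed
    return [float(line) for line in text.splitlines() if line.strip()]

parse_float_lines("1.0\n2.5\n0.75\n")  # [1.0, 2.5, 0.75]
```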
# +
tr_gen_true_loss = read_txt(path_results + "tr_gen_true_loss.txt")
tr_gen_fake_loss = read_txt(path_results + "tr_gen_fake_loss.txt")
tr_gen_total_loss = read_txt(path_results + "tr_gen_total_loss.txt")
tr_disc_total_loss = read_txt(path_results + "tr_disc_total_loss.txt")
tr_disc_fake_loss = read_txt(path_results + "tr_disc_fake_loss.txt")
tr_disc_true_loss = read_txt(path_results + "tr_disc_true_loss.txt")
te_loss = read_txt(path_results + "te_loss.txt")
te_loss
n_epochs = len(tr_gen_true_loss)
epochs = np.arange(0, n_epochs)
# -
te_loss
import seaborn as sns
import matplotlib.pyplot as plt
# +
plt.figure(figsize=(8, 6), dpi=150)
plt.plot(epochs, tr_gen_true_loss)
plt.plot(epochs, te_loss)
plt.legend(["Training true loss", "Test true loss"])
plt.grid(True)
plt.show()
# +
plt.figure(figsize=(8, 6), dpi=150)
plt.plot(epochs, tr_gen_fake_loss)
plt.plot(epochs, tr_disc_fake_loss)
plt.plot(epochs, tr_disc_true_loss)
plt.plot(epochs, tr_disc_total_loss)
plt.legend(["G fake loss", "D fake loss", "D true loss", "D total loss"])
plt.grid(True)
plt.show()
# +
import json
epo = 5
# ave_batch_x_y_mut_epo_0
mut_freq = dict()
for e in range(epo):
f_name = "{}ave_batch_x_y_mut_epo_{}.json".format(path_results, str(e))
with open(f_name, "r") as f:
mut_dict = json.loads(f.read())
for key in mut_dict:
if key not in mut_freq:
mut_freq[key] = int(mut_dict[key])
else:
mut_freq[key] += int(mut_dict[key])
# -
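The per-epoch accumulation above is a plain dictionary merge; `collections.Counter` does the same in fewer lines (toy data, assuming integer counts as the `int(...)` cast above suggests):

```python
from collections import Counter

epoch_dicts = [{"A>C": 3, "G>T": 1}, {"A>C": 2}]
mut_freq = Counter()
for d in epoch_dicts:
    mut_freq.update({k: int(v) for k, v in d.items()})

dict(mut_freq)  # {'A>C': 5, 'G>T': 1}
```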
len(mut_freq), mut_freq
mut_freq_aa_keys = dict()
print(f_dict)
for key in mut_freq:
key_split = key.split(">")
aa_key = "{}>{}".format(f_dict[key_split[0]], f_dict[key_split[1]])
mut_freq_aa_keys[aa_key] = mut_freq[key] / float(epo)
mut_freq_aa_keys = {k: v for k, v in sorted(mut_freq_aa_keys.items(), key=lambda item: item[1], reverse=True)}
mut_freq_aa_keys
tr_A_C_par_child_mut_pos = read_json(path_results + "tr_parent_child_pos.json")
A_C_par_child_mut_pos = read_json(path_results + "parent_child_pos.json")
merged_A_C_par_child_mut_pos = {**tr_A_C_par_child_mut_pos, **A_C_par_child_mut_pos}
print(len(tr_A_C_par_child_mut_pos), len(A_C_par_child_mut_pos), len(merged_A_C_par_child_mut_pos))
A_C_par_gen_mut_pos = read_json(path_results + "parent_gen_pos.json")
print(len(merged_A_C_par_child_mut_pos), merged_A_C_par_child_mut_pos)
print(len(tr_A_C_par_child_mut_pos), tr_A_C_par_child_mut_pos)
print(len(A_C_par_child_mut_pos), A_C_par_child_mut_pos)
print(len(A_C_par_gen_mut_pos), A_C_par_gen_mut_pos)
true_mut_A_C = list(merged_A_C_par_child_mut_pos.keys()) # tr_A_C_par_child_mut_pos
gen_mut_A_C = list(A_C_par_gen_mut_pos.keys())
inter_true_gen_mut_A_C = list(set(true_mut_A_C).intersection(set(gen_mut_A_C)))
print(len(inter_true_gen_mut_A_C), inter_true_gen_mut_A_C)
C_c_20C_par_child_mut_pos = read_json(path_results + "mut_pos_parent_child.json")
C_c_20C_par_gen_mut_pos = read_json(path_results + "mut_pos_parent_gen.json")
print(len(C_c_20C_par_child_mut_pos), C_c_20C_par_child_mut_pos)
mut_dict = dict()
for key in C_c_20C_par_gen_mut_pos:
if C_c_20C_par_gen_mut_pos[key] > 10:
mut_dict[key] = C_c_20C_par_gen_mut_pos[key]
#C_c_20C_par_gen_mut_pos = mut_dict
#print(len(C_c_20C_par_gen_mut_pos), C_c_20C_par_gen_mut_pos)
true_c_20C = list(C_c_20C_par_child_mut_pos.keys())
gen_c_20C = list(C_c_20C_par_gen_mut_pos.keys())
inter_true_gen_c_20C = list(set(true_c_20C).intersection(set(gen_c_20C)))
print(len(inter_true_gen_c_20C), inter_true_gen_c_20C)
# +
true_par_gen_ctr = 0
for key in C_c_20C_par_child_mut_pos:
if key in C_c_20C_par_gen_mut_pos:
print(key, C_c_20C_par_child_mut_pos[key], C_c_20C_par_gen_mut_pos[key])
else:
print(key, C_c_20C_par_child_mut_pos[key], 0)
true_par_gen_ctr += 1
print("---")
print(len(C_c_20C_par_child_mut_pos), len(C_c_20C_par_child_mut_pos) - true_par_gen_ctr, 1 - (float(true_par_gen_ctr)/len(C_c_20C_par_child_mut_pos)))
# -
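The loop above computes the fraction of true parent-child positions that the generator also produced (the final `1 - missed/total`). As a reusable sketch (`recovered_fraction` is a hypothetical helper; toy positions):

```python
def recovered_fraction(true_pos, gen_pos):
    # Share of true mutation positions also present among generated positions
    if not true_pos:
        return 0.0
    hits = sum(1 for k in true_pos if k in gen_pos)
    return hits / len(true_pos)

recovered_fraction({"12": 4, "57": 2, "90": 1}, {"12": 3, "90": 7})  # 2 of 3 positions recovered
```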
inter_true_gen_A_C_c_20C = list(set(inter_true_gen_c_20C).intersection(set(inter_true_gen_mut_A_C)))
#print(len(inter_true_gen_A_C_c_20C), inter_true_gen_A_C_c_20C)
inter_A_C_c_20C = list(set(inter_true_gen_c_20C).intersection(set(true_mut_A_C)))
#print(len(inter_A_C_c_20C), inter_A_C_c_20C)
inter_true_A_C_gen_c_20C = list(set(gen_c_20C).intersection(set(true_mut_A_C)))
#print(len(inter_true_A_C_gen_c_20C), inter_true_A_C_gen_c_20C)
true_A_C_c_20C = list(set(true_c_20C).intersection(set(true_mut_A_C)))
#print(len(true_A_C_c_20C), true_A_C_c_20C)
# +
#list(set(C_c_20C_par_gen_mut_pos).intersection(set(true_mut_A_C)))
# -
path_results_20A_20B = "test_results/20A_20B_17Sept_CPU/"
tr_A_B_par_child_mut_pos = read_json(path_results_20A_20B + "tr_parent_child_pos.json")
A_B_par_child_mut_pos = read_json(path_results_20A_20B + "parent_child_pos.json")
print(len(A_B_par_child_mut_pos), A_B_par_child_mut_pos)
merged_A_B_par_child_mut_pos = {**tr_A_B_par_child_mut_pos, **A_B_par_child_mut_pos}
print(len(tr_A_B_par_child_mut_pos), len(A_B_par_child_mut_pos), len(merged_A_B_par_child_mut_pos))
A_B_par_gen_mut_pos = read_json(path_results_20A_20B + "parent_gen_pos.json")
B_c_20B_par_child_mut_pos = read_json(path_results_20A_20B + "mut_pos_parent_child.json")
B_c_20B_par_gen_mut_pos = read_json(path_results_20A_20B + "mut_pos_parent_gen.json")
B_c_20B_par_gen_mut_pos
true_mut_A_B = list(merged_A_B_par_child_mut_pos.keys()) # tr_A_C_par_child_mut_pos
gen_mut_A_B = list(A_B_par_gen_mut_pos.keys())
inter_true_gen_mut_A_B = list(set(true_mut_A_B).intersection(set(gen_mut_A_B)))
print(len(inter_true_gen_mut_A_B), inter_true_gen_mut_A_B)
true_c_20B = list(B_c_20B_par_child_mut_pos.keys())
gen_c_20B = list(B_c_20B_par_gen_mut_pos.keys())
inter_true_gen_c_20B = list(set(true_c_20B).intersection(set(gen_c_20B)))
print(len(true_c_20B), len(gen_c_20B), len(inter_true_gen_c_20B), inter_true_gen_c_20B)
# +
true_par_gen_ctr = 0
for key in B_c_20B_par_child_mut_pos:
if key in B_c_20B_par_gen_mut_pos:
print(key, B_c_20B_par_child_mut_pos[key], B_c_20B_par_gen_mut_pos[key])
else:
print(key, B_c_20B_par_child_mut_pos[key], 0)
true_par_gen_ctr += 1
print("---")
print(len(B_c_20B_par_child_mut_pos), len(B_c_20B_par_child_mut_pos) - true_par_gen_ctr, 1 - (float(true_par_gen_ctr)/len(B_c_20B_par_child_mut_pos)))
# -
print(len(B_c_20B_par_child_mut_pos))
| analyse_results.ipynb |