# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ML@Cezeaux
#
#
# ### Machine Learning Tutorial
#
# # Supervised Learning: regression
# by [<NAME>](https://www.emilleishida.com/)
#
# ### *Take home message 3: choosing a machine learning algorithm is an art!*
#
# **Goal:** Get acquainted with basic machine learning algorithms for regression
#
# **Task**: Estimate the redshift based on photometric magnitudes
#
# **Data**: Extract from the [Teddy photometric redshift catalog](https://github.com/COINtoolbox/photoz_catalogues)
# First presented by [Beck et al., 2017, MNRAS, 468 (4323)](https://cosmostatistics-initiative.org/portfolio-item/representativeness-photoz/)
#
# - 5000 objects for training (`teddy_A`)
# - 5000 objects for testing (`teddy_B`)
#
# Features:
# - $mag\_r$: standardized r-band magnitude
# - $u-g$: standardized SDSS u-g color
# - $g-r$: standardized SDSS g-r color
# - $r-i$: standardized SDSS r-i color
# - $i-z$: standardized SDSS i-z color
# - $z\_spec$: spectroscopic redshift (label)
# import some basic libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pylab as plt
from sklearn.model_selection import train_test_split
# ### Step 1: Digest the data
#
# As always, we start by loading and visualizing the data
# +
# read the data
data = pd.read_csv('../../data/teddy_A.csv')
# check available columns (features)
data.keys()
# -
# check dimensionality of the data
data.shape
# We see from the documentation that the test data is given in a separate file.
# As a consequence, we only need to split the training data into train and validation sets.
# +
# separate 80% for training and 20% for validation
X_train, X_validation, y_train, y_validation = \
train_test_split(data[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']], data['z_spec'], test_size=0.2, random_state=1)
# check your samples (size, features, etc.)
print('training sample: ', X_train.shape, y_train.shape)
print('validation sample: ', X_validation.shape, y_validation.shape)
# -
# plot the data
g = sns.PairGrid(data, diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(sns.scatterplot)
g.map_diag(sns.kdeplot, lw=3)
plt.show()
# ### Step 2: train a few algorithms
#
# Using [scikit-learn](https://scikit-learn.org/stable/) we are able to quickly train a set of algorithms:
#
#
# 2.a) [Linear regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html):
# +
from sklearn import linear_model
# Create linear regression object
regr = linear_model.LinearRegression()
# train
regr.fit(X_train, y_train)
# estimate the photoz
photoz_linear_val = regr.predict(X_validation)
# quality of the fit
R2_linear_val = regr.score(X_validation, y_validation)
R2_linear_val
# -
# Cross-Validation
# +
from sklearn.model_selection import cross_val_score
#create a new linear regression model
regr_cv = linear_model.LinearRegression()
#train the model with 5-fold cross-validation
cv_scores = cross_val_score(regr_cv, X_train, y_train, cv=5)
#print each cv score (R^2) and their mean
print(cv_scores)
print('cv_scores mean: {}'.format(np.mean(cv_scores)))
# -
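# Under the hood, `cross_val_score` with `cv=5` is just a 5-fold loop: fit on four folds, score on the fifth. A minimal sketch of that loop on synthetic data (the arrays below are made up and only stand in for the Teddy features):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                  # synthetic features
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

scores = []
for train_idx, val_idx in KFold(n_splits=5).split(X):
    m = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(m.score(X[val_idx], y[val_idx]))             # R^2 on the held-out fold

print(np.mean(scores))
```

# Each fold's score is the same $R^2$ reported by `regr.score`, so the mean is directly comparable to the single validation score above.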
# There is not much more to optimize in this simple model, so we can use the trained algorithm to estimate the redshift in the test sample:
# +
# read test sample
data_test = pd.read_csv('../../data/teddy_B.csv')
# check the features
data_test.keys()
# +
# estimate the photoz
photoz_linear_test = regr.predict(data_test[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']])
# quality of the fit
R2_linear_test = regr.score(data_test[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']], data_test[['z_spec']])
R2_linear_test
# -
# plot result
sns.set_style('ticks')
fig = plt.figure()
plt.title('Teddy catalog: A -> B, Linear reg. score: ' + str(round(R2_linear_test,2)))
plt.scatter(data_test[['z_spec']], photoz_linear_test, marker='x')
plt.plot([0,0.65], [0,0.65], color='red', lw=2, ls='--')
plt.xlabel('true redshift')
plt.ylabel('estimated redshift')
plt.show()
# 2.b) [Nearest Neighbor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html):
#
# Here we have a little more room for improvement.
# Try changing the number of neighbors, or other parameters (check documentation), to improve the quality of the fit.
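# A simple way to explore this by hand is to loop over candidate values of `n_neighbors` and record the validation $R^2$. The sketch below does this on a synthetic 1-D regression problem (the data are made up; with the Teddy sample you would reuse `X_train`/`X_validation` instead):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))                     # synthetic feature
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=400)     # noisy sine target
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=1)

scores = {}
for nn in (1, 3, 5, 9, 15, 31, 63):
    knn = KNeighborsRegressor(n_neighbors=nn).fit(X_tr, y_tr)
    scores[nn] = knn.score(X_va, y_va)                    # validation R^2

best_nn = max(scores, key=scores.get)
print(best_nn, scores[best_nn])
```

# Very small `n_neighbors` overfits the noise and very large values over-smooth; the sweet spot sits in between, which is exactly what the grid search below automates.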
# +
from sklearn.neighbors import KNeighborsRegressor
# choose number of neighbors
nn = 9
# initiate a KNN instance
knn = KNeighborsRegressor(n_neighbors=nn)
# fit the model using training data
knn.fit(X_train, y_train)
# estimate photometric redshift for the validation data
photoz_knn_validation = knn.predict(X_validation)
# quality of the fit
R2_knn_val = knn.score(X_validation, y_validation)
R2_knn_val
# -
# Search for the best value of number of neighbors
# +
from sklearn.model_selection import GridSearchCV
#create a new kNN model
knn2 = KNeighborsRegressor()
#create a dictionary of all values we want to test for n_neighbors
param_grid = {'n_neighbors': np.arange(1, 200)}
#use gridsearch to test all values for n_neighbors
knn_gscv = GridSearchCV(knn2, param_grid, cv=5)
#fit model to data
knn_gscv.fit(X_train, y_train)
#check top performing n_neighbors value
knn_gscv.best_params_
# -
# Once you are happy with the optimization, estimate the photometric redshift values for the test sample:
# +
# initiate a KNN instance
knn3 = KNeighborsRegressor(n_neighbors=33)
# fit the model using training data
knn3.fit(X_train, y_train)
# estimate the photoz
photoz_knn_test = knn3.predict(data_test[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']])
# quality of the fit
R2_knn_test = knn3.score(data_test[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']], data_test[['z_spec']])
R2_knn_test
# -
# plot result
sns.set_style('ticks')
fig = plt.figure()
plt.title('Teddy catalog: A -> B, kNN reg. score: ' + str(round(R2_knn_test,2)))
plt.scatter(data_test[['z_spec']], photoz_knn_test, marker='x')
plt.plot([0,0.65], [0,0.65], color='red', lw=2, ls='--')
plt.xlabel('true redshift')
plt.ylabel('estimated redshift')
plt.show()
# 2.c) [Random Forest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html):
#
# Here we have still more freedom. Start by optimizing the number of trees and maximum depth.
#
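# Note that the full grid below (100 values of `n_estimators` times 29 depths, times 5 folds) is expensive. A cheaper alternative is `RandomizedSearchCV`, which evaluates only a fixed number of sampled combinations; a sketch on synthetic data (made up, not the Teddy sample):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                              # synthetic features
y = X[:, 0] * X[:, 1] + rng.normal(scale=0.1, size=300)    # non-linear target

param_dist = {'n_estimators': np.arange(50, 150),
              'max_depth': np.arange(1, 30)}
# n_iter=5 -> only 5 sampled (n_estimators, max_depth) pairs are evaluated
search = RandomizedSearchCV(RandomForestRegressor(random_state=0),
                            param_dist, n_iter=5, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

# With a larger `n_iter`, random search approaches the exhaustive grid at a fraction of the cost, which is often a good trade-off for forests.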
# +
from sklearn.ensemble import RandomForestRegressor
#create a new random forest model
randforest = RandomForestRegressor()
#create a dictionary of all values we want to test for n_estimators and max_depth
#(warning: this grid has ~2900 combinations, so the search can take a long time)
param_grid = {'n_estimators': np.arange(50, 150), 'max_depth': np.arange(1,30)}
#use grid search to test all hyperparameter combinations
rf_gscv = GridSearchCV(randforest, param_grid, cv=5)
#fit model to data
rf_gscv.fit(X_train, y_train)
#check top performing hyperparameter values
rf_gscv.best_params_
# +
# choose number of trees in the forest
n_trees = 100
# define maximum depth; None => nodes are expanded until all leaves are pure
depth = 8
# initiate a Random Forest instance
randforest = RandomForestRegressor(max_depth=depth, n_estimators=n_trees)
# train the model
randforest.fit(X_train, y_train)
# estimate the photometric redshift for the validation sample
photoz_randforest_validation = randforest.predict(X_validation)
# quality of the fit
R2_randforest_val = randforest.score(X_validation, y_validation)
R2_randforest_val
# -
# If you are satisfied, see how your regression performs on the test sample:
# +
# estimate the photoz
photoz_randforest_test = randforest.predict(data_test[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']])
# quality of the fit
R2_randforest_test = randforest.score(data_test[['mag_r', 'u-g', 'g-r', 'r-i', 'i-z']], data_test[['z_spec']])
R2_randforest_test
# -
# plot result
sns.set_style('ticks')
fig = plt.figure()
plt.title('Teddy catalog: A -> B, RandForest reg. score: ' + str(round(R2_randforest_test,2)))
plt.scatter(data_test[['z_spec']], photoz_randforest_test, marker='x')
plt.plot([0,0.65], [0,0.65], color='red', lw=2, ls='--')
plt.xlabel('true redshift')
plt.ylabel('estimated redshift')
plt.show()
# #### ... feel free to try other algorithms if you wish
# ### Step 3: Compare results
# Let's take a look at the results we have so far:
# +
print(' Test sample Validation sample')
print('Linear regression: ', R2_linear_test, R2_linear_val)
print('kNN: ', R2_knn_test, R2_knn_val)
print('Random Forest: ', R2_randforest_test, R2_randforest_val)
# -
# These results seem pretty stable, which gives us yet another assurance that the results from the machine learning algorithms are consistent.
#
# **Can you guess which characteristics of the data help this stability?**
# Answer: This is expected when the data quality is consistent and the training and test samples are representative of each other, which is the case here:
# +
fig = plt.figure(figsize=(8.75*3, 8))
plt.subplot(1,3,1)
plt.hist(X_train['mag_r'], color='blue', alpha=0.3, label='train', density=True)
plt.hist(data_test['mag_r'], color='red', alpha=0.3, label='test', density=True)
plt.legend(fontsize=22)
plt.xlabel('mag_r', fontsize=22)
plt.subplot(1,3,2)
plt.hist(X_train['u-g'], color='blue', alpha=0.3, label='train', density=True)
plt.hist(data_test['u-g'], color='red', alpha=0.3, label='test', density=True)
plt.legend(fontsize=22)
plt.xlabel('u-g', fontsize=22)
plt.subplot(1,3,3)
plt.hist(X_train['g-r'], color='blue', alpha=0.3, label='train', density=True)
plt.hist(data_test['g-r'], color='red', alpha=0.3, label='test', density=True)
plt.legend(fontsize=22)
plt.xlabel('g-r', fontsize=22)
plt.show()
# -
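# The visual overlap above can also be quantified: a two-sample Kolmogorov-Smirnov test checks whether two samples are compatible with the same underlying distribution. A sketch with synthetic stand-ins for one feature in the training and test catalogs (not the actual Teddy data):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
# stand-ins for e.g. mag_r in teddy_A (train) and teddy_B (test)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
test_feature = rng.normal(loc=0.0, scale=1.0, size=5000)

stat, p_value = ks_2samp(train_feature, test_feature)
print(stat, p_value)   # small statistic / large p-value -> no evidence of a shift
```

# Running the same test per feature on the real catalogs would give a numerical complement to the histograms above.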
# ### Food for thought:
#
# In the `data` folder of the GitHub repository there are two other files: `teddy_C` and `teddy_D`.
# These files should be used only for testing.
#
# Try applying your trained regression models to these data sets and compare the results with the ones above.
#
# As always, remember to weigh your expectations before you start.
#
# **Are the results any different? If so, can you guess why?**
# *Source notebook: notebooks/answers/Regression_answers.ipynb*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PySpark Demo Notebook 5
#
# ## Contents
#
# 1. [Setup Spark](#Setup-Spark)
# 2. [Load Kaggle Data](#Load-Kaggle-Dataset)
# 3. [Analyze Data with Spark SQL](#Analyze-Data-with-Spark-SQL)
# 4. [Graph Data with Plotly](#Graph-Data-with-Plotly)
#
# ## Requirements
#
# 1. Create a free [Plotly Chart Studio](https://chart-studio.plot.ly) account
# 2. Generate a Plotly API key
# 3. Place Plotly username and API key to .env file
#
# ## Background
#
# _Prepared by: [<NAME>](https://twitter.com/GaryStafford)
# Associated article: [Getting Started with Data Analytics using Jupyter Notebooks, PySpark, and Docker](https://wp.me/p1RD28-6Fj)_
# ### Setup Spark
# Setup the SparkSession, the entry point to programming Spark with the Dataset and DataFrame API.
from pyspark.sql import SparkSession
# reference: https://spark.apache.org/docs/latest/configuration.html#viewing-spark-properties
spark = SparkSession \
.builder \
.appName('05_notebook') \
.getOrCreate()
spark.sparkContext.getConf().getAll()
# +
df1 = spark.read \
.format('csv') \
.option('header', 'true') \
.option('delimiter', ',') \
.option('inferSchema', True) \
.load('BreadBasket_DMS.csv')
print('DataFrame rows: %d' % df1.count())
print('DataFrame schema: %s' % df1)
df1.show(10, False)
# -
# ## Analyze Data with Spark SQL
# Analyze the DataFrame's bakery data using Spark SQL.
# +
df1.createOrReplaceTempView('tmp_bakery')
df2 = spark.sql("SELECT date, count(*) as count " + "FROM tmp_bakery " +
"GROUP BY date " + "ORDER BY date")
print('DataFrame rows: %d' % df2.count())
df3 = df2.withColumn("hourly_period", df2['date'].substr(1, 2))
df3.show(10)
# -
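# As a language-agnostic sanity check, the same `GROUP BY` aggregation can be reproduced with the standard-library `sqlite3` module on a tiny made-up sample (the real Kaggle file's contents are not assumed here):

```python
import sqlite3

# made-up transaction rows, just to exercise the query
rows = [('2016-10-30', 'Bread'), ('2016-10-30', 'Coffee'),
        ('2016-10-31', 'Tea')]

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE tmp_bakery (date TEXT, item TEXT)')
con.executemany('INSERT INTO tmp_bakery VALUES (?, ?)', rows)

result = con.execute(
    'SELECT date, count(*) as count FROM tmp_bakery '
    'GROUP BY date ORDER BY date').fetchall()
print(result)   # [('2016-10-30', 2), ('2016-10-31', 1)]
```

# Spark SQL and SQLite share this core syntax, so small queries can be prototyped locally before running them on the cluster.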
# ## Graph Data with Plotly
# Use [Plotly](https://plot.ly/python/) to create a chart showing bakery items sold over time. Demonstrates a linear fit and data smoothing.
# * [Plotly Python Open Source Graphing Library](https://plot.ly/python/)
# * [Smoothing in Python](https://plot.ly/python/smoothing/)
# * [Linear Fit in Python](https://plot.ly/python/linear-fits/)
# ## Load Kaggle Dataset
# Load the Kaggle dataset from the CSV file, containing ~21K rows, into a Spark DataFrame.
# +
import os
from dotenv import load_dotenv
import chart_studio.tools
import chart_studio.plotly as py
import plotly.graph_objs as go
from numpy import arange
from scipy import stats, signal
import warnings
warnings.filterwarnings('ignore')
# -
# load your plotly credentials
load_dotenv()
chart_studio.tools.set_credentials_file(username=os.getenv('PLOTLY_USERNAME'),
api_key=os.getenv('PLOTLY_API_KEY'))
# +
# convert the Spark DataFrame into a pandas DataFrame
pdf = df2.toPandas()
# calculates a linear least-squares regression using scipy
xi = arange(0, len(pdf.index))
slope, intercept, r_value, p_value, std_err = stats.linregress(
xi, pdf['count'])
line = slope * xi + intercept
layout = dict(title='Bakery Sales',
xaxis=dict(title='Month',
showgrid=True,
zeroline=True,
showline=True,
ticks='outside',
tickangle=45,
showticklabels=True),
yaxis=dict(title='Items Sold/Day',
showgrid=True,
zeroline=True,
showline=True,
ticks='outside',
showticklabels=True))
trace1 = go.Bar(x=pdf['date'], y=pdf['count'], name='Items Sold')
trace2 = go.Scatter(x=pdf['date'], y=line, mode='lines', name='Linear Fit')
trace3 = go.Scatter(x=pdf['date'],
y=signal.savgol_filter(pdf['count'], 53, 3),
mode='lines',
name='Savitzky-Golay')
data = [trace1, trace2, trace3]
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='jupyter-basic_bar.html')
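# The `linregress` step above can be sanity-checked in isolation: on points that lie exactly on a line it must recover the slope and intercept, with a perfect correlation coefficient (this toy check is unrelated to the bakery data):

```python
from numpy import arange
from scipy import stats

xi = arange(0, 10)
y = 2.0 * xi + 1.0                      # points exactly on y = 2x + 1
slope, intercept, r_value, p_value, std_err = stats.linregress(xi, y)
print(slope, intercept, r_value)        # expect slope 2, intercept 1, r = 1
```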
# *Source notebook: work/05_notebook.ipynb*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import itertools
import math
import scipy
from scipy import spatial
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.patches as patches
from matplotlib import animation
from matplotlib import transforms
from mpl_toolkits.axes_grid1 import make_axes_locatable
import xarray as xr
import dask
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
import pandas as pd
import netCDF4
# +
def plot_generator_paper(sample, X, Z):
fz = 15*1.25
lw = 4
siz = 100
XNNA = 1.25 # Abscissa where architecture-constrained network will be placed
XTEXT = 0.25 # Text placement
YTEXT = 0.3 # Text placement
plt.rc('text', usetex=False)
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
#mpl.rcParams["font.serif"] = "STIX"
plt.rc('font', family='serif', size=fz)
matplotlib.rcParams['lines.linewidth'] = lw
cmap="RdBu_r"
fig, ax = plt.subplots(1,1, figsize=(15,6))
cs0 = ax.pcolor(X, Z, sample, cmap=cmap, vmin=-1.0, vmax = 1.0)
ax.set_title("Anomalous Vertical Velocity Field Detected By ELBO")
ax.set_ylim(ax.get_ylim()[::-1])
ax.set_xlabel("CRMs", fontsize=fz*1.5)
ax.xaxis.set_label_coords(0.54,-0.05)
h = ax.set_ylabel("hPa", fontsize = fz*1.5)
h.set_rotation(0)
ax.yaxis.set_label_coords(-0.10,0.44)
#y_ticks = np.arange(1350, 0, -350)
#ax.set_yticklabels(y_ticks, fontsize=fz*1.33)
ax.tick_params(axis='x', labelsize=fz*1.33)
ax.tick_params(axis='y', labelsize=fz*1.33)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = fig.colorbar(cs0, cax=cax)
cbar.set_label(label=r'$\left(\mathrm{m\ s^{-1}}\right)$', rotation="horizontal", fontsize=fz*1.5, labelpad=30, y = 0.65)
plt.show()
#plt.savefig("/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/CI_Figure_Data/Anomaly.pdf")
#plot_generator(test[0,:,:])
# -
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-20-00000.nc'
extra_variables = xr.open_dataset(path_to_file)
lats = np.squeeze(extra_variables.LAT_20s_to_20n.values)
lons = np.squeeze(extra_variables.LON_0e_to_360e.values)
print(lats)
print(lons[-80])
# +
#print(int(round((lons[-30]/360.)*96.)))
#print(int(round((lons[1]/360.)*96.)))
#print(int(round((lons[20]/360.)*96.)))
#print(int(round((lons[-56]/360.)*96.)))
# +
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.20*'
extra_variables = xr.open_mfdataset(path_to_file)
amazon = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[20+96*3:96*13+20,:,:,:,10,-30]).values
sc_cloud = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[95+96*3:96*13+95,:,:,:,4,1]).values
african_horn = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[83+96*3:96*13+83,:,:,:,-6,20]).values
warm_pool = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[37+96*3:96*13+37,:,:,:,-11,-80]).values
# -
Max_Scalar = np.load("/fast/gmooers/Preprocessed_Data/W_Variable/Space_Time_Max_Scalar.npy")
Min_Scalar = np.load("/fast/gmooers/Preprocessed_Data/W_Variable/Space_Time_Min_Scalar.npy")
amazon = np.interp(amazon, (Min_Scalar, Max_Scalar), (0, 1))
sc_cloud = np.interp(sc_cloud, (Min_Scalar, Max_Scalar), (0, 1))
african_horn = np.interp(african_horn, (Min_Scalar, Max_Scalar), (0, 1))
warm_pool = np.interp(warm_pool, (Min_Scalar, Max_Scalar), (0, 1))
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_synoptic_amazon_point.npy",amazon)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_synoptic_sc_point.npy",sc_cloud)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_synoptic_desert_point.npy",african_horn)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_synoptic_warm_pool_point.npy",warm_pool)
# +
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.20*'
extra_variables = xr.open_mfdataset(path_to_file)
amazon = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[20+96*3:,:,:,:,10,-30]).values
sc_cloud = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[95+96*3:,:,:,:,4,1]).values
african_horn = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[83+96*3:,:,:,:,-6,20]).values
warm_pool = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[37+96*3:,:,:,:,-11,-80]).values
# -
amazon = np.interp(amazon, (Min_Scalar, Max_Scalar), (0, 1))
sc_cloud = np.interp(sc_cloud, (Min_Scalar, Max_Scalar), (0, 1))
african_horn = np.interp(african_horn, (Min_Scalar, Max_Scalar), (0, 1))
warm_pool = np.interp(warm_pool, (Min_Scalar, Max_Scalar), (0, 1))
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_all_amazon_point.npy",amazon)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_all_sc_point.npy",sc_cloud)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_all_desert_point.npy",african_horn)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_all_warm_pool_point.npy",warm_pool)
# *Source notebook: MAPS/Mooers_Logbook/Fully_Convolutional_W/Big_Animations/Synoptic_Cycle_Tracker/Synoptic_Cycle_Preprocessors.ipynb*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
#
# <a href="https://colab.research.google.com/drive/123w8VOCEviUIx-m0M--mwxExRqRKxAhY#scrollTo=zQw2GjuAbESO" target="_blank" >
# <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
#
#
# # Performance comparison on SOTAs
# **(Note: This notebook will not run on Ed. Please click the button above to run in Google Colab)**
# +
# Import required libraries
import sys, os, time
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib.colors import ListedColormap
colors = ['k', 'g', 'r','b','c']
plt.style.use('seaborn-whitegrid')
from helper import ellipse
import pickle
from tensorflow.keras import backend as K
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Input
import timeit
# -
# ## Loading the data
# Useful dictionary to go from label index to actual label
with open('idx2name.pkl', 'rb') as handle:
keras_idx_to_name = pickle.load(handle)
# +
# Loading input image and labels
images = np.load("/course/data/x_val.npy") # loaded as RGB
labels = np.load("/course/data/y_val.npy")
# Taking only 100 samples for quicker computation
x_val = images[:100]
y_val = labels[:100]
# One hot encoding the labels
y_val_one_hot = to_categorical(y_val, 1000)
# -
# Print a sample image and set the label as title
plt.title(___)
plt.imshow(___)
# ### ⏸ What is the label for the first image in the validation set?
# **(Please answer this in quiz)**
#
#
# #### A. Cabbage Butterfly
# #### B. Mixing bowl
# #### C. Wok
# #### D. French horn
# Submit an answer choice as a string below (eg. if you choose option C, put 'C')
answer1 = '___'
# ## Benchmark models
# Helper function to get key stats
# (evaluation speed, top-1 % accuracy, total model parameters)
def model_stats(model,x_val,name):
#Time for evaluation
time = timeit.timeit(lambda: model.predict(x_val, verbose=1), number=1)
# Accuracy
y_pred = model.predict(x_val)
top_1 = np.any(np.argsort(y_pred)[:,-1:].T == y_val_one_hot.argmax(axis=1),axis=0).mean()
# Model size
params = model.count_params()
return (time,top_1,params,name)
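# The `top_1` line in `model_stats` is dense. Unpacked on a made-up 3-sample, 4-class example: `np.argsort(y_pred)[:, -1:]` keeps each row's highest-scoring class index, which is then compared against the one-hot labels' `argmax`:

```python
import numpy as np

y_pred = np.array([[0.1, 0.2, 0.6, 0.1],    # predicted class 2
                   [0.5, 0.3, 0.1, 0.1],    # predicted class 0
                   [0.2, 0.2, 0.3, 0.3]])   # predicted class 3 (argsort keeps last index on ties)
y_true_one_hot = np.array([[0, 0, 1, 0],
                           [0, 1, 0, 0],
                           [0, 0, 0, 1]])   # true classes: 2, 1, 3

top_pred = np.argsort(y_pred)[:, -1:].T              # shape (1, 3): [[2, 0, 3]]
top_1 = np.any(top_pred == y_true_one_hot.argmax(axis=1), axis=0).mean()
print(top_1)   # 2 of 3 correct -> 0.666...
```

# Keeping the `[:, -k:]` slice instead of `[:, -1:]` generalizes the same expression to top-k accuracy.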
# ## SOTA architectures
#
# For this exercise, we will consider the following SOTAs:
# - VGG16
# - VGG19
# - InceptionV3
# - ResNet50
# - MobileNet
# +
# VGG16 stats
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
# Preprocess step
# We need to call the data because some preprocess steps
# change the value inplace
x_val = np.load("/course/data/x_val.npy") # loaded as RGB
x_val = x_val[:100]
x_val = preprocess_input(x_val)
# Call the VGG16 model
model = ___
# Collect stats
vgg16stats = model_stats(model,x_val,'VGG16')
# +
# VGG19 stats
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
x_val = np.load("/course/data/x_val.npy") # loaded as RGB
x_val = x_val[:100]
x_val = preprocess_input(x_val)
# Call the VGG19 model
model = ___
# Collect stats
vgg19stats = model_stats(model,x_val,'VGG19')
# +
# Inception Stats
from tensorflow.keras.applications.inception_v3 import InceptionV3,preprocess_input
x_val = np.load("/course/data/x_val.npy") # loaded as RGB
x_val = x_val[:100]
x_val = preprocess_input(x_val)
# Call the InceptionV3 model
model = ___
# Collect stats
inceptionstats = model_stats(model,x_val,'Inception')
# +
# Resnet50 stats
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
x_val = np.load("/course/data/x_val.npy") # loaded as RGB
x_val = x_val[:100]
x_val = preprocess_input(x_val)
# Call the ResNet50 model
model = ___
# Collect stats
resnetstats = model_stats(model,x_val,'Resnet50')
# +
# MobileNet stats
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
x_val = np.load("/course/data/x_val.npy") # loaded as RGB
x_val = x_val[:100]
x_val = preprocess_input(x_val)
# Call the MobielNetV2 model
model = ___
# Collect stats
mobilestats = model_stats(model,x_val,'MobileNet')
# -
# ### ⏸ Which SOTA architecture from above has the **highest** number of trainable parameters?
# **(Please answer this in quiz)**
#
# #### A. VGG-16
# #### B. VGG-19
# #### C. ResNet50
# #### D. MobileNet
# Submit an answer choice as a string below (eg. if you choose option C, put 'C')
answer2 = '___'
# +
# Use the helper code below
# to plot the model stats for each SOTA
fig, ax = plt.subplots(figsize=(10,6))
for i,val in enumerate([vgg16stats, vgg19stats, inceptionstats,resnetstats, mobilestats]):
r = val[2]/10**9 + 0.04
ellipse(val[0]/40,val[1],width=r,height=0.44*r,color = colors[i],ax=ax)
ax.text(val[0]/40 + 0.035, val[1]+r/4+ 0.004, val[3], va='center', ha='center',fontsize=12)
ax.set_ylim([0.6,0.85])
ax.set_ylabel('Top-1 accuracy [%]',fontsize=20)
ax.set_xlabel('Time for evaluation [s]',fontsize=20)
ax.set_xticklabels(range(0,60,8));
ax.set_yticklabels(range(50,110,10));
for axis in ['bottom','left']:
ax.spines[axis].set_linewidth(3)
ax.spines[axis].set_color('k')
# -
# ### 🍲 Larger dataset
#
# Go back and take a larger sample of images. Do your results remain consistent?
# Type your answer within the quotes given
answer3 = 'all models perform equally well. this look wrong'
# *Source notebook: content/a-sections/a-sec05/a-sec5_ex1_performance-comparison-of-different-SOTAs/s8-challenge.ipynb*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.append('.')
import model
# -
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
import random
import seaborn as sns
import statistics
# This time, run the model with no contact tracing at all.
N = 2000
K = 4
p_star = 0.256
def ws_case_generator(N, K, p_star):
def wscg(**kwargs):
return model.watts_strogatz_case_p_star(N, K, p_star, **kwargs)
return wscg
# This time, assume all the agents are adopters.
# We will use $(q,r)$ to regulate the percentage of distant and close edges that are traced.
## Population parameters:
base_params = {
# Node parameter
'A' : 1, # This is A* from the second study.
# Edge parameter
'W' : .5, # probability of edge activation; 2/K
'C' : 1.0, ## all edges can be traced.
## Disease parameters
'beta_hat' : .4, # probability of transmission upon contact
'alpha' : .25, # probability of exposed becoming infectious
'gamma' : .1, # probability of infectious becoming recovered
'zeta' : .1, # probability of infectious becoming symptomatic
## Contact tracing parameters
'limit' : 10, # number of time steps the contact tracing system remembers
}
# The (q, r) grid, written compactly; equivalent to listing every entry by hand.
# Note the q = 0.8 and q = 1.0 rows keep their original, sparser r grids.
qr_pairs = (
    [(q, r / 10) for q in (0.0, 0.2, 0.4, 0.6) for r in range(11)]
    + [(0.8, r) for r in (0.0, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0)]
    + [(1.0, r) for r in (0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0)]
)
conditions = {
    f'q-{q:.1f}_r-{r:.1f}': {'C': model.qr_knockout(q, r), 'q': q, 'r': r}
    for q, r in qr_pairs
}
def dfr(rs):
return pd.DataFrame(
[r for case in rs
for r in model.data_from_results(rs, case)])
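# The nested comprehension in `dfr` flattens one list of records per case into a single list before building the DataFrame. The same pattern on dummy data (`records_for` below is a hypothetical stand-in for `model.data_from_results`):

```python
def records_for(results, case):
    # stand-in: one dict per run of a case
    return [{'case': case, 'value': v} for v in results[case]]

results = {'a': [1, 2], 'b': [3]}
flat = [r for case in results for r in records_for(results, case)]
print(flat)
# [{'case': 'a', 'value': 1}, {'case': 'a', 'value': 2}, {'case': 'b', 'value': 3}]
```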
# +
runs = 100
base_params['A'] = 1
rs = model.experiment(
ws_case_generator(N, K, p_star),
base_params,
conditions,
runs)
temp = dfr(rs)
temp.to_csv('qr_study.csv')
#del rs
# -
temp = pd.read_csv('qr_study.csv')
temp.head()
# +
temp["r-cat"] = temp["r"].apply(lambda x: f"r = {x}")
temp["q-cat"] = temp["q"].apply(lambda x: f"q = {x}")
splot = sns.lineplot(x='q', y='infected_ratio', hue="r-cat", data=temp)
# -
splot = sns.lineplot(x='r', y='infected_ratio', hue="q-cat", data=temp)
splot = sns.lineplot(x='r', y='traced_edges', hue="q-cat", data=temp)
splot = sns.lineplot(x='r', y='traced_edges_distant', hue="q-cat", data=temp)
# +
data = temp
data['traced_edges_close'] = data['traced_edges'] - data['traced_edges_distant']
data['traced_edges_ratio'] = data['traced_edges'] / (data['N'] * data['K'] / 2)
data['traced_edges_distant_ratio'] = data['traced_edges_distant'] / data['traced_edges']
data['D'] = (data['p'] * data['q']) / ((1 - data['p']) * data['r'] + data['p'] * data['q'])
data['T'] = ((1 - data['p']) * data['r'] + data['p'] * data['q'])
# -
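# As a quick numeric check of the two derived quantities: with the notebook's $p^* = 0.256$ and a symmetric knockout $q = r$ (values chosen purely for illustration), $T = (1-p)r + pq$ is the traced fraction of edges and $D = pq/T$ the distant share of traced edges, and $D$ reduces to $p$:

```python
p, q, r = 0.256, 0.5, 0.5            # p* from the WS graph; q = r for illustration

T = (1 - p) * r + p * q              # overall fraction of edges that are traced
D = (p * q) / ((1 - p) * r + p * q)  # fraction of traced edges that are distant

print(T, D)                          # with q == r, T = r and D = p
```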
splot = sns.lineplot(x='traced_edges_distant', y='infected_ratio', data=temp)
splot = sns.lineplot(x='traced_edges_close', y='infected_ratio', data=temp)
# Computing a few other statistics on the data.
plt.hist(data['traced_edges'], bins = 100)
g = sns.scatterplot(
data = data,
y = 'infected_ratio',
x = 'traced_edges',
hue = 'traced_edges_close'
)
g = sns.scatterplot(
data = data,
y = 'infected_ratio',
x = 'traced_edges',
hue = 'traced_edges_distant_ratio'
)
sns.pairplot(
data[[
'infected_ratio',
'traced_edges_close',
'traced_edges_distant',
'q',
'r',
'D',
'T']
])
# +
data["D-cat"] = data["D"].apply(lambda x: f"D = {round(x,2)}")
splot = sns.lineplot(
x='T',
y='infected_ratio',
data=data,
    hue = 'D-cat'
)
splot.set(#xscale="log",
          xlabel='T - probability an edge is traced',
ylabel='average final infected ratio')
# +
g, xyz, db = model.binned_heatmap(
data,
x = 'traced_edges_distant',
x_base = 200,
y = 'traced_edges_close',
y_base = 200,
z = 'infected_ratio'
)
g.set(#xscale="log",
xlabel='traced_edges_distant',
ylabel='traced_edges_close')
# +
g, xyz, db = model.binned_heatmap(
data,
x = 'traced_edges_distant',
x_base = 200,
y = 'traced_edges',
y_base = 200,
z = 'infected_ratio'
)
g.set(#xscale="log",
xlabel='traced_edges_distant',
ylabel='traced_edges')
# -
| contact-tracing/code/Python/Experiment--(q,r) on p* WS.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2
# language: julia
# name: julia-0.6
# ---
using Interact
using Gadfly
using DifferentialEquations
# Pendulum
# +
using OrdinaryDiffEq
#Constants
const g = 9.81
L = 1.0
#Initial Conditions
u₀ = [0,π/2]
tspan = (0.0,6.3)
#Define the problem
function simplependulum(t,u,du)
θ = u[1]
dθ = u[2]
du[1] = dθ
du[2] = -(g/L)*sin(θ)
end
#Pass to solvers
prob = ODEProblem(simplependulum,u₀, tspan)
sol = solve(prob,Tsit5())
# #Plot
# plot(sol,linewidth=2,title ="Simple Pendulum Problem", xaxis = "Time", yaxis = "Height", label = ["Theta","dTheta"])
# -
# plot(sol.t, sol.u[:,1])
plot(x=sol.t, y=[u[2] for u in sol.u])
plot(x=sol.t, y=[u[1] for u in sol.u])
# Gravity
# +
using OrdinaryDiffEq
#Constants
G = 6.67408e-11
M = 5.972e24
R = 6371e3
#position, velocity
x0 = [R,0]
tspan = (0.0,1)
function a(r)
(-G*M) / (r^2)
end
function gravity(t,u,du)
r = u[1]
v = u[2]
du[1] = v
du[2] = a(r)
end
#Pass to solvers
prob = ODEProblem(gravity, x0, tspan)
sol = solve(prob,Tsit5())
# -
plot(x=sol.t, y=[u[1] for u in sol.u])
# difference is 5 meters
plot(x=sol.t, y=[u[2] for u in sol.u])
# difference is ~9.81 m/s^2
plot(x=sol.t, y=[a(r) for r in [x[1] for x in sol.u]])
#difference is basically none
| lab9/lab9_julia.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#17 COUNT VECTORIZER
from sklearn.feature_extraction.text import CountVectorizer
vect=CountVectorizer(binary=True)
corpus=["Tessaract is good optical character recognition engine ","optical character recognition is significant"]
vect.fit(corpus)
vocab=vect.vocabulary_
for key in sorted(vocab.keys()):
print("{}:{}".format(key,vocab[key]))
print(vect.transform(["This is a good optical illusion"]).toarray())
#18 FINDING SIMILARITY BETWEEN DOCUMENTS
from sklearn.metrics.pairwise import cosine_similarity
similarity = cosine_similarity(
    vect.transform(["Google Cloud Vision is a character recognition engine"]).toarray(),
    vect.transform(["OCR is an optical character recognition engine"]).toarray()
)
print(similarity)
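# Under the hood, cosine similarity between two count vectors is dot(a, b) / (‖a‖·‖b‖); a small pure-Python check on made-up binary vectors:

```python
import math

# two made-up binary count vectors over a 4-word vocabulary
a = [1, 0, 1, 1]
b = [1, 1, 0, 1]

dot = sum(x * y for x, y in zip(a, b))
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(y * y for y in b))
cos = dot / (norm_a * norm_b)   # 2 / 3 here
```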
import spacy
m=spacy.load('en_core_web_sm')
example1="Google, a company founded by <NAME> and <NAME> in the United States Of America; "+"it has some of the world's most advanced technology"
doc=m(example1)
for ent in doc.ents:
print(ent.text,ent.label_)
example2="Manchester United Football Club is a professional football club based in Old Trafford, Greater Manchester, England, that competes in the Premier League, the top flight of English football. Nicknamed the Red Devils, the club was founded as Newton Heath LYR Football Club in 1878, changed its name to Manchester United in 1902 and moved to its current stadium, Old Trafford, in 1910."
doc=m(example2)
for ent in doc.ents:
print(ent.text,ent.label_)
#QUERYING
locs=[('Omnicom','IN','New York'),('BBDO South','IN','Atlanta')]
query=[e1 for (e1,rel,e2) in locs if e2=='Atlanta']
print(query)
import nltk
sentence=[("the", "DT"), ("little", "JJ"), ("yellow", "JJ"),("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
grammar="NP:{<DT>?<JJ>*<NN>}"
#REGEXP CHUNK GRAMMAR FOR NP
cp=nltk.RegexpParser(grammar)
result=cp.parse(sentence)
print(result)
| similarities.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <img src='../img/dust_banner.png' alt='Training school and workshop on dust' align='center' width='100%'></img>
#
# <br>
# + [markdown] Collapsed="false"
# # Day 2 - Assignment
# -
# ### About
# > So far, we analysed Aerosol Optical Depth from different types of data (satellite, model-based and ground-based observations) for a single dust event. Let us now broaden our view and analyse the annual cycle in 2020 of Aerosol Optical Depth from AERONET and compare it with the CAMS global reanalysis data.
# ### Tasks
# + [markdown] tags=[]
# #### 1. Download and plot time-series of AERONET data for Santa Cruz, Tenerife in 2020
# * **Hint**
# * [AERONET - Example notebook](../../dust_workshop_part1/02_ground-based_observations/21_AERONET.ipynb)
# * you can select daily aggregates of the station observations by setting the `AVG` key to `AVG=20`
# * **Interpret the results:**
# * Have there been other times in 2020 with increased AOD values?
# * If yes, how could you find out if the increase in AOD is caused by dust? Try to find out by visualizing the AOD time-series together with another parameter from the AERONET data.
# * [MSG SEVIRI Dust RGB](https://sds-was.aemet.es/forecast-products/dust-observations/msg-2013-eumetsat) and [MODIS RGB](https://worldview.earthdata.nasa.gov/) quick looks might be helpful to get a more complete picture of other events that might have happened in 2020
#
#
# #### 2. Download CAMS global reanalysis (EAC4) and select 2020 time-series for *Santa Cruz, Tenerife*
# * **Hint**
# * [CAMS global forecast - Example notebook](../../dust_workshop_part1/03_model-based_data/32_CAMS_global_forecast_duaod_load_browse.ipynb) (**Note:** the notebook works with CAMS forecast data, but they have a similar data structure to the CAMS global reanalysis data)
# * [Data access](https://ads.atmosphere.copernicus.eu/cdsapp#!/dataset/cams-global-reanalysis-eac4?tab=form) with the following specifications:
# > Variable on single levels: `Dust aerosol optical depth at 550 nm` <br>
# > Date: `Start=2020-01-01`, `End=2020-12-31` <br>
# > Time: `[00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, 21:00]` <br>
# > Restricted area: `N: 30., W: -20, E: 14, S: 20.` <br>
# >Format: `netCDF` <br>
# * With the xarray function `sel()` and keyword argument `method='nearest'` you can select data based on coordinate information
# * We also recommend you to transform your xarray.DataArray into a pandas.DataFrame with the function `to_dataframe()`
#
#
# #### 3. Visualize both time-series of CAMS reanalysis and AERONET daily aggregates in one plot
# * **Interpret the results:** What can you say about the annual cycle in 2020 of AOD in Santa Cruz, Tenerife?
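# Conceptually, xarray's `sel(..., method='nearest')` just picks the grid point closest to the requested coordinate. A minimal pure-Python sketch of that lookup (the grid values and the ~-16.25° longitude for Santa Cruz, Tenerife are illustrative assumptions):

```python
# hypothetical model longitude grid near Tenerife
lons = [-17.25, -16.5, -15.75, -15.0]
target_lon = -16.25  # approximate longitude of Santa Cruz, Tenerife

# nearest-neighbour selection, mimicking sel(longitude=target_lon, method='nearest')
nearest = min(lons, key=lambda lon: abs(lon - target_lon))
```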
# + [markdown] Collapsed="false"
# ### Module outline
# * [1 - Select latitude / longitude values for Santa Cruz, Tenerife](#select_lat_lon)
# * [2 - Download and plot time-series of AERONET data](#aeronet)
# * [3 - Download CAMS global reanalysis (EAC4) and select 2020 time-series for Santa Cruz, Tenerife](#cams_reanalysis)
# * [4 - Combine both annual time-series and visualize both in one plot](#visualize_annual_ts)
#
# -
# <hr>
# + [markdown] Collapsed="false"
# ##### Load required libraries
# + Collapsed="false"
# %matplotlib inline
import os
import xarray as xr
import numpy as np
import netCDF4 as nc
import pandas as pd
from IPython.display import HTML
import matplotlib.pyplot as plt
import matplotlib.colors
from matplotlib.cm import get_cmap
from matplotlib import animation
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cartopy.feature as cfeature
from matplotlib.axes import Axes
from cartopy.mpl.geoaxes import GeoAxes
GeoAxes._pcolormesh_patched = Axes.pcolormesh
import wget
import warnings
warnings.simplefilter(action = "ignore", category = RuntimeWarning)
# -
# ##### Load helper functions
# %run ../functions.ipynb
# + [markdown] Collapsed="false"
# <hr>
# + [markdown] tags=[]
# ### <a id='select_lat_lon'></a>1. Select latitude / longitude values for Santa Cruz, Tenerife
# -
# You can see an overview of all available AERONET Site Names [here](https://aeronet.gsfc.nasa.gov/cgi-bin/draw_map_display_aod_v3?long1=-180&long2=180&lat1=-90&lat2=90&multiplier=2&what_map=4&nachal=1&formatter=0&level=3&place_code=10&place_limit=0).
# <br>
# ### <a id='aeronet'></a>2. Download and plot time-series of AERONET data
# <br>
# ### <a id='cams_reanalysis'></a> 3. Download CAMS global reanalysis (EAC4) and select 2020 time-series for Santa Cruz, Tenerife
# <br>
# ### <a id='visualize_annual_ts'></a>4. Combine both annual time-series and visualize both in one plot
# <br>
# <hr>
# <img src='../img/copernicus_logo.png' alt='Logo EU Copernicus' align='left' width='20%'><br><br><br><br>
# <p style="text-align:right;">This project is licensed under the <a href="./LICENSE">MIT License</a> and is developed under a Copernicus contract.
| 90_workshops/202111_dust_training_school/dust_workshop_part2/day2/day2_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # **Deep learning for image analysis with Python**
# #### <NAME>, Systems Analyst I, Imaging Solutions, Research IT
# #### <EMAIL> (slack) @fernando.cervantes
# Use ssh, or create a tunnel using MobaXTerm, or Putty to connect to the GCP<br>
# **ssh -nNfg -L8888:computenodename:8080 student-##@###.###.###.### <br>**
# _To be used during the workshop only. To log in to JAX HPC, use ssh as usual_<br>
# Run the singularity container using the following command:<br>
# **singularity run --nv --env CUDA_VISIBLE_DEVICES=0 --bind /fastscratch/data/:/mnt/data/:ro,/fastscratch/models/:/mnt/models/:ro /fastscratch/pytorch_jupyter.sif -m jupyterlab --no-browser --ip=$(hostname -i)**<br>
# - **--nv** tells Singularity to use the NVIDIA drivers and allows us to use the GPUs inside the container
# - **--env CUDA_VISIBLE_DEVICES=0** sets an environment variable that specifies what GPU device is going to be used by PyTorch
# - **--bind /fastscratch/data/:/mnt/data/:ro** bind the location of the datasets to be visible inside the container (under the path _/mnt/data/_)
# Copy the URL and paste into the search bar of your browser.<br>
# If Jupyter asks you to set up a password, use the token from the URL you copied, and set the password to **student-#**<br>
# The token looks something like: http://some-ip:8888/lab?token= **A-long-alphanumeric-string**
# ## **2 Getting started with PyTorch**
# ### 2.1 _Tensors_
# The PyTorch library in python is called _torch_
import torch
# PyTorch's basic object is the tensor (a multidimensional array), whose default datatype is 32-bit float.
x = torch.tensor(
[[1., 0.],
[0., 1.]]
)
x
x.dtype
# ***
# PyTorch has a from_numpy function to convert numpy arrays to tensors.<br>
# The datatype and shape of the source numpy array are kept when converted to a pytorch tensor
import numpy as np
a = np.array([
[[0., 1.],
[1., 0.]],
[[0., 2.],
[2., 0.]]
])
a
type(a), a.dtype, a.shape
b = torch.from_numpy(a)
b
type(b), b.dtype, b.shape
# ***
# Tensors have built-in function to convert them to numpy arrays
c = b.numpy()
c
type(c), c.dtype, c.shape
# ***
# In PyTorch, the shape of a tensor is retrieved using the built-in function _size_
print(b.size())
print(b.shape)
# + [markdown] jupyter={"source_hidden": true} tags=[]
# ### 2.2 _Initializing tensors_
# + [markdown] jupyter={"source_hidden": true} tags=[]
# Like in numpy, tensors can be initialized by giving the desired shape and datatype
# + jupyter={"source_hidden": true} tags=[]
x = torch.zeros((2, 3, 1, 5))
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x.size(), x.dtype
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x
# + jupyter={"source_hidden": true} tags=[]
x = torch.ones((4, 1, 1, 5), dtype=torch.float64)
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x.size(), x.dtype
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x
# -
# ### 2.3 _Operations on tensors_
# There is a wide variety of arithmetic, linear algebra, and matrix manipulation operations already implemented for tensors.
x = torch.tensor([7.])
2 * x + 3
# Mathematical operations can be applied directly as built-in functions of tensor objects, or by calling the torch library
x.cos()
torch.cos(x)
# Most operations are applied _element-wise_ to each entry of the tensor
x = torch.zeros((2, 2))
x.cos()
# + [markdown] jupyter={"source_hidden": true} tags=[]
# ### 2.4 _Random tensors_
# + [markdown] jupyter={"source_hidden": true} tags=[]
# PyTorch has a random number generator to create random tensors
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x = torch.rand((2, 1, 3)) # Random numbers drawn from a uniform distribution in [0, 1]
x
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x = torch.randn((5, 4, 3)) # Random numbers drawn from a normal distribution N(0, 1)
x
# + [markdown] jupyter={"source_hidden": true} tags=[]
# For reproducibility, the seed for random number generation is set using torch.random.manual_seed
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
torch.random.manual_seed(777)
x = torch.rand(1)
x
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x = torch.rand(1)
x
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
torch.random.manual_seed(777)
x = torch.rand(1)
x
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
x = torch.rand(1)
x
# -
# ### 2.5 _Automatic differentiation (autograd)_
# The autograd module of PyTorch can compute the gradient of _almost_ any operation on tensors, as long as the operation is implemented and applied using torch
x = torch.tensor(4.)
y = torch.tensor(5.)
z = 2*x + y + x * y
z
# The autograd functionality of PyTorch is enabled when at least one tensor requires its gradient to be computed.<br>
# Internally, a computation graph is then built so that gradients can be propagated back to those tensors.
x = torch.tensor(4., requires_grad=True)
y = torch.tensor(5., requires_grad=True)
z = 2*x + y + x * y
z
# ***
# The gradient is computed when the _backward_ built-in function is called.<br>
# This computes the gradients of all involved tensors; the graph is then freed to save memory (so _backward_ cannot be called twice on the same graph)
z.backward()
# We expect the following result for function $f$.<br>
# $z = f(x, y) = 2 x + y + x y$<br>
# $\frac{\delta f}{\delta x} = 2 + y = 2 + 5 = 7$<br>
# $\frac{\delta f}{\delta y} = 1 + x = 1 + 4 = 5$
x.grad
y.grad
# + [markdown] jupyter={"source_hidden": true} tags=[]
# ***
# Example of a linear transform on $x$
# + jupyter={"source_hidden": true} tags=[]
x = torch.tensor([1., 2., 1., 2., 3.]) # input tensor
w = torch.randn(3, 5, requires_grad=True)
b = torch.randn(3, requires_grad=True)
# + jupyter={"source_hidden": true} tags=[]
z = torch.matmul(w, x)+b
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
z
# + [markdown] jupyter={"source_hidden": true} tags=[]
# z is of dimension 1$\times$3, so to compute the gradient on z, we need a tensor with shape 1$\times$3
# + jupyter={"source_hidden": true} tags=[]
z.backward(torch.tensor([1., 1., 1.]))
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
w.grad
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
b.grad
# + [markdown] jupyter={"source_hidden": true} tags=[]
# ***
# Compute the gradient for a loss function in an optimization problem
# + jupyter={"source_hidden": true} tags=[]
w = torch.randn(3, 5, requires_grad=True)
b = torch.randn(3, requires_grad=True)
z = torch.matmul(w, x)+b
# + [markdown] jupyter={"source_hidden": true} tags=[]
# For this example, lets use the Mean Squared Error (MSE) as target function
# + jupyter={"source_hidden": true} tags=[]
y = torch.zeros(3) # target output, ground-truth
# + jupyter={"source_hidden": true} tags=[]
loss = torch.mean((y - z) ** 2)
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
loss
# + jupyter={"source_hidden": true} tags=[]
loss.backward()
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
w.grad
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
b.grad
# + [markdown] jupyter={"source_hidden": true} tags=[]
# In an optimization step (e.g. gradient descent), the new values for **w** and **b** are updated from the computed gradient
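# A scalar sketch of that update rule, parameter minus learning rate times gradient, reusing the gradients ∂f/∂x = 7 and ∂f/∂y = 5 computed in the autograd example above (the learning rate is an arbitrary choice for illustration):

```python
lr = 0.1                   # learning rate (arbitrary for this sketch)
x_val, y_val = 4.0, 5.0    # current parameter values
grad_x, grad_y = 7.0, 5.0  # gradients from the autograd example above

# one plain gradient-descent step
x_val = x_val - lr * grad_x   # 4.0 - 0.7 = 3.3
y_val = y_val - lr * grad_y   # 5.0 - 0.5 = 4.5
```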
# + [markdown] jupyter={"source_hidden": true} tags=[]
# ### 2.6 _Loss functions_
# + [markdown] jupyter={"source_hidden": true} tags=[]
# PyTorch has implemented several loss functions that are ready for use.<br>
# These can be found in the *nn* (neural networks) module.<br>
# In most of the cases, these functions have some level of optimization in their code.
# + jupyter={"source_hidden": true} tags=[]
import torch.nn as nn
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
nn.MSELoss
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
criterion = nn.MSELoss()
loss = criterion(y, z)
loss
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
nn.CrossEntropyLoss
# + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[]
nn.BCEWithLogitsLoss
# + [markdown] jupyter={"source_hidden": true} tags=[]
# The complete list of loss functions can be found in this [link](https://pytorch.org/docs/stable/nn.html#loss-functions)
| code/2-Getting_started_with_Pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ML
# language: python
# name: ml
# ---
# # Window Method in PyTorch
#
# In the previous section we built a method for calculating the average sentiment for long pieces of text by breaking the text up into *windows* and calculating the sentiment for each window individually.
#
# Our approach in the last section was a quick-and-dirty solution. Here, we will work on improving this process and implementing it solely using PyTorch functions to improve efficiency.
#
# The first thing we will do is import modules and initialize our model and tokenizer.
# +
from transformers import BertForSequenceClassification, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('ProsusAI/finbert')
model = BertForSequenceClassification.from_pretrained('ProsusAI/finbert')
# -
# We will be using the same text example as we did previously.
txt = """
I would like to get your all thoughts on the bond yield increase this week. I am not worried about the market downturn but the sudden increase in yields. On 2/16 the 10 year bonds yields increased by almost 9 percent and on 2/19 the yield increased by almost 5 percent.
Key Points from the CNBC Article:
* **The “taper tantrum” in 2013 was a sudden spike in Treasury yields due to market panic after the Federal Reserve announced that it would begin tapering its quantitative easing program.**
* **Major central banks around the world have cut interest rates to historic lows and launched unprecedented quantities of asset purchases in a bid to shore up the economy throughout the pandemic.**
* **However, the recent rise in yields suggests that some investors are starting to anticipate a tightening of policy sooner than anticipated to accommodate a potential rise in inflation.**
The recent rise in bond yields and U.S. inflation expectations has some investors wary that a repeat of the 2013 “taper tantrum” could be on the horizon.
The benchmark U.S. 10-year Treasury note climbed above 1.3% for the first time since February 2020 earlier this week, while the 30-year bond also hit its highest level for a year. Yields move inversely to bond prices.
Yields tend to rise in lockstep with inflation expectations, which have reached their highest levels in a decade in the U.S., powered by increased prospects of a large fiscal stimulus package, progress on vaccine rollouts and pent-up consumer demand.
The “taper tantrum” in 2013 was a sudden spike in Treasury yields due to market panic after the Federal Reserve announced that it would begin tapering its quantitative easing program.
Major central banks around the world have cut interest rates to historic lows and launched unprecedented quantities of asset purchases in a bid to shore up the economy throughout the pandemic. The Fed and others have maintained supportive tones in recent policy meetings, vowing to keep financial conditions loose as the global economy looks to emerge from the Covid-19 pandemic.
However, the recent rise in yields suggests that some investors are starting to anticipate a tightening of policy sooner than anticipated to accommodate a potential rise in inflation.
With central bank support removed, bonds usually fall in price which sends yields higher. This can also spill over into stock markets as higher interest rates means more debt servicing for firms, causing traders to reassess the investing environment.
“The supportive stance from policymakers will likely remain in place until the vaccines have paved a way to some return to normality,” said <NAME>, chief investment officer at Beaufort Investment, in a research note this week.
“However, there will be a risk of another ‘taper tantrum’ similar to the one we witnessed in 2013, and this is our main focus for 2021,” Balkham projected, should policymakers begin to unwind this stimulus.
Long-term bond yields in Japan and Europe followed U.S. Treasurys higher toward the end of the week as bondholders shifted their portfolios.
“The fear is that these assets are priced to perfection when the ECB and Fed might eventually taper,” said <NAME>, senior macro strategist at Nordea Asset Management, in a research note entitled “Little taper tantrum.”
“The odds of tapering are helped in the United States by better retail sales after four months of disappointment and the expectation of large issuance from the $1.9 trillion fiscal package.”
Galy suggested the Fed would likely extend the duration on its asset purchases, moderating the upward momentum in inflation.
“Equity markets have reacted negatively to higher yield as it offers an alternative to the dividend yield and a higher discount to long-term cash flows, making them focus more on medium-term growth such as cyclicals” he said. Cyclicals are stocks whose performance tends to align with economic cycles.
Galy expects this process to be more marked in the second half of the year when economic growth picks up, increasing the potential for tapering.
## Tapering in the U.S., but not Europe
Allianz CEO <NAME> told CNBC on Friday that there was a geographical divergence in how the German insurer is thinking about the prospect of interest rate hikes.
“One is Europe, where we continue to have financial repression, where the ECB continues to buy up to the max in order to minimize spreads between the north and the south — the strong balance sheets and the weak ones — and at some point somebody will have to pay the price for that, but in the short term I don’t see any spike in interest rates,” Bäte said, adding that the situation is different stateside.
“Because of the massive programs that have happened, the stimulus that is happening, the dollar being the world’s reserve currency, there is clearly a trend to stoke inflation and it is going to come. Again, I don’t know when and how, but the interest rates have been steepening and they should be steepening further.”
## Rising yields a ‘normal feature’
However, not all analysts are convinced that the rise in bond yields is material for markets. In a note Friday, Barclays Head of European Equity Strategy <NAME> suggested that rising bond yields were overdue, as they had been lagging the improving macroeconomic outlook for the second half of 2021, and said they were a “normal feature” of economic recovery.
“With the key drivers of inflation pointing up, the prospect of even more fiscal stimulus in the U.S. and pent up demand propelled by high excess savings, it seems right for bond yields to catch-up with other more advanced reflation trades,” Cau said, adding that central banks remain “firmly on hold” given the balance of risks.
He argued that the steepening yield curve is “typical at the early stages of the cycle,” and that so long as vaccine rollouts are successful, growth continues to tick upward and central banks remain cautious, reflationary moves across asset classes look “justified” and equities should be able to withstand higher rates.
“Of course, after the strong move of the last few weeks, equities could mark a pause as many sectors that have rallied with yields look overbought, like commodities and banks,” Cau said.
“But at this stage, we think rising yields are more a confirmation of the equity bull market than a threat, so dips should continue to be bought.”
"""
# This time, because we are using PyTorch, we will specify `return_tensors='pt'` when encoding our input text.
# +
tokens = tokenizer.encode_plus(txt, add_special_tokens=False,
return_tensors='pt')
print(len(tokens['input_ids'][0]))
tokens
# -
# Now we have a set of tensors where each tensor contains **1345** tokens. We will use a similar approach to the one we used before: pull out chunks of **510** tokens (or fewer), add the CLS and SEP tokens, then add PAD tokens where needed. To create these length-**510** chunks we need to use the `torch.split` method.
a = torch.arange(10)
a
torch.split(a, 4)
# Now we apply `split` to our *input IDs* and *attention mask* tensors. Note that we must access the first element of each tensor because they are shaped like a list within a list (you can see this by comparing the number of square brackets between tensor `a` above and the tensors shown when outputting `tokens` above).
input_id_chunks = tokens['input_ids'][0].split(510)
mask_chunks = tokens['attention_mask'][0].split(510)
# To add our CLS (**101**) and SEP (**102**) tokens, we can use the `torch.cat` method. This method takes a *list* of tensors and con**cat**enates them. Let's try it on our example tensor `a` first:
# +
a = torch.cat(
[torch.Tensor([101]), a, torch.Tensor([102])]
)
a
# -
# It's that easy! We're almost there now, but we still need to add padding to our tensors to push them up to a length of *512*, which should only be required for the final chunk. To do this we will build an if-statement that checks whether the tensor needs padding and, if so, adds the correct amount, something like `required_len = 512 - tensor_len`. Again, let's test it on tensor `a` first:
# +
padding_len = 20 - a.shape[0]
padding_len
# +
if padding_len > 0:
a = torch.cat(
[a, torch.Tensor([0] * padding_len)]
)
a
# -
# Now let's use the same logic with our `tokens` tensors.
# +
# define target chunksize
chunksize = 512
# split into chunks of 510 tokens, we also convert to list (default is tuple which is immutable)
input_id_chunks = list(tokens['input_ids'][0].split(chunksize - 2))
mask_chunks = list(tokens['attention_mask'][0].split(chunksize - 2))
# loop through each chunk
for i in range(len(input_id_chunks)):
# add CLS and SEP tokens to input IDs
input_id_chunks[i] = torch.cat([
torch.tensor([101]), input_id_chunks[i], torch.tensor([102])
])
# add attention tokens to attention mask
mask_chunks[i] = torch.cat([
torch.tensor([1]), mask_chunks[i], torch.tensor([1])
])
# get required padding length
pad_len = chunksize - input_id_chunks[i].shape[0]
# check if tensor length satisfies required chunk size
if pad_len > 0:
# if padding length is more than 0, we must add padding
input_id_chunks[i] = torch.cat([
input_id_chunks[i], torch.Tensor([0] * pad_len)
])
mask_chunks[i] = torch.cat([
mask_chunks[i], torch.Tensor([0] * pad_len)
])
# check length of each tensor
for chunk in input_id_chunks:
print(len(chunk))
# print final chunk so we can see 101, 102, and 0 (PAD) tokens are all correctly placed
chunk
# -
# It all looks good! Now the final step of placing our tensors back into the dictionary style format we had before.
# +
input_ids = torch.stack(input_id_chunks)
attention_mask = torch.stack(mask_chunks)
input_dict = {
'input_ids': input_ids.long(),
'attention_mask': attention_mask.int()
}
input_dict
# -
# We can now process all chunks and calculate probabilities using softmax in parallel like so:
outputs = model(**input_dict)
probs = torch.nn.functional.softmax(outputs[0], dim=-1)
probs = probs.mean(dim=0)
probs
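# The same average-then-argmax step can be sketched in plain Python. The per-chunk probabilities below are made up, and the `[positive, negative, neutral]` label ordering is an assumption about this model:

```python
# hypothetical softmax outputs for three chunks
chunk_probs = [
    [0.2, 0.6, 0.2],
    [0.1, 0.7, 0.2],
    [0.3, 0.4, 0.3],
]
labels = ['positive', 'negative', 'neutral']  # assumed ordering

# mean over chunks, then pick the most probable class
mean_probs = [sum(col) / len(chunk_probs) for col in zip(*chunk_probs)]
prediction = labels[mean_probs.index(max(mean_probs))]
```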
| course/language_classification/04_window_method_in_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Services Web
# ## Générer un QR code
#
# En utilisant le service Web [*QR code*](https://github.com/sayanarijit/qrcode.show), générez un QR code qui renvoie vers le [programme TV du jour](https://www.programme-tv.net/).
# ```bash
# # your code here
# curl qrcode.show/https://www.programme-tv.net/
# ```
# ## Chuck Norris never dies
#
# Cette fois-ci, en utilisant le service Web [The Internet Chuck Norris Database](http://www.icndb.com/api/) (ICNDB), invoquez une *joke* en remplaçant le nom de Chuck Norris par vos nom et prénom.
# ```bash
# # your code here
# curl "http://api.icndb.com/jokes/random?firstName=Alexandre&lastName=Roulois"
# ```
# ## Une recherche bibliographique
#
# Grâce à [l’API de recherche HAL](https://api.archives-ouvertes.fr/docs/search), effectuez une recherche de la chaîne "Doctor Who" sur le titre des notices (paramètre `title_t`) et produisez une sortie en XML (paramètre `wt`).
# ```bash
# # your code here
# curl "https://api.archives-ouvertes.fr/search/?q=title_t:%22doctor%20who%22&wt=xml"
# ```
| 0.intro-to-unix-commands/answers/4.web-services.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Get track uris
# ## Import all the things
# +
import pandas as pd
import spotipy as sp
from spotipy.oauth2 import SpotifyClientCredentials # will update so user can create their own playlist
# -
# ## Get Spotify API credentials
#
# Create a text file named creds.txt in the same folder as this notebook. On the top line put your client id and on the second put your secret id. Don't add a return at the end of the secret id.
# + tags=[]
# read Spotify credentials from txt file
with open("creds.txt") as f:
creds = f.readlines()
# -
cid = creds[0][:-1] # first line, dropping the trailing '\n'
secret = creds[1] # second line
# Authentication - without user
client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret)
sp = sp.Spotify(client_credentials_manager=client_credentials_manager)  # note: this rebinds sp from the module to the client instance
# ## Function that does all the work
def get_songs(genre):
    '''Get 1000 songs of a given genre from Spotify.
    input: genre wanted (see readme for a link to a list of genres)
    output: dataframe that includes artist, track name, and track id
    '''
# create empty lists to append track information to
artist_name = []
track_name = []
track_id = []
# for loop for offset value
# offset value makes it get songs from multiple pages
for i in range(0, 1000, 50):
track_results = sp.search(q="genre:" + genre, limit = 50, type='track', offset=i)
# get information for all tracks and append to lists above.
for i, t in enumerate(track_results['tracks']['items']):
artist_name.append(t['artists'][0]['name'])
track_name.append(t['name'])
track_id.append(t['id'])
# dictionary created with lists saved to dictionary.
songs = {}
songs['artist'] = artist_name
songs['track_name'] = track_name
songs['track_id'] = track_id
songs['genre'] = genre
# dictionary saved as data frame.
songs = pd.DataFrame(songs)
return songs
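# The offset loop above is plain pagination. A minimal sketch with a stand-in for sp.search (the fake_search stub below is hypothetical, so no API credentials are needed) shows the mechanics:

```python
def fake_search(offset, limit):
    """Hypothetical stand-in for sp.search: returns `limit` fake track ids starting at `offset`."""
    return ['track_{}'.format(i) for i in range(offset, offset + limit)]

tracks = []
for offset in range(0, 1000, 50):   # 20 pages of 50 results each
    tracks.extend(fake_search(offset, 50))

print(len(tracks))  # 1000
```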
# ## Call function and clean data
# call function to get songs
songs_df = get_songs('glam metal')
# drop all live and demo versions (case-insensitive match)
songs_df = songs_df[~songs_df['track_name'].str.contains("live", case=False)]
songs_df = songs_df[~songs_df['track_name'].str.contains("demo", case=False)]
# sort by artist
songs_df = songs_df.sort_values('artist')
songs_df.head()
# get a list of artists
artists = songs_df.artist.unique()
artists
# count songs by each artist
num_songs = songs_df.value_counts('artist')
num_songs
# + tags=[]
songs_df.info()
| puzzle pieces/get_track_ids_only.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:cadd-py36]
# language: python
# name: conda-env-cadd-py36-py
# ---
# # Talktorial 7
#
# # Ligand-based screening: machine learning
#
# #### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin
#
# <NAME> and <NAME>
# ## Aim of this talktorial
#
# Due to larger available data sources, machine learning (ML) gained momentum in drug discovery and especially in ligand-based virtual screening. In this talktorial, we will learn how to use different supervised ML algorithms to predict the activity of novel compounds against our target of interest (EGFR).
#
# ## Learning goals
#
# * Different fingerprints to encode the molecules for usage in ML
# * Different ML algorithms and their application
# * Evaluation of ML model performance
#
# ### Theory
#
# * Introduce different types of fingerprints
# * Different types of supervised ML algorithms
# * Model performance evaluation and measurements
#
# ### Practical
#
# * Set up and evaluation of a ML-based screening pipeline for potential EGFR inhibitors
#
# ## References
#
# * RDKit fingerprints, e.g. see [presentation by G. Landrum at rdkit UGM 2012](https://www.rdkit.org/UGM/2012/Landrum_RDKit_UGM.Fingerprints.Final.pptx.pdf):
# * ML:
# * Random forest (RF): [http://ect.bell-labs.com/who/tkh/publications/papers/odt.pdf](http://ect.bell-labs.com/who/tkh/publications/papers/odt.pdf)
# * Support vector machines (SVM): [https://link.springer.com/article/10.1007%2FBF00994018](https://link.springer.com/article/10.1007%2FBF00994018)
# * Artificial neural networks (ANN): [https://www.frontiersin.org/research-topics/4817/artificial-neural-networks-as-models-of-neural-information-processing](https://www.frontiersin.org/research-topics/4817/artificial-neural-networks-as-models-of-neural-information-processing)
# * Performance:
# * [Sensitivity_and_specificity (wikipedia)](https://en.wikipedia.org/wiki/Sensitivity_and_specificity)
# * [Roc curve and AUC (wikipedia)](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)
# * See also [git hub notebook by <NAME>](https://github.com/Team-SKI/Publications/tree/master/Profiling_prediction_of_kinase_inhibitors) from [*J. Med. Chem.*, 2017, 60, 474−485](https://pubs.acs.org/doi/10.1021/acs.jmedchem.6b01611)
# _____________________________________________________________________________________________________________________
#
#
# ## Theory
#
# <img src="./images/ML_overview.png" width="200" align='right'>
#
# To successfully apply ML, we need a large data set of molecules, a molecular encoding, a label per molecule in the data set, and a ML algorithm to train a model. Then, we can make predictions for new molecules.
#
# ### Data preparation
#
# For ML, molecules need to be converted into a list of features. Often molecular fingerprints are used as representation.
#
# Fingerprints used in this talktorial and implemented in rdkit (more info can be found in a [presentation by <NAME>](https://www.rdkit.org/UGM/2012/Landrum_RDKit_UGM.Fingerprints.Final.pptx.pdf)):
# * **maccs**: MACCS keys are 166 bit structural key descriptors in which each bit is associated with a SMARTS pattern.
# * **ecfp4** and **ecfp6**: Extended-Connectivity Fingerprints (ECFPs) are circular topological fingerprints designed for molecular characterization, similarity searching, and structure-activity modeling. The most important parameters of ECFPs are the maximum diameter and the fingerprint length. The so-called diameter specifies the maximum diameter of the circular neighborhoods considered for each atom. Here there are two diameters: 4 and 6. The length parameter specifies the length of the bit string representation. The default length is 2048.
# * **torsion**: Topological torsion fingerprints encode paths of four consecutively bonded non-hydrogen atoms, hashed into a fixed-length bit vector; they capture the local torsion environments of a molecule.
# * **rdk5**: rdk5 is a path based fingerprint. A path fingerprint is generated by exhaustively enumerating all linear fragments of a molecular graph up to a given size and then hashing these fragments into a fixed-length bit vector.
#
# ### Machine Learning (ML)
#
# ML can be applied for (see also [scikit-learn page](http://scikit-learn.org/stable/)):
#
# * **Classification (supervised)**: Identify to which category an object belongs (Nearest neighbors, Naive Bayes, RF, SVM, ...)
# * Regression: Prediction of a continuous-valued attribute associated with an object
# * Clustering (unsupervised): Automated grouping of similar objects into sets (see **talktorial 5**)
#
# #### Supervised learning
#
# The learning algorithm creates rules by finding patterns in the training data.
# <img src="./images/RF_example.png" width="250" align='right'>
# * **Random Forest (RF)**: Multiple decision trees which produce a mean prediction.
#
# * **Support Vector Machines (SVM)**: A classifier based on the idea of maximizing the margin as the objective function. SVMs can efficiently perform non-linear classification using the so-called kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
#
# <img src="./images/ANN_wiki.png" width="150" align='right'>
# * **Artificial neural networks (ANNs)**: An ANN is based on a collection of connected units or nodes called artificial neurons which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. (Figure from Wikipedia)
#
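# All three algorithms share the same scikit-learn estimator interface (`fit`/`predict`), so they can be swapped freely. A minimal sketch on made-up toy data (the arrays below are purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Toy data: 8 samples, 2 features, binary labels (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [2, 2], [2, 3], [3, 2], [3, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Every classifier exposes the same fit/predict interface
for clf in (RandomForestClassifier(n_estimators=10, random_state=0),
            SVC(probability=True, random_state=0),
            MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)):
    clf.fit(X, y)
    print(type(clf).__name__, clf.predict([[0, 0], [3, 3]]))
```

# Training the real models on the fingerprint data later in this talktorial follows exactly this pattern.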
# #### Validation strategy: K-fold cross validation
#
# * This model validation technique iteratively splits the dataset into two groups:
# * Training data set: Considered as the known dataset on which the model is trained
# * Test dataset: Unknown dataset on which the model is then tested
# * Process is repeated k-times
#
# * The goal is to test the ability of the model to predict data which it has never seen before in order to flag problems known as over-fitting and to assess the generalization ability of the model.
#
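# A minimal sketch of k-fold splitting with scikit-learn's `KFold` (the ten dummy points are illustrative only):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10)  # ten dummy data points
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# Each of the 5 folds holds out 2 of the 10 points as the test set
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```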
# #### Performance measures
# <img src="./images/FP_TP_fig.png" width="250" align='right'>
#
# * **Sensitivity**, also true positive rate: TPR = TP/(FN+TP)
# * **Specificity**, also true negative rate: TNR = TN/(FP + TN)
# * **Accuracy**, also the trueness: ACC = (TP + TN)/(TP + TN + FP + FN)
# * **ROC-curve**, receiver operating characteristic curve
# * A graphical plot that illustrates the diagnostic ability of our classifier
# * Plots the true positive rate (sensitivity) against the false positive rate (1 - specificity)
# * **AUC**, the area under the ROC curve:
# * Describes the probability that a classifier will rank a randomly chosen positive instance higher than a negative one
# * Values between 0 and 1, the higher the better
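# The measures above can be sketched directly from the confusion-matrix counts (the counts below are made up for illustration):

```python
def performance(tp, tn, fp, fn):
    """Sensitivity (TPR), specificity (TNR) and accuracy from confusion-matrix counts."""
    tpr = tp / (fn + tp)            # sensitivity
    tnr = tn / (fp + tn)            # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)
    return tpr, tnr, acc

# Made-up example: 40 TP, 45 TN, 5 FP, 10 FN
print(performance(40, 45, 5, 10))  # (0.8, 0.9, 0.85)
```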
# +
# Import statements
# General:
import pandas as pd
import numpy as np
# rdkit:
from rdkit import Chem
from rdkit.Chem import RDKFingerprint
from rdkit.Chem.AllChem import GetMorganFingerprintAsBitVect
from rdkit.Chem.AllChem import GetHashedTopologicalTorsionFingerprintAsBitVect
from rdkit.Chem import MACCSkeys
from rdkit.DataStructs import ConvertToNumpyArray
# sklearn:
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import KFold
from sklearn.metrics import auc
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
from sklearn.metrics import roc_curve
# from sklearn.manifold import MDS
# matplotlib:
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# seaborn:
import seaborn as sns
# -
# ### Data preparation
# We will work on EGFR (Epidermal growth factor receptor) kinase data for now.
#
# But before starting, we will define two functions to help us create the data frame we will work with.
# The first method is named `calculate_fp` and calculates the molecular fingerprint of a molecule. The user has the choice between:
# * maccs
# * ecfp4 and ecfp6
# * torsion
# * rdk5
def calculate_fp(mol, method='maccs', n_bits=2048):
    # Calculate the molecular fingerprint of an rdkit Mol object,
    # given the method and (where applicable) the number of bits
if method == 'maccs':
return MACCSkeys.GenMACCSKeys(mol)
if method == 'ecfp4':
return GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits, useFeatures=False)
if method == 'ecfp6':
return GetMorganFingerprintAsBitVect(mol, 3, nBits=n_bits, useFeatures=False)
if method == 'torsion':
return GetHashedTopologicalTorsionFingerprintAsBitVect(mol, nBits=n_bits)
if method == 'rdk5':
return RDKFingerprint(mol, maxPath=5, fpSize=1024, nBitsPerHash=2)
# The second function helps us to create a data frame with the following additional columns:
# * Our molecules as molecule objects (created SMILES-strings)
# * The fingerprint as a Python object of the method of our choice (here MACCS)
# * The bit-vector of the fingerprint method of our choice as binary representation
#
# Therefore, we have two parameters: the data frame with a column named "smiles" containing valid SMILES values, and the length of the fingerprint. The second argument is only used when a fingerprint method other than MACCS is chosen.
def create_mol(df_l, n_bits):
# Construct a molecule from a SMILES string
# Generate mol column: Returns a Mol object, None on failure.
df_l['mol'] = df_l.smiles.apply(Chem.MolFromSmiles)
# Create a column for storing the molecular fingerprint as fingerprint object
df_l['bv'] = df_l.mol.apply(
# Apply the lambda function "calculate_fp" for each molecule
lambda x: calculate_fp(x, 'maccs', n_bits)
)
# Allocate np.array to hold fp bit-vector (np = numpy)
df_l['np_bv'] = np.zeros((len(df_l), df_l['bv'][0].GetNumBits())).tolist()
df_l.np_bv = df_l.np_bv.apply(np.array)
# Convert the object fingerprint to NumpyArray and store in np_bv
df_l.apply(lambda x: ConvertToNumpyArray(x.bv, x.np_bv), axis=1)
# ### Load data
#
# Now let's start to load our data and to do the actual work. The *csv* file from **talktorial 2** is loaded into a dataframe with the important columns:
#
# * the CHEMBL-ID
# * the SMILES value of the corresponding compound
# * pIC50
# Read data from previous talktorials
df = pd.read_csv('../data/T2/EGFR_compounds_lipinski.csv', delimiter=';', index_col=0)
# Look at shape and info
print(df.shape)
print(df.info())
# ### Classify data
# We need to classify each compound as active or inactive; for this, we use the pIC50 value.
# * pIC50 = -log10(IC50)
# * IC50 describes the molar concentration (mol/L) that results in 50 percent inhibition in vitro.
# * A common cut-off value to discretize pIC50 data is 6.3, which we will use for our experiment.
# * Note that there are several other suggestions for an activity cut-off in the literature, ranging from a pIC50 value of 5 to 7, or even defining an exclusion range of data points to leave out.
#
# Now we can use our functions defined above to generate our molecules and their fingerprints
# as well as specifying which molecule is active and which is not.
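# The pIC50 conversion and the 6.3 cut-off can be sketched as follows (the `pic50` helper and the example IC50 values are made up for illustration; the dataset already provides pIC50):

```python
import math

def pic50(ic50_nM):
    """Convert an IC50 given in nanomolar to pIC50 = -log10(IC50 in mol/L)."""
    return -math.log10(ic50_nM * 1e-9)

# 100 nM corresponds to pIC50 = 7 (active at the 6.3 cut-off); 1000 nM to pIC50 = 6 (inactive)
for ic50 in (100.0, 1000.0):
    p = pic50(ic50)
    print(ic50, p, p >= 6.3)
```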
# +
# Drop unnecessary columns
df_new=df.drop(['units', 'IC50'], axis=1)
# Create molecules from smiles and their fingerprints
create_mol(df_new, 2048)
# Add column for activity
df_new['active'] = np.zeros(len(df_new))
# Mark every molecule with a pIC50 >= 6.3 as active
df_new.loc[df_new[df_new.pIC50 >= 6.3].index, 'active'] = 1.0
print('actives: %d, inactives: %d' % (df_new.active.sum(), len(df_new)-df_new.active.sum()))
# -
df_new.head(3)
# ### Machine Learning (ML)
#
# In the following we will try several ML approaches to classify our molecules. We will use:
# * Random Forest (RF)
# * Support Vector Machines (SVM)
# * Artificial Neural Networks (ANNs)
#
# Additionally, we will comment on the results. But before we start we define a function named `crossvalidation` which executes a cross validation procedure and returns measures such as accuracy, sensitivity and specificity.
#
# The goal is to test the ability of the model to predict data which it has never seen before in order to flag problems known as overfitting and to assess the generalization ability of the model.
# Function for a cross-validation loop.
def crossvalidation(model_l, df_l, n_folds=10):
# Given the selected model, the dataFrame and the number of folds the function executes a crossvalidation and returns
# accuracy, sensitivity, specificity for the prediction as well as fpr, tpr, roc_auc for each fold
# Empty results vector
results = []
# Shuffle the indices for the k-fold cross-validation
kf = KFold(n_splits=n_folds, shuffle=True)
# Labels initialized with -1 for each data-point
labels = -1 * np.ones(len(df_l))
# Loop over the folds
for train_index, test_index in kf.split(df_l):
# Training
# Convert the bit-vector and the label to a list
train_x = df_l.iloc[train_index].bv.tolist()
train_y = df_l.iloc[train_index].active.tolist()
# Fit the model
model_l.fit(train_x, train_y)
# Testing
# Convert the bit-vector and the label to a list
test_x = df_l.iloc[test_index].bv.tolist()
test_y = df_l.iloc[test_index].active.tolist()
# Predict on test-set
prediction_prob = model_l.predict_proba(test_x)[:, 1]
# Save the predicted label of each fold
labels[test_index] = model_l.predict(test_x)
# Performance
# Get fpr, tpr and roc_auc for each fold
fpr_l, tpr_l, _ = roc_curve(test_y, prediction_prob)
roc_auc_l = auc(fpr_l, tpr_l)
# Append to results
results.append((fpr_l, tpr_l, roc_auc_l))
# Get overall accuracy, sensitivity, specificity
y = df_l.active.tolist()
acc = accuracy_score(df_l.active.tolist(), labels)
sens = recall_score(df_l.active.tolist(), labels)
spec = (acc * len(y) - sens * sum(y)) / (len(y) - sum(y))
return acc, sens, spec, results
# Of course we want to assess the quality of our models. Therefore, we want to know the accuracy, sensitivity and specificity of our predictions. Additionally, we focus on the so-called ROC curve.
#
# For reasons of clarity and comprehensibility of our code, we build a small function to plot our results.
#
# We will focus shortly on the following aspects:
# * Sensitivity
# * Specificity
# * Accuracy
# * ROC-curve and AUC
def print_results(acc, sens, spec, stat_res, main_text, file_name, plot_figure=1):
plt.figure(plot_figure, figsize=(7, 7))
cmap = cm.get_cmap('Blues')
colors = [cmap(i) for i in np.linspace(0.3, 1.0, 10)]
#colors = ["#3465A4"]
for i, (fpr_l, tpr_l, roc_auc_l) in enumerate(stat_res):
plt.plot(fpr_l, tpr_l, label='AUC CV$_{0}$ = {1:0.2f}'.format(str(i),roc_auc_l), lw=2, color=colors[i])
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.plot([0, 1], [0, 1], linestyle='--', label='Random', lw=2, color="black") # Random curve
plt.xlabel('False positive rate', size=24)
plt.ylabel('True positive rate', size=24)
plt.title(main_text, size=24)
plt.tick_params(labelsize=16)
plt.legend(fontsize=16)
# Save plot - use bbox_inches to include text boxes:
# https://stackoverflow.com/questions/44642082/text-or-legend-cut-from-matplotlib-figure-on-savefig?rq=1
plt.savefig("../data/T7/" + file_name, dpi=300, bbox_inches="tight", transparent=True)
plt.show()
# Calculate mean AUC and print
    m_auc = np.mean([elem[2] for elem in stat_res])
    print('Mean AUC: {}'.format(m_auc))
    # Show overall accuracy, sensitivity, specificity
    print('Accuracy: {}\nSensitivity: {}\nSpecificity: {}\n'.format(acc, sens, spec))
print('\n')
# ### Random forest classifier
#
# Now we will start with a random forest classifier. We will first set the parameters. Afterwards we will do the cross validation of our model and plot the results.
# +
# Set model parameter for random Forest
param = {'max_features': 'auto',
'n_estimators': 2000,
'criterion': 'entropy',
'min_samples_leaf': 1}
modelRf = RandomForestClassifier(**param)
# Do cross-validation procedure with 10 folds
r = crossvalidation(modelRf, df_new, 10)
# -
# Plot the AUC results
# r contains acc, sens, spec, and results
print_results(r[0], r[1], r[2], r[3], 'Random forest ROC curves', 'rf_roc.png', 3)
# Our model shows very good values for all measures and thus seems to be predictive.
# ### Support vector classifier
# Here we train a Support vector machine with a Radial-basis function kernel (also: squared-exponential kernel).
# For more information see [sklearn RBF kernel](http://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.kernels.RBF.html).
# +
# Specify model
modelSvm = svm.SVC(kernel='rbf', C=1, gamma=0.1, probability=True)
# Do cross-validation procedure with 10 folds
r = crossvalidation(modelSvm, df_new, 10)
# -
# Plot results
print_results(r[0], r[1], r[2], r[3],
              r'SVM (RBF kernel) $C=1$, $\gamma=0.1$ ROC curves', 'svm_roc.png', 3)
# ### Neural network classifier
# The last approach we try here is a neural network model. We train an MLPClassifier (Multi-layer Perceptron classifier) with two hidden layers of 5 and 3 neurons. You may notice that early stopping is explicitly set to False. As before, we run the cross-validation procedure and plot the results. For more info on MLPs, see [sklearn MLPClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html).
# +
# Specify model, default activation: relu
modelClf = MLPClassifier(solver='adam',
alpha=1e-5,
hidden_layer_sizes=(5, 3),
random_state=1, early_stopping=False)
# Do cross-validation procedure with 10 folds
r = crossvalidation(modelClf, df_new, 10)
# -
# Plot results
print_results(r[0], r[1], r[2], r[3], 'MLPClassifier ROC curves', 'mlp_roc.png', 3)
# ## Discussion
#
# * Which model performed best on our data set and why?
#   * All three models perform very well on our dataset. The best models are the random forest and support vector machine models, which showed a mean AUC of about 90%. Our neural network showed slightly lower results with a mean AUC of about 87%. (Note: values can differ slightly if you rerun the script.)
#   * There might be several reasons why the random forest and support vector machine models performed best. Our dataset might be easily separable into active/inactive with some simple tree-like decisions or with the radial basis function, respectively; i.e., the fingerprints contain no pattern too complex for this classification.
#   * A cause for the slightly poorer performance of the ANN could be that there was simply too little data to train the model on.
# * Additionally, it is always advisable to have another external validation set for model evaluation.
#
# * Was MACCS the right choice?
# * Obviously, MACCS was good to start training and validating models to see if a classification is possible.
# * But since MACCS fingerprints are rather short (166 bit) compared to others (2048 bit), one should try different fingerprints and repeat the validation process.
#
#
# ### Where can we go from here?
#
# * We successfully trained several models.
# * The next step is to use these models to do a classification with an unknown screening dataset to predict novel potential EGFR inhibitors.
# * An example of a large screening data set is [MolPort](https://www.molport.com/shop/database-download) with over 7 million compounds.
# * Our models could be used to rank the MolPort compounds and then further study those with the highest predicted probability of being active.
# * For such an application, see also the [TDT Tutorial](https://github.com/sriniker/TDT-tutorial-2014) developed by <NAME> and <NAME>, where they trained a fusion model to screen [eMolecules](https://www.emolecules.com/) for new anti-malaria drugs.
# ### Quiz
#
# * How can you apply ML for virtual screening?
# * Which machine learning algorithms do you know?
# * What are necessary prerequisites to successfully apply ML?
| talktorials/7_machine_learning/T7_machine_learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
# # 7 Pattern Matching with Regular Expressions
# ## 7.1 Finding Patterns of Text without Regular Expressions
def isPhoneNumber(text):
if len(text) != 12:
return False
for i in range(0, 3):
if not text[i].isdecimal():
return False
if text[3] != '-':
return False
for i in range(4, 7):
if not text[i].isdecimal():
return False
if text[7] != '-':
return False
for i in range(8, 12):
if not text[i].isdecimal():
return False
return True
print('Is 415-555-4242 a phone number?')
print(isPhoneNumber("415-555-4242"))
print('Is Moshi moshi a phone number?')
print(isPhoneNumber("Moshi moshi"))
message = 'Call me at 415-555-1011 tomorrow. 415-555-9999 is my office.'
for i in range(len(message)):
chunk = message[i:i+12]
if isPhoneNumber(chunk):
print('Phone number found: ' + chunk)
print('Done')
# ## 7.2 Finding Patterns of Text with Regular Expressions
# ### 7.2.1 Creating Regex Objects
# Import re module to work with regex.
import re
# Passing a string value representing your regular expression to re.compile() returns a Regex pattern object (or simply, a Regex object).
#
# To create a Regex object that matches the phone number pattern, enter the following into the interactive
# shell. (Remember that \d means “a digit character” and \d\d\d-\d\d\d-\d\d\d\d is the regular expression for a phone number pattern.)
phoneNumRegex = re.compile(r'\d\d\d-\d\d\d-\d\d\d\d')
# ### 7.2.2 Matching Regex Objects
# A Regex object’s search() method searches the string it is passed for any matches to the regex. The search() method will return None if the regex pattern is not found in the string. If the pattern is found, the search() method returns a Match object, which has a group() method that will return the actual matched text from the searched string. (I’ll explain groups shortly.)
mo = phoneNumRegex.search('My number is 415-555-4242.')
print('Phone number found: ' + mo.group())
# Here, we pass our desired pattern to re.compile() and store the resulting Regex object in phoneNumRegex. Then we call search() on phoneNumRegex and pass search() the string we want to match for during the search. The result of the search gets stored in the variable mo. In this example, we know that our pattern will be found in the string, so we know that a Match object will be returned. Knowing that mo contains a Match object and not the null value None, we can call group() on mo to return the match. Writing mo.group() inside our print() function call displays the whole match, 415-555-4242.
# ### 7.2.3 Review of Regular Expression Matching
# 1. Import the regex module with import re.
# 2. Create a Regex object with the re.compile() function. (Remember to use a raw string.)
# 3. Pass the string you want to search into the Regex object’s search() method. This returns a Match object.
# 4. Call the Match object’s group() method to return a string of the actual matched text.
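# The four steps above in one snippet:

```python
import re                                                # 1. import the regex module

phoneNumRegex = re.compile(r'\d\d\d-\d\d\d-\d\d\d\d')    # 2. compile a raw string into a Regex object
mo = phoneNumRegex.search('My number is 415-555-4242.')  # 3. search() returns a Match object (or None)
print(mo.group())  # 4. group() returns the matched text: 415-555-4242
```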
# ## 7.3 More Pattern Matching with Regular Expression
# ### 7.3.1 Grouping with Parentheses
phoneNumRegex = re.compile(r'(\d\d\d)-(\d\d\d-\d\d\d\d)')
mo = phoneNumRegex.search('My number is 415-555-4242.')
# Call the first group using mo.group(1)
mo.group(1)
# Call the second group using mo.group(2)
mo.group(2)
# Call the whole number using mo.group(0) or mo.group()
# If you would like to retrieve all the groups at once, use the groups() method—note the plural form for the name.
mo.groups()
areaCode, mainNumber = mo.groups()
print(areaCode)
# Since mo.groups() returns a tuple of multiple values, you can use the multiple-assignment trick to assign
# each value to a separate variable, as in the previous areaCode, mainNumber = mo.groups() line.
#
| Automating_Tasks_p1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="JSCHpsiDpCEP"
# # **IRIS FLOWER CLASSIFICATION ML PROJECT**
#
#
# ---
#
#
# ---
#
#
# + [markdown] id="vJBtYur0huzi"
# # **IRIS DATASET ANALYSIS**
#
#
#
# * Dataset Information
#
# * The dataset contains 3 classes of 50 instances each, where each class refers to a type of iris plant.
# * One class is linearly separable from the other 2; the latter are not linearly separable from each other.
#
# * Attributes - SepalLength, SepalWidth, PetalLength and PetalWidth, all in cm; the classes are as follows - 1. Iris Setosa, 2. Iris Versicolour & 3. Iris Virginica.
#
#
#
#
# ---
#
#
#
# ---
#
#
# + id="tBCckIwPdjuD"
from google.colab import files
from IPython.display import Image
# + id="buJiDzxSeeKN"
import cv2
from google.colab.patches import cv2_imshow
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 73} id="21BSk_PueA3v" outputId="5222a9a7-406e-4892-a4d7-bb31bd7cf6db"
uploaded = files.upload()
# + colab={"base_uri": "https://localhost:8080/", "height": 167} id="x7tNqMi7eT9L" outputId="e399027a-df92-4f5b-8801-4878e2401f92"
image = cv2.imread("ic.jpg")
cv2_imshow(image)
# + [markdown] id="DKoHCj86kZsj"
#
#
# ---
#
#
#
# ---
#
#
# ##**STEP 01 IMPORTING LIBRARIES**
#
#
# ---
#
#
#
# ---
#
#
# + id="QX0wXgdcRM6c"
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] id="X2YCmNafj628"
#
#
# ---
#
#
#
# ---
#
#
# ##**STEP 02 UPLOADING THE DATASET**
#
#
# ---
#
#
#
# ---
#
#
# + id="tFUwTY-FRz0x"
from google.colab import files
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 73} id="8o40li4ISHBG" outputId="409a31cb-1db2-4d2e-daf8-27352bb98966"
uploaded = files.upload()
# + colab={"base_uri": "https://localhost:8080/"} id="qcjGbXgLSJ2R" outputId="378949f3-3275-4cf8-e795-cb3306fdc2f5"
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# + [markdown] id="3bXpaTk1kH8K"
#
#
# ---
#
#
#
# ---
#
#
# ##**STEP 03 LOADING THE DATASET**
#
#
# ---
#
#
#
# ---
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="Y11zievoSU36" outputId="46459021-7341-4ccb-986a-031fd12eea06"
df=pd.read_csv('Iris (3).csv')
df.head()
# + id="vygqSNCkSnC6"
df=df.drop(columns=['Id'])
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="0bVGuDkjS108" outputId="918c8fa5-db48-498d-f846-f78cc485729b"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="eMmAEQY9S3tc" outputId="25817f9d-e363-4233-f819-e83b69586b1a"
#TO DISPLAY STATS ABOUT DATA
df.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="0IBcMxIsS-t3" outputId="3ad89dfe-98be-4255-ea55-db5b168807f0"
df.info()
# + colab={"base_uri": "https://localhost:8080/"} id="YXDpPcV6TFrO" outputId="003f4d89-650f-44ac-d2b1-45528b7ccaa0"
#TO DISPLAY NO.OF SAMPLES ON EACH CLASS
df['Species'].value_counts()
# + [markdown] id="-Arf0S5jlIfy"
#
#
# ---
#
#
#
# ---
# ##**STEP 04 PREPROCESSING THE DATASET**
#
#
# ---
#
#
#
# ---
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="NyAgPPk3TgOl" outputId="7991f4e9-66a5-4ee8-9c89-f1dee593c3a8"
#CHECKING NULL VALUES
df.isnull().sum()
# + id="OhnhBPqclegS"
# + [markdown] id="g8KY2-RMlfC1"
#
#
# ---
#
#
#
# ---
#
# ##**STEP 05 EXPLORATORY DATA ANALYSIS**
# ##Data visualization
#
#
# ---
#
#
#
# ---
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="l_4anflPT7fe" outputId="828b7819-dfe0-46d9-b0a3-9674cc36c481"
df['SepalLengthCm'].hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="2meeY55YUGa3" outputId="aeb6f09a-8a9e-487a-f88b-fbd69bc71880"
df['SepalWidthCm'].hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="gBKt8H-NUgFS" outputId="cc235224-706b-4966-fd4d-597dbccac6de"
df['PetalLengthCm'].hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="dnuoMkfYUf6k" outputId="b7fcbbbe-d24b-4914-d706-c68a88928612"
df['PetalWidthCm'].hist()
# + id="Wcl-HXbjUfwY"
#SCATTERPLOT
colors=['red','orange','blue']
species=['Iris-setosa','Iris-virginica' ,'Iris-versicolor']
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="V_ASjglKUfia" outputId="170cc039-4725-4a7f-db1d-9b23b710c73d"
for i in range(3):
x=df[df['Species']==species[i]]
plt.scatter(x['SepalLengthCm'],x['SepalWidthCm'],c=colors[i],label=species[i])
plt.xlabel("SepalLength")
plt.ylabel('SepalWidth')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="qs0nZFbgWvBa" outputId="c63fd2ea-e597-4902-b4d7-2d911640a763"
for i in range(3):
x=df[df['Species']==species[i]]
plt.scatter(x['PetalLengthCm'],x['PetalWidthCm'],c=colors[i],label=species[i])
plt.xlabel("PetalLength")
plt.ylabel('PetalWidth')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="3U22w1JgXdwx" outputId="af7c8979-af4d-4ea1-8360-caf8426ee1b1"
for i in range(3):
x=df[df['Species']==species[i]]
plt.scatter(x['SepalLengthCm'],x['PetalLengthCm'],c=colors[i],label=species[i])
plt.xlabel("SepalLength")
plt.ylabel('PetalLength')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="K2zedKRRXvE5" outputId="c894f630-2116-413f-d7b1-faa33615912f"
for i in range(3):
x=df[df['Species']==species[i]]
plt.scatter(x['SepalWidthCm'],x['PetalWidthCm'],c=colors[i],label=species[i])
plt.xlabel("SepalWidth")
plt.ylabel('PetalWidth')
plt.legend()
# + [markdown] id="wnpDomW8nnt6"
#
#
# ---
#
#
#
# ---
#
#
# **CORRELATION MATRIX** - A correlation matrix is a table showing correlation coefficients between variables. Each cell in the table shows the correlation between two variables. The values are in the range of -1 to +1. If two variables have a high correlation, we can neglect one of the two variables.
#
#
# ---
#
#
#
# ---
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="gbFLFeVPY__u" outputId="c30ef41e-14fb-4e79-f8a9-48bed717ad3c"
df.corr()
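# Each cell of the matrix above is a Pearson coefficient; a minimal sketch of how one such value is computed, on toy arrays (not the Iris columns):

```python
import numpy as np

# two toy feature columns (illustrative values only)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])  # perfectly linear in x

# Pearson r = cov(x, y) / (std(x) * std(y))
r = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(r)  # a perfect linear relation gives r = 1.0
```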
# + colab={"base_uri": "https://localhost:8080/", "height": 409} id="vrVAaKflZRws" outputId="595f3e67-b055-48b1-aa16-fbf3369f9e5d"
corr=df.corr()
fig, ax =plt.subplots(figsize=(6,5))
sns.heatmap(corr,annot =True,ax=ax)
# + [markdown] id="cvShXYuhnvcu"
# ---
# **LABEL ENCODER** - In machine learning we usually deal with datasets that contain labels in one or more columns. These labels can be in the form of words or numbers. Label encoding refers to converting the labels into numeric form so as to make them machine readable.
# ---
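# `LabelEncoder` assigns integer codes in sorted class order; a small self-contained example with the three Iris species names:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
labels = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica', 'Iris-setosa']
encoded = le.fit_transform(labels)
# classes_ holds the sorted label names; their positions are the codes
print(dict(zip(le.classes_, range(len(le.classes_)))))
```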
# + id="5UUW5GSHZw6f"
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="0rEo9ELXa907" outputId="dde99d23-1016-41c4-e16f-77db29431c6f"
df['Species']=le.fit_transform(df['Species'])
df.head()
# + [markdown] id="qnRrFFN3oyfq"
# ---
# ## **STEP 06 MODEL TRAINING**
# ---
# + id="JqV_ndx4bJm3"
from sklearn.model_selection import train_test_split #train -70; test -30
X=df.drop(columns=['Species'])
Y=df['Species']
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.30)
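# One optional refinement: passing `stratify=Y` keeps the class proportions equal in both splits, which matters for small balanced datasets like this one. A sketch using sklearn's bundled Iris data as a stand-in:

```python
from collections import Counter

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# stratify=y preserves the 50/50/50 class balance in train and test
x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)
print(Counter(y_te))  # 15 samples of each class in the 45-sample test set
```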
# + id="z4KovrHIccsp"
#LOGISTIC REGRESSION
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
# + colab={"base_uri": "https://localhost:8080/"} id="8YfAGOfgcyF7" outputId="4d2cd230-ea5a-40d3-d067-e8dcb9d1811a"
model.fit(x_train,y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="b2hR0DRvdJEZ" outputId="8c084f0e-046a-432f-e0ab-476822f50f21"
#PRINT METRIC TO GET PERFORMANCE
print("Accuracy : ",model.score(x_test,y_test)*100)
# + id="9Edr5WkUd3pS"
#knn - k-nearest neighbours
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier()
# + colab={"base_uri": "https://localhost:8080/"} id="RyhXlzO6esJG" outputId="98f6f0f5-d33a-44a1-ac9e-fa8ad5c950da"
model.fit(x_train,y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="6O7iMR9kfDw0" outputId="cf7599be-084e-4598-fc26-3ac8dd33dba4"
print("Accuracy : ",model.score(x_test,y_test)*100)
# + id="ob2p8me_fJmE"
#decision tree
from sklearn.tree import DecisionTreeClassifier
model=DecisionTreeClassifier()
# + colab={"base_uri": "https://localhost:8080/"} id="J1UNLxWBf1So" outputId="0e8eb1ce-727e-4a90-8839-aa4c88797b30"
model.fit(x_train,y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="0XxlwvIbf5aF" outputId="83dfa7b6-22b5-4468-d3b8-562116372c6f"
print("Accuracy : ",model.score(x_test,y_test)*100)
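# Since a single 70/30 split gives a noisy accuracy estimate, the three classifiers can be compared more fairly with cross-validation; a sketch using sklearn's bundled Iris data as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    'logistic regression': LogisticRegression(max_iter=1000),
    'knn': KNeighborsClassifier(),
    'decision tree': DecisionTreeClassifier(random_state=0),
}
# mean accuracy over 5 folds is more stable than one train/test split
scores = {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in models.items()}
print(scores)
```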
# + id="3ibWTAK3f91w"
| Iris Flower Classification/Iris_Flower_classification_ML_project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import math
import torch
import gpytorch
import pyro
import tqdm
import h5py
import numpy as np
import matplotlib.pyplot as plt
import os, sys
sys.path.append("../")
from kernels import SpaceKernel
# %matplotlib inline
kern = SpaceKernel()
r_bao = torch.tensor(100/0.7/1000) # Gpc
w_bao = torch.tensor(15/0.7/1000) # Gpc
kern.raw_gauss_mean.data = torch.log(torch.exp(r_bao) - 1)
kern.raw_gauss_sig.data = torch.log(torch.exp(w_bao) - 1)
torch.nn.functional.softplus(kern.raw_gauss_mean)
tau = torch.linspace(0, 0.3)
xi = kern(tau, torch.zeros(1,1)).evaluate()
plt.plot(tau, xi.detach().log())
r_bao = 100/0.7/1000 # Gpc
w_bao = 15/0.7/1000 # Gpc
f = h5py.File("../data/comoving-positions.h5", 'r')
dset = f['pos']
obs = torch.FloatTensor(dset[()])
# +
n = 10
dim = 3
inducing_pts = torch.zeros(pow(n, dim), dim)
for i in range(n):
for j in range(n):
for k in range(n):
inducing_pts[i * n**2 + j * n + k][0] = float(i) / ((n-1) * 0.5) - 1.
inducing_pts[i * n**2 + j * n + k][1] = float(j) / ((n-1) * 0.5) - 1.
inducing_pts[i * n**2 + j * n + k][2] = float(k) / ((n-1) * 0.5) - 1.
inducing_row = torch.tensor([float(i) / ((n-1) * 0.5) - 1. for i in range(n)])
# -
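# The triple loop above can be collapsed into a vectorized construction; a numpy sketch of the same n**3 x 3 grid on [-1, 1]:

```python
import numpy as np

n, dim = 10, 3
row = np.linspace(-1.0, 1.0, n)  # same axis values as float(i) / ((n-1) * 0.5) - 1.
# indexing='ij' reproduces the i * n**2 + j * n + k ordering of the loop
grid = np.stack(np.meshgrid(row, row, row, indexing='ij'), axis=-1).reshape(-1, dim)
print(grid.shape)  # (1000, 3)
```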
class GPModel(gpytorch.models.ApproximateGP):
def __init__(self, num_arrivals, edge_len, inducing_pts, name_prefix="cox_gp_model"):
self.name_prefix = name_prefix
self.dim = inducing_pts.shape[-1]
self.edge_len = edge_len
self.mean_intensity = num_arrivals / (edge_len ** self.dim)  # use self.dim, not the notebook-level global
num_inducing = inducing_pts.shape[0]
# Define the variational distribution and strategy of the GP
# We will initialize the inducing points to lie on a grid from 0 to T
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(num_inducing_points=num_inducing)
variational_strategy = gpytorch.variational.VariationalStrategy(self, inducing_pts, variational_distribution)
# Define model
super().__init__(variational_strategy=variational_strategy)
# Define mean and kernel
self.mean_module = gpytorch.means.ZeroMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, times):
mean = self.mean_module(times)
covar = self.covar_module(times)
return gpytorch.distributions.MultivariateNormal(mean, covar)
def guide(self, arrival_times, quadrature_times):
# Draw samples from q(f) at arrival_times
# Also draw samples from q(f) at evenly-spaced points (quadrature_times)
with pyro.plate(self.name_prefix + ".times_plate", dim=-1):
pyro.sample(
self.name_prefix + ".function_samples",
self.pyro_guide(torch.cat([arrival_times, quadrature_times], 0))
)
def model(self, arrival_times, quadrature_times):
pyro.module(self.name_prefix + ".gp", self)
# Draw samples from p(f) at arrival times
# Also draw samples from p(f) at evenly-spaced points (quadrature_times)
with pyro.plate(self.name_prefix + ".times_plate", dim=-1):
function_samples = pyro.sample(
self.name_prefix + ".function_samples",
self.pyro_model(torch.cat([arrival_times, quadrature_times], 0))
)
####
# Convert function samples into intensity samples, using the function above
####
intensity_samples = function_samples.exp() * self.mean_intensity
# Divide the intensity samples into arrival_intensity_samples and quadrature_intensity_samples
arrival_intensity_samples, quadrature_intensity_samples = intensity_samples.split([
arrival_times.size(-1), quadrature_times.size(-1)
], dim=-1)
####
# Compute the log_likelihood, using the method described above
####
arrival_log_intensities = arrival_intensity_samples.log().sum(dim=-1)
est_num_arrivals = quadrature_intensity_samples.mean(dim=-1).mul(self.edge_len ** self.dim)  # integrate over the cube volume (the class defines no max_time attribute)
log_likelihood = arrival_log_intensities - est_num_arrivals
pyro.factor(self.name_prefix + ".log_likelihood", log_likelihood)
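# The `model` method implements the log-Gaussian Cox process likelihood, with the integral of the intensity approximated by a Monte Carlo average over the $Q$ quadrature points (where $V$ is the volume of the observation cube):

```latex
\log p(\{x_i\}_{i=1}^{N} \mid \lambda)
  = \sum_{i=1}^{N} \log \lambda(x_i) - \int_{V} \lambda(x)\,dx ,
\qquad
\int_{V} \lambda(x)\,dx \approx \frac{V}{Q} \sum_{q=1}^{Q} \lambda(x_q) ,
\qquad
\lambda(x) = \bar{\lambda}\, e^{f(x)}
```

# where $\bar{\lambda}$ is `mean_intensity` and $f$ is a sample from the GP.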
model = GPModel(obs.shape[0], edge_len = 2., inducing_pts=inducing_pts)
test = model(inducing_pts).sample(sample_shape=torch.Size((1,))).squeeze()
tt = test.view(n,n,n)
from scipy.interpolate import RegularGridInterpolator
interp = RegularGridInterpolator((inducing_row, inducing_row, inducing_row),
tt.numpy())
# ### Generate Random Points
import seaborn as sns
import pandas as pd
Ndraw = 1000
rs = np.cbrt(0.74**3*torch.rand(Ndraw).numpy())
# +
cos_thetas = np.random.uniform(low=-1, high=1, size=Ndraw)
sin_thetas = np.sqrt(1-cos_thetas*cos_thetas)
phis = np.random.uniform(low=0, high=2*math.pi, size=Ndraw)
pts = np.column_stack((rs*np.cos(phis)*sin_thetas,
rs*np.sin(phis)*sin_thetas,
rs*cos_thetas))
rs = np.sqrt(np.sum(np.square(pts[:,np.newaxis,:] - pts[np.newaxis,:,:]), axis=2))
# -
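# The `np.cbrt` draw of radii above samples points uniformly in volume: inverting the CDF $P(r \le x) = (x/R)^3$ gives $r = R\,u^{1/3}$ for uniform $u$. A quick numerical check of the median radius:

```python
import numpy as np

rng = np.random.default_rng(0)
R, N = 0.74, 100_000
r = R * np.cbrt(rng.random(N))  # inverse-CDF sampling of the radius
# the median radius of a uniform ball is R * 0.5**(1/3), not R / 2
print(np.median(r), R * 0.5 ** (1 / 3))
```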
sns.pairplot(pd.DataFrame(pts))
# ### Compute Covariance and Rejection Sample
sample_intensity = model(torch.tensor(pts).float()).sample(sample_shape=torch.Size((1,))).squeeze()
sample_intensity = sample_intensity.div(sample_intensity.max())
pts = pts[torch.rand(Ndraw) < sample_intensity, :]
print('Drew {:d}'.format(pts.shape[0]))
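# The comparison `torch.rand(Ndraw) < sample_intensity` is rejection (thinning) sampling: each candidate survives with probability equal to its normalized intensity. The same pattern in plain numpy with a toy intensity (not the GP sample):

```python
import numpy as np

rng = np.random.default_rng(1)
cand = rng.uniform(-1, 1, size=(10_000, 3))
intensity = np.exp(-np.sum(cand**2, axis=1))  # toy intensity, peaked at the origin
intensity = intensity / intensity.max()       # normalize so the max acceptance probability is 1
kept = cand[rng.random(len(cand)) < intensity]
print('kept', kept.shape[0], 'of', cand.shape[0])
```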
pts = pd.DataFrame(data=pts, columns=['x', 'y', 'z'])
sns.pairplot(pts, markers='.')
# +
import os
smoke_test = ('CI' in os.environ)
num_iter = 2 if smoke_test else 200
num_particles = 1 if smoke_test else 32
train_pts = torch.tensor(pts.values).double()
inducing_pts = inducing_pts.double()
def train(lr=0.01):
optimizer = pyro.optim.Adam({"lr": lr})
loss = pyro.infer.Trace_ELBO(num_particles=num_particles, vectorize_particles=True, retain_graph=True)
infer = pyro.infer.SVI(model.model, model.guide, optimizer, loss=loss)
model.train()
loader = tqdm.tqdm_notebook(range(num_iter))
for i in loader:
loss = infer.step(train_pts, inducing_pts)
loader.set_postfix(loss=loss)
train()
# -
| Notebooks/threeD_cox_old.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import warnings
import pandas as pd
import itertools
import scipy
import scipy.stats
import numpy as np
from functools import reduce
import re
import numpy
import subprocess as sp
import os
import sys
import time
warnings.filterwarnings("ignore")
#import argparse
# parser = argparse.ArgumentParser()
# parser.add_argument('-rn', "--rowname", nargs='?', help="Rownames for heatmaps // True or False", const=1, type=str, default='True')
# args = parser.parse_args()
class Analysis:
def __init__(self, data,samplesheet):
self.data = 'inputs/'+data
self.samplesheet = 'inputs/'+samplesheet
# self.heatmap_rowname = args.rowname
def input_check(self):
id_dict = self.get_ids('ID')
print("Number of Samples:",len(id_dict))
for x,y in id_dict.items():
print (x,':',y)
sample_id = self.get_ids('All')
if len(sample_id) != len(set(sample_id)):
raise Exception('Error: Check unique Sample IDs in: Groups.csv for error')
skeleton_input = pd.read_table(self.data)
metabolite_list = skeleton_input['Metabolite']
if len(metabolite_list) != len(set(metabolite_list)):
raise Exception('Error: Check Metabolite column for duplicates in : Skeleton_input.tsv')
if self.get_matrix(self.get_ids('All')).isnull().values.any():
raise Exception('Error: Check for Missing Values in Sample intensities: Skeleton_input.csv')
if len(sample_id) != len(self.get_matrix(self.get_ids('All')).columns):
raise Exception('Error: Check if Number of Samples in Groups.csv matches Skeleton_input.tsv')
def dir_create(self):
groups = pd.read_csv(self.samplesheet)
results_folder = 'DME-results-'+str(len(self.get_ids('True'))) + '-Samples/'
sub_directories = [results_folder+ subdir for subdir in ['Volcano','Heatmap','Tables','PCA','Inputs','Pathway']]
sub_directories.append(results_folder)
for direc in sub_directories:
if not os.path.exists(direc):
os.makedirs(direc)
def get_groups(self):
# Get corresponding IDs for each group in Groups.csv
project = pd.read_csv(self.samplesheet)
grouped_samples = {}
for condition in (project.Group.unique()):
if condition != 'Blank':
test = [x.split('.')[0] for x in project.loc[project['Group'] == condition, 'File'].tolist()]
grouped_samples[condition] = test
return (grouped_samples)
def get_ids(self,full):
# Return sample IDS for all samples including blanks
if full == 'All':
skeleton = pd.read_table(self.data)
spike_cols = [col for col in skeleton.columns if 'S' in col]
spike_cols.pop(0)
return (list(spike_cols))
# Get all sequence IDS (xml ids) from Groups.csv
if full == 'True':
project = pd.read_csv(self.samplesheet)
project = project.loc[project['Group'] != 'Blank']
all_samples = [x.split('.')[0] for x in project['File'].tolist()]
return(all_samples)
if full == 'Sample':
project = pd.read_csv(self.samplesheet)
project = project.loc[project['Group'] != 'Blank']
all_samples = [x.split('.')[0] for x in project['id'].tolist()]
return(all_samples)
# Get all blank IDS from skeleton output matrix
if full == 'Blank':
project = pd.read_csv(self.samplesheet)
project = project.loc[project['Group'] == 'Blank']
all_samples = [x.split('.')[0] for x in project['File'].tolist()]
return (list(all_samples))
if full == 'ID':
project = pd.read_csv(self.samplesheet)
grouped_samples = {}
for condition in (project.id.unique()):
test = [x.split('.')[0] for x in project.loc[project['id'] == condition, 'File'].tolist()]
test = ''.join(test)
grouped_samples[test] = condition
return(grouped_samples)
def sequence2id(self,result):
ids = self.get_ids('ID')
for x,y in ids.items():
#print(x,y)
result.rename(columns={x: y}, inplace=True)
# Returns matrix based on inputted IDS
return(result)
def get_matrix(self,ids):
skeleton_output_hybrid = pd.read_table(self.data)
skeleton_output_hybrid = skeleton_output_hybrid.set_index('Metabolite')
matrix = (skeleton_output_hybrid[skeleton_output_hybrid.columns.intersection(ids)])
return (matrix)
def get_imputed_full_matrix(self,full_matrix,param):
blank_matrix = pd.DataFrame(self.get_matrix(self.get_ids('Blank')))
blank_threshold = pd.DataFrame(blank_matrix.mean(axis=1)*3)+10000
blank_threshold['Metabolite'] = blank_threshold.index
blank_threshold.columns = ['blank_threshold','Metabolite']
test_dictionary = {}
for index, row in full_matrix.iterrows():
test_list = []
#print(index)
for val in row:
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
if val < blankthresh:
if param == 'detected':
test_list.append(blankthresh)
if param == 'corrected':
test_list.append(0)
else:
test_list.append(val)
test_dictionary[index] = test_list
df_test = (pd.DataFrame.from_dict(test_dictionary))
final = df_test.transpose()
final.columns = list(full_matrix)
return(final)
def compile_tests(self,results_folder,full_matrix):
test_compile = {}
blank_matrix = pd.DataFrame(self.get_matrix(self.get_ids('Blank')))
blank_threshold = pd.DataFrame(blank_matrix.mean(axis=1)*3)+10000
blank_threshold['Metabolite'] = blank_threshold.index
blank_threshold.columns = ['blank_threshold','Metabolite']
for file in os.listdir(results_folder):
if file.endswith('corrected.csv'):
#path = os.path.abspath(results_folder+file)
test = pd.read_csv(results_folder+file,keep_default_na=True)
test = test.fillna('NA')
test.index = test['Metabolite']
columns = ['ttest_pval', 'Log2FoldChange','impact_score']
changed_names = [file +'_'+ x for x in columns]
changed_names = [x.replace('.corrected.csv','') for x in changed_names]
df1 = pd.DataFrame(test, columns=columns)
df1.columns = changed_names
test_compile[file] = df1
merged_df = pd.concat(test_compile, axis =1)
merged_df.columns = [col[1] for col in merged_df.columns]
test_dictionary = {}
for index, row in full_matrix.iterrows():
test_list = []
#print(index)
for val in row:
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
if val < blankthresh:
test_list.append(blankthresh)
else:
test_list.append(val)
test_dictionary[index] = test_list
df_test = (pd.DataFrame.from_dict(test_dictionary))
final = df_test.transpose()
final.columns = list(full_matrix)
detection_dict = {}
for index, row in final.iterrows():
test_list = []
#print (row)
#print(index)
row_intensity = (pd.DataFrame(row))
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
detected = (row_intensity[row_intensity > float(blankthresh)].count())
detected = (detected[0])
detection_dict[index] = detected
test_dictionary = {}
for index, row in full_matrix.iterrows():
test_list = []
#print(index)
for val in row:
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
if val < blankthresh:
test_list.append('-')
else:
test_list.append(val)
test_dictionary[index] = test_list
df_test = (pd.DataFrame.from_dict(test_dictionary))
new_final = df_test.transpose()
new_final.columns = list(full_matrix)
detection_df = pd.DataFrame(list(detection_dict.items()))
detection_df.columns = ['Metabolite','Detection']
detection_df.index = detection_df['Metabolite']
#detection_df.to_csv()
#
compiled = new_final.join(merged_df, how='outer')
compiled_final = compiled.join(detection_df, how='outer')
#passing_df = detection_df.drop('Detection', 1)
return(compiled_final,final)
def dme_comparisons(self):
sample_groups = self.get_groups()
groups = pd.read_csv(self.samplesheet)
unique_groups = [x for x in groups.Group.unique() if x != 'Blank']
unique_comparisons = []
for L in range(0, len(unique_groups)+1):
for subset in itertools.combinations(unique_groups, L):
if len(subset)== 2:
unique_comparisons.append(subset)
reversed_groups = []
for comparison in unique_comparisons:
reversed_comparison = (tuple(((reversed(comparison)))))
#print(reversed_comparison)
reversed_groups.append(reversed_comparison)
# print(comparison)
# print(reversed_comparison)
# print("\n")
unique_comparisons = unique_comparisons + reversed_groups
return(unique_comparisons)
def t_test(self):
print("\n")
print("################")
print("Pipeline executed:")
self.input_check()
print("\n")
print("Creating Directories...")
print("\n")
# Create all necessary directories
self.dir_create()
groups = pd.read_csv(self.samplesheet)
unique_groups = [x for x in groups.Group.unique()]
# get all unique comparisons from Groups.csv
unique_comparisons = self.dme_comparisons()
#Meta Data on Metabolites
standard = pd.read_table(self.data)
detection_column_index = standard.columns.get_loc("detections")
standard = standard.iloc[:,0:detection_column_index]
# Set directory for results folder
results_folder = 'DME-results-'+str(len(self.get_ids('True'))) + '-Samples/'
# Get full matrix of intensity values with Sequence IDS replaced with ID from Groups.csv
full_matrix = self.get_matrix(self.get_ids(full='True'))
full_matrix = self.sequence2id(full_matrix)
full_matrix_name = results_folder+'Tables/'+'Intensity.values.csv'
detected_matrix_name = results_folder+'Tables/'+'Intensity.detected.values.csv'
full_matrix.to_csv(full_matrix_name)
corrected_matrix = self.sequence2id(self.get_imputed_full_matrix(self.get_matrix(ids=self.get_ids('True')),param ='corrected'))
corrected_matrix.index.name = 'Metabolite'
corrected_matrix.to_csv(results_folder+'Tables/'+'Intensity.corrected.values.csv')
for comparison in unique_comparisons:
matrices = []
sample_groups = self.get_groups()
#print (comparison[0])
comparison_ids = []
for condition in comparison:
if condition in sample_groups:
ids = (sample_groups[condition])
#print (ids)
matrices.append((self.get_imputed_full_matrix(self.get_matrix(ids=ids),param='detected')))
comparison_ids.append(ids)
sample_ids = [item for sublist in comparison_ids for item in sublist]
#generate samplesheet just for comparison
samplesheet = pd.read_csv(self.samplesheet)
samplesheet_comparison = samplesheet.loc[samplesheet['File'].isin(sample_ids)]
samplesheet_comparison_name = results_folder+'PCA/samplesheet.csv'
samplesheet_comparison.to_csv(samplesheet_comparison_name)
#print ((matrices.shape())
group_sample_number = int((matrices[0].shape)[1])
group_sample_number_2 = int(group_sample_number+ ((matrices[1].shape)[1]))
#print(comparison_ids)
pca_matrix = reduce(lambda left,right: pd.merge(left,right,left_index=True, right_index=True), matrices)
#pca_matrix = pd.DataFrame(pca_matrix).set_index('Metabolite')
pca_matrix.index.name = 'Metabolite'
comparison_pca_name = (results_folder+'PCA/'+comparison[0]+'_vs_'+comparison[1]+'_PCA.html').replace(" ", "")
comparison_pca = results_folder+'PCA/PCA_matrix.csv'
pca_matrix.to_csv(comparison_pca)
proc = sp.Popen(['python','-W ignore','pca.py',comparison_pca,samplesheet_comparison_name,comparison_pca_name])
matrices.append(pd.DataFrame(self.get_matrix(self.get_ids(full='Blank'))))
df_m = reduce(lambda left,right: pd.merge(left,right,left_index=True, right_index=True), matrices)
# print(df_m.head())
# df_blankless = df_m.copy()
#print(group_sample_number,group_sample_number_2)
# print(df_blankless.head())
#return(df_blankless)
### Calculate Pearson Correlation
def get_correlation(matrix,group):
temp_pearson_dict ={}
cov = samplesheet.loc[samplesheet['Group'] == group]['Covariate']
for row in matrix.iterrows():
index, data = row
pearson_correl = np.corrcoef(data, cov)[0, 1]
temp_pearson_dict[index] = pearson_correl
pearson_df = pd.DataFrame([temp_pearson_dict]).T
pearson_df.columns = [group]
return(pearson_df)
# Not blank corrected test
# df_blankless['ttest_pval'] = ((scipy.stats.ttest_ind(df_blankless.iloc[:, :group_sample_number], df_blankless.iloc[:, group_sample_number:group_sample_number_2], axis=1))[1])
# group_1_df = (pd.DataFrame(df_blankless.iloc[:, :group_sample_number]))
# group_2_df = (pd.DataFrame(df_blankless.iloc[:, group_sample_number:group_sample_number_2]))
# pearson_1 = get_correlation(group_1_df,comparison[0])
# pearson_2 = get_correlation(group_2_df,comparison[1])
# merged_pearson = pearson_1.join(pearson_2, how='outer')
# merged_pearson['Metabolite'] = merged_pearson.index
# df_blankless[comparison[0]+'_Mean'] = (group_1_df.mean(axis=1))
# df_blankless[comparison[1]+'_Mean'] = (group_2_df.mean(axis=1))
# df_blankless['Log2FoldChange'] = np.log2(((group_1_df.mean(axis=1)))/((group_2_df.mean(axis=1))))
# #df_blankless = df_blankless.round(2)
# final_blankless = pd.merge(standard, df_blankless, on='Metabolite')
# blankless_name = (results_folder+comparison[0]+'_vs_'+comparison[1]+'.uncorrected.csv')
#final_blankless = self.sequence2id(final_blankless)
#final_blankless.to_csv(blankless_name)
# Blank corrected
blank_matrix = pd.DataFrame(self.get_matrix(self.get_ids('Blank')))
blank_matrix.to_csv(results_folder+'Tables/'+'blank_intensity.csv')
blank_threshold = pd.DataFrame(blank_matrix.mean(axis=1)*3)+10000
blank_threshold['Metabolite'] = blank_threshold.index
blank_threshold.columns = ['blank_threshold','Metabolite']
# test_dictionary = {}
# for index, row in df_m.iterrows():
# test_list = []
# #print(index)
# for val in row:
# blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
# if val < blankthresh:
# test_list.append(blankthresh)
# else:
# test_list.append(val)
# test_dictionary[index] = test_list
# df_test = (pd.DataFrame.from_dict(test_dictionary))
# final = df_test.transpose()
# final.columns = list(df_m)
# df_m = final.copy()
# df_m['Metabolite'] = df_m.index
df_m['ttest_pval'] = ((scipy.stats.ttest_ind(df_m.iloc[:, :group_sample_number], df_m.iloc[:, group_sample_number:group_sample_number_2], axis=1))[1])
df_m['1/pvalue'] = float(1)/df_m['ttest_pval']
group_1_df = (pd.DataFrame(df_m.iloc[:, :group_sample_number]))
group_2_df = (pd.DataFrame(df_m.iloc[:, group_sample_number:group_sample_number_2]))
df_m[comparison[0]+'_Mean'] = (group_1_df.mean(axis=1))
df_m[comparison[1]+'_Mean'] = (group_2_df.mean(axis=1))
df_m['Log2FoldChange'] = np.log2(((group_1_df.mean(axis=1)))/((group_2_df.mean(axis=1))))
df_m['LogFoldChange'] = (((group_1_df.mean(axis=1)))/((group_2_df.mean(axis=1))))
final_df_m = pd.merge(standard, df_m, on='Metabolite')
final_df_m = pd.merge(final_df_m,blank_threshold,on='Metabolite')
# Add detection column
for col in blank_matrix.columns:
final_df_m[col] = blank_matrix[col].values
comparison_name = (results_folder+'Tables/'+comparison[0]+'_vs_'+comparison[1]+'.corrected.csv').replace(" ", "")
final_df_m = self.sequence2id(final_df_m)
final_df_m['combined_mean'] = (final_df_m[comparison[0]+'_Mean']+final_df_m[comparison[1]+'_Mean'])/2
final_df_m['impact_score'] = (((2**abs(final_df_m['Log2FoldChange']))*final_df_m['combined_mean'])/final_df_m['ttest_pval'])/1000000
final_df_m.impact_score = final_df_m.impact_score.round()
final_df_m['impact_score'] = final_df_m['impact_score'].fillna(0)
####Calculate Detection
detection_dict = {}
comparison_matrix = group_1_df.join(group_2_df, how='outer')
for index, row in comparison_matrix.iterrows():
test_list = []
#print (row)
#print(index)
row_intensity = (pd.DataFrame(row))
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
detected = (row_intensity[row_intensity > float(blankthresh)].count())
detected = (detected[0])
detection_dict[index] = detected
detection_df = pd.DataFrame(list(detection_dict.items()))
detection_df.columns = ['Metabolite','Detection']
detection_df.index = detection_df['Metabolite']
final_df_m = pd.merge(final_df_m,detection_df,on='Metabolite')
# Add impact score
print("Analysis",":",comparison[0]+'_vs_'+comparison[1])
print('Results Generated: %s'%comparison_name)
final_df_m = final_df_m.fillna('NA')
# final_df_m = pd.merge(final_df_m,merged_pearson,on='Metabolite',how='outer')
final_df_m.to_csv(comparison_name)
test = pd.read_csv(comparison_name)
print("Significant Metabolites P-value < 0.05:",len(test.loc[test['ttest_pval'] < 0.05]))
#Generate Volcano
print("Generating Volcano Plot: %s" %comparison_name)
proc = sp.Popen(['Rscript','scripts/volcano.plot.R',comparison_name])
# Generate heatmaps
pvalues = [str(0.05)]
print("Generating Pvalue < 0.05 Heatmap: %s"%comparison_name)
for pvalue in pvalues:
proc = sp.Popen(['Rscript','scripts/heatmap.R',comparison_name,pvalue,'TRUE'])
# Generate heatmap with all expressed metabolites
print("\n")
# Generate 3-D PCA
print("Compiling Comparison - Results - output: dme.compiled.csv")
compiled, imputed_intensities = self.compile_tests(results_folder+'Tables/',full_matrix)
compiled = compiled.fillna('-')
def change_column_order(df, col_name, index):
cols = df.columns.tolist()
cols.remove(col_name)
cols.insert(index, col_name)
return df[cols]
compiled.to_csv(results_folder+'Tables/'+'dme.compiled.csv')
dme_meta_data = standard[['Metabolite','Formula','Polarity (z)','mz','ppm','RT','RT_range']]
dme_meta_data.index = dme_meta_data['Metabolite']
compiled = pd.merge(dme_meta_data,compiled,on='Metabolite')
compiled = change_column_order(compiled, 'Detection', 7)
compiled.to_csv(results_folder+'Tables/'+'dme.compiled.csv')
imputed_intensities.index.name = "Metabolite"
#imputed_intensities = imputed_intensities.rename(columns={ imputed_intensities.columns[0]: "Metabolite" })
imputed_intensities.to_csv(results_folder+'Tables/'+'Intensity.detected.values.csv')
print("Generating Full Heatmap")
proc = sp.Popen(['Rscript','scripts/heatmap.full.R',full_matrix_name,'nonimputed'])
proc = sp.Popen(['Rscript','scripts/heatmap.full.R',detected_matrix_name,'imputed'])
proc = sp.Popen(['python','-W ignore','pca.py',detected_matrix_name,self.samplesheet,(results_folder+'PCA/'+'PCA.full.html')])
os.remove(comparison_pca)
os.remove(samplesheet_comparison_name)
from shutil import copyfile
copyfile('inputs/Groups.csv', results_folder+'Inputs/'+'Groups.csv')
copyfile('inputs/skeleton_output.tsv', results_folder+'Inputs/'+'skeleton_output.tsv')
table_directory = results_folder+'Tables'
print("resultsfolder path")
print('#######')
# for file in os.listdir(results_folder+'Tables'):
# if file.endswith('corrected.csv'):
path = os.path.abspath(results_folder+'Tables')
output_path = os.path.abspath(results_folder+'Pathway')
proc = sp.Popen(['Rscript','scripts/pathway.R',path,output_path])
# time.sleep(2)
print("\n")
print("\n")
print("\n")
print("#######")
print("\n")
print("\n")
print("\n")
test = Analysis(data='skeleton_output.tsv',samplesheet='Groups.csv')
test.t_test()
# -
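# The per-row loops in `get_imputed_full_matrix` can be vectorized: `clip` and `where` apply the blank threshold in one shot. A sketch with made-up intensities (metabolite and sample names are illustrative):

```python
import pandas as pd

# toy intensity matrix (rows = metabolites, columns = samples)
full = pd.DataFrame({'S1': [5_000, 80_000], 'S2': [120_000, 9_000]},
                    index=['met_A', 'met_B'])
# toy blank intensities for the same metabolites
blank = pd.DataFrame({'B1': [10_000, 20_000], 'B2': [14_000, 24_000]},
                     index=['met_A', 'met_B'])

# threshold rule from the pipeline: 3x mean blank intensity + 10000
threshold = blank.mean(axis=1) * 3 + 10_000

# 'detected' mode: values below the threshold are raised to it
detected = full.clip(lower=threshold, axis=0)
# 'corrected' mode: values below the threshold are zeroed
corrected = full.where(full.ge(threshold, axis=0), 0)
print(detected)
print(corrected)
```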
test.get_ids('True')
train = pd.read_table('inputs/skeleton_output.tsv')
null_columns=train.columns[train.isnull().any()]
train[null_columns].isnull().sum()
groups = test.get_groups()
groups['OLFR2']
samplesheet = pd.read_csv('inputs/Groups.csv')
samplesheet.head()
cov = samplesheet.loc[samplesheet['Group'] == 'OLFR2']['Covariate']
cov
# +
import warnings
import pandas as pd
import itertools
import scipy
import scipy.stats
import numpy as np
from functools import reduce
import re
import numpy
import subprocess as sp
import os
import sys
import time
warnings.filterwarnings("ignore")
#import argparse
# parser = argparse.ArgumentParser()
# parser.add_argument('-rn', "--rowname", nargs='?', help="Rownames for heatmaps // True or False", const=1, type=str, default='True')
# args = parser.parse_args()
class Analysis:
def __init__(self, data,samplesheet):
self.data = 'inputs/'+data
self.samplesheet = 'inputs/'+samplesheet
# self.heatmap_rowname = args.rowname
def input_check(self):
id_dict = self.get_ids('ID')
print("Number of Samples:",len(id_dict))
for x,y in id_dict.items():
print (x,':',y)
sample_id = self.get_ids('All')
if len(sample_id) != len(set(sample_id)):
raise Exception('Error: Check unique Sample IDs in: Groups.csv for error')
skeleton_input = pd.read_table(self.data)
metabolite_list = skeleton_input['Metabolite']
if len(metabolite_list) != len(set(metabolite_list)):
raise Exception('Error: Check Metabolite column for duplicates in : Skeleton_input.tsv')
if self.get_matrix(self.get_ids('All')).isnull().values.any():
raise Exception('Error: Check for Missing Values in Sample intensities: Skeleton_input.csv')
if len(sample_id) != len(self.get_matrix(self.get_ids('All')).columns):
raise Exception('Error: Check if Number of Samples in Groups.csv matches Skeleton_input.tsv')
def dir_create(self):
groups = pd.read_csv(self.samplesheet)
results_folder = 'DME-results-'+str(len(self.get_ids('True'))) + '-Samples/'
sub_directories = [results_folder+ subdir for subdir in ['Volcano','Heatmap','Tables','PCA','Inputs','Pathway','Correlation']]
sub_directories.append(results_folder)
for direc in sub_directories:
if not os.path.exists(direc):
os.makedirs(direc)
def get_groups(self):
# Get corresponding IDs for each group in Groups.csv
project = pd.read_csv(self.samplesheet)
grouped_samples = {}
for condition in (project.Group.unique()):
if condition != 'Blank':
test = [x.split('.')[0] for x in project.loc[project['Group'] == condition, 'File'].tolist()]
grouped_samples[condition] = test
return (grouped_samples)
def get_ids(self,full):
# Return sample IDS for all samples including blanks
if full == 'All':
skeleton = pd.read_table(self.data)
spike_cols = [col for col in skeleton.columns if 'S' in col]
spike_cols.pop(0)
return (list(spike_cols))
# Get all sequence IDS (xml ids) from Groups.csv
if full == 'True':
project = pd.read_csv(self.samplesheet)
project = project.loc[project['Group'] != 'Blank']
all_samples = [x.split('.')[0] for x in project['File'].tolist()]
return(all_samples)
if full == 'Sample':
project = pd.read_csv(self.samplesheet)
project = project.loc[project['Group'] != 'Blank']
all_samples = [x.split('.')[0] for x in project['id'].tolist()]
return(all_samples)
# Get all blank IDS from skeleton output matrix
if full == 'Blank':
project = pd.read_csv(self.samplesheet)
project = project.loc[project['Group'] == 'Blank']
all_samples = [x.split('.')[0] for x in project['File'].tolist()]
return (list(all_samples))
if full == 'ID':
project = pd.read_csv(self.samplesheet)
grouped_samples = {}
for condition in (project.id.unique()):
test = [x.split('.')[0] for x in project.loc[project['id'] == condition, 'File'].tolist()]
test = ''.join(test)
grouped_samples[test] = condition
return(grouped_samples)
def sequence2id(self,result):
ids = self.get_ids('ID')
for x,y in ids.items():
#print(x,y)
result.rename(columns={x: y}, inplace=True)
# Returns matrix based on inputted IDS
return(result)
def get_matrix(self,ids):
skeleton_output_hybrid = pd.read_table(self.data)
skeleton_output_hybrid = skeleton_output_hybrid.set_index('Metabolite')
matrix = (skeleton_output_hybrid[skeleton_output_hybrid.columns.intersection(ids)])
return (matrix)
def get_imputed_full_matrix(self,full_matrix):
blank_matrix = pd.DataFrame(self.get_matrix(self.get_ids('Blank')))
blank_threshold = pd.DataFrame(blank_matrix.mean(axis=1)*3)+10000
blank_threshold['Metabolite'] = blank_threshold.index
blank_threshold.columns = ['blank_threshold','Metabolite']
test_dictionary = {}
for index, row in full_matrix.iterrows():
test_list = []
#print(index)
for val in row:
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
if val < blankthresh:
test_list.append(blankthresh)
else:
test_list.append(val)
test_dictionary[index] = test_list
df_test = (pd.DataFrame.from_dict(test_dictionary))
final = df_test.transpose()
final.columns = list(full_matrix)
return(final)
def compile_tests(self,results_folder,full_matrix):
test_compile = {}
blank_matrix = pd.DataFrame(self.get_matrix(self.get_ids('Blank')))
blank_threshold = pd.DataFrame(blank_matrix.mean(axis=1)*3)+10000
blank_threshold['Metabolite'] = blank_threshold.index
blank_threshold.columns = ['blank_threshold','Metabolite']
for file in os.listdir(results_folder):
if file.endswith('corrected.csv'):
#path = os.path.abspath(results_folder+file)
test = pd.read_csv(results_folder+file,keep_default_na=True)
test = test.fillna('NA')
test.index = test['Metabolite']
columns = ['ttest_pval', 'Log2FoldChange','impact_score']
changed_names = [file +'_'+ x for x in columns]
changed_names = [x.replace('.corrected.csv','') for x in changed_names]
df1 = pd.DataFrame(test, columns=columns)
df1.columns = changed_names
test_compile[file] = df1
merged_df = pd.concat(test_compile, axis =1)
merged_df.columns = [col[1] for col in merged_df.columns]
test_dictionary = {}
for index, row in full_matrix.iterrows():
test_list = []
#print(index)
for val in row:
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
if val < blankthresh:
test_list.append(blankthresh)
else:
test_list.append(val)
test_dictionary[index] = test_list
df_test = (pd.DataFrame.from_dict(test_dictionary))
final = df_test.transpose()
final.columns = list(full_matrix)
detection_dict = {}
for index, row in final.iterrows():
test_list = []
#print (row)
#print(index)
row_intensity = (pd.DataFrame(row))
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
detected = (row_intensity[row_intensity > float(blankthresh)].count())
detected = (detected[0])
detection_dict[index] = detected
test_dictionary = {}
for index, row in full_matrix.iterrows():
test_list = []
#print(index)
for val in row:
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
if val < blankthresh:
test_list.append('-')
else:
test_list.append(val)
test_dictionary[index] = test_list
df_test = (pd.DataFrame.from_dict(test_dictionary))
new_final = df_test.transpose()
new_final.columns = list(full_matrix)
detection_df = pd.DataFrame(list(detection_dict.items()))
detection_df.columns = ['Metabolite','Detection']
detection_df.index = detection_df['Metabolite']
#detection_df.to_csv()
#
compiled = new_final.join(merged_df, how='outer')
compiled_final = compiled.join(detection_df, how='outer')
#passing_df = detection_df.drop('Detection', 1)
return(compiled_final,final)
def dme_comparisons(self):
sample_groups = self.get_groups()
groups = pd.read_csv(self.samplesheet)
unique_groups = [x for x in groups.Group.unique() if x != 'Blank']
unique_comparisons = []
for L in range(0, len(unique_groups)+1):
for subset in itertools.combinations(unique_groups, L):
if len(subset)== 2:
unique_comparisons.append(subset)
reversed_groups = []
for comparison in unique_comparisons:
reversed_comparison = (tuple(((reversed(comparison)))))
#print(reversed_comparison)
reversed_groups.append(reversed_comparison)
# print(comparison)
# print(reversed_comparison)
# print("\n")
unique_comparisons = unique_comparisons + reversed_groups
return(unique_comparisons)
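Building the 2-element combinations and then appending their reversals, as `dme_comparisons` does above, yields exactly the set of ordered pairs, so it can be cross-checked against `itertools.permutations` (the group names below are hypothetical):

```python
import itertools

# Hypothetical group names standing in for Groups.csv.
unique_groups = ['Control', 'TreatmentA', 'TreatmentB']

# Same combinations-plus-reversals construction as above.
unique_comparisons = []
for L in range(0, len(unique_groups) + 1):
    for subset in itertools.combinations(unique_groups, L):
        if len(subset) == 2:
            unique_comparisons.append(subset)
unique_comparisons += [tuple(reversed(c)) for c in unique_comparisons[:]]

# All ordered pairs in one call.
assert sorted(unique_comparisons) == sorted(itertools.permutations(unique_groups, 2))
```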
def t_test(self):
print("\n")
print("################")
print("Pipeline executed:")
self.input_check()
print("\n")
print("Creating Directories...")
print("\n")
# Create all necessary directories
self.dir_create()
groups = pd.read_csv(self.samplesheet)
unique_groups = [x for x in groups.Group.unique()]
# get all unique comparisons from Groups.csv
unique_comparisons = self.dme_comparisons()
#Meta Data on Metabolites
standard = pd.read_table(self.data)
detection_column_index = standard.columns.get_loc("detections")
standard = standard.iloc[:,0:detection_column_index]
# Set directory for results folder
results_folder = 'DME-results-'+str(len(self.get_ids('True'))) + '-Samples/'
# Get full matrix of intensity values with Sequence IDS replaced with ID from Groups.csv
full_matrix = self.get_matrix(self.get_ids(full='True'))
full_matrix = self.sequence2id(full_matrix)
full_matrix_name = results_folder+'Tables/'+'Intensity.values.csv'
detected_matrix_name = results_folder+'Tables/'+'Intensity.detected.values.csv'
full_matrix.to_csv(full_matrix_name)
for comparison in unique_comparisons:
matrices = []
sample_groups = self.get_groups()
#print (comparison[0])
comparison_ids = []
for condition in comparison:
if condition in sample_groups:
ids = (sample_groups[condition])
#print (ids)
matrices.append((self.get_imputed_full_matrix(self.get_matrix(ids=ids))))
comparison_ids.append(ids)
sample_ids = [item for sublist in comparison_ids for item in sublist]
#generate samplesheet just for comparison
samplesheet = pd.read_csv(self.samplesheet)
samplesheet_comparison = samplesheet.loc[samplesheet['File'].isin(sample_ids)]
samplesheet_comparison_name = results_folder+'PCA/samplesheet.csv'
samplesheet_comparison.to_csv(samplesheet_comparison_name)
#print ((matrices.shape())
group_sample_number = int((matrices[0].shape)[1])
group_sample_number_2 = int(group_sample_number+ ((matrices[1].shape)[1]))
#print(comparison_ids)
pca_matrix = reduce(lambda left,right: pd.merge(left,right,left_index=True, right_index=True), matrices)
#pca_matrix = pd.DataFrame(pca_matrix).set_index('Metabolite')
pca_matrix.index.name = 'Metabolite'
comparison_pca_name = (results_folder+'PCA/'+comparison[0]+'_vs_'+comparison[1]+'_PCA.html').replace(" ", "")
comparison_pca = results_folder+'PCA/PCA_matrix.csv'
pca_matrix.to_csv(comparison_pca)
proc = sp.Popen(['python','-W ignore','pca.py',comparison_pca,samplesheet_comparison_name,comparison_pca_name])
matrices.append(pd.DataFrame(self.get_matrix(self.get_ids(full='Blank'))))
df_m = reduce(lambda left,right: pd.merge(left,right,left_index=True, right_index=True), matrices)
# print(df_m.head())
# df_blankless = df_m.copy()
#print(group_sample_number,group_sample_number_2)
# print(df_blankless.head())
#return(df_blankless)
### Calculate Pearson Correlation
def get_correlation(matrix,group):
temp_pearson_dict ={}
cov = samplesheet.loc[samplesheet['Group'] == group]['Covariate']
for row in matrix.iterrows():
index, data = row
pearson_correl = np.corrcoef(data, cov)[0, 1]
temp_pearson_dict[index] = pearson_correl
pearson_df = pd.DataFrame([temp_pearson_dict]).T
pearson_df.columns = [group]
return(pearson_df)
# Not blank corrected test
# df_blankless['ttest_pval'] = ((scipy.stats.ttest_ind(df_blankless.iloc[:, :group_sample_number], df_blankless.iloc[:, group_sample_number:group_sample_number_2], axis=1))[1])
# group_1_df = (pd.DataFrame(df_blankless.iloc[:, :group_sample_number]))
# group_2_df = (pd.DataFrame(df_blankless.iloc[:, group_sample_number:group_sample_number_2]))
# pearson_1 = get_correlation(group_1_df,comparison[0])
# pearson_2 = get_correlation(group_2_df,comparison[1])
# merged_pearson = pearson_1.join(pearson_2, how='outer')
# merged_pearson['Metabolite'] = merged_pearson.index
# df_blankless[comparison[0]+'_Mean'] = (group_1_df.mean(axis=1))
# df_blankless[comparison[1]+'_Mean'] = (group_2_df.mean(axis=1))
# df_blankless['Log2FoldChange'] = np.log2(((group_1_df.mean(axis=1)))/((group_2_df.mean(axis=1))))
# #df_blankless = df_blankless.round(2)
# final_blankless = pd.merge(standard, df_blankless, on='Metabolite')
# blankless_name = (results_folder+comparison[0]+'_vs_'+comparison[1]+'.uncorrected.csv')
#final_blankless = self.sequence2id(final_blankless)
#final_blankless.to_csv(blankless_name)
# Blank corrected
blank_matrix = pd.DataFrame(self.get_matrix(self.get_ids('Blank')))
blank_matrix.to_csv(results_folder+'Tables/'+'blank_intensity.csv')
blank_threshold = pd.DataFrame(blank_matrix.mean(axis=1)*3)+10000
blank_threshold['Metabolite'] = blank_threshold.index
blank_threshold.columns = ['blank_threshold','Metabolite']
# test_dictionary = {}
# for index, row in df_m.iterrows():
# test_list = []
# #print(index)
# for val in row:
# blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
# if val < blankthresh:
# test_list.append(blankthresh)
# else:
# test_list.append(val)
# test_dictionary[index] = test_list
# df_test = (pd.DataFrame.from_dict(test_dictionary))
# final = df_test.transpose()
# final.columns = list(df_m)
# df_m = final.copy()
# df_m['Metabolite'] = df_m.index
df_m['ttest_pval'] = ((scipy.stats.ttest_ind(df_m.iloc[:, :group_sample_number], df_m.iloc[:, group_sample_number:group_sample_number_2], axis=1))[1])
df_m['1/pvalue'] = float(1)/df_m['ttest_pval']
group_1_df = (pd.DataFrame(df_m.iloc[:, :group_sample_number]))
group_2_df = (pd.DataFrame(df_m.iloc[:, group_sample_number:group_sample_number_2]))
df_m[comparison[0]+'_Mean'] = (group_1_df.mean(axis=1))
df_m[comparison[1]+'_Mean'] = (group_2_df.mean(axis=1))
df_m['Log2FoldChange'] = np.log2(((group_1_df.mean(axis=1)))/((group_2_df.mean(axis=1))))
df_m['LogFoldChange'] = (((group_1_df.mean(axis=1)))/((group_2_df.mean(axis=1))))
final_df_m = pd.merge(standard, df_m, on='Metabolite')
final_df_m = pd.merge(final_df_m,blank_threshold,on='Metabolite')
# Add detection column
for col in blank_matrix.columns:
final_df_m[col] = blank_matrix[col].values
comparison_name = (results_folder+'Tables/'+comparison[0]+'_vs_'+comparison[1]+'.corrected.csv').replace(" ", "")
final_df_m = self.sequence2id(final_df_m)
final_df_m['combined_mean'] = (final_df_m[comparison[0]+'_Mean']+final_df_m[comparison[1]+'_Mean'])/2
final_df_m['impact_score'] = (((2**abs(final_df_m['Log2FoldChange']))*final_df_m['combined_mean'])/final_df_m['ttest_pval'])/1000000
final_df_m.impact_score = final_df_m.impact_score.round()
final_df_m['impact_score'] = final_df_m['impact_score'].fillna(0)
####Calculate Detection
detection_dict = {}
comparison_matrix = group_1_df.join(group_2_df, how='outer')
for index, row in comparison_matrix.iterrows():
test_list = []
#print (row)
#print(index)
row_intensity = (pd.DataFrame(row))
blankthresh = blank_threshold.loc[index, ['blank_threshold']][0]
detected = (row_intensity[row_intensity > float(blankthresh)].count())
detected = (detected[0])
detection_dict[index] = detected
detection_df = pd.DataFrame(list(detection_dict.items()))
detection_df.columns = ['Metabolite','Detection']
detection_df.index = detection_df['Metabolite']
final_df_m = pd.merge(final_df_m,detection_df,on='Metabolite')
# Add impact score
print("Analysis",":",comparison[0]+'_vs_'+comparison[1])
print('Results Generated: %s'%comparison_name)
final_df_m = final_df_m.fillna('NA')
# final_df_m = pd.merge(final_df_m,merged_pearson,on='Metabolite',how='outer')
final_df_m.to_csv(comparison_name)
test = pd.read_csv(comparison_name)
print("Significant Metabolites P-value < 0.05:",len(test.loc[test['ttest_pval'] < 0.05]))
#Generate Volcano
print("Generating Volcano Plot: %s" %comparison_name)
proc = sp.Popen(['Rscript','scripts/volcano.plot.R',comparison_name])
# Generate heatmaps
pvalues = [str(0.05)]
print("Generating Pvalue < 0.05 Heatmap: %s"%comparison_name)
for pvalue in pvalues:
proc = sp.Popen(['Rscript','scripts/heatmap.R',comparison_name,pvalue,'TRUE'])
# Generate heatmap with all expressed metabolites
print("\n")
# Generate 3-D PCA
print("Compiling Comparison - Results - output: dme.compiled.csv")
compiled, imputed_intensities = self.compile_tests(results_folder+'Tables/',full_matrix)
compiled = compiled.fillna('-')
def change_column_order(df, col_name, index):
cols = df.columns.tolist()
cols.remove(col_name)
cols.insert(index, col_name)
return df[cols]
compiled.to_csv(results_folder+'Tables/'+'dme.compiled.csv')
dme_meta_data = standard[['Metabolite','Formula','Polarity (z)','mz','ppm','RT','RT_range']]
dme_meta_data.index = dme_meta_data['Metabolite']
compiled = pd.merge(dme_meta_data,compiled,on='Metabolite')
compiled = change_column_order(compiled, 'Detection', 7)
compiled.to_csv(results_folder+'Tables/'+'dme.compiled.csv')
imputed_intensities.index.name = "Metabolite"
#imputed_intensities = imputed_intensities.rename(columns={ imputed_intensities.columns[0]: "Metabolite" })
imputed_intensities.to_csv(results_folder+'Tables/'+'Intensity.detected.values.csv')
print("Generating Full Heatmap")
proc = sp.Popen(['Rscript','scripts/heatmap.full.R',full_matrix_name,'nonimputed'])
proc = sp.Popen(['Rscript','scripts/heatmap.full.R',detected_matrix_name,'imputed'])
proc = sp.Popen(['python','-W ignore','pca.py',detected_matrix_name,self.samplesheet,(results_folder+'PCA/'+'PCA.full.html')])
os.remove(comparison_pca)
os.remove(samplesheet_comparison_name)
from shutil import copyfile
copyfile('inputs/Groups.csv', results_folder+'Inputs/'+'Groups.csv')
copyfile('inputs/skeleton_output.tsv', results_folder+'Inputs/'+'skeleton_output.tsv')
table_directory = results_folder+'Tables'
print("results folder path: %s" % os.path.abspath(results_folder))
print('#######')
# for file in os.listdir(results_folder+'Tables'):
# if file.endswith('corrected.csv'):
path = os.path.abspath(results_folder+'Tables')
output_path = os.path.abspath(results_folder+'Pathway')
proc = sp.Popen(['Rscript','scripts/pathway.R',path,output_path])
proc = sp.Popen(['python','-W','ignore','scripts/impact.correlation.py',results_folder+'Tables/dme.compiled.csv'])
# time.sleep(2)
print("\n")
print("\n")
print("\n")
print("#######")
print("\n")
print("\n")
print("\n")
test = Analysis(data='skeleton_output.tsv',samplesheet='Groups.csv')
test.t_test()
# -
| development/dev.metabolyze.ipynb |
# ---
# jupyter:
# jupytext:
# formats: python_scripts//py:percent,notebooks//ipynb
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %% [markdown]
# # Exercise 02
#
# The goal of this exercise is to evaluate the impact of using an arbitrary
# integer encoding for categorical variables along with a linear
# classification model such as Logistic Regression.
#
# To do so, let's try to use `OrdinalEncoder` to preprocess the categorical
# variables. This preprocessor is assembled in a pipeline with
# `LogisticRegression`. The performance of the pipeline can be evaluated as
# usual by cross-validation and then compared to the score obtained when using
# `OneHotEncoder` or to some other baseline score.
#
# Because `OrdinalEncoder` can raise errors if it sees an unknown category at
# prediction time, we need to pre-compute the list of all possible categories
# ahead of time:
#
# ```python
# categories = [data[column].unique()
# for column in data[categorical_columns]]
# OrdinalEncoder(categories=categories)
# ```
# %%
import pandas as pd
df = pd.read_csv("https://www.openml.org/data/get_csv/1595261/adult-census.csv")
# Or use the local copy:
# df = pd.read_csv('../datasets/adult-census.csv')
# %%
target_name = "class"
target = df[target_name].to_numpy()
data = df.drop(columns=[target_name, "fnlwgt"])
categorical_columns = [c for c in data.columns
if data[c].dtype.kind not in ["i", "f"]]
data_categorical = data[categorical_columns]
# %%
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.linear_model import LogisticRegression
# TODO: write me!
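One possible way to fill in the TODO above, sketched on a tiny synthetic stand-in for `data_categorical` and `target` so it runs without downloading the census data (the column names and values below are made up):

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.linear_model import LogisticRegression

# Made-up stand-in for data_categorical / target.
data_categorical = pd.DataFrame({
    'workclass': ['Private', 'State-gov'] * 10,
    'sex': ['Male', 'Female'] * 10,
})
target = ['<=50K', '>50K'] * 10

# Pre-compute the categories so OrdinalEncoder never sees an unknown one.
categories = [data_categorical[column].unique()
              for column in data_categorical.columns]
model = make_pipeline(OrdinalEncoder(categories=categories),
                      LogisticRegression(max_iter=1000))
scores = cross_val_score(model, data_categorical, target, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```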
| rendered_notebooks/03_basic_preprocessing_categorical_variables_exercise_01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.append('..')
import torch, torchuq
from torchuq.evaluate import distribution
from torchuq.transform.conformal import ConformalCalibrator
from torchuq.dataset import create_example_regression
# -
# In this very simple example, we create a synthetic prediction (which is a set of Gaussian distributions), plot them, and recalibrate them with conformal calibration.
predictions, labels = create_example_regression()
# The example predictions are intentionally incorrect (i.e. the label is not drawn from the predictions).
# We will recalibrate the distribution with a powerful recalibration algorithm called conformal calibration. It takes as input the predictions and the labels, and learns a recalibration map that can be applied to new data (here for illustration purposes we apply it to the original data).
calibrator = ConformalCalibrator(input_type='distribution', interpolation='linear')
calibrator.train(predictions, labels)
adjusted_predictions = calibrator(predictions)
# We can plot these distribution predictions as a sequence of density functions, and the labels as the cross-shaped markers.
# As shown by the plot, the original predictions have systematically incorrect variance and mean, which is fixed by the recalibration algorithm.
distribution.plot_density_sequence(predictions, labels, smooth_bw=10);
distribution.plot_density_sequence(adjusted_predictions, labels, smooth_bw=10);
| examples/quickstart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Now You Code 2: Shopping List
#
# Write a program to input a list of grocery items for your shopping list and then writes them to a file a line at a time. The program should keep asking you to enter grocery items until you type `'done'`.
#
# After you complete the list, the program should then load the file and read the list back to you by printing each item out.
#
# Sample Run:
#
# ```
# Let's make a shopping list. Type 'done' when you're finished:
# Enter Item: milk
# Enter Item: cheese
# Enter Item: eggs
# Enter Item: beer
# Enter Item: apples
# Enter Item: done
# Your shopping list:
# milk
# cheese
# eggs
# beer
# apples
# ```
#
# ## Step 1: Problem Analysis
#
# Inputs:
#
#
# - stuff want to append to the shopping list file
# - done
# Outputs:
#
# - show stuff appended
#
# Algorithm (Steps in Program):
# 1.
# - open file to append
# - prompt input
# - input stuff want to buy
# - input done
# - close file
#
# - open file to read
# - print stuff in file
# 2.
# - prompt input
# - open file to write
# - close file
# - open file to read
# +
# the ultimate final version
filename = "NYC2-shopping-list.txt"
with open(filename, 'w') as f:
    while True:
        shoppingList = input("what do you want to buy? ")
        if shoppingList == "done":
            break
        else:
            f.write(shoppingList + '\n')
with open(filename, 'r') as f:
    print("your shopping list is: ")
    for line in f.readlines():
        print(line.strip())
# +
filename = "NYC2-shopping-list.txt"
## Step 2: write code here
with open(filename, "a") as f:
    while True:
        shopping_list = input("what do you want to buy? ")
        if shopping_list == 'done':
            break
        f.write(shopping_list + '\n')
print("here is the stuff you want to buy")
# -
while True:
    shoppingList = input("What you want to buy?")
    if shoppingList == "":
        with open("NYC2-shopping-list.txt", 'r') as f:
            shop_list = f.read()
        print(shop_list)
        break
    else:
        continue
# +
filename = "NYC2-shopping-list.txt"
while True:
    shoppingList = input("What you want to buy?")
    if shoppingList == "done":
        with open(filename, 'r') as f:
            shop_list = f.read()
        print("Here is the list you need to buy in shop:\n%s" % shop_list)
        break
    else:
        with open(filename, 'a') as f:
            f.write(shoppingList + '\n')
#
# +
# 1 item version
filename = "NYC2-shopping-list.txt"
while True:
    shoppingList = input("What you want to buy?")
    if shoppingList == "done":
        with open(filename, 'r') as f:
            contents = [line.strip() for line in f]
        print("\n".join(contents))
        break
    else:
        with open(filename, "a") as f:
            f.write(shoppingList + "\n")
#
def shoppinglist(filename):
    with open(filename, 'r') as f:
        shopList = f.readlines()
    return shopList
filename = "NYC2-shopping-list.txt"
while True:
    inp = input("what do you want to buy today? ")
    if inp == "done":
        break
    else:
        with open(filename, 'a') as f:
            f.write(inp + '\n')
print(shoppinglist(filename))
help(str.join)
# ## Step 3: Refactoring
#
# Refactor the part of your program which reads the shopping list from the file into a separate user-defined function called `readShoppingList`,
# then re-write your program to use this function.
#
# ## ReadShoppingList function
#
# Inputs:
#
# Outputs:
#
# Algorithm (Steps in Program):
#
#
#
#
# +
## Step 4: Write program again with refactored code.
def readShoppingList(filename):
    with open(filename, 'r') as f:
        shoppingList = f.readlines()
    shoppingList = " ".join(line.strip() for line in shoppingList)
    return shoppingList
# todo read shopping list here
shop = "NYC2-shopping-list.txt"
print(readShoppingList(shop))
# TODO Main code here
# -
# ## Step 5: Questions
#
# 1. Is the refactored code in step 4 easier to read? Why or why not?
# 2. Explain how this program could be refactored further (there's one thing that's obvious).
# 3. Describe how this program can be modified to support multiple shopping lists?
#
# ## Reminder of Evaluation Criteria
#
# 1. What the problem attempted (analysis, code, and answered questions) ?
# 2. What the problem analysis thought out? (does the program match the plan?)
# 3. Does the code execute without syntax error?
# 4. Does the code solve the intended problem?
# 5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
#
| content/lessons/08/Now-You-Code/NYC2-Shopping-List.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
# ### Part 1
# +
def has_pair(line):
return (('()' in line) or
('[]' in line) or
('{}' in line) or
('<>' in line))
openers = '([{<'
closers = ')]}>'
def reduce_corrupted(line):
while has_pair(line):
for pair in ['()','[]','{}','<>']:
line = line.replace(pair,'')
while True:
if len(line):
if line[-1] in openers:
line = line[:-1]
else:
break
else:
break
return(line)
# -
def find_first_bad_closer(reduced):
first_closers = []
for c in closers:
ix = reduced.find(c)
if ix!=-1:
first_closers+=[ix]
return reduced[min(first_closers)]
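The pair-stripping plus trailing-opener removal above can be condensed into one standalone helper; the example strings below are hand-made, not taken from the puzzle input:

```python
# Standalone condensation of reduce_corrupted's logic: repeatedly strip
# adjacent matched pairs, then drop any trailing unmatched openers.
def reduce_line(line):
    prev = None
    while line != prev:          # iterate to a fixpoint
        prev = line
        for pair in ('()', '[]', '{}', '<>'):
            line = line.replace(pair, '')
    return line.rstrip('([{<')   # trailing openers = merely incomplete

assert reduce_line('([])') == ''      # fully matched -> nothing left
assert reduce_line('(]') == '(]'      # corrupted -> residue survives
assert reduce_line('([]') == ''       # incomplete -> openers stripped
```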
score_closers = {')':3,
']':57,
'}':1197,
'>':25137}
def score_first_bad_closers(fn):
#fn = 'd10p1_test.txt'
score = 0
with open(fn) as data:
for line in data.readlines():
line=line.strip()
reduced = reduce_corrupted(line)
if not reduced: # merely incomplete
continue
bc = find_first_bad_closer(reduced)
score+=score_closers[bc]
return score
assert score_first_bad_closers('d10p1_test.txt')==26397
print(f"The answer is {score_first_bad_closers('d10p1.txt')}")
# ### Part 2
def discard_corrupted(line):
ogline=line
while has_pair(line):
for pair in ['()','[]','{}','<>']:
line = line.replace(pair,'')
while True:
if len(line):
if line[-1] in openers:
line = line[:-1]
else:
break
else:
break
if not len(line):
return ogline
# +
def score_completion_string(comp):
rubric = {')':1,']':2,'}':3,'>':4}
score = 0
for c in comp:
score*=5
score+=rubric[c]
return score
assert score_completion_string('])}>')==294
# -
def score_autocomplete(fn):
scores = []
with open(fn) as data:
for line in data.readlines():
line=line.strip()
line = discard_corrupted(line)
if line is None:
continue
# reduce the line -- should only leave opening characters
while has_pair(line):
for pair in ['()','[]','{}','<>']:
line = line.replace(pair,'')
# generate completion string -- replace openers with closers, then reverse
completion=line.replace('(',')').replace('[',']').replace('{','}').replace('<','>')[::-1]
#print(completion)
scores += [score_completion_string(completion)]
#print(scores)
#print(np.sort(scores))
return np.sort(scores)[int(np.floor(len(scores)/2))]
assert score_autocomplete('d10p1_test.txt')==288957
print(f"The answer is {score_autocomplete('d10p1.txt')}")
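Since the puzzle guarantees an odd number of incomplete lines, the sorted-index pick in `score_autocomplete` is simply the median; the scores below are the ones from the puzzle's worked example:

```python
import statistics

# Completion scores from the worked example; the middle score is 288957.
scores = [288957, 5566, 1480781, 995444, 294]

# The sorted-index pick used above equals the statistical median
# whenever the list length is odd.
assert sorted(scores)[len(scores) // 2] == statistics.median(scores) == 288957
```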
| Day 10 - Syntax Scoring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Babayaga-mp4/l2i/blob/master/Sequence_encoder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="YbxTGOwzknYV"
import tensorflow as tf  # NOTE: requires TensorFlow 1.x (tf.contrib was removed in TF 2.x)
distr = tf.contrib.distributions
# import numpy as np
# from tqdm import tqdm
# import os
# import matplotlib.pyplot as plt
# from datetime import timedelta
#
# import time
# Embed input sequence [batch_size, seq_length, from_] -> [batch_size, seq_length, to_]
def embed_seq(input_seq, from_, to_, is_training, BN=True, initializer=tf.contrib.layers.xavier_initializer()):
with tf.variable_scope("embedding"): # embed + BN input set
W_embed = tf.get_variable("weights", [1, from_, to_], initializer=initializer)
embedded_input = tf.nn.conv1d(input_seq, W_embed, 1, "VALID", name="embedded_input")
if BN:
embedded_input = tf.layers.batch_normalization(embedded_input, axis=2, training=is_training, name='layer_norm', reuse=None)
return embedded_input
# Apply multihead attention to a 3d tensor with shape [batch_size, seq_length, n_hidden].
# Attention size = n_hidden should be a multiple of num_head
# Returns a 3d tensor with shape of [batch_size, seq_length, n_hidden]
def multihead_attention(inputs, num_units=None, num_heads=16, dropout_rate=0.1, is_training=True):
with tf.variable_scope("multihead_attention", reuse=None):
# Linear projections
Q = tf.layers.dense(inputs, num_units, activation=tf.nn.relu) # [batch_size, seq_length, n_hidden]
K = tf.layers.dense(inputs, num_units, activation=tf.nn.relu) # [batch_size, seq_length, n_hidden]
V = tf.layers.dense(inputs, num_units, activation=tf.nn.relu) # [batch_size, seq_length, n_hidden]
# Split and concat
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0) # [batch_size, seq_length, n_hidden/num_heads]
K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0) # [batch_size, seq_length, n_hidden/num_heads]
V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0) # [batch_size, seq_length, n_hidden/num_heads]
# Multiplication
outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1])) # num_heads*[batch_size, seq_length, seq_length]
# Scale
outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)
# Activation
outputs = tf.nn.softmax(outputs) # num_heads*[batch_size, seq_length, seq_length]
# Dropouts
outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=is_training)
# Weighted sum
outputs = tf.matmul(outputs, V_) # num_heads*[batch_size, seq_length, n_hidden/num_heads]
# Restore shape
outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2) # [batch_size, seq_length, n_hidden]
# Residual connection
outputs += inputs # [batch_size, seq_length, n_hidden]
# Normalize
outputs = tf.layers.batch_normalization(outputs, axis=2, training=is_training, name='ln', reuse=None) # [batch_size, seq_length, n_hidden]
return outputs
# Apply point-wise feed forward net to a 3d tensor with shape [batch_size, seq_length, n_hidden]
# Returns: a 3d tensor with the same shape and dtype as inputs
def feedforward(inputs, num_units=[2048, 512], is_training=True):
with tf.variable_scope("ffn", reuse=None):
# Inner layer
params = {"inputs": inputs, "filters": num_units[0], "kernel_size": 1, "activation": tf.nn.relu, "use_bias": True}
outputs = tf.layers.conv1d(**params)
# Readout layer
params = {"inputs": outputs, "filters": num_units[1], "kernel_size": 1, "activation": None, "use_bias": True}
outputs = tf.layers.conv1d(**params)
# Residual connection
outputs += inputs
# Normalize
outputs = tf.layers.batch_normalization(outputs, axis=2, training=is_training, name='ln', reuse=None) # [batch_size, seq_length, n_hidden]
return outputs
# Encode input sequence [batch_size, seq_length, n_hidden] -> [batch_size, seq_length, n_hidden]
def encode_seq(input_seq, input_dim, num_stacks, num_heads, num_neurons, is_training, dropout_rate=0.):
with tf.variable_scope("stack"):
for i in range(num_stacks): # block i
with tf.variable_scope("block_{}".format(i)): # Multihead Attention + Feed Forward
input_seq = multihead_attention(input_seq, num_units=input_dim, num_heads=num_heads, dropout_rate=dropout_rate, is_training=is_training)
input_seq = feedforward(input_seq, num_units=[num_neurons, input_dim], is_training=is_training)
return input_seq # encoder_output is the ref for actions [Batch size, Sequence Length, Num_neurons]
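The Multiplication, Scale, Softmax, and Weighted-sum steps inside `multihead_attention` can be sketched for a single head in plain NumPy, independent of TF 1.x (batch and dimension sizes below are arbitrary):

```python
import numpy as np

# Single-head sketch of the attention core above; shapes are made up.
rng = np.random.default_rng(0)
batch, seq_len, d = 2, 5, 8
Q = rng.normal(size=(batch, seq_len, d))
K = rng.normal(size=(batch, seq_len, d))
V = rng.normal(size=(batch, seq_len, d))

scores = Q @ K.transpose(0, 2, 1) / d ** 0.5     # Multiplication + Scale
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
out = weights @ V                                # Weighted sum

assert out.shape == (batch, seq_len, d)
assert np.allclose(weights.sum(axis=-1), 1.0)
```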
| Sequence_encoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # BLU15 - Model CSI
# ## Part 2 of 2 - Model diagnosis and redeployment
# In this notebook we will be covering the following:
#
# 1. How to diagnose an existing model
# 2. Identifying issues
# 3. Redeploy the model
# ## 1. How to diagnose an existing model
# As we've seen previously, it often happens that your data distribution changes with time. More than that, sometimes you don't know how a model was trained and what was the original training data.
#
# As an example, we're going to use the same problem that you met in [BLU14](https://github.com/LDSSA/batch5-students/tree/main/S06%20-%20DS%20in%20the%20Real%20World/BLU14%20-%20Deployment%20in%20Real%20World).
#
# >The police department has received lots of complaints about its stop and search policy. Every time a car is stopped, the police officers have to decide whether or not to search the car for contraband. According to critics, these searches have a bias against people of certain backgrounds.
# >
# >Your company has been hired to (1) determine whether these criticisms seem to be substantiated, and (2) create a service to fairly decide whether or not to search a car, based on objective data. This service will be used by police officers to request authorization to search, and your service will return a Yes or No answer.
# >
# >The police department has asked for the following requirements:
# >
# >- A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found)
# >- No police sub-department should have a discrepancy bigger than 5% between the search success rate between protected classes (race, ethnicity, gender)
# >- The largest possible amount of contraband found, given the constraints above.
#
# You got a model from your client, and **here is the model's description:**
#
# > It's a [LightGBM model (LGBMClassifier)](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html) trained on the following features:
# > - Department Name
# > - InterventionLocationName
# > - InterventionReasonCode
# > - ReportingOfficerIdentificationID
# > - ResidentIndicator
# > - SearchAuthorizationCode
# > - StatuteReason
# > - SubjectAge
# > - SubjectEthnicityCode
# > - SubjectRaceCode
# > - SubjectSexCode
# > - TownResidentIndicator
#
# - All the categorical features were one-hot encoded. The only numerical feature (SubjectAge) was not changed.
# - The rows that contain rare categorical features (the ones that appear less than N times in the dataset) were removed.
# - You can check the `original_model.ipynb` notebook for more details.
#
# **The police department has asked for the following requirements:**
#
# > - A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found)
# > - No police sub-department should have a discrepancy bigger than 5% between the search success rate between protected classes (race, ethnicity, gender)
# > - The largest possible amount of contraband found, given the constraints above.
#
# **And here is the description of how the current model succeeds with the requirements:**
# - precision = 53.40%
# - recall = 89.30%
# - roc_auc_score for the probability predictions = 66.88%
#
# First, let's compare the models we created in the previous BLU:
#
# | Model | Baseline | Second iteration | New model | Best model |
# |-------------------|---------|--------|--------|--------|
# | Requirement 1 - success rate | 0.53 | 0.38 | 0.5 | 1 |
# | Requirement 2 - global discrimination (race) | 0.105 | 0.11 | NaN | 1 |
# | Requirement 2 - global discrimination (sex) | 0.012 | 0.014 | NaN | 1 |
# | Requirement 2 - global discrimination (ethnicity) | 0.114 | 0.101 | NaN | 2 |
# | Requirement 2 - # department discrimination (race) | 27 | 17 | NaN | 2 |
# | Requirement 2 - # department discrimination (sex) | 19 | 23 | NaN | 1 |
# | Requirement 2 - # department discrimination (ethnicity) | 24 | NaN | 23 | 2 |
# | Requirement 3 - contraband found (Recall) | 0.65 | 0.76 | 0.893 | 3 |
#
#
# As we can see, **the last model has exactly the required success rate (Requirement 1) and a very good Recall (Requirement 3)**, but it might be **risky to rely on such a specific threshold**, as we might end up with a success rate <0.5 really quickly. It might be safer to use a bigger threshold (e.g. 0.25), but let's see.
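# One way to quantify that risk is to look at the success rate in a small band around the operating point. Below is a sketch of the idea; the helper names and the `width`/`steps` parameters are made up for illustration, and `probas`/`labels` stand for the model scores and the true contraband outcomes computed later in this notebook.

```python
import numpy as np

def precision_at(probas: np.ndarray, labels: np.ndarray, threshold: float) -> float:
    """Success rate (precision) when predicting positive above a given threshold."""
    preds = probas >= threshold
    if preds.sum() == 0:
        return float("nan")  # no positive predictions at this threshold
    return float(labels[preds].mean())

def threshold_sensitivity(probas, labels, threshold, width=0.02, steps=5):
    """Precision at a few thresholds around the operating point."""
    grid = np.linspace(threshold - width, threshold + width, steps)
    return {round(float(t), 4): precision_at(probas, labels, t) for t in grid}

# hypothetical usage, once the predicted probabilities are available:
# threshold_sensitivity(df['proba'].values, df['ContrabandIndicator'].values, 0.2107)
```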
#
# Let's imagine that the model was trained a long time ago. Models need to be constantly evaluated, because data distributions change over time. Something that worked even a year ago could be completely wrong today. And some years are more _different_ than others (looking at you, 2020 and 2021).
# First of all, let's start the server which is running this model and then read a CSV file with the latest observations. Open the shell and type:
# ```sh
# python protected_server.py
#
# ```
# +
import joblib
import pandas as pd
import json
import joblib
import pickle
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.metrics import confusion_matrix
import requests
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from lightgbm import LGBMClassifier
# %matplotlib inline
# -
df = pd.read_csv('./data/new_observations.csv')
df.head()
# Let's start by sending all those requests and comparing the model's predictions with the target values. The model is already prepared to convert our observations to the format it's expecting; **the only thing we need to change is making department and intervention location names lowercase**, and then we're good to extract fields from the dataframe and put them in the POST request.
# lowercase departments and location names
df['Department Name'] = df['Department Name'].apply(lambda x: str(x).lower())
df['InterventionLocationName'] = df['InterventionLocationName'].apply(lambda x: str(x).lower())
url = "http://127.0.0.1:5000/predict"
headers = {'Content-Type': 'application/json'}
def send_request(index: int, obs: dict, url: str, headers: dict):
observation = {
"id": index,
"observation": {
"Department Name": obs["Department Name"],
"InterventionLocationName": obs["InterventionLocationName"],
"InterventionReasonCode": obs["InterventionReasonCode"],
"ReportingOfficerIdentificationID": obs["ReportingOfficerIdentificationID"],
"ResidentIndicator": obs["ResidentIndicator"],
"SearchAuthorizationCode": obs["SearchAuthorizationCode"],
"StatuteReason": obs["StatuteReason"],
"SubjectAge": obs["SubjectAge"],
"SubjectEthnicityCode": obs["SubjectEthnicityCode"],
"SubjectRaceCode": obs["SubjectRaceCode"],
"SubjectSexCode": obs["SubjectSexCode"],
"TownResidentIndicator": obs["TownResidentIndicator"]
}
}
r = requests.post(url, data=json.dumps(observation), headers=headers)
result = json.loads(r.text)
return result
responses = [send_request(i, obs, url, headers) for i, obs in df.iterrows()]
print(responses[0])
df['proba'] = [r['proba'] for r in responses]
# we're going to use the threshold we got from the client
threshold = 0.21073452797732833
df['prediction'] = [1 if p >= threshold else 0 for p in df['proba']]
confusion_matrix(df['ContrabandIndicator'], df['prediction'])
# If you remember from S01, this is how you interpret a confusion matrix, and ours is not looking too good...
# <img src="./media/confusion_matrix.jpg" alt="drawing" width="500"/>
# If we check the client requirement of:
# > A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found)
#
def verify_success_rate_above(y_true, y_pred, min_success_rate=0.5):
"""
Verifies the success rate on a test set is above a provided minimum
"""
precision = precision_score(y_true, y_pred, pos_label=True)
is_satisfied = (precision >= min_success_rate)
return is_satisfied, precision
verify_success_rate_above(df['ContrabandIndicator'], df['prediction'], 0.5)
# <img src="./media/e39.jpg" alt="danger" width="500"/>
#
# What about the next requirement?
#
# > The largest possible amount of contraband found, given the constraints above.
#
# The client said their model recall was 0.893. Is it still true?
def verify_amount_found(y_true, y_pred):
"""
    Verifies the amount of contraband found in the test dataset - a.k.a. the recall on our test set
"""
recall = recall_score(y_true, y_pred, pos_label=True)
return recall
verify_amount_found(df['ContrabandIndicator'], df['prediction'])
# <img src="./media/no.png" alt="no" width="500"/>
#
# **Okay, relax, it happens.** Let's start by checking different thresholds. Maybe the selected threshold was too specific and doesn't work anymore.
# >What about 0.25?
threshold = 0.25
df['prediction'] = [1 if p >= threshold else 0 for p in df['proba']]
verify_success_rate_above(df['ContrabandIndicator'], df['prediction'], 0.5)
verify_amount_found(df['ContrabandIndicator'], df['prediction'])
# <img src="./media/poker.jpg" alt="drawing" width="200"/>
# Let's try the same technique they originally used to identify the best threshold. Maybe we can find something good enough.
#
# _It's not a good idea to verify such things on the test data, but we're going to use it just to confirm the model's performance, not to select the threshold._
precision, recall, thresholds = precision_recall_curve(df['ContrabandIndicator'], df['proba'])
precision = precision[:-1]
recall = recall[:-1]
fig=plt.figure()
ax1 = plt.subplot(211)
ax2 = plt.subplot(212, sharex=ax1)  # share the x axis between the two plots
ax1.hlines(y=0.5, xmin=0, xmax=1, colors='red')
ax1.plot(thresholds, precision)
ax2.plot(thresholds, recall)
ax1.tick_params(labelbottom=False)  # hide the x tick labels on the top plot
plt.xlabel('Threshold')
ax1.set_title('Precision')
ax2.set_title('Recall')
plt.show()
# So what do we see? **There is some threshold value (around 0.6) that gives us precision >= 0.5, but that threshold is so big that the recall at this point is really low.**
#
# Let's calculate the exact values:
min_index = [i for i, prec in enumerate(precision) if prec >= 0.5][0]
print(min_index)
thresholds[min_index]
precision[min_index]
recall[min_index]
# _**OUCH!**_ Before we move on, we need to understand why this happens, so that we can decide what kind of action to perform.
# ## 2. Identifying issues
# Let's try to analyze the changes in data and discuss different things we might want to do.
old_df = pd.read_csv('./data/train_searched.csv')
old_df.head()
# We're going to apply the same changes to the dataset as in the original model notebook, to understand what the original data was like and how the current dataset differs.
# +
old_df = old_df[(old_df['VehicleSearchedIndicator']==True)]
# lowercase departments and location names
old_df['Department Name'] = old_df['Department Name'].apply(lambda x: str(x).lower())
old_df['InterventionLocationName'] = old_df['InterventionLocationName'].apply(lambda x: str(x).lower())
train_features = old_df.columns.drop(['VehicleSearchedIndicator', 'ContrabandIndicator'])
categorical_features = train_features.drop(['InterventionDateTime', 'SubjectAge'])
numerical_features = ['SubjectAge']
target = 'ContrabandIndicator'
# dictionary with the minimum required number of appearances
min_frequency = {
"Department Name": 50,
"InterventionLocationName": 50,
"ReportingOfficerIdentificationID": 30,
"StatuteReason": 10
}
# -
def filter_values(df: pd.DataFrame, column_name: str, threshold: int):
value_counts = df[column_name].value_counts()
to_keep = value_counts[value_counts > threshold].index
filtered = df[df[column_name].isin(to_keep)]
return filtered
for feature, threshold in min_frequency.items():
old_df = filter_values(old_df, feature, threshold)
print(old_df.shape)
old_df.head()
old_df['ContrabandIndicator'].value_counts(normalize=True)
df['ContrabandIndicator'].value_counts(normalize=True)
# Looks like we get a bit more contraband now, and that's already a useful clue:
#
# **If the training data had a different target feature distribution than the test set, the model's predictions might have a different distribution as well. It's a good practice to have the same target feature distribution both in training and test sets.**
#
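# To check whether this difference in the target distribution is more than noise, a chi-square test on the class counts is a quick sanity check. The counts below are purely illustrative placeholders; substitute the real `value_counts()` numbers from both datasets.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative counts only -- replace with the real value_counts() numbers:
# rows = datasets (old, new), columns = ContrabandIndicator (False, True)
observed = np.array([
    [30000, 15000],  # old training set (assumed counts)
    [1200, 800],     # new observations (assumed counts)
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")

# a small p-value suggests the target distribution really did shift,
# which supports retraining rather than just re-thresholding
if p_value < 0.05:
    print("The shift in the target distribution looks statistically significant")
```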
# Let's investigate further...
# +
new_department_names = df['Department Name'].unique()
old_department_names = old_df['Department Name'].unique()
unknown_departments = [department for department in new_department_names if department not in old_department_names]
len(unknown_departments)
# -
df[df['Department Name'].isin(unknown_departments)].shape
# So we have **10 departments** that the original model was **not trained on**, but they account for **only 23 rows of the test set**.
#
# Let's repeat the same thing for the Intervention Location names
# +
new_location_names = df['InterventionLocationName'].unique()
old_location_names = old_df['InterventionLocationName'].unique()
unknown_locations = [location for location in new_location_names if location not in old_location_names]
len(unknown_locations)
# -
df[df['InterventionLocationName'].isin(unknown_locations)].shape[0]
print('unknown locations: ', df[df['InterventionLocationName'].isin(unknown_locations)].shape[0] * 100 / df.shape[0], '%')
# Alright, there are a few more unknown locations here. **We don't know if the feature was important for the model**, so we can't tell whether these 5.3% of unknown locations matter. But it's worth keeping in mind.
#
# **Here are a few ideas of what we could try to do:**
#
# 1. Reanalyze the filtered locations, e.g. filter out more of the rare ones.
# 2. Create a new category for the rare locations.
# 3. Check the unknown locations for typos.
#
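# Idea 2 can be sketched in a few lines: fold every category that is rare (or was never seen in training) into a single bucket. The `min_count` value and the helper name below are arbitrary choices for illustration, not part of the original model.

```python
import pandas as pd

def bucket_rare_categories(series: pd.Series, min_count: int = 50,
                           known: set = None, other_label: str = "other") -> pd.Series:
    """Replace rare (or previously unseen) categories with a single bucket."""
    counts = series.value_counts()
    frequent = set(counts[counts >= min_count].index)
    if known is not None:
        # also fold categories the model never saw during training
        frequent &= known
    return series.where(series.isin(frequent), other_label)

# hypothetical usage with the columns from this dataset:
# df['InterventionLocationName'] = bucket_rare_categories(
#     df['InterventionLocationName'], min_count=50,
#     known=set(old_df['InterventionLocationName'].unique()))
```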
# Let's take a look at the **relation between department names and the number of contrabands they find**.
#
# >We're going to select the most common department names, and then see the percentage of contraband indicator in each one for the training and test sets
common_departments = df['Department Name'].value_counts().head(20).index
departments_new = df[df['Department Name'].isin(common_departments)]
departments_old = old_df[old_df['Department Name'].isin(common_departments)]
pd.crosstab(departments_new['ContrabandIndicator'], departments_new['Department Name'], normalize="columns")
# *Unfamiliar with crosstab? You can learn more about it [here](https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html)*
pd.crosstab(departments_old['ContrabandIndicator'], departments_old['Department Name'], normalize="columns")
# **We can clearly see that some departments got a huge difference in the contraband indicator.**
#
# - Bridgeport used to have 93% of False contrabands, and now has only 62%.
# - Similar situation with Danbury and New Haven.
#
# **Why?** Hard to say. There are really a lot of variables here. Maybe the departments got instructed on how to look for contraband. But it's getting clearer that we might need to retrain the model.
#
# Let's just finish reviewing other columns.
common_location = df['InterventionLocationName'].value_counts().head(20).index
locations_new = df[df['InterventionLocationName'].isin(common_location)]
locations_old = old_df[old_df['InterventionLocationName'].isin(common_location)]
pd.crosstab(locations_new['ContrabandIndicator'], locations_new['InterventionLocationName'], normalize="columns")
pd.crosstab(locations_old['ContrabandIndicator'], locations_old['InterventionLocationName'], normalize="columns")
# **What do we see?**
#
# - The InterventionLocationName and the Department Name are often the same.
#
# *That sounds pretty logical, as police officers usually work in the area of their department. But we could try to create a feature saying whether InterventionLocationName is equal to the Department Name. Or maybe we could just get rid of one of them, if all the values are equal.*
#
# - There are similar changes in the Contraband distribution as in Department Name case.
#
# Let's continue...
pd.crosstab(df['ContrabandIndicator'], df['InterventionReasonCode'], normalize="columns")
pd.crosstab(old_df['ContrabandIndicator'], old_df['InterventionReasonCode'], normalize="columns")
# - There are some small changes, but they don't seem to be significant, especially since all three values have around 33% contraband.
#
# Time for officers!
df['ReportingOfficerIdentificationID'].value_counts()
filter_values(df, 'ReportingOfficerIdentificationID', 2)['ReportingOfficerIdentificationID'].nunique()
# - It looks like there are a lot of unique values for the officer id (1166 for 2000 records), and not many common ones (only 206 officers have more than 2 rows in the dataset), so it doesn't make much sense to analyze this column on its own.
#
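# As an aside, a common way to still squeeze some signal out of such a high-cardinality column, instead of analyzing it directly, is frequency encoding. This is only a sketch of the idea, not something the original model does.

```python
import pandas as pd

def frequency_encode(train: pd.Series, test: pd.Series):
    """Replace each category by how often it appears in the training data.

    Unseen categories get 0 -- a natural 'never seen this officer before' signal.
    """
    freq = train.value_counts()
    return train.map(freq), test.map(freq).fillna(0).astype(int)

# hypothetical usage:
# old_enc, new_enc = frequency_encode(
#     old_df['ReportingOfficerIdentificationID'],
#     df['ReportingOfficerIdentificationID'])
```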
# Let's make this a little less painful and go through the rest of the columns in one go!
# +
rest = ['ResidentIndicator', 'SearchAuthorizationCode',
'StatuteReason', 'SubjectEthnicityCode',
'SubjectRaceCode', 'SubjectSexCode','TownResidentIndicator']
for col in rest:
display(pd.crosstab(df['ContrabandIndicator'], df[col], normalize="columns"))
display(pd.crosstab(old_df['ContrabandIndicator'], old_df[col], normalize="columns"))
# -
# - We see that all the columns have changed a bit, but the changes don't seem as significant as in the department case.
#
# Anyway, it seems like we need to retrain the model. It's what we were hired for! ;)
# <img src="./media/retrain.jpg" alt="drawing" width="400"/>
# ## 3. Redeploy the model
# Retraining a model is always a decision we need to think about.
#
# > Was this change in data constant, temporary or seasonal?
#
# In other words, do we expect the data distribution to stay as it is? To change back? To change from season to season?
#
# **Depending on that, we could retrain the model differently:**
#
# - **If it's seasonal**, we might want to add features like season or month and train the same model to predict differently depending on the season. We could also investigate time-series classification algorithms.
#
# - **If it's something that is going to change back**, we might train a separate model for this particular period, in case the current data distribution change is temporary. If instead we expect the data distribution to switch back and forth from time to time (and we know these periods in advance), we could create a new feature that helps the model understand which period it is.
#
# > E.g. if we had the task of predicting beer consumption in a city that hosts a lot of football matches, we might add a feature like **football_championship** and make the model predict differently on these occasions.
#
# - **If the data distribution has simply changed and we know that it's never going to come back**, we can simply retrain the model.
#
# > But in some cases we have no idea why some changes appeared (e.g. in this case of departments having more contraband).
#
# - In this case it might be a good idea to train a new model on the new dataset and set up monitoring for these features' distributions, so we can react when things change.
#
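# The monitoring idea can be made concrete with a population stability index (PSI) over a categorical feature, a common drift metric. This is a sketch; the 0.2 alert threshold is a rule of thumb, not a project requirement.

```python
import numpy as np
import pandas as pd

def categorical_psi(expected: pd.Series, actual: pd.Series, eps: float = 1e-4) -> float:
    """Population Stability Index between two categorical distributions."""
    e = expected.value_counts(normalize=True)
    a = actual.value_counts(normalize=True)
    # align on the union of categories; unseen ones get a near-zero share
    all_cats = e.index.union(a.index)
    e = e.reindex(all_cats, fill_value=0) + eps
    a = a.reindex(all_cats, fill_value=0) + eps
    return float(np.sum((a - e) * np.log(a / e)))

# hypothetical usage: alert when drift is large (0.2 is a common rule of thumb)
# psi = categorical_psi(old_df['Department Name'], df['Department Name'])
# if psi > 0.2:
#     print(f"Department Name drifted: PSI={psi:.3f} -- consider retraining")
```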
# > In our case, we don't know the reason for the data distribution changes, so we'd like to train a model on the new dataset.
#
# > The only concern is the size of the dataset. The original dataset had around 50k rows, while our new set has only 2000, which is not enough to train a good model. So this time **we're going to combine both datasets and add a new feature helping the model distinguish between them**. If we had more data, it would probably be better to train a completely new model.
old_df = pd.read_csv('./data/train_searched.csv')
old_df['is_new'] = False
df = pd.read_csv('./data/new_observations.csv')
df['is_new'] = True
df_combined = pd.concat([old_df, df], axis=0)
df_combined = df_combined.dropna()
df_combined['Department Name'] = df_combined['Department Name'].str.lower()
df_combined['InterventionLocationName'] = df_combined['InterventionLocationName'].str.lower()
df_combined
# +
target = 'ContrabandIndicator'
train_features = df_combined.columns.drop(target)
df_train, df_test = train_test_split(df_combined, test_size=0.25, random_state=42, stratify=df_combined['is_new'])
X_train = df_train[train_features]
y_train = df_train[target]
X_test = df_test[train_features]
y_test = df_test[target]
# +
categorical_features = df_combined.columns.drop(['ContrabandIndicator', 'SubjectAge'])
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[('cat', categorical_transformer, categorical_features)])
pipeline = make_pipeline(
preprocessor,
LGBMClassifier(n_jobs=-1, random_state=42),
)
# +
pipeline.fit(X_train, y_train)
preds = pipeline.predict(X_test)
preds_proba = pipeline.predict_proba(X_test)[:, 1]
precision = precision_score(y_test, preds, pos_label=True)
recall = recall_score(y_test, preds)
print("precision: ", precision)
print("recall: ", recall)
# -
# Things are looking much better! It would still be necessary to **ensure that we're respecting all of the requirements**, as we've done above! Next would be to save our new model and re-deploy our server, but I'll leave that for the `Exercise notebook`. Good luck!
| S06 - DS in the Real World/BLU15 - Model CSI/Learning notebook - Part 2 of 2 - Model diagnosis and redeployment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Title
# The title of the notebook should be consistent with the file name. Namely, the file name should be:
# *author's initials_sequential number_title.ipynb*
# For example:
# *EF_01_Data Exploration.ipynb*
#
# ## Purpose
# State the purpose of the notebook.
#
# ## Methodology
# Quickly describe assumptions and processing steps.
#
# ## WIP - improvements
# Use this section only if the notebook is not final.
#
# Notable TODOs:
# - todo 1;
# - todo 2;
# - todo 3.
#
# ## Results
# Describe and comment on the most important results.
#
# ## Suggested next steps
# State suggested next steps, based on results obtained in this notebook.
# # Setup
#
# ## Library import
# We import all the required Python libraries
# +
# Data manipulation
import pandas as pd
import numpy as np
import os
import glob
from biopandas.pdb import PandasPdb
from pymol import cmd
# Options for pandas
pd.options.display.max_columns = 50
pd.options.display.max_rows = 30
# Visualizations
import plotly
import plotly.graph_objs as go
import plotly.offline as ply
plotly.offline.init_notebook_mode(connected=True)
import matplotlib.pyplot as plt
import seaborn as sns
# Autoreload extension
if 'autoreload' not in get_ipython().extension_manager.loaded:
# %load_ext autoreload
# %autoreload 2
# -
# ### Change directory
# If Jupyter lab sets the root directory in `notebooks`, change directory.
if "notebook" in os.getcwd():
os.chdir("..")
# ## Local library import
# We import all the required local libraries
# +
# Include local library paths
import sys
# sys.path.append("./src") # uncomment and fill to import local libraries
# Import local libraries
# import src.utilities as utils
# -
# # Parameter definition
# We set all relevant parameters for our notebook. By convention, parameters are uppercase, while all the
# other variables follow Python's guidelines.
#
# # Data import
# We retrieve all the required data for the analysis.
# # Data processing
# Put here the core of the notebook. Feel free to further split this section into subsections.
# # References
# We report here relevant references:
# 1. author1, article1, journal1, year1, url1
# 2. author2, article2, journal2, year2, url2
| notebooks/template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="HuoYaJrasbds"
MAX_DURATION = 8.0 # 2 bars
NOTE_SEPARATOR = '!'
REST_VALUE = '@'
import pickle
notes = pickle.load(open('./raggen-notes.bin', 'rb'))
# +
from keras.utils import np_utils
import numpy as np
TIMESTEP = 0.5 # 16th notes
SEQ_LEN = int(4 / TIMESTEP) # 8 per bar
num_unique_notes = len(set(notes))
print(f'Number of unique notes: {num_unique_notes}')
# all unique pitches (including rests)
pitch_names = sorted(set(i for i in notes))
# map pitches to integers
note_to_int = dict((note, num) for num, note in enumerate(pitch_names))
data_X = []
data_y = []
for i in range(0, len(notes) - SEQ_LEN, 1):
seq_in = notes[i:i + SEQ_LEN]
data_X.append([note_to_int[n] for n in seq_in])
seq_out = notes[i + SEQ_LEN]
data_y.append(note_to_int[seq_out])
#end
X = np.reshape(data_X, (len(data_X), SEQ_LEN, 1))
X = X / float(len(notes))
y = np_utils.to_categorical(data_y)
# + colab={"base_uri": "https://localhost:8080/", "height": 530} colab_type="code" executionInfo={"elapsed": 6196, "status": "ok", "timestamp": 1571574049565, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCFRtqS5I7HVNdoY-OISy3BzinululKE6uTawrZ4g=s64", "userId": "17744463245578873670"}, "user_tz": -60} id="-c-Vxxb3tymA" outputId="df7a43b6-a6fb-4293-8952-2cce858fe102"
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
import keras.backend as K
K.clear_session()
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(256))
model.add(Dropout(0.3))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# -
model_path = './raggen-model.hdf5'
model.load_weights(model_path)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 940, "status": "ok", "timestamp": 1571574413691, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCFRtqS5I7HVNdoY-OISy3BzinululKE6uTawrZ4g=s64", "userId": "17744463245578873670"}, "user_tz": -60} id="VGtNS_O6uQVS" outputId="0c99e48b-2fdb-43d3-d0bc-b111ea310eab"
# reverse mapping
int_to_note = dict((num, note) for num, note in enumerate(pitch_names))
# if true, only uses the best prediction available
best_pred_only = False
# if best_pred_only is false, how many predictions to randomly choose from
num_best_preds = 2
# random start point
start_idx = np.random.randint(0, len(data_X)-1)
v1_pattern = data_X[start_idx]
print(f'Starting pattern: {v1_pattern}')
v1_output = []
for idx in range(100 * SEQ_LEN):
# print(f'Pattern {idx}: {v1_pattern}')
prediction_input = np.reshape(v1_pattern, (1, len(v1_pattern), 1))
prediction_input = prediction_input / float(len(notes))
prediction = model.predict(prediction_input)
if best_pred_only:
pred_idx = np.argmax(prediction[0])
else:
top_5_idx = np.argpartition(prediction[0], -num_best_preds)[-num_best_preds:]
pred_idx = top_5_idx[np.random.randint(0, len(top_5_idx))]
result = int_to_note[pred_idx]
v1_output.append(result)
# print(f'\tPredicted index: {pred_idx}')
v1_pattern.append(pred_idx)
v1_pattern = v1_pattern[1:len(v1_pattern)]
#end
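# The top-`num_best_preds` trick above picks uniformly among the best few classes. A softer alternative (not what this notebook uses) is temperature sampling over the full softmax output; `temperature` here is a free parameter, and the helper below is only a sketch.

```python
import numpy as np

def sample_with_temperature(probs: np.ndarray, temperature: float = 1.0,
                            rng: np.random.Generator = None) -> int:
    """Sample a class index after reshaping the distribution by a temperature.

    temperature < 1 sharpens towards the argmax, > 1 flattens towards uniform.
    """
    if rng is None:
        rng = np.random.default_rng()
    logits = np.log(np.clip(probs, 1e-12, None)) / temperature
    logits -= logits.max()  # numerical stability before exponentiating
    p = np.exp(logits)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

# e.g. inside the generation loop, instead of the top-k branch:
# pred_idx = sample_with_temperature(prediction[0], temperature=0.8)
```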
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1128, "status": "ok", "timestamp": 1571574415425, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCFRtqS5I7HVNdoY-OISy3BzinululKE6uTawrZ4g=s64", "userId": "17744463245578873670"}, "user_tz": -60} id="HMalDmMMuoyY" outputId="ce8a9103-2615-477d-9ca6-8d9923ba77b9"
from music21 import stream, duration, key, meter, note, chord, instrument
def split_note_duration(pattern):
n, d = pattern.split('$')
if '/' in d:
a, b = d.split('/')
d = float(a) / float(b)
else:
d = float(d)
#end
return n, d
#end
def notes_array_to_midi(notes_array):
offset = 0.0
output_notes = []
for pattern in notes_array:
# print(pattern)
# handle chords (i.e. multiple notes split by NOTE_SEPARATOR)
if NOTE_SEPARATOR in pattern:
# adding all notes in the chord separately instead of in a chord.Chord.
# this is because while music21 should support different length notes in a Chord,
# it doesn't appear to work (uses the Chord's duration).
for chord_note in pattern.split(NOTE_SEPARATOR):
note_name, note_duration = split_note_duration(chord_note)
new_note = note.Note(note_name)
new_note.offset = offset
new_note.storedInstrument = instrument.Piano
new_note.duration = duration.Duration(note_duration)
output_notes.append(new_note)
#end
#end
else:
note_name, note_duration = split_note_duration(pattern)
# handle rests
if REST_VALUE == note_name:
new_rest = note.Rest()
new_rest.offset = offset
new_rest.duration = duration.Duration(note_duration)
output_notes.append(new_rest)
else:
new_note = note.Note(note_name)
new_note.offset = offset
new_note.duration = duration.Duration(note_duration)
output_notes.append(new_note)
#end
offset += TIMESTEP
#end
midi_stream = stream.Stream(output_notes)
midi_stream.timeSignature = meter.TimeSignature('2/4')
midi_stream.keySignature = key.KeySignature(0)
midi_stream.write('midi', fp='./output.mid')
return output_notes
#end
s = notes_array_to_midi(v1_output)
print('Done!')
# -
| old/predict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
data=np.loadtxt(r'F:\coursera_,ml\machine-learning-ex2\ex2\ex2data2.txt',delimiter=',')  # raw string so the backslashes are not treated as escapes
x=data[:,:2]
y=data[:,2]
plt.scatter(x[:,0],x[:,1],c=y,cmap=plt.cm.Spectral)
plt.show()
x=x.T
y=y.T
y=y.reshape(1,118)
np.random.seed(3)
#cost function
def cost(a,y):
j=(-1/y.shape[1])*(np.sum(y*np.log(a)+(1-y)*np.log(1-a)))
return j
#sigmoid function
def sig(x):
a=1/(1+np.exp(-1*x))
return a
def relu(x):
return x*(x>0)
def relu_der(x):
return 1*(x>0)
alpha=0.0006
c_history=np.zeros((10000,1))
#initializing parameters
w1=np.random.rand(10,2)*0.01
b1=np.random.rand(10,1)
w2=np.random.rand(1,10)*0.01
b2=np.random.rand(1,1)
for i in range(10000):
#forward propagation
z1=np.dot(w1,x)+b1
a1=np.tanh(z1)
z2=np.dot(w2,a1)+b2
a2=sig(z2)
#computing cost
c=cost(a2,y)
c_history[i]=c
#backward propagation
dz2=a2-y
    # note: y has shape (1, 118), so len(y) == 1 and these gradients are not
    # averaged over examples; the small learning rate compensates for that
    dw2=(1/len(y))*np.dot(dz2,a1.T)
db2=(1/len(y))*np.sum(dz2, axis=1, keepdims=True)
dz1=np.dot(w2.T,dz2)*(1-a1*a1)
dw1=(1/len(y))*np.dot(dz1,x.T)
db1=(1/len(y))*np.sum(dz1, axis=1, keepdims=True)
#gradient descent
w1=w1-alpha*dw1
b1=b1-alpha*db1
w2=w2-alpha*dw2
b2=b2-alpha*db2
#cost vs iterations
xa=np.arange(1,10001,1)
plt.plot(xa,c_history)
plt.xlabel('iterations')
plt.ylabel('cost')
plt.show()
pre=np.round(a2)
c=0
for i in range(len(y.T)):
if y.T[i]==pre.T[i]:
c=c+1
print(str(c/len(y.T)*100)+'%')
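# As a side note, the counting loop above can be replaced by a one-line vectorized version (same result, assuming the two arrays have the same shape):

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of matching predictions, computed without an explicit loop."""
    return float(np.mean(y_true == y_pred))

# equivalent to the counting loop above:
# print(str(accuracy(y, pre) * 100) + '%')
```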
def predict(x):
x=x.T
z1=np.dot(w1,x)+b1
a1=np.tanh(z1)
z2=np.dot(w2,a1)+b2
a2=sig(z2)
pre=np.round(a2)
return pre
#function for plotting decision boundary
def plot_decision_boundary(pred_func, X, y):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
plt.show()
plot_decision_boundary(lambda x: predict(x), x.T, y[0])
| Classification Problem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup as soup
from urllib.request import Request,urlopen
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import os
# Setting up webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
# Web Driver Start
#from webdriver_manager.chrome import ChromeDriverManager
#driver = webdriver.Firefox()
driver = webdriver.Chrome('D:\Software\chromedriver_win32\chromedriver.exe')
# Load the list of test page links
df1=pd.read_csv("medifeecitytestlink.csv")
links=df1["Link"]
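# Hitting over a thousand pages in a row with no error handling is fragile, so a small retry-and-sleep wrapper (a sketch, not part of the original scraper) keeps one failed page from killing the whole run. The function name and retry/delay values are illustrative choices.

```python
import time

def get_with_retry(driver, url: str, retries: int = 3, delay: float = 2.0) -> bool:
    """Load a URL in the webdriver, retrying a few times with a growing pause."""
    for attempt in range(retries):
        try:
            driver.get(url)
            return True
        except Exception:
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    return False

# hypothetical usage inside the scraping loop:
# if not get_with_retry(driver, links[i]):
#     continue  # skip pages that keep failing
```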
# Parsing the Page
testname=[]
labname=[]
cost=[]
address=[]
city=[]
lablink=[]
ss_1=0
for i in range(8207,9600):
driver.get(links[i])
#driver.get("https://www.medifee.com/tests/adenosine-deaminase-test-cost-in-ahmedabad/")
    #extract the required data from the rendered page
res=driver.execute_script("return document.documentElement.outerHTML")
soup_1=soup(res,'lxml')
#heading
head_1=soup_1.find("h1").text
if " in " in head_1:
x=soup_1.findAll("div",{"class":"panel panel-default"})
for j in range(len(x)):
if "Discount Offers" in x[j].text:
#gets the name and city
head=head_1.split(" in ")
testname.append(head[0])
city.append(head[1])
#labname and cost
word=x[j].find("p",{"class":"lead"}).text
a=word.split("Rs.")
a_1=a[0].split("by")
a_2=a_1[len(a_1)-1].split("Offer Price:")
labname.append(a_2[0])
cost.append(a[len(a)-1])
#address
text_1=x[j].findAll("div",{"class":"col-md-7"})
text_2=text_1[len(text_1)-1].text
text_3=text_2.split(head[1])
text_4=text_3[0].split("Location:")
address.append(text_4[len(text_4)-1]+head[1])
#lab link
lablink.append("Not Available")
elif "Related Tests" in x[j].text:
ss_1+=1
elif "Thyrocare (Online Order)" in x[j].text:
#gets the name and city
head=head_1.split(" in ")
testname.append(head[0])
city.append(head[1])
#labname and cost
word=x[j].find("p",{"class":"lead"}).text
a=word.split("Rs.")
a_1=a[0].split("by")
a_2=a_1[len(a_1)-1].split("Offer Price:")
labname.append(a_2[0])
a_3=x[0].find("span",{"class":"text-success"}).text
cost.append(a_3)
#address
address.append("Free home sample pickup in "+head[1])
#lab link
lablink.append("Not Available")
elif "Offer Price:" in x[j].text:
#gets the name and city
head=head_1.split(" in ")
testname.append(head[0])
city.append(head[1])
#labname and cost
word=x[j].find("p",{"class":"lead"}).text
a=word.split("Rs.")
a_1=a[0].split("by")
a_2=a_1[len(a_1)-1].split("Offer Price:")
labname.append(a_2[0])
cost.append(a[len(a)-1])
#address
text_1=x[j].findAll("div",{"class":"col-md-7"})
text_2=text_1[len(text_1)-1].text
text_3=text_2.split(head[1])
text_4=text_3[0].split("Location:")
address.append(text_4[len(text_4)-1]+head[1])
#lab link
lablink.append("Not Available")
else:
#gets the name and city
head=head_1.split(" in ")
testname.append(head[0])
city.append(head[1])
#labname and cost
word=x[j].find("p",{"class":"lead"}).text
a=word.split("Rs.")
labname.append(a[0])
cost.append(a[len(a)-1])
#address
word_1=str(x[j])
r=word_1.split("--")
p_1=x[j].text
p_2=p_1.split("00")
p_3=p_2[len(p_2)-1].split("(")
if ")" in p_3[len(p_3)-2]:
p_4=p_3[len(p_3)-2].split(")")
address.append(r[1]+p_4[len(p_4)-1])
else:
address.append(r[1]+p_3[len(p_3)-2])
#lab link
x_2=x[j].find("a")
c=x_2.get("href")
lablink.append("https://www.medifee.com"+c)
print(i+1)
# Data Conversion
column=["Test Name","Lab Name","Price","Address","City","Lab Link"]
df=pd.DataFrame(columns=column)
df
df["Test Name"]=pd.Series(testname)
df["Lab Name"]=pd.Series(labname)
df["Price"]=pd.Series(cost)
df["Address"]=pd.Series(address)
df["City"]=pd.Series(city)
df["Lab Link"]=pd.Series(lablink)
df
df.to_csv("medifeelabdata_6k_9k_2.csv")
| Medifee/Health Packages/Medifee_City_lab_Data_Extraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: aipr_dz04
# language: python
# name: aipr_dz04
# ---
# # Computer-Aided Analysis and Design - Lab Exercise 4: demo population.py
# ## Setup
# +
import os
CD_KEY = "--HW04_D02_IN_ROOT"
# +
if (
CD_KEY not in os.environ
or os.environ[CD_KEY] is None
or len(os.environ[CD_KEY]) == 0
or os.environ[CD_KEY] == "false"
):
# %cd ..
else:
print(os.getcwd())
os.environ[CD_KEY] = "true"
# -
# ## Loading packages
# +
import numpy as np
from src.evolution.function import Function
from src.evolution.population import Population
# -
# ## Initialization
# ### Formatting
np.set_printoptions(precision=2, suppress=True)
# ### Constants
wellness_function = Function(lambda x: np.mean(np.square(x)))
capacities = (5, 10)
# ### Populations
populations = [
Population(
wellness_function=wellness_function,
capacity=capacity
)
for capacity in capacities
]
# ## Demonstration
for population in populations:
print(population, end="\n\n")
for population in populations:
for _ in range(capacities[0]):
population.add(np.random.uniform(-1, 1, (3,)))
print(population, end="\n\n")
# If we try to add a specimen with `add` while the population is at full capacity, the worst specimen is removed first, and then the new one is inserted.
for population in populations:
population.add(np.array([1, 1, 1]))
print(population, end="\n\n")
# **Comment**: The default argument `remove_before` is set to `True`, which ensures that when the population is overfull the worst element is removed. However, this does not guarantee that the **best** element stays in the population. If we want to guarantee that as well, we must set `remove_before` to `False`: this allows temporary overpopulation, and only after all insertions are the surplus elements deleted, starting from the worst. We will demonstrate this behavior later.
# If we want to add elements to the population without automatically removing the surplus, we can use `append`.
for population in populations:
population.append(np.array([-1, -1, -1]))
print(population, end="\n\n")
# Surplus elements can be deleted by calling `cull` on the population.
for population in populations:
population.cull()
print(population, end="\n\n")
# The `cull` method has a default argument `n_additional` set to `0`. If we want to delete more or fewer elements than the number defined by the capacity, we can change this argument. For example, if we want 3 elements fewer than the capacity to remain, we can write
for population in populations:
population.cull(3)
print(population, end="\n\n")
# We can remove specific specimens using the `ban` method, if they exist in the population.
for population in populations:
population.ban(np.array([1, 1, 1]))
print(population, end="\n\n")
# For demonstration purposes, we will add one element back.
populations[0].add(np.array([1, 1, 1]))
print(populations[0])
# We can also remove a specific index from the population with `pop`. The default index is `-1`, i.e. the worst individual.
for population in populations:
population.pop(1)
print(population, end="\n\n")
# Similarly to `append`, we can add a collection of individuals using `assimilate`.
for population in populations:
population.assimilate(np.random.uniform(-1, 1, (10, 3)))
print(population, end="\n\n")
# For functionality analogous to `add`, we can add a collection with capacity checking using `invade`. First we will do this with `remove_before` set to `False`.
for population in populations:
population.invade(np.random.uniform(-1, 1, (10, 3)), remove_before=False)
print(population, end="\n\n")
# **Comment**: We see that the best individuals are preserved if they remained the best.
# If we apply this operation without the above change, preservation of the best element is not guaranteed.
for population in populations:
population.invade(np.random.uniform(-1, 1, (10, 3)))
print(population, end="\n\n")
# ### Elitism mechanics
# It is also possible to implement elitism, but then the population behaves somewhat differently, which may be undesirable. The default elitism value of a population is `0`, and it should not be a negative number (if the desired behavior is to remove the best individuals, that should be implemented in some other, clearer way).
elite_capacity = 5
elite_elitism = 1
elite_population = Population(
wellness_function=wellness_function,
capacity=elite_capacity,
elitism=elite_elitism
)
elite_population.assimilate(np.random.uniform(-1, 1, (5, 3)))
print(elite_population)
# If we tried to remove all elements, the best `elitism` individuals would remain.
elite_population.cull(elite_population.capacity)
print(elite_population)
# If we try to use `pop` in this case, it will return the requested element, but will not remove it from the population.
print(f"Requested element: {elite_population.pop()}\n")
print(elite_population)
# However, there is a special case not regulated by elitism: `ban`. When `ban` is called on the elite, it is even possible to violate the constraint on the minimum number of individuals.
elite_population.ban(elite_population[0])
print(elite_population)
# **Comment**: This behavior is intentional, so that there is a way to empty a population with elitism.
| dz/dz-04/demo/demo-02_population.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - turtle
# The onscreen pen that you use for drawing is called the turtle, and this is what gives the library its name.
#
# +
import turtle
turtle.shape("turtle")
turtle.color('#ff0000')
turtle.forward(50)
turtle.right(105)
turtle.forward(100)
turtle.right(145)
turtle.forward(100)
turtle.right(105)
turtle.forward(50)
turtle.begin_fill()
turtle.color('#ff6666')
turtle.right(110)
turtle.forward(70)
turtle.right(145)
turtle.forward(70)
turtle.right(110)
turtle.forward(25)
turtle.end_fill()
turtle.color('#cc6600')
turtle.circle(60)
turtle.color('#ff9966')
turtle.circle(40)
turtle.color('#ffff99')
turtle.circle(30)
turtle.color('#ffcc66')
turtle.circle(20)
turtle.color('#cccccc')
turtle.circle(10)
# -
| STEMies/Untitled Folder/STEM lecture .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Homework #5
# ### Task 1 (1.5 points)
#
# Create two arrays: the first should contain the even numbers from 2 to 12 inclusive, and the second the numbers 7, 11, 15, 18, 23, 29.
import numpy as np
A = np.array([2, 4, 6, 8, 10, 12])
B = np.array([7, 11, 15, 18, 23, 29])
A
# $1.$ Add the arrays together and square the elements of the resulting array:
(A+B)**2
# $2.$ Print all elements of the first array at the positions where the elements of the second array are greater than 12 and leave a remainder of 3 when divided by 5.
A[(B > 12) & (B % 5 == 3)]
# *3.* For the first array, find the remainders of division by 2, and for the second, by 3. For each resulting array, print its unique values (see seminar).
np.unique(A%2)
np.unique(B%3)
# ### Task 2 (1.5 points)
#
# Find a dataset that interests you. For example, you can pick a dataset [here](http://data.un.org/Explorer.aspx) or [here](https://hls.harvard.edu/library/research/find-a-database/). If errors pop up while reading the table, try googling them and fixing the problem!
#
# 1. Compute appropriate descriptive statistics for the features of the objects in the chosen dataset
# 2. Analyze the results and comment on them substantively
# - it is not enough to write phrases like "Mean age = 60 and minimum age = 20"; you should explain what this means and what conclusions can be drawn within the dataset
# 3. Write all comments strictly in Markdown cells
# - to do this, change the type of the current cell from Code to Markdown in the dropdown at the top
# 
dataset = np.loadtxt("untitled.txt", dtype=int)
dataset
# #### Mean:
np.mean(dataset[:, 1])
# #### Median:
np.median(dataset[:, 2])
# #### Range:
np.max(dataset[:, 0])-np.min(dataset[:, 0])
# #### Variance:
np.var(dataset[:, 3])
| homework.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
from datetime import date
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite", echo=False)
# +
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# -
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Exploratory Precipitation Analysis
# Find the most recent date in the data set.
recent = session.query(Measurement.date).order_by((Measurement.date).desc()).first()[0]
print(f'The most recent date in the data is {recent}')
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
startDate = dt.datetime.strptime(recent,"%Y-%m-%d").date()
# Calculate the date one year from the last date in data set.
endDate = startDate - dt.timedelta(days=365)
print(f'Most recent date: {startDate}\nOne year earlier: {endDate}')
# +
# Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date > endDate).\
order_by((Measurement.date).desc()).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(results, columns=['date', 'prcp'])
# Use Pandas Plotting with Matplotlib to plot the data
dates = df['date']
x = [dt.datetime.strptime(d,"%Y-%m-%d").date() for d in dates]
y = df['prcp']
fig, ax = plt.subplots()
ax.bar(x, y, 6)
ax.set_xlabel("Date")
ax.set_ylabel("Precipitation")
ax.set_title("Year of Precipitation")
ax.legend(["precipitation"])
plt.xticks(rotation=90)
# -
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
# # Exploratory Station Analysis
session.query(Station).first().__dict__
session.query(Measurement).first().__dict__
# Design a query to calculate the total number of stations in the dataset
stationCount = session.query(func.count(Station.id)).all()
print(f'Total number of stations: {stationCount[0][0]}')
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by((func.count(Measurement.station)).desc()).all()
# +
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
station_mostActive = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by((func.count(Measurement.station)).desc()).first()[0]
station_df = pd.DataFrame(session.query(Measurement.tobs).\
filter(Measurement.station == station_mostActive).all(), columns=['tobs'])
print(f'Min: {station_df.min()[0]}')
print(f'Max: {station_df.max()[0]}')
print(f'Avg: {station_df.mean()[0]}')
# +
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
tobs_df = pd.DataFrame(session.query(Measurement.date, Measurement.tobs).\
filter(Measurement.date > endDate).\
filter(Measurement.station == station_mostActive).all(), columns=['date','temperature'])
tobs_df.plot.hist(bins=12)
plt.xlabel('Temperature')
# -
# # Close session
# Close Session
session.close()
| climate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NumPy Indexing and Selection
#
# In this exercise we will focus on how to select elements or groups of elements from an array.
# ### Bracket Indexing and Selection
# The simplest way to pick one or some elements of an array looks very similar to python lists:
# +
# Create the following
import numpy as np
arr = np.arange(0,12,1)
arr
# -
# Using arange
arr[3]
# +
# Selecting an element
# -
arr[0:5]
# +
#Get values in a range
# -
arr[1:5]
# +
#Get values in a range
# -
# ## Broadcasting
#
# Numpy arrays differ from a normal Python list because of their ability to broadcast:
# +
x = arr.view()
x[0:5] = 100
x
# -
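The slice assignment above broadcasts a scalar across several positions. Broadcasting also applies between arrays of different shapes; a small sketch:

```python
import numpy as np

a = np.arange(3).reshape(3, 1)   # shape (3, 1)
b = np.arange(4).reshape(1, 4)   # shape (1, 4)

# Shapes (3, 1) and (1, 4) broadcast to (3, 4):
# each dimension of size 1 is stretched to match the other array.
result = a + b
print(result.shape)   # (3, 4)
print(result[2, 3])   # 2 + 3 = 5
```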
#Setting a value with index range (Broadcasting)
x = np.arange(0,12,1)
x
# +
# Reset array
# -
x[0:6]
# +
#Slice the array
#Show slice
# -
# I am not sure about the below: did you want to copy or view the initial array, and then modify it in order to see the modifications in the original array? (In this case, a view.)
x = arr.view()
x[0:6] = 99
x[0:6]
# +
#Change Slice
#Show Slice again
# +
# Now note the changes also occur in our original array!
# -
arr
# +
# Original array changed as well
# -
# ## Use copy() to get a copy of above array
y = arr.copy()
y
# +
# Copy of array
# -
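A compact way to confirm the view-versus-copy distinction above is `np.shares_memory`; a small self-contained sketch:

```python
import numpy as np

arr = np.arange(12)

v = arr.view()   # shares the same underlying buffer as arr
c = arr.copy()   # owns an independent buffer

v[0] = 99        # visible through arr, because v is a view
c[1] = 77        # does NOT affect arr, because c is a copy

print(np.shares_memory(arr, v))  # True
print(np.shares_memory(arr, c))  # False
print(arr[0], arr[1])            # 99 1
```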
# ## Indexing a 2D array (matrices)
#
# The general format is **arr_2d[row][col]** or **arr_2d[row,col]**. I recommend usually using the comma notation for clarity.
arr_2d = np.array([np.arange(5,20,5),np.arange(20,35,5)])
arr_2d
# +
w = np.array(np.arange(5,50,5))
t = np.reshape(w, (3,3))
print(t.ndim)
# +
# Create a 2-D matrix
#Show
# -
arr_2d[1]
# +
# Print the second row
# -
arr_2d[1][0]
# +
# Getting individual element value
# -
arr_2d[1,0]
# +
# Getting individual element value using comma notation
# -
arr_2d[0:2,-2:3] # that is not super intuitive
arr_2d[:2,1:] # that is not super intuitive
# +
# 2D array slicing
#Shape (2,2) from top right corner
# +
a = np.arange(5,50,5)
z = np.reshape(a, (3,3))
z
# -
z[2]
# +
#Shape bottom row
# -
# ### Fancy Indexing
#
# Fancy indexing allows you to select entire rows or columns out of order,to show this, let's quickly build out a numpy array:
b = np.zeros((10,10))
b
# +
#Set up matrix
# -
len(b)
# +
#Length of array
# +
for i in range(0, 10):
    b[i] = i  # fill row i with the value i
b
# +
#Set up array
#Loop through the array and set the value of index = i
# +
filter_b = b % 2 == 0
new_b = b[filter_b]
new_b
# +
filter_b = b[:,0]%2 == 0
new_b = b[filter_b]
new_b
# +
# Extract
# +
import random
b = []
for i in range (0,4):
a = random.randint(1,10)
np_array_1d = np.array([a])
b.append(np_array_1d)
c = np.repeat(b,10,1)
c
# +
#Allows in any order
# -
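A cleaner illustration of out-of-order row selection, rebuilding the same 10×10 matrix so the cell is self-contained:

```python
import numpy as np

b = np.zeros((10, 10))
for i in range(10):
    b[i] = i  # each row is filled with its own index

# Fancy indexing: select whole rows in any order, even repeated
print(b[[6, 4, 2, 7]][:, 0])   # [6. 4. 2. 7.]
print(b[[9, 9, 0]][:, 0])      # [9. 9. 0.]
```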
# # Good luck!
# Try doing OOP exercises on your own, ping me if you face any problems.
| Exercises - Qasim/Python. Pandas, Viz/Exercise 10 - Numpy (Indexing) - answer K.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# LightGBM install: use conda: https://anaconda.org/conda-forge/lightgbm
# StratifiedKFold: This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class.
# KFold: Provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds (without shuffling by default).
# -
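As a quick check of the `StratifiedKFold` behaviour described in the comments above, here is a small sketch with made-up data (not part of the competition dataset):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Imbalanced toy labels: 8 negatives, 2 positives
X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])

# StratifiedKFold preserves the class ratio in each fold:
# with 2 splits, each validation fold gets exactly one positive.
skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)
for fold, (trn_idx, val_idx) in enumerate(skf.split(X, y)):
    print(fold, int(y[val_idx].sum()))  # one positive per fold
```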
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import numpy as np
import pandas as pd
import os
import lightgbm as lgb
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.metrics import roc_auc_score
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings('ignore')
plt.style.use('seaborn')
sns.set(font_scale=1)
os.chdir('/Users/hanbosun/Documents/GitHub/TrasactionPrediction/')
random_state = 42
np.random.seed(random_state)
df_train = pd.read_csv('input/train.csv')
df_test = pd.read_csv('input/test.csv')
df_train = df_train.iloc[:2000,:]
def augment(x,y,t=2):
xs,xn = [],[]
for i in range(t):
mask = y>0
x1 = x[mask].copy()
ids = np.arange(x1.shape[0])
for c in range(x1.shape[1]):
np.random.shuffle(ids)
x1[:,c] = x1[ids][:,c]
xs.append(x1)
for i in range(t//2):
mask = y==0
x1 = x[mask].copy()
ids = np.arange(x1.shape[0])
for c in range(x1.shape[1]):
np.random.shuffle(ids)
x1[:,c] = x1[ids][:,c]
xn.append(x1)
xs = np.vstack(xs)
xn = np.vstack(xn)
ys = np.ones(xs.shape[0])
yn = np.zeros(xn.shape[0])
x = np.vstack([x,xs,xn])
y = np.concatenate([y,ys,yn])
return x,y
# +
# https://www.kaggle.com/jiweiliu/lgb-2-leaves-augment#500381
# thanks to @vishalsrinirao
def shuffle_col_vals(x1):
rand_x = np.array([np.random.choice(x1.shape[0], size=x1.shape[0], replace=False) for i in range(x1.shape[1])]).T
grid = np.indices(x1.shape)
rand_y = grid[1]
return x1[(rand_x, rand_y)]
def augment_fast1(x,y,t=2):
xs,xn = [],[]
for i in range(t):
mask = y>0
x1 = x[mask].copy()
x1 = shuffle_col_vals(x1)
xs.append(x1)
for i in range(t//2):
mask = y==0
x1 = x[mask].copy()
x1 = shuffle_col_vals(x1)
xn.append(x1)
xs = np.vstack(xs); xn = np.vstack(xn)
ys = np.ones(xs.shape[0]);yn = np.zeros(xn.shape[0])
x = np.vstack([x,xs,xn]); y = np.concatenate([y,ys,yn])
return x,y
# +
# https://stackoverflow.com/questions/50554272/randomly-shuffle-items-in-each-row-of-numpy-array
def disarrange(a, axis=-1):
"""
Shuffle `a` in-place along the given axis.
Apply numpy.random.shuffle to the given axis of `a`.
Each one-dimensional slice is shuffled independently.
"""
b = a.swapaxes(axis, -1)
# Shuffle `b` in-place along the last axis. `b` is a view of `a`,
# so `a` is shuffled in place, too.
shp = b.shape[:-1]
for ndx in np.ndindex(shp):
np.random.shuffle(b[ndx])
return
def augment_fast2(x,y,t=2):
xs,xn = [],[]
for i in range(t):
mask = y>0
x1 = x[mask].copy()
disarrange(x1,axis=0)
xs.append(x1)
for i in range(t//2):
mask = y==0
x1 = x[mask].copy()
disarrange(x1,axis=0)
xn.append(x1)
xs = np.vstack(xs)
xn = np.vstack(xn)
ys = np.ones(xs.shape[0])
yn = np.zeros(xn.shape[0])
x = np.vstack([x,xs,xn])
y = np.concatenate([y,ys,yn])
return x,y
# -
lgb_params = {
"objective" : "binary",
"metric" : "auc",
"boosting": 'gbdt',
"max_depth" : -1,
"num_leaves" : 13,
"learning_rate" : 0.01,
"bagging_freq": 5,
"bagging_fraction" : 0.4,
"feature_fraction" : 0.05,
"min_data_in_leaf": 80,
    "min_sum_hessian_in_leaf": 10,
"tree_learner": "serial",
"boost_from_average": "false",
#"lambda_l1" : 5,
#"lambda_l2" : 5,
"bagging_seed" : random_state,
"verbosity" : 1,
"seed": random_state
}
# +
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
oof = df_train[['ID_code', 'target']].copy()
oof['predict'] = 0
predictions = df_test[['ID_code']].copy()
val_aucs = []
feature_importance_df = pd.DataFrame()
features = [col for col in df_train.columns if col not in ['target', 'ID_code']]
X_test = df_test[features].values
# -
for fold, (trn_idx, val_idx) in enumerate(skf.split(df_train, df_train['target'])):
X_train, y_train = df_train.iloc[trn_idx][features], df_train.iloc[trn_idx]['target']
X_valid, y_valid = df_train.iloc[val_idx][features], df_train.iloc[val_idx]['target']
N = 5
p_valid,yp = 0,0
for i in range(N):
        # X_t, y_t = augment(X_train.values, y_train.values)
        # X_t, y_t = augment_fast1(X_train.values, y_train.values)
        X_t, y_t = augment_fast2(X_train.values, y_train.values)
X_t = pd.DataFrame(X_t)
X_t = X_t.add_prefix('var_')
trn_data = lgb.Dataset(X_t, label=y_t)
val_data = lgb.Dataset(X_valid, label=y_valid)
evals_result = {}
lgb_clf = lgb.train(lgb_params,
trn_data,
100000,
valid_sets = [trn_data, val_data],
early_stopping_rounds=3000,
verbose_eval = 1000,
evals_result=evals_result
)
p_valid += lgb_clf.predict(X_valid)
yp += lgb_clf.predict(X_test)
fold_importance_df = pd.DataFrame()
fold_importance_df["feature"] = features
fold_importance_df["importance"] = lgb_clf.feature_importance()
fold_importance_df["fold"] = fold + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
    oof.loc[val_idx, 'predict'] = p_valid / N
val_score = roc_auc_score(y_valid, p_valid)
val_aucs.append(val_score)
predictions['fold{}'.format(fold+1)] = yp/N
# +
mean_auc = np.mean(val_aucs)
std_auc = np.std(val_aucs)
all_auc = roc_auc_score(oof['target'], oof['predict'])
print("Mean auc: %.9f, std: %.9f. All auc: %.9f." % (mean_auc, std_auc, all_auc))
cols = (feature_importance_df[["feature", "importance"]]
.groupby("feature")
.mean()
.sort_values(by="importance", ascending=False)[:1000].index)
best_features = feature_importance_df.loc[feature_importance_df.feature.isin(cols)]
plt.figure(figsize=(14,26))
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance",ascending=False))
plt.title('LightGBM Features (averaged over folds)')
plt.tight_layout()
plt.savefig('lgbm_importances.png')
# submission
predictions['target'] = np.mean(predictions[[col for col in predictions.columns if col not in ['ID_code', 'target']]].values, axis=1)
predictions.to_csv('lgb_all_predictions.csv', index=None)
sub_df = pd.DataFrame({"ID_code":df_test["ID_code"].values})
sub_df["target"] = predictions['target']
sub_df.to_csv("lgb_submission.csv", index=False)
oof.to_csv('lgb_oof.csv', index=False)
| code/.ipynb_checkpoints/benchmark0.901-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="mLjN-UNzR7Sq"
# Setup: These commands need to be run before using our program.
# + id="fienv8YmR33s"
# !pip install pytorch_lightning
# !pip install torchsummaryX
# !pip install webdataset
# !git clone --branch master https://github.com/McMasterAI/Radiology-and-AI.git
# !git clone https://github.com/black0017/MedicalZooPytorch.git
# + [markdown] id="muAuDLkxSKV9"
# We can get set-up with Google Colab if were using it.
# + id="SZKKEduBSJ6w"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + id="m-r2Ki5nTBJJ"
# cd drive/MyDrive/MacAI
# + [markdown] id="lmmHx4NYTUF4"
# Imports
# + id="tnMPgjbKTC8h"
import sys
sys.path.append('./Radiology-and-AI/Radiology_and_AI')
sys.path.append('./MedicalZooPytorch')
import os
import torch
import numpy as np
from torch.utils.data import Dataset, DataLoader, random_split
from pytorch_lightning.loggers import WandbLogger, TensorBoardLogger
import pytorch_lightning as pl
import sys
import nibabel as nb
from skimage import transform
import matplotlib.pyplot as plt
import webdataset as wds
from collators.brats_collator import col_img
from lightning_modules.segmentation import TumourSegmentation
from scipy.interpolate import RegularGridInterpolator
from scipy.ndimage.filters import gaussian_filter
from time import time
# + [markdown] id="QwqglKvgTZZR"
# Loading datasets.
# Because neuroimages are really large files, we've decided to use the webdataset library to handle them during training. Essentially, we create a zip file representing our dataset and store them in some file path. However, we can work with any PyTorch dataset object (check PyTorch dataset documentation for details).
# + id="spU-XeiFT3AL"
train_dataset = wds.Dataset("macai_datasets/brats/train/brats_train.tar.gz")
eval_dataset = wds.Dataset("macai_datasets/brats/validation/brats_validation.tar.gz")
# + [markdown] id="gMGMCMzwU67B"
# To modify/load in the dataset, we use a *collator function* which is also imported (called col_img). You should create a lambda function only taking in the DataLoader batch as an argument, and using whatever arguments you want afterwards. This sounds complex, so just check the next examples:
#
# A few notes:
# - Image augmentations randomly change training images, to artificially increase the sample size by a bit. The available augmentations, demonstrated to be most effective in literature, are the power-law transformation and elastic transformation. However, elastic transformation is relatively slow as of now. Set the augmentation probabilities (pl_prob and elastic_prob) to 0 during evaluation, but you can set them between 0 and 1 for training.
# - Image normalization is used to make the image intensity distributions more similar. We currently support two types: Nyul normalization and Z-score normalization. To use Z-score normalization, set use_zscore to True. To use Nyul normalization, the *standard_scales* and *percs* have to be trained first (more details later)
#
# Note: both Nyul normalization and Z-score normalization will normalize based on the non-background (black) pixels of the entire image, including the tumor region.
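For illustration, here is a minimal sketch of z-score normalization over non-background voxels, assuming background voxels are exactly zero; this is an assumption-laden toy, not the repo's actual `col_img` implementation:

```python
import numpy as np

def zscore_nonbackground(volume, background=0.0):
    """Z-score normalize a volume using only non-background voxels.

    The mean and standard deviation are computed over voxels that are
    not background (black), then the transform is applied everywhere.
    """
    mask = volume != background
    mean = volume[mask].mean()
    std = volume[mask].std()
    return (volume - mean) / std

# Toy volume: half background, half signal
vol = np.zeros((4, 4, 4), dtype=np.float32)
vol[2:] = np.random.default_rng(0).uniform(50, 100, size=(2, 4, 4))
norm = zscore_nonbackground(vol)
# Non-background voxels now have approximately zero mean and unit std
print(norm[vol != 0].mean(), norm[vol != 0].std())
```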
# + id="q7YRJmylU5ZO"
training_collator_function = lambda batch: col_img(batch, to_tensor=True, nyul_params=None, use_zscore=True, pl_prob=0.5, elastic_prob=0)
# + id="0Kz7Q1rpXNjy"
eval_collator_function = lambda batch: col_img(batch, to_tensor=True, nyul_params=None, use_zscore=True, pl_prob=0, elastic_prob=0)
# + [markdown] id="MOm66YuoXeZo"
# Nyul normalization can be trained using the training dataset. We first create a dataloader that uses a collator function that makes no changes to the image, then feed it to an imported nyul_train_dataloader function. While this currently ignores the segmented region and background (for more accurate use in radiomics), we will create an option to also take into account the segmented region (as we won't have access to a segmentation before performing automated segmentation).
# + id="Wfh9ia72X7gb"
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=5, collate_fn=lambda batch: col_img(batch, to_tensor=False))
# NOTE: nyul_train_dataloader must be imported first; it is assumed to come from the repo's normalization utilities
standard_scales, percss = nyul_train_dataloader(train_dataloader, step = 5)
# + [markdown] id="4a_iN2jXaNN9"
# After training, we can apply Nyul normalization to our dataset.
# + id="5VnWBly7aDx9"
nyul_collator_function = lambda batch: col_img(batch, to_tensor=True, nyul_params={'percs': percss, 'standard_scales': standard_scales})
# + [markdown] id="AtYAju4MUCDM"
# We have a lot going on in this line.
# Our model is a PyTorch Lightning model called TumourSegmentation, which we import above. This instantiates a new instance of the model, and is used during training.
# - The learning rate controls how quickly the model learns. Too high, and the model won't converge; too low, and it will take too long to train.
# - The collator is described previously.
# - The train_dataset is what we train the model using, and the eval_dataset is to ensure that our model is truly learning (rather than memorizing the train_dataset).
# - batch_size has to be set to the number of images in each series (including the segmentation image). In this case, we have 4 (T1, T2, T1ce, T1 FLAIR) plus a segmentation, to make a total of 5.
#
# + id="Hb72wEgyT_3D"
model = TumourSegmentation(learning_rate = 4e-4, collator=training_collator_function, batch_size=5, train_dataset=train_dataset, eval_dataset=eval_dataset)
# + [markdown] id="sZ3udcmzZezT"
# This code deals with training. We can check tensorboard to see how well it's been running after training; you can also use any other type of logger. I use tensorboard here, but there exists another (WandB) that handles automatic updating on Colab.
# + id="GA6zv9gGZcGi"
# %load_ext tensorboard
# + id="S1fasCzOZZGq"
#Training
#wandb_logger = WandbLogger(project='macai',name='test_run', offline = True)
tensorboard_logger = TensorBoardLogger("logs/")
trainer = pl.Trainer(
accumulate_grad_batches = 1,
gpus = 1,
max_epochs = 10,
precision=16,
check_val_every_n_epoch = 1,
logger = tensorboard_logger,
log_every_n_steps=10,
)
trainer.fit(model)
# + id="2yY14W3RZdFs"
# %tensorboard --logdir logs/
# + [markdown] id="Y7xR-qZ3ZxGs"
# The trainer automatically creates checkpoints, but we can interrupt the trainer and save a checkpoint like so whenever we wish.
# + id="JP05cPb_Zyuh"
trainer.save_checkpoint("last_ckpt.ckpt")
# + [markdown] id="qv1lcol5aVoj"
# Finally, it is possible to load saved models and to see the outputs. We can either visualize this in a Python notebook, or by saving the segmentation somewhere and visualizing it using a neuroimaging software (I use 3D Slicer, but I think anything will do).
# + id="xRCEfRG_afOo"
# Load the model
model = TumourSegmentation.load_from_checkpoint('last_ckpt.ckpt').cuda().half()
i=0
for z in model.val_dataloader():
print('======================================================')
    prediction = model.forward(torch.unsqueeze(z[0], axis=0).cuda().half())
    # Convert to a NumPy array before saving (nibabel cannot serialize a torch tensor)
    prediction = prediction[0].cpu().detach().numpy().astype('float32')
    # Save predictions to file for further visualization
    prediction_img = nb.Nifti1Image(prediction, np.eye(4))
    nb.save(prediction_img, 'prediction_'+str(i)+'.nii.gz')
    # Simple visualization of a slice, but we can use Cameron's visualization method
    # for improvements to this process.
    sl = z[1][0, :, 100]
    plt.title('Label')
    plt.imshow(sl, vmin = 0, vmax=4)
    plt.show()
plt.title('Prediction core')
plt.imshow(prediction[0, :, 100], vmin = 0, vmax=1)
plt.show()
plt.title('Prediction enhancing')
plt.imshow(prediction[1, :, 100], vmin = 0, vmax=1)
plt.show()
plt.title('Prediction edema')
plt.imshow(prediction[2, :, 100], vmin = 0, vmax=1)
plt.show()
i += 1
if i >= 10:
break
| Notebooks/old/annotated_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python3
import configparser
import os
import subprocess
import sys
from urllib.request import urlretrieve
CURDIR = "/home/ubuntu/workspace/bert-japanese/src" #os.path.dirname(os.path.abspath(__file__))
CONFIGPATH = os.path.join(CURDIR, os.pardir, 'config.ini')
config = configparser.ConfigParser()
config.read(CONFIGPATH)
FILEURL = config['DATA']['FILEURL']
FILEPATH = config['DATA']['FILEPATH']
EXTRACTDIR = config['DATA']['TEXTDIR']
def reporthook(blocknum, blocksize, totalsize):
'''
Callback function to show progress of file downloading.
'''
readsofar = blocknum * blocksize
if totalsize > 0:
percent = readsofar * 1e2 / totalsize
s = "\r%5.1f%% %*d / %d" % (
percent, len(str(totalsize)), readsofar, totalsize)
sys.stderr.write(s)
if readsofar >= totalsize: # near the end
sys.stderr.write("\n")
else: # total size is unknown
sys.stderr.write("read %d\n" % (readsofar,))
def download():
urlretrieve(FILEURL, FILEPATH, reporthook)
def extract():
subprocess.call(['python3',
os.path.join(CURDIR, os.pardir, os.pardir,
'wikiextractor', 'WikiExtractor.py'),
FILEPATH, "-o={}".format(EXTRACTDIR)])
def main():
download()
extract()
# -
EXTRACTDIR
| src/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Schedule Generation
import sys
sys.path.append("..")
sys.path.append("..\\..")
import numpy as np
import matplotlib.pyplot as plt
from financepy.finutils.FinCalendar import FinDayAdjustTypes, FinDateGenRuleTypes
from financepy.finutils.FinSchedule import FinSchedule
from financepy.finutils.FinFrequency import FinFrequencyTypes
from financepy.finutils.FinCalendar import FinCalendarTypes
from financepy.finutils.FinDate import FinDate
startDate = FinDate(4, 8, 2016)
endDate = FinDate(1, 5, 2022)
frequencyType = FinFrequencyTypes.ANNUAL
calendarType = FinCalendarTypes.WEEKEND
busDayAdjustType = FinDayAdjustTypes.FOLLOWING
dateGenRuleType = FinDateGenRuleTypes.BACKWARD
schedule = FinSchedule(startDate, endDate, frequencyType, calendarType, busDayAdjustType, dateGenRuleType)
schedule.print()
# The first date is the previous coupon date. The next date is the next coupon date after today.
frequencyType = FinFrequencyTypes.SEMI_ANNUAL
calendarType = FinCalendarTypes.WEEKEND
busDayAdjustType = FinDayAdjustTypes.FOLLOWING
dateGenRuleType = FinDateGenRuleTypes.BACKWARD
schedule = FinSchedule(startDate, endDate, frequencyType, calendarType, busDayAdjustType, dateGenRuleType)
schedule.print()
frequencyType = FinFrequencyTypes.SEMI_ANNUAL
calendarType = FinCalendarTypes.TARGET
busDayAdjustType = FinDayAdjustTypes.FOLLOWING
dateGenRuleType = FinDateGenRuleTypes.BACKWARD
schedule = FinSchedule(startDate, endDate, frequencyType, calendarType, busDayAdjustType, dateGenRuleType)
schedule.print()
| examples/finutils/FINUTILS_ScheduleGeneration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# Fill in any place that says `# YOUR CODE HERE` or YOUR ANSWER HERE, as well as your name and collaborators below.
# Grading for pre-lecture assignments is all or nothing. Partial credit is available for in-class assignments and checkpoints, but **only when code is commented**.
# -
NAME = ""
COLLABORATORS = ""
# ---
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "3ef11e599ccd171cfe59543453bc6486", "grade": false, "grade_id": "cell-ec0c8f83ffb0d9c7", "locked": true, "schema_version": 3, "solution": false}
# # Learning Objectives
#
# This lecture will show you how to:
# 1. Make 1-D graphs using *Matplotlib*
# 2. Customize the appearance of your graphs
# 3. Create figures with multiple subplots
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "1a83a543fa8de564025fbe4651fd4bc9", "grade": false, "grade_id": "cell-abd1b2cca923116d", "locked": true, "schema_version": 3, "solution": false}
# imports
import numpy as np
import matplotlib.pyplot as plt # plotting
import grading_helper as _test
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "87a23df8e7ebc4e654cdbd30c2fc2208", "grade": false, "grade_id": "cell-fd4c445965bbd3eb", "locked": true, "schema_version": 3, "solution": false}
# # Plotting a 1-D Array
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "6c44358fb96b71f00da41880777227a9", "grade": false, "grade_id": "cell-2134bbe60932de24", "locked": true, "schema_version": 3, "solution": false}
# %video Cia8UYhaEfE
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "a8b95bed31fcd0c671c559bde7f66293", "grade": false, "grade_id": "cell-5fa018e5c6f22714", "locked": true, "schema_version": 3, "solution": false}
# Summary:
# - Typical import statement: `import matplotlib.pyplot as plt`
# - Make a line plot using `plt.plot(x, y)` then show it with `plt.show()`
# -
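# The summary above in one minimal, self-contained sketch (the `Agg` backend is assumed here only so the example runs headless):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs without a display
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
line, = plt.plot(x, np.sin(x))  # make the line plot
plt.show()                      # then show it
```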
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "c4dca94c523426925a49a977e01e7b49", "grade": false, "grade_id": "cell-ba6487f4150fc6a6", "locked": true, "schema_version": 3, "solution": false}
# # What are `*args` and `**kwargs`?
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "99232bf7dfe2f4f9fe216364669444d0", "grade": false, "grade_id": "cell-30d9cb8e46897fad", "locked": true, "schema_version": 3, "solution": false}
# %video f4G2N5GTZ18
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "d56196e28ba14ca3d1fdf25e2ba2cc27", "grade": false, "grade_id": "cell-ff287146601d93ad", "locked": true, "schema_version": 3, "solution": false}
# Summary:
# - `*args` means "any number of positional arguments."
# - `**kwargs` means "any number of keyword arguments."
# -
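# A small sketch of a function that accepts both forms (`describe` is an illustrative name, not a real API): `args` arrives as a tuple of positional arguments, `kwargs` as a dict of keyword arguments.

```python
def describe(*args, **kwargs):
    # args is a tuple; kwargs is a dict keyed by the keyword names
    return len(args), sorted(kwargs)

print(describe(1, 2, 3, color="red", linestyle="--"))  # → (3, ['color', 'linestyle'])
```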
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "6b988997f78bc6338876e48869db52c9", "grade": false, "grade_id": "cell-f1a1d118b438df37", "locked": true, "schema_version": 3, "solution": false}
# # Customizing Your Plots
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "dd4aece02510441656ef8e338f0b9fb4", "grade": false, "grade_id": "cell-a23884cd333263e6", "locked": true, "schema_version": 3, "solution": false}
# %video 5d2nxZ607Kg
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "8b10bce4836792830ef6e21443acd3c8", "grade": false, "grade_id": "cell-ba03145392faefc7", "locked": true, "schema_version": 3, "solution": false}
# Summary:
# - Set `color` and `linestyle` using keyword arguments.
# - A format string can be used as a shortcut: `plt.plot(x, y, "g--")` makes a green dashed line.
#
#
# - Set the limits with `plt.xlim` and `plt.ylim`
# - Adjust figure size with `plt.figure(figsize=(width, height))`
# - Add a title with `plt.title` and label the axes with `plt.xlabel` and `plt.ylabel`
#
#
# - Add a legend with `plt.legend` (give each plot a `label` keyword).
# - Add a grid with `plt.grid`
# - Add horizontal and vertical lines with `plt.axhline` and `plt.axvline`
# -
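# The customizations listed above combined in one sketch (the data and styling values are arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this example
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
fig = plt.figure(figsize=(6, 4))                       # figure size
plt.plot(x, np.sin(x), "g--", label="sin")             # format-string shortcut
plt.plot(x, np.cos(x), color="red", linestyle=":", label="cos")  # keyword form
plt.xlim(0, 10)                                        # axis limits
plt.title("Trig functions")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()                                           # uses the label keywords
plt.grid(True)
plt.axhline(0, color="black")                          # horizontal reference line
```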
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "b2e3c8e32c712c64aad9a94d09015b7b", "grade": false, "grade_id": "cell-6587c24a92522869", "locked": true, "schema_version": 3, "solution": false}
# ## Your Turn
#
# Plot something using `plt.plot` and `plt.show`. Use a red line. Add a title to your plot with `plt.title`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "f826bebb2ff5ee8876e324befb62f5b3", "grade": false, "grade_id": "cell-dbbd0e0871dc068a", "locked": false, "schema_version": 3, "solution": true}
# %%graded # 2 points
# YOUR CODE HERE
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "865d82d42a54dd7bb8c605edd5e67352", "grade": true, "grade_id": "cell-1220eaf0a0723282", "locked": true, "points": 2, "schema_version": 3, "solution": false}
# %%tests
_test.code_contains("plot", "show")
_test.code_contains("r(ed)?") # matches "r" or "red"
_test.plot_has("title")
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "a8edbb9450d0b07b891acb1bfc3040af", "grade": false, "grade_id": "cell-268b53af66fc7346", "locked": true, "schema_version": 3, "solution": false}
# # Subplots
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "12e4bf447c364f0d8f6146db6b0fb1ed", "grade": false, "grade_id": "cell-74fb87bb352821a6", "locked": true, "schema_version": 3, "solution": false}
# %video ubIu2rJekRU
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "c0c3057099012495463ede7cc01be5a8", "grade": false, "grade_id": "cell-b371c2e0edd23a7c", "locked": true, "schema_version": 3, "solution": false}
# Summary:
# - Create multiple figures by calling `plt.figure` for each.
# - Create one figure with multiple subplots using `plt.subplot`, `plt.subplots`, or `plt.subplot2grid`
# - Matplotlib's object-oriented interface offers an alternate way to control formatting. See https://matplotlib.org/3.1.1/tutorials/introductory/lifecycle.html
# -
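# A small grid sketch using `plt.subplots` and the object-oriented interface mentioned above (grid shape and size are arbitrary choices for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this example
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, figsize=(6, 6))  # one figure, four subplots
x = np.linspace(0, 1, 50)
for k, ax in enumerate(axes.flat):
    ax.plot(x, x ** (k + 1))       # each panel plots a different power of x
    ax.set_title(f"x^{k + 1}")
fig.tight_layout()
```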
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "10e7e1660dae852144d9df2278c7aa33", "grade": false, "grade_id": "cell-cebf95be7698fe25", "locked": true, "schema_version": 3, "solution": false}
# ## Your Turn
#
# Create a figure with four subplots arranged in a 2$\times$2 grid. Set the figure size to 8"$\times$8".
# + deletable=false nbgrader={"cell_type": "code", "checksum": "10907582505b5d08eb3d7d1dda80827b", "grade": false, "grade_id": "cell-a9fbe644e3861a32", "locked": false, "schema_version": 3, "solution": true}
# %%graded # 3 points
# YOUR CODE HERE
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "f6cab66d58cbc45c33e234a256f3cdac", "grade": true, "grade_id": "cell-714e7c270b1a0b3a", "locked": true, "points": 3, "schema_version": 3, "solution": false}
# %%tests
_plot = _test.get_plot()
_test.code_contains("subplot")
_test.equal(_plot.get_figwidth(), 8) # check figure size
_test.equal(_plot.get_figheight(), 8) #
_test.plot_shown()
_test.equal(len(_plot.axes), 4) # are there exactly 4 subplots?
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "03e2f5c8c4e8a71221f4a0b176e58417", "grade": false, "grade_id": "cell-00efdf6cef2b715a", "locked": true, "schema_version": 3, "solution": false}
# # Additional Resources
#
# - Textbook sections 3.1, and 3.2
#
# In the textbook you'll see `import pylab`, but that name is deprecated. We will instead use `import matplotlib.pyplot as plt`. All of the function names are the same.
| Assignments/03.3 Lecture - Basic Plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Imports
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import datatable as dt
test_df = dt.fread('dataset/test.csv').to_pandas()
test_df['SalePrice'] = -1
train_df = dt.fread('dataset/train.csv').to_pandas()
# dataset = pd.concat([train_df, test_df], axis=0)  # unused: immediately overwritten below
dataset = test_df.copy()
# +
# PRE-PROCESSING
# Replace NaNs with the string 'None' for categorical columns
na_cols = ['Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2',
'FireplaceQu', 'GarageCond', 'GarageFinish', 'GarageQual', 'GarageType', 'Fence',
'MiscFeature', 'PoolQC']
for col in na_cols:
dataset[col] = dataset[col].fillna('None')
nan_cols = ['TotalBsmtSF', 'GarageArea', 'MasVnrArea']
for col in nan_cols:
dataset[col] = dataset[col].fillna(0)
# Map quality categories to ordinal values, replacing NaNs with 0
qual_map = {'Ex': 5,'Gd': 4,'TA': 3,'Fa': 2,'Po': 1, 'NA': 0}
qual_cols = ['BsmtQual', 'BsmtCond', 'FireplaceQu', 'HeatingQC', 'GarageQual', 'GarageCond',
'ExterQual', 'ExterCond', 'KitchenQual', 'PoolQC']
for col in qual_cols:
dataset[col] = dataset[col].map(qual_map).fillna(0)
# +
numerical_cols = ['LotArea', 'LotFrontage', 'OverallQual', 'OverallCond', 'YearRemodAdd', 'MasVnrArea',
'BsmtQual', 'BsmtCond', 'TotalBsmtSF', 'HeatingQC', 'GrLivArea', 'TotRmsAbvGrd', 'Fireplaces',
'GarageArea', 'GarageQual', 'GarageCond', 'GarageYrBlt', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch',
'1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtFullBath', 'BsmtHalfBath',
'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'BsmtUnfSF', 'GarageCars']
dataset = dataset[numerical_cols]
# -
categorical_cols = ['MSSubClass', 'MSZoning', 'Alley', 'Street', 'LotConfig', 'LotShape',
'LandContour', 'LandSlope', 'Neighborhood', 'BldgType', 'RoofStyle',
'Foundation', 'CentralAir', 'GarageType', 'GarageFinish']
ohe = pd.get_dummies(test_df[categorical_cols])
dataset = dataset.join(ohe)
test_cols = dataset.columns.to_list()
import joblib
training_cols = joblib.load('models/training_cols.pkl')
for col in training_cols:
if col not in test_cols:
dataset[col] = 0
# +
from sklearn.preprocessing import MinMaxScaler
from sklearn import metrics
scaler = joblib.load("models/scaler.pkl")
scaled_df = scaler.transform(dataset)
scaled_df = pd.DataFrame(scaled_df, columns=test_cols)
# -
regressor = joblib.load("models/xgb.pkl")
y_pred = regressor.predict(scaled_df)
y_test = dt.fread("dataset/submission_7.csv").to_pandas()['SalePrice'].values
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(np.log(y_test), np.log(y_pred))))
output = pd.DataFrame({'Id': test_df.Id, 'SalePrice': y_pred})
output.to_csv('dataset/submission_9.csv', index=False)
# + jupyter={"outputs_hidden": true}
dataset.columns.to_list()
# -
| model_output.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loops
# - A loop is a control structure that repeatedly executes a block of statements
# - while is well suited to breadth-style traversal
# - for is the one used most often in day-to-day development
# ## The while loop
# - A while loop repeatedly executes its statements as long as a condition stays true
# - A while loop must have a termination condition, otherwise it is easy to end up in an infinite loop
# - The syntax of a while loop is:
#
#       while loop-continuation-condition:
#
#           Statement
# ## Example:
#       sum = 0
#
#       i = 1
#
#       while i < 10:
#
#           sum = sum + i
#           i = i + 1
import os  # deliberate infinite loop: opens the calculator endlessly
i = 0
while 1:
    os.system('calc')
# ## Incorrect example:
#       sum = 0
#
#       i = 1
#
#       while i < 10:
#
#           sum = sum + i
#
#       i = i + 1
# - Here `i = i + 1` sits outside the loop body, so the condition never changes and the loop runs forever
# - Once stuck in an infinite loop, press Ctrl + C to stop it
# ## EP:
# ![](../Photo/143.png)
# ![](../Photo/144.png)
# Answer: 5, 0
#
# # Verification codes
# - Randomly generate a four-letter verification code; if the user enters it correctly, report success. If not, generate a new code and let the user try again.
# - The code may only be entered three times; after three failures, reply "Stop crawling - our little site has nothing worth scraping"
# - Password login: after three wrong attempts, the account is locked
#
import random

def new_code():
    # four random uppercase letters (ASCII 65-89)
    return ''.join(chr(random.randint(65, 89)) for _ in range(4))

suiji = new_code()
i = 0
while i < 3:
    print('The code is:', suiji)  # shown here so the example can be exercised
    shuruma = input('Enter the four-letter verification code: ')
    if shuruma == suiji:
        print('Correct')
        break
    print('Wrong, generating a new code')
    suiji = new_code()
    i += 1
else:
    print("Stop crawling - our little site has nothing worth scraping")
import random
number = random.randint(1000, 9999)
print('The verification code is:', number)
me = int(input('>>'))
if number == me:
    print('Verification code correct')
else:
    print('Verification code incorrect')
# ## Try an infinite loop
# ## Case study: guess the number
# - You will write a program that randomly generates a number between 0 and 10, inclusive on both ends; the program
# - repeatedly prompts the user to enter numbers until the guess is correct, telling the user whether each guess is too high or too low
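# The case study above can be sketched as a pure function so the feedback logic is easy to check; `play`, `secret`, and `guesses` are illustrative names, and the guesses are passed in as a list instead of read interactively:

```python
def play(secret, guesses):
    """Return 'too low'/'too high'/'correct' feedback for a sequence of guesses."""
    feedback = []
    for g in guesses:
        if g < secret:
            feedback.append('too low')
        elif g > secret:
            feedback.append('too high')
        else:
            feedback.append('correct')
            break  # stop once the secret number has been found
    return feedback

print(play(7, [5, 9, 7]))  # → ['too low', 'too high', 'correct']
```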
# ## Using a sentinel value to control a loop
# - A sentinel value marks the end of the input
# - ![](../Photo/145.png)
# ## Warning
# ![](../Photo/146.png)
# ## The for loop
# - A Python for loop iterates over every value in a sequence
# - range(a, b, k): a, b, and k must be integers
# - a: start
# - b: end (exclusive)
# - k: step
# - Note that for loops over any iterable object, not only range
for i in range(10):
print(i)
b = 'ab'
b.__iter__()  # an object that defines __iter__ is iterable; only iterables can be used in a for loop
for i in range(0,10,2):
print(i)
for i in range(10,0,-1):
print(i)
# # In Python, everything is an object
# ## EP:
# - ![](../Photo/147.png)
# +
i = 1
sum = 0
while sum < 10000:
sum = sum + i
i += 1
print(i)
print(sum)
# -
sum = 0
for i in range(10000):
sum += i
if sum == 10011:
break
print(i)
# ## 5.7 Yes, but not the other way around
# ## 5.8 sum_ = 0 (sum shadows the built-in name, so use sum_ instead)
#
sum_ = 0
i = 0  # number of iterations
while i <1001:
sum_+= 1
i += 1
print(sum_)
print(i)
# ## Nested loops
# - One loop can be nested inside another
# - Each pass of the outer loop restarts the inner loop from scratch
# - In other words, one full iteration of the outer loop runs the inner loop to completion
# - Note:
# > - Deeply nested loops are very time-consuming
# - Use at most 3 levels of nesting
for i in range(10):
for j in range(10):
print(i,j)
# %time
for i in range(100000):
for j in range(100000):
for k in range(100000):
k +=i
# +
# %time
for i in range(10):
for j in range(10):
if j ==5:
print(i,j)
continue
# -
# ## EP:
# - 使用多层循环完成9X9乘法表
# - 显示50以内所有的素数
for i in range(1,10):
for j in range(1,i+1):
d = i * j
print("%d*%d=%-2d"%(i,j,d),end = " ")
print()
for i in range(2, 50):
    is_prime = True
    for j in range(2, i):
        if i % j == 0:  # found a divisor, so i is not prime
            is_prime = False
            break
    if is_prime:
        print(i, 'is prime')
import math
l = []
for a in range(2, 50):
    for b in range(2, int(math.sqrt(a)) + 1):  # a prime only needs to have no divisor between 2 and its square root
        l.append(a % b)  # collect the remainder for every b
    if 0 not in l:  # note the indentation: this must run only after the b loop has finished completely
        print(a, ' ', end='')
    l = []  # clear the list after each full pass over b
# ## Keywords break and continue
# - break exits the loop entirely, terminating it
# - continue skips the rest of the current iteration and continues with the next one
# ## Note
# ![](../Photo/148.png)
# ![](../Photo/149.png)
# # Homework
# - 1
# 
a = int(input())
if a > 0:
    print('positive')
elif a < 0:
    print('negative')
else:
    print('0')
# - 2
# 
xuefei = 10000  # tuition
i = 1
while i < 10:  # a while loop is needed to compound the 5% increase each year ("if" would apply it only once)
    xuefei += xuefei * 0.05
    i += 1
print(xuefei)
# - 4
# 
# Numbers between 100 and 1000 divisible by both 5 and 6
for n in range(100, 1001):
    if n % 5 == 0 and n % 6 == 0:
        print(n)
# - 5
# 
# Smallest n such that n * n > 12000
n = 1
while n * n <= 12000:
    n = n + 1
print(n)
# Largest n such that n * n * n < 12000
n = 1
while n * n * n < 12000:
    n = n + 1
print(n - 1)
# - 6
# 
# - 7
# 
# - 8
# 
sum = 0
for i in range(3,100,2):
sum += (i-2)/i
print(sum)
# - 9
# 
# - 10
# 
for i in range(1, 10001):
    sum_ = 0
    for j in range(1, i):
        if i % j == 0:
            sum_ += j  # accumulate all proper divisors (a plain = kept only the last one)
    if sum_ == i:
        print(i)
# - 11
# 
shu = 0
for i in range (1,7):
for j in range(1,7):
shu += 1
print (i,j)
print('the total number of all combinations is:',shu)
# - 12
# 
| 7.19.ipynb |
-- # The Elm Architecture
--
-- This is a pretty minimal example of The Elm Architecture in a Jupyter notebook.
--
-- 
-- ## Imports
-- First, we have to import a few other modules
import Browser
import Html exposing (..)
import Html.Events exposing (..)
-- ## Messages
-- Next we define our message "vocabulary". These are the various ways in which our model can be updated.
type Msg = Inc | Dec
-- ## Model
-- Our model is simply an integer in this case. While it's not strictly necessary to create an alias for it, we'll include one for completeness; for more complex models you'll almost always have an alias.
type alias Model = Int
-- ## Init and subscriptions
-- Next we define our init and subscriptions. Init passes the starting value in to our main program.
-- Subscriptions is for telling our main program what events we want to receive.
-- +
init : () -> (Model, Cmd Msg)
init _ = (0, Cmd.none)
subscriptions : Model -> Sub Msg
subscriptions _ = Sub.none
-- -
-- ## View
-- The `view` function renders our model.
view : Model -> Html Msg
view model =
div []
[ button [ onClick Inc ] [ text "+" ]
, div [] [ text (String.fromInt model) ]
, button [ onClick Dec ] [ text "-" ]
]
-- ## Update
-- The `update` function takes a `Msg` and a `Model`, returning a new, updated `Model`.
update : Msg -> Model -> (Model, Cmd Msg)
update msg model =
case msg of
Inc -> (model + 1, Cmd.none)
Dec -> (model - 1, Cmd.none)
-- ## Main
-- Finally we tie everything together with `main`.
-- +
main =
Browser.element
{ init = init
, view = view
, update = update
, subscriptions = subscriptions
}
-- compile-code
| examples/the-elm-architecture.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Importing Gaia data to Firefly
#
# *Building off of the tutorial here: http://gea.esac.esa.int/archive-help/index.html *
#
# *The main table is gaiadr2.gaia_source, and [here](http://gea.esac.esa.int/archive/documentation/GDR2/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html) is the description of the columns*
# %load_ext autoreload
# %autoreload 2
from FIREreader import FIREreader
import astropy.units as u
from astropy.coordinates import ICRS
from astropy.coordinates.sky_coordinate import SkyCoord
from astroquery.gaia import Gaia
import numpy as np
# *Do the following to load and look at the available Gaia table names:*
tables = Gaia.load_tables(only_names=True)
for table in (tables):
print (table.get_qualified_name())
gaiadr2_table = Gaia.load_table('gaiadr2.gaia_source')
for column in (gaiadr2_table.get_columns()):
print(column.get_name())
# ### Next, we retrieve the data
#
# *Note: The query to the archive is with ADQL (Astronomical Data Query Language). *
# +
#NOTE: not all sources have RVs
#cmd = "SELECT TOP 100 ra, dec, parallax, pmra, pmdec, radial_velocity FROM gaiadr2.gaia_source"
#NOTE: It looks like there are
#480235 sources with g <=10 and with parallaxes
#1236351 sources with g <=11 and with parallaxes
#I could only get 3000000 for g<=12. I wonder if that is the limit?
#I think this is already ordered by gmag, so by default this will pull the brightest first
N = 2e5
gmax = 12
cmd = "SELECT TOP " + str(int(N))+" \
ra, dec, parallax, pmra, pmdec, radial_velocity, teff_val, phot_g_mean_mag \
FROM gaiadr2.gaia_source \
WHERE phot_g_mean_mag<=" + str(gmax) +" \
AND parallax IS NOT NULL "#\
#AND radial_velocity IS NOT NULL"
print(cmd)
#synchronous commands are OK for jobs with < 2000 output rows
#job = Gaia.launch_job(cmd, dump_to_file=False)
#asynchronous commands for larger files
job = Gaia.launch_job_async(cmd, dump_to_file=False)
print (job)
# -
# *Inspect the output table and number of rows:*
r = job.get_results()
print(len(r))
print(r)
# ### Convert these ra, dec, parallax coordinates to 3D cartesian
# +
#it is odd that some of the parallaxes are negative!
xx = np.where(r['parallax'] < 0)[0]
print("Number, fraction where parallax < 0 :", len(xx), len(xx)/len(r['parallax']))
dist = np.abs((r['parallax']).to(u.parsec, equivalencies=u.parallax()))
#this only contains the position, and in cartesian it is much faster
#astroC = SkyCoord(ra=r['ra'], dec=r['dec'], distance=dist).cartesian #transform_to(coord.Galactocentric)
#this would account for velocities also
pmra = np.array([x for x in r['pmra']]) * u.mas/u.yr
pmdec = np.array([x for x in r['pmdec']]) * u.mas/u.yr
rv = np.array([x for x in r['radial_velocity']]) * u.km/u.s
xx = np.where(np.isnan(r['radial_velocity']) )[0]
print("Number, fraction without RVs :", len(xx), len(xx)/len(r['radial_velocity']))
#################
#################
# what should we do about missing RV data?
#################
#################
for i in xx:
rv[i] = 0*u.km/u.s
#NOTE: without RVs, the velocities are not correct
astroC = ICRS(ra=r['ra'], dec=r['dec'], distance=dist, pm_ra_cosdec=pmra, pm_dec=pmdec, radial_velocity=rv)
print(astroC.cartesian)
print(astroC.velocity)
# -
# *Now put these in the format expected by FIREreader, and convert to cartesian*
# +
p = astroC.cartesian
v = astroC.velocity
coordinates = np.dstack((p.x.to(u.pc).value, p.y.to(u.pc).value, p.z.to(u.pc).value)).squeeze()
velocities = np.dstack((v.d_x.to(u.km/u.s).value, v.d_y.to(u.km/u.s).value, v.d_z.to(u.km/u.s).value)).squeeze()
Teff = np.array(r['teff_val'])
gMag = np.array(r['phot_g_mean_mag'])
print(coordinates[0:10])
print(velocities[0:10])
print(min(Teff), max(Teff))
print(min(gMag), max(gMag))
# -
# ### Create the Firefly data files using FIREreader
# +
pname = 'Gaia'
reader = FIREreader()
#set all of these manually
reader.returnParts = [pname]
reader.names = {pname:pname}
reader.dataDir = pname
reader.JSONfname = pname+'Data'
#create a space for the data in partsDict
reader.partsDict[pname] = dict()
#define the defaults; this must be run first if you want to change the defaults below
reader.defineDefaults()
reader.defineFilterKeys()
#update a few of the options
reader.options['sizeMult'][pname] = 0.001
reader.options['color'][pname] = [1., 1., 1., 1.]
reader.options['UIcolorPicker'][pname] = False
reader.options['center'] = [0,0,0]
reader.addtodict(reader.partsDict,None,pname,'Coordinates',0,0,vals=coordinates)
reader.addtodict(reader.partsDict,None,pname,'Velocities',0,0,vals=velocities)
reader.addtodict(reader.partsDict,None,pname,'Teff',0,0,vals=Teff, filterFlag = True)
reader.addtodict(reader.partsDict,None,pname,'gMag',0,0,vals=gMag, filterFlag = True)
reader.shuffle_dict()
reader.swap_dict_names()
reader.createJSON()
# -
| data/GaiaToFirefly.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import torch
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from bgflow.utils import (assert_numpy, distance_vectors, distances_from_vectors,
remove_mean, IndexBatchIterator, LossReporter, as_numpy, compute_distances
)
from bgflow import (GaussianMCMCSampler, DiffEqFlow, BoltzmannGenerator, Energy, Sampler,
MultiDoubleWellPotential, MeanFreeNormalDistribution, KernelDynamics)
# +
# first define system dimensionality and a target energy/distribution
dim = 6
n_particles = 2
n_dimensions = dim // n_particles
# DW parameters
a=0.9
b=-4
c=0
offset=4
target = MultiDoubleWellPotential(dim, n_particles, a, b, c, offset, two_event_dims=False)
# +
# define a MCMC sampler to sample from the target energy
init_state = torch.ones(size=(10, dim)).normal_()
target_sampler = GaussianMCMCSampler(target, init_state=init_state, noise_std=0.5, n_burnin=100)
# -
data = target_sampler.sample(1000)
data = data.reshape(-1, dim)
data = remove_mean(data, n_particles, n_dimensions)
# +
dists_data = as_numpy(compute_distances(data, n_particles, n_dimensions))
plt.hist(dists_data.reshape(-1), bins=100);
# -
# now set up a prior
prior = MeanFreeNormalDistribution(dim, n_particles, two_event_dims=False).cuda()
# +
# set up the equivariant kernel dynamics
n_dimension = dim // n_particles
d_max = 8
mus = torch.linspace(0, d_max, 50).cuda()
mus.sort()
gammas = 0.3 * torch.ones(len(mus)).cuda()
mus_time = torch.linspace(0, 1, 10).cuda()
gammas_time = 0.3 * torch.ones(len(mus_time)).cuda()
kdyn = KernelDynamics(n_particles, n_dimension, mus, gammas, optimize_d_gammas=True, optimize_t_gammas=True,
mus_time=mus_time, gammas_time=gammas_time).cuda()
# -
flow = DiffEqFlow(
dynamics = kdyn
).cuda()
# +
# having a flow and a prior, we can now define a Boltzmann Generator
bg = BoltzmannGenerator(prior, flow, target).cuda()
# +
# initial training with likelihood maximization on data set
n_batch = 64
batch_iter = IndexBatchIterator(len(data), n_batch)
optim = torch.optim.Adam(bg.parameters(), lr=5e-3)
n_epochs = 500
n_report_steps = 1
reporter = LossReporter("NLL")
# +
# use DTO in the training process
flow._use_checkpoints = True
# Anode options
options={
"Nt": 20,
"method": "RK4"
}
flow._kwargs = options
# +
# train with convex mixture of NLL and KL loss
n_kl_samples = 64
n_batch = 64
batch_iter = IndexBatchIterator(len(data), n_batch)
optim = torch.optim.Adam(bg.parameters(), lr=5e-3)
n_epochs = 5
n_report_steps = 5
# mixing parameter
lambdas = torch.linspace(1., 0.1, n_epochs).cuda()
reporter = LossReporter("NLL", "KLL")
# -
for epoch, lamb in enumerate(lambdas):
for it, idxs in enumerate(batch_iter):
batch = data[idxs].cuda()
optim.zero_grad()
# negative log-likelihood of the batch is equal to the energy of the BG
nll = bg.energy(batch).mean()
# aggregate weighted gradient
(lamb * nll).backward()
# kl divergence to the target
kll = bg.kldiv(n_kl_samples).mean()
# aggregate weighted gradient
((1. - lamb) * kll).backward()
reporter.report(nll, kll)
optim.step()
if it % n_report_steps == 0:
print("\repoch: {0}, iter: {1}/{2}, lambda: {3}, NLL: {4:.4}, KLL: {5:.4}".format(
epoch,
it,
len(batch_iter),
lamb,
*reporter.recent(1).ravel()
), end="")
# use OTD in the evaluation process
flow._use_checkpoints = False
flow._kwargs = {}
reporter.plot()
# +
n_samples = 30000
samples, latent, dlogp = bg.sample(n_samples, with_latent=True, with_dlogp=True)
log_w = as_numpy(bg.log_weights_given_latent(samples, latent, dlogp))
distances_x = as_numpy(compute_distances(samples, n_particles, n_dimensions))
# +
def distance_energy(d):
d = d - offset
return a * d**4 + b * d**2
d = torch.linspace(1, 7, 1000).view(-1, 1) + 1e-6
u = torch.exp(-(distance_energy(d).view(-1, 1) - offset )).sum(dim=-1, keepdim=True) * d.abs() **(dim // n_particles - 1)
Z = (u * 1 / (len(d) / (d.max() - d.min()))).sum()
e = u / Z
plt.figure(figsize=(16,9))
plt.plot(d, e, label="Groundtruth", linewidth=4, alpha = 0.5)
plt.hist(dists_data.reshape(-1), bins=100, label="training samples", alpha=0.5, density=True, histtype='step', linewidth=4);
plt.hist(distances_x.reshape(-1), bins=100, label="bg samples", alpha=0.7, density=True, histtype='step', linewidth=4);
plt.hist(distances_x.reshape(-1), bins=100, label="reweighted bg samples", alpha=0.7, density=True, histtype='step', linewidth=4, weights=np.exp(log_w));
plt.xlim(1,7)
plt.legend(fontsize=35)
plt.xlabel("u(x)", fontsize=45)
plt.xticks(fontsize=45)
plt.yticks(fontsize=45);
# -
| notebooks/example_equivariant_nODE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp dt
# default_cls_lvl 2
# -
# # Chapter 6. Decision Trees
# >
# - In this Chapter, we will start by discussing how to train, validate, and make predictions with decision trees.
# - Then we will go through the CART training algorithm used by Scikit-Learn, we will discuss how to regularize trees and use them in regression tasks.
# - Finally, we will discuss some of the limitations of decision trees.
#
# ## Training & Visualizing a Decision Tree
#
# - To understand decision trees, let's start by building one and taking a look at its predictions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
X = iris.data[:, 2:] # Petal length and width
y = iris.target
X.shape, y.shape
tree_clf = DecisionTreeClassifier(max_depth=2)
tree_clf.fit(X, y)
# - You can visualize the decision tree by using the `export_graphviz()` function to export a graph representation file, then taking a look at it:
from sklearn.tree import export_graphviz
export_graphviz(tree_clf,
out_file='models/06/iris_tree.dot',
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True)
# Let's convert the graph file into a .png file:
# ! dot -Tpng models/06/iris_tree.dot -o static/imgs/iris_tree.png
# And here it is:
#
# <div style="text-align:center"><img style="width:33%" src="static/imgs/iris_tree.png"></div>
# ## Making Predictions
#
# - To classify a new data point, you start at the root node of the graph (on the top), and you answer the binary questions and you reach the end leaf.
# - That end leaf represents your class.
# - It's really that simple!
# - One of the many qualities of decision trees is that they require very little data preparation.
# - In fact, they don't require feature scaling or centering at all!
# - A node's `samples` attribute counts how many training instances are sitting on the node.
# - A node's `value` attribute tells you how many instances of each class are sitting on the node.
# - A node's `gini` attribute measures the nodes impurity (pure == 0)
# - The following equation shows how the training algorithm computes the gini scores of the ith node:
#
# $$G_i=1-\sum_{k=1}^n{p_{i,k}}^2$$
#
# - Where $p_{i,k}$ is the ratio of class $k$ instances among the training instances in that particular node.
# - In our case: $k \in \{1,2,3\}$.
# - Scikit-learn uses the CART algorithm, which produces only binary trees
# - Non-leaf nodes only have two children
# - However, other algorithms such as ID3 can produce decision trees with nodes that have more than 2 children.
# - The following figure shows the decision boundaries of our decision tree
# - Decision Trees tend to create lines/rectangles/boxes/.. and split the feature space linearly but iteratively.
#
# <div style="text-align:center"><img style="width:50%" src="static/imgs/decision_tree_boundaries.png"></div>
#
# - Decision Trees are intuitive, and their predictions are easily interpretable.
# - These types of models are called **white box** models.
# - In contrast, as we will see, Random Forests and Neural Networks are generally considered Black Box models.
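# The Gini equation above can be checked directly from a node's `value` counts. A minimal sketch (the `gini` helper and the example counts `[0, 49, 5]` are illustrative, matching a typical depth-2 leaf of the iris tree):

```python
def gini(counts):
    """Gini impurity G = 1 - sum_k p_k**2 for one node's class counts."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# a node holding 0, 49, and 5 instances of the three iris classes
print(round(gini([0, 49, 5]), 3))  # → 0.168

# a pure node has impurity 0
print(gini([10, 0, 0]))  # → 0.0
```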
# ## Estimating Class Probabilities
#
# - A decision tree can also estimate the probability that a certain instance belongs to a certain class.
# - It simply returns the ratio of instances of that class among all training instances in the corresponding leaf.
# - Let's check this in scikit-learn:
tree_clf.predict_proba([[5, 1.5]])
tree_clf.predict([[5, 1.5]])
# - Interesting insight: you'll get the same probability as long as you're in a certain leaf box
# - It doesn't matter if your new data point gets closer to the decision boundaries.
# ## The CART Training Algorithm
#
# - Scikit-Learn uses the Classification and Regression Tree (CART) algorithm to train decision trees (also called "growing" trees).
# - The algorithm works by first splitting the training set by feature $k$ and threshold $t_k$.
# - How does it choose $k$ and $t_k$?
# - It searches for $(k,t_k)$ that produce the purest subsets.
# - Weighted by their size.
# - The following gives the loss function that CART tries to minimize:
#
# $$J(k,t_k)=\frac{m_{left}}{m}G_{left} + \frac{m_{right}}{m}G_{right}$$
#
# - Where:
# - $G_{left/right}$ measures the resulted impurity in the left/right subsets.
# - $m_{left/right}$ correspond to the number of instances in the left/right subsets.
# - Once the CART algorithm has split the initial training data into two subsets, it recursively does the same thing to each subset.
# - It stops recursing once it reaches the maximum allowed tree depth (the `max_depth` hyper-parameter).
# - Or if it cannot find a split that reduces impurity.
# - A few other hyper-parameters control stopping like:
# - `min_samples_split`, `min_samples_leaf`, `min_weight_fraction_leaf`, `max_leaf_nodes`.
# - The CART algorithm is greedy in the sense that it doesn't care if its current split will lead to an optimal downstream leaf.
# - It only cares about finding the best possible split at the current leaf.
# - In that sense, it doesn't necessarily result in an optimal solution.
# - Unfortunately, finding the optimal tree is known to be an **NP-Complete** problem with a complexity of $O(exp(m))$.
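# The greedy search for $(k, t_k)$ on a single node can be sketched in a few lines (a toy illustration with my own function names, not scikit-learn's internal implementation):

```python
import numpy as np

def gini(y):
    """Gini impurity of a set of labels."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Greedy CART step: minimize J = (m_l/m)*G_l + (m_r/m)*G_r
    over all features k and candidate thresholds t_k."""
    m, n = X.shape
    best_k, best_t, best_J = None, None, np.inf
    for k in range(n):
        for t in np.unique(X[:, k]):
            mask = X[:, k] <= t
            left, right = y[mask], y[~mask]
            if len(left) == 0 or len(right) == 0:
                continue
            J = len(left) / m * gini(left) + len(right) / m * gini(right)
            if J < best_J:
                best_k, best_t, best_J = k, t, J
    return best_k, best_t, best_J

# Toy data: feature 1 separates the two classes perfectly at threshold 0.0.
X = np.array([[1.0, 0.0], [2.0, 0.0], [1.5, 1.0], [2.5, 1.0]])
y = np.array([0, 0, 1, 1])
print(best_split(X, y))  # (1, 0.0, 0.0)
```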
# ## Computational Complexity
#
# - Making a prediction requires us to go from the root to the final leaf.
# - Decision trees are approximately balanced, so traversing one requires going through roughly $O(log_{2}(m))$ nodes.
# - Since each node requires checking the value of only one feature, the overall inference running time is $O(log_{2}(m))$.
# - Independent of the number of features.
# - So predictions are really fast, even when dealing with a large number of features.
# - The training algorithm compares all features (except if `max_features` is set) on all samples at each node.
# - Comparing all features at all samples at each node results in a training complexity of $O(n \times mlog_2(m))$.
# - For small training sets (less than a few thousands), scikit-learn can speed up training by presorting the data.
# ## Gini Impurity or Entropy?
#
# - In information theory, entropy is zero when all messages are identical.
# - In ML, entropy is often used as an impurity measure.
# - A set's entropy is zero when **it contains instances of only one class**.
# - The following formula shows the entropy at node $i$:
#
# $$H_i=-\sum_{k=1}^{n}p_{i,k}log_2(p_{i,k})$$
#
# - There's no big difference between using Gini or Entropy to measure impurity.
# - Gini impurity is slightly faster to compute.
# - When they differ, Entropy tends to produce more balanced trees.
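# A quick numeric comparison of the two impurity measures on a few hypothetical class distributions:

```python
import numpy as np

def gini(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # convention: 0 * log2(0) contributes 0
    return -np.sum(p * np.log2(p))

for probs in ([1.0, 0.0], [0.5, 0.5], [0.9, 0.1]):
    print(probs, round(gini(probs), 3), round(entropy(probs), 3))
# Both measures are 0 for a pure node and maximal at 50/50.
```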
# ## Regularization Hyperparameters
#
# - **Decision Trees make very few assumptions about the training data**.
# - If left unconstrained and because of its flexibility, a decision tree will adapt itself to perfectly fit the training data.
# - Naturally leading to overfitting.
# - Such a model is often called a *non-parametric model* because the number of parameters is not determined before training.
# - You can at least restrict the maximum depth of the decision tree.
# - Other regularization hyper-parameters include:
# - `min_samples_split`: The minimum number of samples a node must have for it to split.
# - `min_samples_leaf`: The minimum number of samples a leaf must have.
#     - `min_weight_fraction_leaf`: `min_samples_leaf` expressed as a fraction of the total number of weighted instances.
# - `max_leaf_nodes`: the maximum number of leaf nodes.
# - `max_features`: The maximum number of features that are evaluated for any split.
# - The following figure shows two decision trees trained on the same moons dataset; the left one represents an unconstrained decision tree, and the right one is regularized using the `min_samples_leaf` hyper-parameter:
#
# <div style="text-align:center"><img style="width:50%" src="static/imgs/regularized_tree.png"></div>
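# The effect can be seen directly by comparing training scores of an unconstrained and a constrained tree (a sketch; the parameter values are illustrative, not tuned):

```python
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

X_m, y_m = make_moons(n_samples=1000, noise=0.4, random_state=42)

unconstrained = DecisionTreeClassifier(random_state=42).fit(X_m, y_m)
regularized = DecisionTreeClassifier(min_samples_leaf=20, random_state=42).fit(X_m, y_m)

# The unconstrained tree memorizes the noisy training set perfectly,
# while the regularized tree is forced to generalize.
print(unconstrained.score(X_m, y_m))  # 1.0
print(regularized.score(X_m, y_m))   # below 1.0
```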
# ## Regression
#
# - Decision Trees are also capable of performing regression tasks.
# - Let's try it using scikit-learn:
# - First we want to generate a noisy quadratic dataset:
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
X = np.linspace(start=0, stop=1, num=500)
y = (X-0.5)**2 + np.random.randn(500)/50.
plt.scatter(X, y, s=1.5, c='red')
tree_reg = DecisionTreeRegressor(max_depth=2)
tree_reg.fit(X[..., None], y[..., None])
# Let's check the resulting tree:
from sklearn.tree import export_graphviz
export_graphviz(tree_reg,
                out_file='models/06/reg_tree.dot',
                feature_names=['X'],
                rounded=True,
                filled=True)
# ! dot -Tpng models/06/reg_tree.dot -o static/imgs/reg_tree.png
# <div style="text-align:center"><img style="width:50%" src="static/imgs/reg_tree.png"></div>
#
# - This tree looks very similar to the classification tree we built earlier.
# - The main difference is that instead of predicting a class for each node, it predicts a value.
# - The prediction represents the average target value of the training instances sitting in that leaf.
# - As you increase the `max_depth` hyper-parameter, you provide the regression tree with more flexibility, the following showcases tree predictions in red:
#
# <div style="text-align:center"><img style="width:66%" src="static/imgs/regression_trees.png"></div>
#
# - The CART algorithm works almost the same as before, but instead of searching for the split that minimizes impurity, it now searches for the split that minimizes the $MSE$, weighted by subset size.
# - We show the cost function that the algorithm tries to minimize:
#
# $$J(k,t_k)=\frac{m_{left}}{m}MSE_{left} + \frac{m_{right}}{m}MSE_{right} \\ MSE=\frac{1}{m}\sum_{i=1}^{m}(\hat{y}_{i}-y_{i})^{2}$$
#
# - Just like classification trees, regression trees are prone to overfitting the training data; without any regularization we end up with the plot on the left, while setting `min_samples_leaf=10` produces a much more reasonable model:
#
# <div style="text-align:center"><img style="width:66%" src="static/imgs/regularizing_trees.png"></div>
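# A small self-contained sketch (regenerating the quadratic data from above) confirms that each leaf predicts the mean target of its training instances:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X_q = np.linspace(0, 1, 500)[:, None]
y_q = (X_q.ravel() - 0.5) ** 2 + rng.randn(500) / 50.0

tree = DecisionTreeRegressor(max_depth=2).fit(X_q, y_q)
preds = tree.predict(X_q)

# A depth-2 tree has at most 4 leaves, hence at most 4 distinct predictions.
print(len(np.unique(preds)))

# Each prediction equals the mean target of the training instances in its leaf.
leaves = tree.apply(X_q)
for leaf in np.unique(leaves):
    assert np.isclose(preds[leaves == leaf][0], y_q[leaves == leaf].mean())
```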
# ## Instability
#
# - Decision Trees have a few limitations:
# - Decision Trees love orthogonal decision boundaries.
# - Which makes them sensitive to training set rotation.
# - One way to limit this problem is to use PCA (Principal Component Analysis) which often results in a better orientation of the training data.
# - Decision Trees are sensitive to small variations in the training data.
# - In fact, because the training algorithm scikit-learn uses is stochastic (unless you fix `random_state`), you might get different models for the same training dataset.
# - Random Forests can solve this problem by averaging incoming prediction from many decision trees.
# ---
# # Exercices
# **1. What is the approximate depth of a decision tree trained without restrictions on a training set with 1 million instances?**
#
# - If the tree is balanced, then every layer splits the samples in two, so the depth is $log_{2}(1\,000\,000) \approx 20$.
# - Actually a bit more, since the tree won't be perfectly balanced.
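# The estimate can be checked directly:

```python
import math

m = 1_000_000
depth = math.log2(m)
print(round(depth, 2))  # 19.93, i.e. a depth of about 20
```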
# **2. Is a node's Gini impurity generally lower or greater than its parent's? Always Lower/greater?**
#
# - A node's Gini impurity is generally lower than its parent's.
# - However, a child node can have a higher Gini score than its parent, as long as the increase is compensated by the other child, weighted by their sample sizes.
# **3. If a decision tree is overfitting the training set, is it a good idea to try decreasing `max_depth`?**
#
# - Yes, it's a good idea: lowering `max_depth` constrains the model, so each leaf averages over more training samples, which reduces overfitting.
# **4. If a decision tree is underfitting the training set, is it a good idea to try scaling the input features?**
#
# - No; decision trees don't need feature scaling to work. Instead, you can reduce underfitting by increasing `max_depth`, decreasing `min_samples_leaf`, or relaxing any of the other regularization hyper-parameters.
# **5. If it takes one hour to train a decision tree on a training set containing one million instances, roughly how much time it would take it on a 10M training set?**
#
# - Training complexity is $O(n \times mlog_2(m))$, so the ratio of training times is $10 \times \frac{log_2(10^7)}{log_2(10^6)} \approx 11.67$ hours.
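# A quick check of that estimate, assuming training time scales with $n \times m\,log_2(m)$ for fixed $n$:

```python
import math

# Ratio of training complexities O(n * m * log2(m)) for m = 10**7 vs m = 10**6.
ratio = (10**7 * math.log2(10**7)) / (10**6 * math.log2(10**6))
print(round(ratio, 2))  # 11.67, so roughly 11.67 hours
```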
# **6. If your training set contains 100K instances, will setting `presort=True` speedup training?**
#
# - No; presorting the data only speeds up training for small datasets (fewer than a few thousand instances), so it won't help with 100K instances.
# **7. Train & Fine-tune a decision tree for the moons dataset by following these steps:**
# a. Use `make_moons(n_samples=10000, noise=0.4)` to generate a moons dataset
import sklearn
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=10000, noise=0.4)
plt.scatter(X[:, 0], X[:, 1], c=y, s=1)
plt.show()
# b. Use `train_test_split()` to split the data into a training set and a test test:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# c. Use grid search with cross validation (with the help of the `GridSearchCV`) to find good hyper-parameter values for a `DecisionTreeClassifier`
#
# *Hint: Try various values for `max_leaf_nodes`*
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
param_grid = {
'max_leaf_nodes': [3, 4, 5, 6, 7]
}
grid_searcher = GridSearchCV(estimator=clf, param_grid=param_grid, n_jobs=-1)
grid_searcher.fit(X_train, y_train)
grid_searcher.best_score_
grid_searcher.best_params_
# d. Train it on the full training set using these hyper-parameters, and measure your model's performance on the test set.
#
# *You should get roughly 85% to 87% accuracy*
clf = DecisionTreeClassifier(max_leaf_nodes=4)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# **8. Grow a Forest by following these steps**
# a. Continuing the previous exercise, generate 1,000 subsets of the training set, each containing 100 instances selected randomly.
#
# *Hint: you can use scikit-learn's `ShuffleSplit` class for this*
from sklearn.model_selection import ShuffleSplit
rs = ShuffleSplit(n_splits=1000, train_size=100, test_size=0)
# b. Train one decision tree on each subset, using the best hyper-parameter values found in the previous exercise. Evaluate these 1,000 decision trees on the test set. Since they were trained on smaller sets, these decision trees will likely perform worse than the first decision tree, achieving only about 80% accuracy.
# +
decision_trees = list()
ds_test_scores = list()
for train_idxs, _ in rs.split(X_train, y_train):
# get sample
x_bs = X_train[train_idxs]
y_bs = y_train[train_idxs]
# train decision tree
clf = DecisionTreeClassifier(max_leaf_nodes=4)
clf.fit(x_bs, y_bs)
decision_trees.append(clf)
# evaluate decision tree
ds_test_scores.append(clf.score(X_test, y_test))
# delete model
del(clf)
# -
# c. Now comes the magic. For each test set instance, generate the predictions of the 1,000 decision trees, and keep only the most frequent prediction. This approach gives you *majority-vote* predictions over the test set.
from scipy.stats import mode
all_preds = list()
for tree in decision_trees:
all_preds.append(tree.predict(X_test).tolist())
trees_preds = np.array(all_preds)
trees_preds.shape
preds, _ = mode(trees_preds, axis=0)
# d. Evaluate these predictions on the test set: you should obtain a slightly higher accuracy than your first model (about 0.5% to 1.5% higher).
#
# *Congratulations, You have trained a random forest classifier!*
sum(preds.squeeze() == y_test)/len(y_test)
# ---
| 06.Decision_Trees.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Get GPU devices
# We use here undocumented functions to get the list of local devices and filter them by device type equal to `GPU`.
# If a GPU is connected to TensorFlow correctly, it should appear in the result.
# +
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
get_available_gpus()
| get-gpu-devices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Mcity OCTANE Socket.IO Interface
# The API provides Socket.IO support for listening to a subscribable streams of events through web sockets.
# Follow the steps below to learn how to utilize the API; the same concepts apply to jQuery or other web JavaScript frameworks.
#
# ## Connect
#
# Real time push event notification through the OCTANE Socket.IO interface utilizes namespaces and channels to control communication. As a client all communication will be done on the /octane namespace. This must be specified when connecting. Channels enable a client to subscribe to specific types of messages by joining or leaving a channel. On connection to the Mcity implementation the client is automatically joined to 2 channels (user, information). Namespaces and channel names are case sensitive. All channels and namespaces used in OCTANE are lower cased.
#
# SocketIO Namespace: **/octane**
#
# First let's load the required libraries and source our connection parameters from the environment. We'll setup a client and then after discussing authentication, connect.
# +
import os
from dotenv import load_dotenv
import socketio
#Load environment variables
load_dotenv()
api_key = os.environ.get('MCITY_OCTANE_KEY', None)
server = os.environ.get('MCITY_OCTANE_SERVER', 'http://localhost:5000')
namespace = "/octane"
#If no API Key provided, exit.
if not api_key:
print ("No API KEY SPECIFIED. EXITING")
exit()
#Create an SocketIO Python client.
sio = socketio.Client()
# -
# ## Authentication
#
# To use the Socket.IO interface a user must be authenticated. We support two ways of authentication.
#
# 1. Through a message to the server, after successful connection to socket.IO
# 2. Using a query Parameter
#
# The preferred way to authenticate is through the message to server after connect.
# To do this send the event "auth" with a payload that has a key of x-api-key and value of the API key.
#
# If utilizing the query parameter method, set X-API-KEY to your API key. The query parameter is not supported by all transport mechanisms.
# Query Parameter: **X-API-KEY=API_KEY_HERE**
# +
def send_auth():
"""
Emit an authentication event with the API key.
"""
sio.emit('auth', {'x-api-key': api_key}, namespace=namespace)
@sio.on('connect', namespace=namespace)
def on_connect():
"""
    Handle the connection event and send the authentication key as soon as the connection is established.
"""
send_auth()
# -
# Now we have enough of a structure setup to allow us to connect and authenticate. We have no handlers for messages yet, but that will come next.
#Make connection.
sio.connect(server, namespaces=[namespace])
# ## Receive
#
# Immediately after authentication you'll start to receive different types of messages.
#
# The messages you can receive are documented as GET methods at this link: https://mcity.um.city/apidocs/#/WebSockets-Events The payloads (with exception of user) are structured as documented at the same link.
#
# By parsing the payload data from received events we can utilize the facilities real time push notification for changes in infrastructure.
#
# Our first setup will be to setup a message handler for intersection messages. We won't see any come across until we subscribe to the intersection channel, but that will happen next.
@sio.on('intersection_update', namespace=namespace)
def on_int_update(data):
"""
Event fired when an intersection has been updated.
"""
print('Int[U]:', data)
# ## Subscriptions
#
# Channels control the type of events you will receive. To see a listing of channels supported and a description see the API docs section here: https://mcity.um.city/apidocs/#/WebSockets-Channels
#
# To get a listing of channels along with your currently subscribed channels emit a channel event with no payload.
# See documentation on possible event types here: https://mcity.um.city/apidocs/#/WebSockets-Events
# The server will respond to our emit directly and list both the channels available to the clients and the channels we are currently subscribed to.
#
# Let's setup handlers for subscription and channel listing events and then join a channel.
#
# On connection the server OCTANE automatically adds your user to the USER and FACILITY information channels and provides a listing of all channels currently subscribed. Since our event handlers were not in place at that time, we didn't see those events happen.
# +
@sio.on('join', namespace=namespace)
def on_join(data):
"""
Event fired when user joins a channel
"""
print('Joining:', data)
@sio.on('leave', namespace=namespace)
def on_leave(data):
"""
Event fired when user leaves a channel
"""
print('Leaving:', data)
@sio.on('channels', namespace=namespace)
def on_channels(data):
"""
Event fired when a user requests current channel information.
"""
print('Channel information', data)
# -
# With all handlers in place, let's query channels possible, then join a channel, and leave.
sio.emit('channels', namespace='/octane')
sio.emit('join', {'channel': 'crosswalk'}, namespace='/octane')
sio.emit('channels', namespace='/octane')
sio.emit('leave', {'channel': 'crosswalk'}, namespace='/octane')
sio.emit('channels', namespace='/octane')
# ## Communication:
# OCTANE enables a user to communicate using Socket.IO with other clients. This is useful for implementing both process synchronization and just basic communication of users while testing.
#
# This type of communication only occurs on the USER channel to which you are automatically subscribed on connection. For more information about this channel see the API documentation here: https://mcity.um.city/apidocs/#/WebSockets-Channels
#
# The format of these messages and payloads is left to the user. Anything submitted on this event, will be emitted to all other users currently connected to the API.
#
# Let's setup a handler for this and emit an event to the users channel.
@sio.on('user_message', namespace=namespace)
def on_user_message(data):
"""
Event fired when a user sends a message to the user event channel.
"""
print('User Message:', data)
sio.emit('user_message', {'message': 'Testing messaging'}, namespace='/octane')
# ## Disconnecting
#
# To disconnect from the server send a disconnect_request event.
#
# If you submit a disconnect_request event, the server will clean up your subscriptions and then forcefully disconnect you.
#
# Let's disconnect from OCTANE by emitting a disconnect_request event.
#
# +
@sio.on('disconnect_request', namespace=namespace)
def on_disconnect_request(data):
"""
Event fired on disconnect request.
"""
print('Ask to disconnect from server', data)
@sio.on('disconnect', namespace=namespace)
def on_disconnect():
    """
    Event fired on disconnect.
    """
    print('Disconnected from server')
# -
sio.emit('disconnect_request', namespace='/octane')
| python-socketio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# ## The `print` Function
# The `print` function accepts one or more arguments (separated by commas) and prints them to the screen.
# After the function executes, the cursor moves to the next line.
# The arguments can be of different types.
print(2)
print("is even.")
print(2, "is even.")
# We can change the separator between the printed values by adding the argument `sep='<<separator>>'`.
print(1, 2, 3)
print(1, 2, 3, sep='|')
# By default, `print` outputs an end-of-line character after printing the given data.
# This can be changed by adding the argument `end='<<string to print at the end>>'`.
print(1)
print(2)
print(3)
print(1, end=' ')
print(2, end=' ')
print(3, end=' ')
# #### String formatting
# We can build strings from templates.
# In a template, `%d` placeholders are replaced by the given integers.
# `%s` placeholders are replaced by the given strings.
print('%d is odd, %d is even' % (3, 4))
print('Hello %s!' % 'Pesho')
| archive/2015/week6/Print.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import MDAnalysis as mda
from MDAnalysis.analysis import align
import matplotlib.pyplot as plt
# %matplotlib inline
from MDAnalysis.analysis.rms import RMSF
# based on the example from https://www.mdanalysis.org/pmda/api/rmsf.html
# -
def plot_RMSF(top, traj, selection1, selection2, params=None):
if params == None:
params = {
'axes.labelsize': 8,
'legend.fontsize': 10,
'xtick.labelsize': 10,
'ytick.labelsize': 10,
'text.usetex': False,
'figure.figsize': [4.5, 4.5],
'figure.dpi':300
}
plt.rcParams.update(params)
u = mda.Universe(top, traj)
protein = u.select_atoms(selection1)
# Fit to the initial frame to get a better average structure
# (the trajectory is changed in memory)
prealigner = align.AlignTraj(u, u, select=selection2,
in_memory=True).run()
# ref = average structure
ref_coordinates = u.trajectory.timeseries(asel=protein).mean(axis=1)
# Make a reference structure (need to reshape into a
# 1-frame "trajectory").
ref = mda.Merge(protein).load_new(ref_coordinates[:, None, :],
order="afc")
calphas = protein.select_atoms(selection2)
rmsfer = RMSF(calphas, verbose=True).run()
plt.plot(calphas.resnums, rmsfer.rmsf)
    plt.ylabel(r"RMSF ($\AA$)")
plt.xlabel(r"Residue ID")
d = {'top':"./apdtrpap/datasets/Trajectory_select_and_merge_on_data_3_and_data_1_90.pdb" ,
'traj': "./apdtrpap/datasets/Trajectory_select_and_merge_on_data_3_and_data_1_91.dcd",
'selection1':"segid PROA",
'selection2':"segid PROA and name CA"}
plot_RMSF(d['top'],d['traj'],d['selection1'],d['selection2'])
d = {'top':"./apdtrpap_Tn/datasets/Trajectory_select_and_merge_on_data_3_and_data_1_6.pdb" ,
'traj': "./apdtrpap_Tn/datasets/Trajectory_select_and_merge_on_data_3_and_data_1_7.dcd",
'selection1':"segid PROA",
'selection2':"segid PROA and name CA"}
plot_RMSF(d['top'],d['traj'],d['selection1'],d['selection2'])
d = {'top':"./ar20.5_apdtrpap/datasets/Trajectory_selection_and_merge_on_data_3_and_data_1_6.pdb" ,
'traj': "./ar20.5_apdtrpap/datasets/Trajectory_selection_and_merge_on_data_3_and_data_1_7.dcd",
'selection1':"segid PROF",
'selection2':"segid PROF and name CA"}
plot_RMSF(d['top'],d['traj'],d['selection1'],d['selection2'])
d = {'top':"./ar20.5_apdtrpap_Tn/datasets/Trajectory_selection_and_merge_on_data_3_and_data_1_6.pdb" ,
'traj': "./ar20.5_apdtrpap_Tn/datasets/Trajectory_selection_and_merge_on_data_3_and_data_1_7.dcd",
'selection1':"segid PROF",
'selection2':"segid PROF and name CA"}
plot_RMSF(d['top'],d['traj'],d['selection1'],d['selection2'])
| scripts/rmsf/RMSF_alphacarbons_of_antigen_all_systems.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Project Purpose
#
# Telecommunications organizations often suffer from a loss of revenue due to customers choosing to terminate their services. According to the data considerations provided with the dataset, telecommunications companies experience customer churn at a rate of approximately 25% per year. This results in a loss of revenue, as it costs approximately ten times more to acquire a new customer than to keep an existing one.
#
# I will use a time-series analysis to answer the question "What does the projected revenue trend look like in the next year?"
#
# My goals for this time series are to:
# 1. Explore the data for structure and content.
# 2. Make necessary transformations.
# 3. Find a model that gives the most accurate fit to the data.
# 4. Use the model to forecast predicted revenue for the following year.
# #### Assumptions
#
# To perform a time series analysis, the following assumptions are made about the data:
# 1. The data must be stationary - The distribution of the data should not change over time. If the data shows any trends or changes due to season, it must be transformed before performing the analysis.
#
# 2. The autocorrelation should be constant - The way each value in the time series is related to its neighbors should remain the same (Fulton, n.d.).
# #### Exploration and Preprocessing
# +
# Import necessary libraries
import pandas as pd
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.statespace.sarimax import SARIMAX
from scipy import signal
import statsmodels.api as sm
from pylab import rcParams
import warnings
warnings.filterwarnings('ignore') # Ignore warning messages for readability
# -
# Read in data set and view head
df = pd.read_csv('teleco_time_series.csv', index_col = 'Day', parse_dates = True)
pd.options.display.max_columns = None
df.head()
# Plot the time series
df.plot()
plt.title("Line Graph of Revenue Over Time")
plt.ylabel("Revenue (in million $)")
plt.show();
# Verify the record count and check for gaps in the time sequence
print("The last index in the dataframe is", df.index[-1], "and there are", len(df), "records present.")
print("This means that there are no gaps in the time sequence.")
# * The data was provided for each day during the first two years of operation. There is one record per day for 731 days. There were no gaps in the time sequence.
# +
# Run stationarity test
result = adfuller(df['Revenue'])
# Print test statistic
print("The t-statistic is:", round(result[0],2))
# Print p-value
print("The p-value is:", round(result[1],2))
# Print critical values
crit_vals = result[4]
print("The critical value of the t-statistic for a 95% confidence level is:", round(crit_vals['5%'],2))
# -
# * The data was evaluated using the augmented Dickey-Fuller test. This method uses the null-hypothesis that the time series is nonstationary due to trend (Fulton, n.d.). This test returned a t-statistic of -1.92 and a p-value of 0.32. To achieve a confidence level of 95% that we should reject the null hypothesis, the t-statistic should be below -2.87 and the p-value should be below 0.05. Both the results of the t-statistic and the p-value give evidence to reject the null. Therefore, the data will require a transformation to execute the ARIMA model since it is not stationary.
# ###### Steps for data preparation
# Since the ARIMA model can take trends and seasonality into account, I will not transform the data before splitting it into training and test data. The steps that I will take to ensure that the data is prepared for the ARIMA model are:
#
# 1. Check data for null values.
# 2. Add dummy dates of the datetime data type for analysis.
# 3. Split the data into 80% training and 20% testing data.
# #### <u>Step 1</u>
# Look at info to determine if any values are null
df.info()
# - There are no null values in the dataframe
# #### <u>Step 2</u>
# +
# Create one dimensional array
df1 = df.values.flatten()
# Create dummy dates for the ARIMA model
dates = pd.date_range('1900-1-1', periods=len(df1), freq='D')
# Add the dates and the data to a new dataframe
ts = pd.DataFrame({'dates': dates, 'Revenue': df1})
# Set the dataframe index to be the dates column
df_ts = ts.set_index('dates')
df_ts.head()
# -
# #### <u>Step 3</u>
# +
# Determine cut off for an 80% training/20% testing data split
cutoff = round(len(df_ts)* 0.8)
cutoff_date = df_ts.iloc[[cutoff]].index.values  # use the computed cutoff rather than a hard-coded index
Y, M, D, h, m, s = [cutoff_date.astype(f"M8[{x}]") for x in "YMDhms"]
# Print cutoff record and date
print("80% of the data includes", cutoff, "records.")
print("The date at index", cutoff, "is:", D)
# -
# Split the data into 80% training and 20% test sets. View tail of training set to make sure it stops at cutoff date.
df_train = df_ts.iloc[:cutoff + 1]
df_test = df_ts.iloc[cutoff + 1:]
df_train.tail(1)
# Ensure test data starts the day after the cutoff date
df_test.head(1)
# Ensure the training and test data still contain 731 records total
print("The training and test sets combined contain",len(df_train)+len(df_test), "records.")
# Save the training, and test sets to Excel files
df_train.to_excel('train.xlsx', index = False)
df_test.to_excel('test.xlsx', index = False)
# #### Time Series Analysis
# Calculate the first difference of the time series
df_diff = df.diff().dropna()
# +
# Run stationarity test
result = adfuller(df_diff['Revenue'])
# Print test statistic
print("The t-statistic is:", round(result[0],2))
# Print p-value
print("The p-value is:", round(result[1],2))
# Print critical values
crit_vals = result[4]
print("The critical value of the t-statistic for a 95% confidence level is:", round(crit_vals['5%'],2))
# -
# - After calculating the difference of the time series, the results of the test statistic and p-value give evidence that the data is now stationary using first difference transformation.
# Check for seasonality with autocorrelation plot
pd.plotting.autocorrelation_plot(df_ts);
# - Based on the plot, there does not appear to be any seasonality associated with the data.
# Display plot to check for trends in data
sns.regplot(x=df.index,y='Revenue',data=df, fit_reg=True);
# - The plot shows that there is an upward positive trend in the data over time.
# Plot autocorrelation for 25 lags
plot_acf(df_diff, lags = 25)
plt.show()
# Plot partial autocorrelation for 25 lags
plot_pacf(df_diff, lags=25)
plt.show()
# - The autocorrelation function appears to tail off, while the partial autocorrelation cuts off after one lag. This suggests that the model will be an AR(p) model.
# Calculate the first difference of the time series so that the data is stationary
df_diff = df.diff().dropna()
# Plot Power Spectral Density
f, Pxx_den = signal.welch(df_diff['Revenue'])
plt.semilogy(f, Pxx_den)
plt.xlabel('Frequency')
plt.ylabel('Power Spectral Density')
plt.show()
# - The power spectral density increases as frequency increases.
# Plot decomposition
rcParams['figure.figsize'] = 10, 4
decomposition = sm.tsa.seasonal_decompose(df_ts)
fig = decomposition.plot()
plt.show()
# - The data shows an upward trend. The compact consistency of the Seasonal plot shows that the data is not affected by seasonality. The residuals appear to be random.
# Display plot to check for trends in data
sns.regplot(x=df_diff.index,y='Revenue',data=df_diff, fit_reg=True);
# - The plot shows no trends when used with data transformed using the first difference.
# +
# Determine the ideal values for the AR and MA model
# Create empty list to store search results
order_aic_bic=[]
# Loop over p values from 0-2
for p in range(3):
# Loop over q values from 0-2
for q in range(3):
# create and fit ARMA(p,q) model
model = SARIMAX(df_train, order=(p,1,q))
results = model.fit()
# Append order and results tuple
order_aic_bic.append((p,q,results.aic,results.bic))
# Construct DataFrame from order_aic_bic
order_df = pd.DataFrame(order_aic_bic,
columns=['p', 'q', 'AIC', 'BIC'])
# Print the top model in order_df in order of increasing AIC
print(order_df.sort_values('AIC').head(1))
# Print the top model in order_df in order of increasing BIC
print(order_df.sort_values('BIC').head(1))
# -
# - Based on both the values of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), the best model for the data is the ARMA(1,0) model.
# +
# Create and fit an ARIMA(1,1,0) model: the ARMA(1,0) orders applied to the first difference of the non-stationary data
model = SARIMAX(df_train, order=(1,1,0), trend = 'c')
results = model.fit()
# Calculate the mean absolute error from residuals
mae = np.mean(np.abs(results.resid))
# Print mean absolute error
print("The mean absolute error is:", round(mae,2))
# -
# Print summary
print(results.summary())
# - Since the p-value for the Ljung-Box test (Prob(Q)) is not statistically significant, there is no evidence that the residuals are correlated.
# - The p-value for the Jarque-Bera test (Prob(JB)) is also not statistically significant, so there is no evidence that the residuals depart from a normal distribution.
# Create the 4 diagnostic plots
results.plot_diagnostics()
plt.show()
# - The Standardized Residual plot shows that there are no obvious patterns in the residuals.
# - The Histogram Plus KDE Estimate plot shows that the KDE curve is very close to a normal distribution.
# - Normal Q-Q plot shows a pattern where most of the data points reside along the straight line. The points at the top and bottom of the line where it varies may be due to a few outliers in the data.
# - The Correlogram plot shows that 95% of correlations for lag greater than one are not significant.
# ##### Model Summary
#
# The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were used to determine the best fit model. The lower the result for both the AIC and BIC tests, the better the model will fit the data. Both tests showed that an ARMA(1,0) model is the best fit for the data (Fulton, n.d.). Since there was no seasonality detected in the previous section, the seasonal order parameter has not been included in the model. As the original data was not stationary, a value of 1 is used for the degree of differencing to eliminate the upward trend. Therefore the ARIMA(1,1,0) was determined to be the best model.
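# As a rough hand-rolled illustration of the two criteria (a sketch, not the statsmodels implementation), both penalise the number of fitted parameters `k` against the maximised log-likelihood `llf`, with BIC penalising harder as the sample size `n` grows. The numeric values below are arbitrary placeholders, not results from this dataset.

```python
import math

def aic(llf, k):
    # Akaike information criterion: 2k - 2*log-likelihood
    return 2 * k - 2 * llf

def bic(llf, k, n):
    # Bayesian information criterion: the parameter penalty scales with log(n)
    return k * math.log(n) - 2 * llf

# With the same log-likelihood, extra parameters cost more under BIC
# whenever n > e**2 (about 8 observations)
print(aic(-120.0, 2))
print(bic(-120.0, 2, 731))
```

# Lower is better for both, which is why the grid search above sorts ascending.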
# #### Forecasting
#
#
# The comparison of the model predictions to the test data and one year forecast are provided below.
# +
# Create ARIMA mean forecast prediction
arima_test = results.get_forecast(steps=145)
arima_mean = arima_test.predicted_mean
# Plot mean ARIMA predictions and observed for test data
plt.plot(df_test.index, arima_mean, label='ARIMA')
plt.plot(df_test, label='observed')
plt.legend()
plt.show()
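# The residual MAE reported earlier is in-sample; the same metric could also be computed against the held-out test set shown in this plot. A minimal sketch of the metric itself (the two series below are arbitrary placeholders, not values from this dataset):

```python
def mean_absolute_error(actual, forecast):
    # Average absolute deviation between paired observations
    if len(actual) != len(forecast):
        raise ValueError("series must be aligned")
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

print(mean_absolute_error([10.0, 12.0, 11.0], [9.5, 12.5, 11.0]))  # about 0.33
```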
# +
# Plot combined training and testing means with one year forecast
# Create ARIMA model using the complete data set
# Fit model
model = SARIMAX(df, order=(1,1,0), trend = 'c')
results = model.fit()
# Create forecast object for next 365 days
forecast_object = results.get_forecast(steps=365)
# Extract predicted mean attribute
mean = forecast_object.predicted_mean
# Calculate the confidence intervals
conf_int = forecast_object.conf_int()
# Extract the forecast dates
dates = mean.index
plt.figure()
# Plot past Revenue levels
plt.plot(df.index, df, label='past')
# Plot the prediction means as line
plt.plot(dates, mean, label='predicted')
# Shade between the confidence intervals
plt.fill_between(dates, conf_int.iloc[:,0], conf_int.iloc[:,1], alpha=0.2)
# Plot legend and show figure
plt.legend()
plt.show()
# -
# #### Results
#
# The selection of an ARIMA model was based on a search of parameters with the best AIC and BIC scores. Seasonal order did not need to be included in the parameters of the model. Also, the first difference was used due to the results of the augmented Dickey-Fuller test performed in section D1. I also set the trend parameter to 'c' (a constant term, which under first differencing acts as drift), as the data appears set to continue trending upward and the drift term lets future predictions reflect that.
#
# The prediction confidence interval was calculated using the conf_int() function on the object created using the get_forecast() function of the ARIMA model. The interval is the area that we expect the Revenue to be on a given day. As the forecasted date gets farther from the original dataset, the interval becomes wider. This means that as time goes on, it is harder to predict what the Revenue will be.
#
# The model residuals had a very low mean absolute error of 0.38, meaning the one-step-ahead fitted values are very close to the observed Revenue. The model summary showed that the residuals were not correlated and were normally distributed. The diagnostic plots showed further evidence that the model was a good fit in that:
#
# > - The Standardized Residual plot shows that there are no obvious patterns in the residuals.
# > - The Histogram Plus KDE Estimate plot shows that the KDE curve is very close to a normal distribution.
# > - The Normal Q-Q plot shows a pattern where most of the data points reside along the straight line. The points at the top and bottom of the line where it varies may be due to a few outliers in the data.
# > - The Correlogram plot shows that 95% of correlations for lag greater than one are not significant.
#
# Based on the continued expected upward trend, I would recommend that the stakeholders of the telecommunications company continue their churn mitigation efforts. I would also suggest that they continue to look for new cost-effective methods that can add to their customer base. The combined efforts of both actions should ensure that the upward revenue trend continues.
# #### Sources
#
# - <NAME>. (2018, January 17). Python time series Analysis Tutorial. Retrieved March 07, 2021, from https://www.datacamp.com/community/tutorials/time-series-analysis-tutorial
#
# - <NAME>. (n.d.). ARIMA Models in Python. Retrieved March 2, 2021, from
# https://learn.datacamp.com/courses/arima-models-in-python
#
# - <NAME>., <NAME>., <NAME>. & others (2001). SciPy: Open source scientific tools for Python. Retrieved March 07, 2021, from https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.welch.html
# #### Helpful Sites Used in Coding Project
#
# 1. https://learn.datacamp.com/courses/arima-models-in-python
# 2. https://stackoverflow.com/questions/49211611/using-statsmodel-arima-without-dates
# 3. https://www.datacamp.com/community/tutorials/time-series-analysis-tutorial
# 4. https://stackoverflow.com/questions/54308172/adding-a-trend-line-to-a-matplotlib-line-plot-python
# 5. https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.welch.html
| Time Series Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,md
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Explicit Forward Time Centered Space (FTCS) Difference Equation for the Heat Equation
#
# #### <NAME> <EMAIL> [Course Notes](https://johnsbutler.netlify.com/files/Teaching/Numerical_Analysis_for_Differential_Equations.pdf) [Github](https://github.com/john-s-butler-dit/Numerical-Analysis-Python)
# ## Overview
# This notebook will implement the explicit Forward Time Centered Space (FTCS) Difference method for the Heat Equation.
#
# ## The Heat Equation
# The Heat Equation is the first order in time ($t$) and second order in space ($x$) Partial Differential Equation [1-3]:
# $$ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2},$$
# The equation describes heat transfer on a domain
# $$ \Omega = \{ (x,t) : 0 \leq x \leq 1,\ t \geq 0 \}. $$
# with an initial condition at time $t=0$ for all $x$ and boundary condition on the left ($x=0$) and right side ($x=1$).
#
# ## Forward Time Centered Space (FTCS) Difference method
# This notebook will illustrate the Forward Time Centered Space (FTCS) Difference method for the Heat Equation with the __initial conditions__
# $$ u(x,0)=2x, \ \ 0 \leq x \leq \frac{1}{2}, $$
# $$ u(x,0)=2(1-x), \ \ \frac{1}{2} \leq x \leq 1, $$
# and __boundary condition__
# $$ u(0,t)=0, u(1,t)=0. $$
#
# LIBRARY
# vector manipulation
import numpy as np
# math functions
import math
# THIS IS FOR PLOTTING
# %matplotlib inline
import matplotlib.pyplot as plt # side-stepping mpl backend
import warnings
warnings.filterwarnings("ignore")
# ## Discrete Grid
# The region $\Omega$ is discretised into a uniform mesh $\Omega_h$: in the space $x$ direction into $N$ steps giving a stepsize of
# $$ h=\frac{1-0}{N},$$
# resulting in
# $$x[i]=0+ih, \ \ \ i=0,1,...,N,$$
# and into $N_t$ steps in the time $t$ direction giving a stepsize of
# $$ k=\frac{1-0}{N_t}$$
# resulting in
# $$t[j]=0+jk, \ \ \ j=0,1,...,N_t.$$
# The Figure below shows the discrete grid points for $N=10$ and $N_t=1000$ (only the first 15 time steps are drawn), the known boundary conditions (green), initial conditions (blue) and the unknown values (red) of the Heat Equation.
N=10
Nt=1000
h=1/N
k=1/Nt
r=k/(h*h)
time_steps=15
time=np.arange(0,(time_steps+.5)*k,k)
x=np.arange(0,1.0001,h)
X, Y = np.meshgrid(x, time)
fig = plt.figure()
plt.plot(X,Y,'ro');
plt.plot(x,0*x,'bo',label='Initial Condition');
plt.plot(np.ones(time_steps+1),time,'go',label='Boundary Condition');
plt.plot(x,0*x,'bo');
plt.plot(0*time,time,'go');
plt.xlim((-0.02,1.02))
plt.xlabel('x')
plt.ylabel('time')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.title(r'Discrete Grid $\Omega_h,$ h= %s, k=%s'%(h,k),fontsize=24,y=1.08)
plt.show();
# ## Discrete Initial and Boundary Conditions
#
# The discrete initial conditions are
# $$ w[i,0]=2x[i], \ \ 0 \leq x[i] \leq \frac{1}{2} $$
# $$ w[i,0]=2(1-x[i]), \ \ \frac{1}{2} \leq x[i] \leq 1 $$
# and the discrete boundary conditions are
# $$ w[0,j]=0, w[10,j]=0, $$
# where $w[i,j]$ is the numerical approximation of $U(x[i],t[j])$.
#
# The Figure below plots values of $w[i,0]$ for the initial (blue) and boundary (green) conditions at $t[0]=0.$
# +
w=np.zeros((N+1,time_steps+1))
b=np.zeros(N-1)
# Initial Condition
for i in range (1,N):
w[i,0]=2*x[i]
if x[i]>0.5:
w[i,0]=2*(1-x[i])
# Boundary Condition (use j for the time index so the step size k is not overwritten)
for j in range (0,time_steps+1):
    w[0,j]=0
    w[N,j]=0
fig = plt.figure(figsize=(8,4))
plt.plot(x,w[:,0],'o:',label='Initial Condition')
plt.plot(x[[0,N]],w[[0,N],0],'go',label='Boundary Condition t[0]=0')
#plt.plot(x[N],w[N,0],'go')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.title('Initial and Boundary Condition',fontsize=24)
plt.xlabel('x')
plt.ylabel('w')
plt.legend(loc='best')
plt.show()
# -
# ## The Explicit Forward Time Centered Space (FTCS) Difference Equation
# The explicit Forward Time Centered Space (FTCS) difference equation of the Heat Equation
# is derived by discretising
# $$ \frac{\partial u_{ij}}{\partial t} = \frac{\partial^2 u_{ij}}{\partial x^2},$$
# around $(x_i,t_{j})$ giving the difference equation
# $$
# \frac{w_{ij+1}-w_{ij}}{k}=\frac{w_{i+1j}-2w_{ij}+w_{i-1j}}{h^2},
# $$
# Rearranging the equation we get
# $$
# w_{ij+1}=rw_{i-1j}+(1-2r)w_{ij}+rw_{i+1j},
# $$
# for $i=1,...,9$, where $r=\frac{k}{h^2}$.
#
# This gives the formula for the unknown term $w_{ij+1}$ at the $(ij+1)$ mesh points
# in terms of $x[i]$ along the jth time row.
#
# Hence we can calculate the unknown pivotal values of $w$ along the first row of $j=1$ in terms of the known boundary conditions.
#
# This can be written in matrix form
# $$ \mathbf{w}_{j+1}=A\mathbf{w}_{j} +\mathbf{b}_{j} $$
# for which $A$ is a $9\times9$ matrix:
# $$
# \left(\begin{array}{c}
# w_{1j+1}\\
# w_{2j+1}\\
# w_{3j+1}\\
# w_{4j+1}\\
# w_{5j+1}\\
# w_{6j+1}\\
# w_{7j+1}\\
# w_{8j+1}\\
# w_{9j+1}\\
# \end{array}\right)
# =\left(\begin{array}{ccccccccc}
# 1-2r&r&0&0&0&0&0&0&0\\
# r&1-2r&r&0&0&0 &0&0&0\\
# 0&r&1-2r &r&0&0& 0&0&0\\
# 0&0&r&1-2r &r&0&0& 0&0\\
# 0&0&0&r&1-2r &r&0&0& 0\\
# 0&0&0&0&r&1-2r &r&0&0\\
# 0&0&0&0&0&r&1-2r &r&0\\
# 0&0&0&0&0&0&r&1-2r&r\\
# 0&0&0&0&0&0&0&r&1-2r\\
# \end{array}\right)
# \left(\begin{array}{c}
# w_{1j}\\
# w_{2j}\\
# w_{3j}\\
# w_{4j}\\
# w_{5j}\\
# w_{6j}\\
# w_{7j}\\
# w_{8j}\\
# w_{9j}\\
# \end{array}\right)+
# \left(\begin{array}{c}
# rw_{0j}\\
# 0\\
# 0\\
# 0\\
# 0\\
# 0\\
# 0\\
# 0\\
# rw_{10j}\\
# \end{array}\right).
# $$
# It is assumed that the boundary values $w_{0j}$ and $w_{10j}$ are known for $j=1,2,...$, and $w_{i0}$ for $i=0,...,10$ is the initial condition.
#
# The Figure below shows the values of the $9\times 9$ matrix in colour plot form for $r=\frac{k}{h^2}$.
# +
A=np.zeros((N-1,N-1))
for i in range (0,N-1):
    A[i,i]=1-2*r # MAIN DIAGONAL
for i in range (0,N-2):
    A[i+1,i]=r # LOWER (SUB) DIAGONAL
    A[i,i+1]=r # UPPER (SUPER) DIAGONAL
fig = plt.figure(figsize=(6,4));
#plt.matshow(A);
plt.imshow(A,interpolation='none');
plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1));
plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1));
clb=plt.colorbar();
clb.set_label('Matrix elements values');
clb.set_clim((-1,1));
plt.title('Matrix r=%s'%(np.round(r,3)),fontsize=24)
fig.tight_layout()
plt.show();
# -
# ## Results
# To numerically approximate the solution at $t[1]$ the matrix equation becomes
# $$ \mathbf{w}_{1}=A\mathbf{w}_{0} +\mathbf{b}_{0} $$
# where all the right hand side is known.
# To approximate solution at time $t[2]$ we use the matrix equation
# $$ \mathbf{w}_{2}=A\mathbf{w}_{1} +\mathbf{b}_{1}. $$
# Each set of numerical solutions $w[i,j]$ for all $i$ at the previous time step is used to approximate the solution $w[i,j+1]$.
#
# The Figure below shows the numerical approximation $w[i,j]$ of the Heat Equation using the FTCS method at $x[i]$ for $i=0,...,10$ and time steps $t[j]$ for $j=1,...,15$. The left plot shows the numerical approximation $w[i,j]$ as a function of $x[i]$ with each color representing the different time steps $t[j]$. The right plot shows the numerical approximation $w[i,j]$ as colour plot as a function of $x[i]$, on the $x[i]$ axis and time $t[j]$ on the $y$ axis.
#
# For $r>\frac{1}{2}$ the method is unstable, resulting in a solution that oscillates unnaturally between positive and negative values at each time step.
# +
fig = plt.figure(figsize=(12,6))
plt.subplot(121)
for j in range (1,time_steps+1):
    b[0]=r*w[0,j-1]
    b[N-2]=r*w[N,j-1]
    w[1:(N),j]=np.dot(A,w[1:(N),j-1])+b  # include the boundary vector b from the matrix equation
plt.plot(x,w[:,j],'o:',label='t[%s]=%s'%(j,np.round(time[j],4)))
plt.xlabel('x')
plt.ylabel('w')
#plt.legend(loc='bottom', bbox_to_anchor=(0.5, -0.1))
plt.legend(bbox_to_anchor=(-.4, 1), loc=2, borderaxespad=0.)
plt.subplot(122)
plt.imshow(w.transpose())
plt.xticks(np.arange(len(x)), x)
plt.yticks(np.arange(len(time)), np.round(time,4))
plt.xlabel('x')
plt.ylabel('time')
clb=plt.colorbar()
clb.set_label('Temperature (w)')
plt.suptitle('Numerical Solution of the Heat Equation r=%s'%(np.round(r,3)),fontsize=24,y=1.08)
fig.tight_layout()
plt.show()
# -
# ## Local Truncation Error
# The local truncation error of the classical explicit difference approach to
# \begin{equation}
# \frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}=0,
# \end{equation}
# with
#
# \begin{equation}
# F_{ij}(w)=\frac{w_{ij+1}-w_{ij}}{k}-\frac{w_{i+1j}-2w_{ij}+w_{i-1j}}{h^2}=0,
# \end{equation}
# is
# \begin{equation}
# T_{ij}=F_{ij}(U)=\frac{U_{ij+1}-U_{ij}}{k}-\frac{U_{i+1j}-2U_{ij}+U_{i-1j}}{h^2},
# \end{equation}
# By Taylors expansions we have
# \begin{eqnarray*}
# U_{i+1j}&=&U((i+1)h,jk)=U(x_i+h,t_j)\\
# &=&U_{ij}+h\left(\frac{\partial U}{\partial x} \right)_{ij}+\frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2} \right)_{ij}+\frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3} \right)_{ij} +...\\
# U_{i-1j}&=&U((i-1)h,jk)=U(x_i-h,t_j)\\
# &=&U_{ij}-h\left(\frac{\partial U}{\partial x} \right)_{ij}+\frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2} \right)_{ij}-\frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3} \right)_{ij} +...\\
# U_{ij+1}&=&U(ih,(j+1)k)=U(x_i,t_j+k)\\
# &=&U_{ij}+k\left(\frac{\partial U}{\partial t} \right)_{ij}+\frac{k^2}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}+\frac{k^3}{6}\left(\frac{\partial^3 U}{\partial t^3} \right)_{ij} +...
# \end{eqnarray*}
#
# substitution into the expression for $T_{ij}$ then gives
#
# \begin{eqnarray*}
# T_{ij}&=&\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} \right)_{ij}+\frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}
# -\frac{h^2}{12}\left(\frac{\partial^4 U}{\partial x^4} \right)_{ij}\\
# & & +\frac{k^2}{6}\left(\frac{\partial^3 U}{\partial t^3} \right)_{ij}
# -\frac{h^4}{360}\left(\frac{\partial^6 U}{\partial x^6} \right)_{ij}+ ...
# \end{eqnarray*}
# But $U$ is the solution to the differential equation so
# \begin{equation} \left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} \right)_{ij}=0,\end{equation}
#
# the principal part of the local truncation error is
#
# \begin{equation}
# \frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}-\frac{h^2}{12}\left(\frac{\partial^4 U}{\partial x^4} \right)_{ij}.
# \end{equation}
#
#
#
# Hence the truncation error is
# \begin{equation}
# T_{ij}=O(k)+O(h^2).
# \end{equation}
#
#
#
# ## Stability Analysis
#
# To investigate the stability of the fully explicit FTCS difference method of the Heat Equation, we will use the von Neumann method.
# The FTCS difference equation is:
# $$\frac{1}{k}(w_{pq+1}-w_{pq})=\frac{1}{h_x^2}(w_{p-1q}-2w_{pq}+w_{p+1q}),$$
# approximating
# $$\frac{\partial U}{\partial t}=\frac{\partial^2 U}{\partial x^2}$$
#
# at $(ph,qk)$. Substituting $w_{pq}=e^{i\beta ph}\xi^{q}$ into the difference equation gives:
# $$e^{i\beta ph}\xi^{q+1}-e^{i\beta ph}\xi^{q}=r\{e^{i\beta (p-1)h}\xi^{q}-2e^{i\beta ph}\xi^{q}+e^{i\beta (p+1)h}\xi^{q} \}
# $$
#
# where $r=\frac{k}{h_x^2}$. Dividing across by $e^{i\beta ph}\xi^{q}$ leads to
#
# $$ \xi-1=r(e^{-i\beta h} -2+e^{i\beta h}), $$
#
# $$\xi= 1+r (2\cos(\beta h)-2),$$
#
# $$\xi=1-4r(\sin^2(\beta\frac{h}{2})).$$
# Hence
# $$\left| 1-4r(\sin^2(\beta\frac{h}{2}) )\right|\leq 1$$
# for this to hold
# $$ 4r(\sin^2(\beta\frac{h}{2}) )\leq 2 $$
# which means
# $$ r\leq \frac{1}{2}. $$
# Therefore the method is conditionally stable, since $|\xi| \leq 1$ for $r\leq\frac{1}{2}$ and all $\beta$.
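# This stability condition can be checked empirically. A minimal pure-Python sketch (independent of the arrays above) marches the hat initial condition forward with the FTCS update and compares a stable and an unstable value of $r$:

```python
def ftcs_max_after(r, n_interior=9, steps=100):
    # FTCS update w_new[i] = r*w[i-1] + (1-2r)*w[i] + r*w[i+1]
    # applied to the hat initial condition with zero boundary values
    h = 1.0 / (n_interior + 1)
    x = [(i + 1) * h for i in range(n_interior)]
    w = [2 * xi if xi <= 0.5 else 2 * (1 - xi) for xi in x]
    for _ in range(steps):
        padded = [0.0] + w + [0.0]   # boundary values w_0 = w_N = 0
        w = [r * padded[i] + (1 - 2 * r) * padded[i + 1] + r * padded[i + 2]
             for i in range(n_interior)]
    return max(abs(v) for v in w)

print(ftcs_max_after(0.4))   # r < 1/2: the solution decays as heat diffuses
print(ftcs_max_after(0.6))   # r > 1/2: the highest-frequency mode blows up
```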
#
#
# ## References
# [1] <NAME> Numerical Solution of Partial Differential Equations: Finite Difference Method Oxford 1992
#
# [2] <NAME>. (2019). <NAME> Numerical Methods for Differential Equations. [online] Maths.dit.ie. Available at: http://www.maths.dit.ie/~johnbutler/Teaching_NumericalMethods.html [Accessed 14 Mar. 2019].
#
# [3] Wikipedia contributors. (2019, February 22). Heat equation. In Wikipedia, The Free Encyclopedia. Available at: https://en.wikipedia.org/w/index.php?title=Heat_equation&oldid=884580138 [Accessed 14 Mar. 2019].
#
#
#
| Chapter 08 - Heat Equations/.ipynb_checkpoints/801_Heat Equation- FTCS-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
# +
def calc_decision_boundary(weights):
    # weights = [bias, w1, w2]; the boundary w0 + w1*x + w2*y = 0
    # rearranges to y = -(w1/w2)*x - w0/w2
    m = -weights[1] / weights[2]
    b = -weights[0] / weights[2]
    return np.array([m, b])
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
# -
# # Gradient Descent
#
# Let's start with an intuitive explanation. You're lost in the mountains with zero visibility and need to get back to the bottom. At each step, you can feel the grade of the slope beneath your feet. If the slope is descending, you continue forward, otherwise you change course.
#
# We can visualize this with a simple mathematical sample.
# +
x = np.linspace(-1, 1, 100)
y = x**2
# loc depicts our current position
loc_x = 1.0
loc_y = loc_x**2
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.scatter(loc_x, loc_y, c='r')
# -
# Here we are at the top of the hill. How do we determine the slope of a function $f$ at a given point $x$?
#
# Good old calculus: $\frac{df}{dx}$. In this case $\frac{df}{dx} = 2x$
# loc depicts our current position
d_loc = 2 * loc_x
print("Derivative at x = {} is {}".format(loc_x, d_loc))
# We have calculated that at $x = 1$ the slope (derivative) of the function is 2. Now let's use this knowledge to step forward.
#
# ## Update Step
#
# The all important update step is actually quite simple:
#
# $$x_{n+1} = x_n - \frac{df}{dx}.$$
#
# Let's apply this update step and check our position again.
# +
x_new = loc_x - d_loc
y_new = x_new ** 2
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.scatter(loc_x, loc_y, c='g')
ax.scatter(x_new, y_new, c='r')
ax.set_title("One Step of Gradient Descent");
# -
# Wow we took a really big step and ended up on the other side of the function. It seems we missed an important part of gradient descent: the **step size**. To control the step size, we add a single parameter to the update step:
#
# $$x_{n+1} = x_n - \alpha * \frac{df}{dx}.$$
#
# This new value $\alpha$ is a parameter that controls how big of a step we can take. In the context of machine learning, the step size is called the **learning rate**. Let's reduce this to 0.1 and see where it takes us.
# +
alpha = 0.1
x_new = loc_x - alpha * d_loc
y_new = x_new ** 2
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.scatter(x_new, y_new, c='r')
# -
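# The two experiments above can be collapsed into a loop; a minimal sketch of repeated gradient-descent updates on the same $f(x) = x^2$:

```python
def gradient_descent(x0, alpha, steps):
    # Repeatedly apply the update x <- x - alpha * f'(x)
    # for f(x) = x**2, whose derivative is f'(x) = 2x
    x = x0
    for _ in range(steps):
        x = x - alpha * 2 * x
    return x

print(gradient_descent(1.0, 1.0, 1))   # alpha = 1: overshoots to -1.0
print(gradient_descent(1.0, 0.1, 50))  # alpha = 0.1: converges towards 0
```

# With alpha = 0.1 each step multiplies x by 0.8, so the iterate shrinks geometrically towards the minimum.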
# Next, let's use gradient descent to optimize our original linear classifier. Again, here is the data generated from our two distributions.
# +
a_samples = np.random.multivariate_normal([-1, 1], [[0.1, 0], [0, 0.1]], 100)
b_samples = np.random.multivariate_normal([1, -1], [[0.1, 0], [0, 0.1]], 100)
a_targets = np.zeros(100) # Samples from class A are assigned a class value of 0.
b_targets = np.ones(100) # Samples from class B are assigned a class value of 1.
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(a_samples[:, 0], a_samples[:, 1], c='b')
ax.scatter(b_samples[:, 0], b_samples[:, 1], c='r')
# -
# To set this up, we will **initialize** our network's parameters to be random values. We'll also introduce another neat trick. By including the bias in our parameter list, we can slightly simplify the forward calculation.
# +
# Classifier Parameters
weights = np.random.rand(2)
bias = 1
weights = np.concatenate(([bias], weights))
# For visualizing the line
m, b = calc_decision_boundary(weights)
# If the slope is undefined, it is vertical.
if weights[1] != 0:
x = np.linspace(-1, 1, 100)
y = m * x + b
else:
x = np.zeros(100) + b
y = np.linspace(-1, 1, 100)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, c='g')
ax.scatter(a_samples[:, 0], a_samples[:, 1], c='b')
ax.scatter(b_samples[:, 0], b_samples[:, 1], c='r')
plt.axis('equal')
# -
# If the bias is included in our weights vector, how do we compute the result? Concatenate a 1 to the input before multiplying.
# +
# Linear combination of weights and input
y_a = weights @ np.concatenate((np.ones((100, 1)), a_samples), axis=-1).T
y_b = weights @ np.concatenate((np.ones((100, 1)), b_samples), axis=-1).T
# Sigmoid function
pred_a = sigmoid(y_a)
pred_b = sigmoid(y_b)
l2_a = 0.5 * ((a_targets - pred_a)**2)
l2_b = 0.5 * ((b_targets - pred_b)**2)
loss_a = l2_a.sum()
loss_b = l2_b.sum()
print("Loss A = {}".format(loss_a))
print("Loss B = {}".format(loss_b))
# Combine and normalize the error between 0 and 1.
loss = np.concatenate((l2_a, l2_b)).mean()
print("Normalized loss = {}".format(loss))
# -
# So how do we use gradient descent to optimize our simple perceptron? First, note that we will be using the sigmoid function because it is **continuous**. This is a very important property when it comes to optimization because derivatives on discrete functions are undefined, although they can be approximated.
#
# Our current classifier is a series of two functions followed by a loss function:
#
# $$L(g(\mathbf{w}\mathbf{x}))$$
#
# where $g(x) = \frac{1}{1 + \exp^{-x}}$
#
# With each update of gradient descent we want to modify our parameters $\mathbf{w}$. So we need to calculate $\frac{\partial L}{\partial w_i}$. To find this gradient, we utilize the chain rule from Calculus which is:
#
# $$\frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx}.$$
#
# Applying this to our classifier, we have:
#
# $$\frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial y} \frac{\partial y}{\partial a} \frac{\partial a}{\partial w_i}$$
#
# Here we let $a = \sum_i w_i x_i$ so $\frac{\partial a}{\partial w_i} = x_i$ and $\frac{dy}{da} = g'(a) = g(a) * (1 - g(a))$.
#
# The squared L2 loss function we are using is again defined as:
#
# $$L = \frac{1}{2} (y - \hat{y})^2,$$
#
# where $y = g(a)$ is the prediction and $\hat{y}$ is the target (as in the code below, where `y_hat` holds the target). This is a convenient choice because the derivative is simple to calculate:
#
# $$\frac{dL}{dy} = y - \hat{y}.$$
#
# Written fully,
#
# $$\frac{\partial L}{\partial w_i} = (y - \hat{y}) * g(a) * (1 - g(a)) * x_i.$$
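# An analytic gradient like this can always be sanity-checked against a central finite difference. A self-contained sketch (the weights, input, and target below are arbitrary placeholder values):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def loss(w, x, target):
    # L = 0.5 * (g(w.x) - target)**2, with x[0] = 1 carrying the bias
    a = sum(wi * xi for wi, xi in zip(w, x))
    return 0.5 * (sigmoid(a) - target) ** 2

def analytic_grad(w, x, target, i):
    # (prediction - target) * g(a) * (1 - g(a)) * x_i
    a = sum(wi * xi for wi, xi in zip(w, x))
    z = sigmoid(a)
    return (z - target) * z * (1 - z) * x[i]

# Central finite difference on w[1] should agree with the analytic gradient
w, x, t, eps = [0.2, -0.5, 0.3], [1.0, 0.7, -1.2], 1.0, 1e-6
w_hi = [w[0], w[1] + eps, w[2]]
w_lo = [w[0], w[1] - eps, w[2]]
numeric = (loss(w_hi, x, t) - loss(w_lo, x, t)) / (2 * eps)
print(abs(numeric - analytic_grad(w, x, t, 1)))  # tiny: the two agree
```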
# Combine our dataset into one single object
samples = np.concatenate((a_samples, b_samples))
targets = np.concatenate((a_targets, b_targets))
print(samples.shape)
# +
idx = np.random.randint(200, size=1)
alpha = 1
x = samples[idx]
y_hat = targets[idx]
print(x.shape)
# Linear combination of weights and input
a = weights @ np.concatenate((np.ones((1, 1)), x), axis=-1).T
# Sigmoid function
z = sigmoid(a)
loss = 0.5 * (z - y_hat)**2
print("Loss = {}".format(loss))
dw0 = (z - y_hat) * z * (1 - z)
dw1 = (z - y_hat) * z * (1 - z) * x[0, 0]
dw2 = (z - y_hat) * z * (1 - z) * x[0, 1]
print(dw0, dw1, dw2)
weights[0] = weights[0] - alpha * dw0
weights[1] = weights[1] - alpha * dw1
weights[2] = weights[2] - alpha * dw2
# for i in range(200):
# idx = np.random.randint(200, size=1)
# alpha = 1
# x = samples[idx]
# y_hat = targets[idx]
# # Linear combination of weights and input
# a = weights @ np.concatenate((np.ones((1, 1)), x), axis=-1).T
# # Sigmoid function
# z = sigmoid(a)
# loss = 0.5 * (z - y_hat)**2
# dw0 = (z - y_hat) * z * (1 - z)
# dw1 = (z - y_hat) * z * (1 - z) * x[0, 0]
# dw2 = (z - y_hat) * z * (1 - z) * x[0, 1]
# weights[0] = weights[0] - alpha * dw0
# weights[1] = weights[1] - alpha * dw1
# weights[2] = weights[2] - alpha * dw2
# For visualizing the line
m, b = calc_decision_boundary(weights)
# If the slope is undefined, it is vertical.
if weights[1] != 0:
x = np.linspace(-1, 1, 100)
y = m * x + b
else:
x = np.zeros(100) + b
y = np.linspace(-1, 1, 100)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, c='g')
ax.scatter(a_samples[:, 0], a_samples[:, 1], c='b')
ax.scatter(b_samples[:, 0], b_samples[:, 1], c='r')
plt.axis('equal')
| neural_networks/gradient-descent.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MISCELLANEOUS PROBLEMS
# ### 1.
# Write a program that asks the user for the number of kilometres travelled by a motorcycle and the litres of fuel it consumed over that distance. Display the fuel consumption in kilometres per litre.
# <code>Kilometres travelled: 260
# Litres of fuel used: 12.5
# The consumption is 20.8 kilometres per litre</code>
kilometro = float(input('Kilometres travelled: '))
litro = float(input('Litres of fuel used: '))
consumo = kilometro / litro
print(f'The consumption is {consumo} kilometres per litre')
# ### 2.
# Write a program that asks for the coefficients of a quadratic equation <code>(a x² + b x + c = 0)</code> and prints its solution.
#
# Remember that a quadratic equation may have no real solution, a single (double) solution, two solutions, or every number as a solution.
# <img src='https://i.pinimg.com/originals/d3/f7/01/d3f701528ad56ce0f5a98d7c91722fd7.png'>
# Your program must:
# - If the quadratic equation has a real solution, print that solution
# - If the equation has no real solution, print the message "The equation has no real solution"
a = float(input('Enter the first coefficient: '))
b = float(input('Enter the second coefficient: '))
c = float(input('Enter the third coefficient: '))
print(f'The equation is {a}x² + {b}x + {c} = 0')
disc = (b**2) - 4*a*c
if disc < 0:
    print('The equation has no real solution')
elif disc == 0:
    solucion = -b/(2*a)
    print(f'The double root of the equation is {solucion}')
else:
    solucion1 = (-b+(disc**0.5))/(2*a)
    solucion2 = (-b-(disc**0.5))/(2*a)
    print(f'Root 1 of the equation: {solucion1}')
    print(f'Root 2 of the equation: {solucion2}')
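# As a side note, an alternative sketch using the standard-library `cmath` module also reports the complex roots when the discriminant is negative, instead of stopping at "no real solution":

```python
import cmath

def solve_quadratic(a, b, c):
    # Both roots of a*x**2 + b*x + c = 0; cmath.sqrt handles a negative
    # discriminant by returning a complex result instead of raising an error
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))  # real roots 2 and 1
print(solve_quadratic(1, 0, 1))   # complex roots 1j and -1j
```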
| Modulo1/Ejercicios/Problemas Diversos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/valefe/examples-of-torches/blob/main/MAACT_Attention_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="iIytkcd2h7io"
# !pip install --upgrade torchtext
# + id="vT97PDOIbJhe"
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
# + id="TWWCSrdGbEWO"
class PositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
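# The buffer built above can be reproduced in plain Python to see the pattern: even embedding indices get a sine, odd indices a cosine, with wavelengths set by the $10000^{2i/d_{model}}$ schedule. A minimal sketch (not the torch implementation):

```python
import math

def positional_encoding(max_len, d_model):
    # pe[pos][2i]   = sin(pos / 10000**(2i/d_model))
    # pe[pos][2i+1] = cos(pos / 10000**(2i/d_model))
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = positional_encoding(4, 6)
print(pe[0])  # position 0: every sine entry is 0 and every cosine entry is 1
```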
# + id="pu5eRI5UX8xK"
class TransformerModel(nn.Module):
def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
super(TransformerModel, self).__init__()
from torch.nn import TransformerEncoder, TransformerEncoderLayer
self.model_type = 'Transformer'
self.pos_encoder = PositionalEncoding(ninp, dropout)
encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
self.decoder = nn.Linear(ninp, ntoken)
self.init_weights()
def generate_square_subsequent_mask(self, sz):
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange, initrange)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
def forward(self, src, src_mask):
src = self.encoder(src) * math.sqrt(self.ninp)
src = self.pos_encoder(src)
output = self.transformer_encoder(src, src_mask)
output = self.decoder(output)
return output
# + colab={"base_uri": "https://localhost:8080/"} id="3NDeia2VX9_r" outputId="95571c02-9577-4888-8ded-9de81c4c6340"
import io
from torchtext.utils import download_from_url, extract_archive
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
url = 'https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip'
test_filepath, valid_filepath, train_filepath = extract_archive(download_from_url(url))
tokenizer = get_tokenizer('basic_english')
vocab = build_vocab_from_iterator(map(tokenizer,
iter(io.open(train_filepath,
encoding="utf8"))))
def data_process(raw_text_iter):
data = [torch.tensor([vocab[token] for token in tokenizer(item)],
dtype=torch.long) for item in raw_text_iter]
return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))
train_data = data_process(iter(io.open(train_filepath, encoding="utf8")))
val_data = data_process(iter(io.open(valid_filepath, encoding="utf8")))
test_data = data_process(iter(io.open(test_filepath, encoding="utf8")))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def batchify(data, bsz):
# Divide the dataset into bsz parts.
nbatch = data.size(0) // bsz
# Trim off any extra elements that wouldn't cleanly fit (remainders).
data = data.narrow(0, 0, nbatch * bsz)
# Evenly divide the data across the bsz batches.
data = data.view(bsz, -1).t().contiguous()
return data.to(device)
batch_size = 20
eval_batch_size = 10
train_data = batchify(train_data, batch_size)
val_data = batchify(val_data, eval_batch_size)
test_data = batchify(test_data, eval_batch_size)
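# `batchify` trims the token stream to a whole number of batches and reshapes it into `bsz` independent column streams. The same trim-and-transpose logic in a small NumPy sketch (illustrative, not part of the training code):

```python
import numpy as np

def batchify_np(data, bsz):
    nbatch = len(data) // bsz        # keep only full batches
    data = data[:nbatch * bsz]       # trim the remainder
    return data.reshape(bsz, -1).T   # shape (nbatch, bsz): one stream per column

cols = batchify_np(np.arange(11), 2)  # element 10 is trimmed off
```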
# + id="nStAgYOMeNjx"
bptt = 35
def get_batch(source, i):
seq_len = min(bptt, len(source) - 1 - i)
data = source[i:i+seq_len]
target = source[i+1:i+1+seq_len].reshape(-1)
return data, target
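# The target in `get_batch` is simply the source shifted by one position: each token's label is the next token (the real function then flattens the 2-D target for the loss). A pure-Python sketch of the slicing:

```python
# Shifted-target slicing used for next-token language modeling.
def get_batch_demo(source, i, bptt=3):
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i + seq_len]
    target = source[i + 1:i + 1 + seq_len]  # next token at each position
    return data, target

data_demo, target_demo = get_batch_demo([10, 11, 12, 13, 14], 0)
```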
# + id="1rjV7aOZeRj-"
ntokens = len(vocab.stoi) # the size of vocabulary
emsize = 200 # embedding dimension
nhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 2 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 2 # the number of heads in the multiheadattention models
dropout = 0.2 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
# + id="jHu8TDL3eVBk"
criterion = nn.CrossEntropyLoss()
lr = 5.0 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
import time
def train():
model.train() # Turn on the train mode
total_loss = 0.
start_time = time.time()
src_mask = model.generate_square_subsequent_mask(bptt).to(device)
for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
data, targets = get_batch(train_data, i)
optimizer.zero_grad()
if data.size(0) != bptt:
src_mask = model.generate_square_subsequent_mask(data.size(0)).to(device)
output = model(data, src_mask)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
total_loss += loss.item()
log_interval = 200
if batch % log_interval == 0 and batch > 0:
cur_loss = total_loss / log_interval
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches | '
'lr {:02.2f} | ms/batch {:5.2f} | '
'loss {:5.2f} | ppl {:8.2f}'.format(
epoch, batch, len(train_data) // bptt, scheduler.get_lr()[0],
elapsed * 1000 / log_interval,
cur_loss, math.exp(cur_loss)))
total_loss = 0
start_time = time.time()
def evaluate(eval_model, data_source):
eval_model.eval() # Turn on the evaluation mode
total_loss = 0.
src_mask = model.generate_square_subsequent_mask(bptt).to(device)
with torch.no_grad():
for i in range(0, data_source.size(0) - 1, bptt):
data, targets = get_batch(data_source, i)
if data.size(0) != bptt:
src_mask = model.generate_square_subsequent_mask(data.size(0)).to(device)
output = eval_model(data, src_mask)
output_flat = output.view(-1, ntokens)
total_loss += len(data) * criterion(output_flat, targets).item()
return total_loss / (len(data_source) - 1)
# + colab={"base_uri": "https://localhost:8080/", "height": 854} id="5UrKIzSSeXT7" outputId="0143a570-3042-4ae1-946e-06088539579d"
best_val_loss = float("inf")
epochs = 3 # The number of epochs
best_model = None
for epoch in range(1, epochs + 1):
epoch_start_time = time.time()
train()
val_loss = evaluate(model, val_data)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
val_loss, math.exp(val_loss)))
print('-' * 89)
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = model  # note: this aliases the live model; copy.deepcopy(model) would take a true snapshot
scheduler.step()
| MAACT_Attention_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Take Action: Select Day to analyze, and environment
day_to_analyze = "2021-01-14"
env = "PAPER" # values may be PAPER / PROD / BACKTEST
# #### Imports
# +
import asyncio
import json
import math
import sys
from datetime import date, datetime, timedelta
import alpaca_trade_api as tradeapi
import matplotlib.pyplot as plt
import nest_asyncio
import numpy as np
import pandas as pd
import pytz
import requests
from dateutil import parser
from IPython.display import HTML, display, Markdown
from liualgotrader.analytics import analysis
from pytz import timezone
# %matplotlib inline
nest_asyncio.apply()
# -
# #### Load trading day details
day_to_analyze = datetime.strptime(day_to_analyze, "%Y-%m-%d")
trades = analysis.load_trades(day_to_analyze, env)
algo_runs = analysis.load_runs(day_to_analyze, env)
# ## Show trading session performance
# +
symbol_name = []
counts = []
revenues = []
est = pytz.timezone("US/Eastern")
batch_ids = trades.batch_id.unique().tolist()
# batch_ids.reverse()
current_max = pd.options.display.max_rows
pd.set_option("display.max_rows", None)
for batch_id in batch_ids:
how_was_my_day = pd.DataFrame()
how_was_my_day["symbol"] = trades.loc[trades["batch_id"] == batch_id][
"symbol"
].unique()
how_was_my_day["revenues"] = how_was_my_day["symbol"].apply(
lambda x: analysis.calc_batch_revenue(x, trades, batch_id)
)
how_was_my_day["count"] = how_was_my_day["symbol"].apply(
lambda x: analysis.count_trades(x, trades, batch_id)
)
if len(algo_runs.loc[algo_runs["batch_id"] == batch_id].start_time) > 0:
batch_time = (
algo_runs.loc[algo_runs["batch_id"] == batch_id]
.start_time.min()
.tz_localize("utc")
.astimezone(est)
)
else:
continue
env = algo_runs[algo_runs["batch_id"] == batch_id].algo_env.tolist()[0]
win_ratio = round(
1.0
* len(how_was_my_day[how_was_my_day.revenues >= 0])
/ len(how_was_my_day[how_was_my_day.revenues < 0]),
2,
)
revenues = round(sum(how_was_my_day["revenues"]), 2)
traded = len(how_was_my_day)
print(
f"[{env}] {batch_id}\n{batch_time}\nTotal revenues=${revenues}\nTotal traded:{traded} Win/Lose ration {win_ratio}"
)
display(
Markdown(f"{len(how_was_my_day[how_was_my_day.revenues >= 0])} **Winners**")
)
display(how_was_my_day[how_was_my_day.revenues >= 0].sort_values('revenues'))
display(Markdown(f"{len(how_was_my_day[how_was_my_day.revenues < 0])} **Losers**"))
display(how_was_my_day[how_was_my_day.revenues < 0].sort_values('revenues'))
pd.set_option("display.max_rows", current_max)
# -
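# Note that the `win_ratio` computed in the loop above divides the winner count by the loser count and will raise `ZeroDivisionError` on a day with no losing symbols. A guarded helper (hypothetical name, not part of `liualgotrader`'s `analysis` module) could look like:

```python
def win_lose_ratio(revenues):
    """Winners / losers, returning inf when there are no losers."""
    winners = sum(1 for r in revenues if r >= 0)
    losers = sum(1 for r in revenues if r < 0)
    if losers == 0:
        return float('inf') if winners else 0.0
    return round(winners / losers, 2)

ratio = win_lose_ratio([12.5, -3.0, 7.1])
```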
# #### Load stock OHLC details
api = tradeapi.REST(base_url="https://api.alpaca.markets")
minute_history = {}
for symbol in trades.symbol.unique().tolist():
minute_history[symbol] = api.polygon.historic_agg_v2(
symbol,
1,
"minute",
_from=str((day_to_analyze - timedelta(days=3)).date()),
to=str((day_to_analyze + timedelta(days=1)).date()),
).df
# ## Show Trading details per Symbol
est = pytz.timezone("US/Eastern")
position = {}
for symbol in minute_history:
symbol_df = trades.loc[trades["symbol"] == symbol]
start_date = symbol_df["tstamp"].min().to_pydatetime()
start_date = start_date.replace(
hour=9, minute=30, second=0, microsecond=0, tzinfo=est
)
end_date = start_date.replace(hour=16, minute=0, tzinfo=est)
cool_down_date = start_date + timedelta(minutes=5)
try:
start_index = minute_history[symbol]["close"].index.get_loc(
start_date, method="nearest"
)
end_index = minute_history[symbol]["close"].index.get_loc(
end_date, method="nearest"
)
except Exception as e:
print(f"Error for {symbol}: {e}")
continue
cool_minute_history_index = minute_history[symbol]["close"].index.get_loc(
cool_down_date, method="nearest"
)
open_price = float(minute_history[symbol]["close"][cool_minute_history_index])
plt.plot(
minute_history[symbol]["close"][start_index:end_index],
label=symbol,
)
plt.xticks(rotation=45)
delta = 0
profit = 0
operations = []
deltas = []
profits = []
times = []
prices = []
qtys = []
indicators = []
target_price = []
stop_price = []
daily_change = []
precent_vwap = []
algo_names = []
for index, row in symbol_df.iterrows():
delta = (
row["price"]
* row["qty"]
* (1 if row["operation"] == "sell" and row["qty"] > 0 else -1)
)
profit += delta
plt.scatter(
row["tstamp"].to_pydatetime(),
row["price"],
c="g" if row["operation"] == "buy" else "r",
s=100,
)
algo_names.append(row["algo_name"])
deltas.append(round(delta, 2))
profits.append(round(profit, 2))
operations.append(row["operation"])
times.append(pytz.utc.localize(pd.to_datetime(row["tstamp"])).astimezone(est))
prices.append(row["price"])
qtys.append(row["qty"])
indicator = json.loads(row.indicators)
indicators.append(indicator)
target_price.append(row["target_price"])
stop_price.append(row["stop_price"])
daily_change.append(
f"{round(100.0 * (float(row['price']) - open_price) / open_price, 2)}%"
)
precent_vwap.append(
f"{round(100.0 * (indicator['buy']['avg'] - open_price) / open_price, 2)}%"
if indicator["buy"] and "avg" in indicator["buy"]
else ""
)
d = {
"profit": profits,
"algo": algo_names,
"trade": deltas,
"operation": operations,
"at": times,
"price": prices,
"qty": qtys,
"daily change": daily_change,
"vwap": precent_vwap,
"indicators": indicators,
"target price": target_price,
"stop price": stop_price,
}
display(Markdown("***"))
display(Markdown(f"**{symbol}** analysis with profit **${round(profit, 2)}**"))
display(HTML(pd.DataFrame(data=d).to_html()))
plt.legend()
plt.show()
| analysis/notebooks/portfolio_performance_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Classification in GPflow
# --
#
# *<NAME> and <NAME> 2016*
#
# This script replicates
#
# Hensman, Matthews, Ghahramani, AISTATS 2015, Scalable Variational Gaussian Process Classification, Figure 1 Row 1.
#
# It serves to demonstrate sparse variational GP classification on a simple, easily visualized dataset.
# +
from matplotlib import pyplot as plt
# %matplotlib inline
import sys
import csv
import numpy as np
import gpflow
from gpflow.test_util import notebook_niter, notebook_list
import logging
logging.disable(logging.WARN)
# -
Xtrain = np.loadtxt('data/banana_X_train', delimiter=',')
Ytrain = np.loadtxt('data/banana_Y_train', delimiter=',').reshape(-1,1)
# +
def gridParams():
mins = [-3.25,-2.85 ]
maxs = [ 3.65, 3.4 ]
nGrid = 50
xspaced = np.linspace(mins[0], maxs[0], nGrid)
yspaced = np.linspace(mins[1], maxs[1], nGrid)
xx, yy = np.meshgrid(xspaced, yspaced)
Xplot = np.vstack((xx.flatten(),yy.flatten())).T
return mins, maxs, xx, yy, Xplot
def plot(m, ax):
col1 = '#0172B2'
col2 = '#CC6600'
mins, maxs, xx, yy, Xplot = gridParams()
p = m.predict_y(Xplot)[0]
ax.plot(Xtrain[:,0][Ytrain[:,0]==1], Xtrain[:,1][Ytrain[:,0]==1], 'o', color=col1, mew=0, alpha=0.5)
ax.plot(Xtrain[:,0][Ytrain[:,0]==0], Xtrain[:,1][Ytrain[:,0]==0], 'o', color=col2, mew=0, alpha=0.5)
if hasattr(m, 'feature') and hasattr(m.feature, 'Z'):
Z = m.feature.Z.read_value()
ax.plot(Z[:,0], Z[:,1], 'ko', mew=0, ms=4)
ax.set_title('m={}'.format(Z.shape[0]))
else:
ax.set_title('full')
ax.contour(xx, yy, p.reshape(*xx.shape), [0.5], colors='k', linewidths=1.8, zorder=100)
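# `gridParams` builds the flattened 2-D evaluation grid that `predict_y` consumes; the meshgrid-then-stack idiom is worth seeing in isolation (NumPy-only sketch with a toy grid size):

```python
import numpy as np

nGrid_demo = 4
xx_d, yy_d = np.meshgrid(np.linspace(-1.0, 1.0, nGrid_demo),
                         np.linspace(-1.0, 1.0, nGrid_demo))
# one (x, y) row per grid node, ready to feed to a model's predict method
Xplot_demo = np.vstack((xx_d.flatten(), yy_d.flatten())).T
```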
# +
# Setup the experiment and plotting.
Ms = [4, 8, 16, 32, 64]
# Run sparse classification with increasing number of inducing points
models = []
for index, num_inducing in enumerate(notebook_list(Ms)):
# kmeans for selecting Z
from scipy.cluster.vq import kmeans
Z = kmeans(Xtrain, num_inducing)[0]
m = gpflow.models.SVGP(
Xtrain, Ytrain, kern=gpflow.kernels.RBF(2),
likelihood=gpflow.likelihoods.Bernoulli(), Z=Z)
# Initially fix the inducing inputs.
m.feature.set_trainable(False)
gpflow.train.ScipyOptimizer().minimize(m, maxiter=notebook_niter(20))
# Unfix the inducing inputs.
m.feature.set_trainable(True)
gpflow.train.ScipyOptimizer(options=dict(maxiter=notebook_niter(200))).minimize(m)
models.append(m)
# -
# Run the variational approximation without sparsity.
# Be aware that this is much slower for big datasets,
# but relatively quick here.
m = gpflow.models.VGP(Xtrain, Ytrain,
kern=gpflow.kernels.RBF(2),
likelihood=gpflow.likelihoods.Bernoulli())
gpflow.train.ScipyOptimizer().minimize(m, maxiter=notebook_niter(2000))
models.append(m)
# +
# make plots.
fig, axes = plt.subplots(1, len(models), figsize=(12.5, 2.5), sharex=True, sharey=True)
for i, m in enumerate(models):
plot(m, axes[i])
axes[i].set_yticks([])
axes[i].set_xticks([])
# -
| doc/source/notebooks/classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/restrepo/rest-api-doc/blob/master/code/lens.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="13Rd5QLYs4F1" colab_type="text"
# ## Lens.org
# Downloaded Feb 28, 2020
# 1. https://link.lens.org/qZU5XxZlg0
# ```
# Scholarly Works (24,623) = Institution Name: University of Antioquia
# Filters: Date published = ( 1920-01-01 - 2018-12-31 )
# ```
# 2. https://link.lens.org/Emw54rq1ykj
# ```
# Scholarly Works (1,313) = Institution Name: University of Antioquia
# Filters: Date published = ( 2019-01-01 - 2019-12-31 )
# ```
# 3. https://link.lens.org/sLRYf3gaFob
# ```
# Scholarly Works (83) = Institution Name: University of Antioquia
# Filters: Date published = ( 2020-01-01 - 2020-02-28 )
# ```
# + id="AebBxyf-scOW" colab_type="code" colab={}
import pandas as pd
import time
# + [markdown] id="--c-Sc3XF8fQ" colab_type="text"
# ## Merge WEB and API data
# + [markdown] id="UfJRzAzrGC3X" colab_type="text"
# ### WEB data
# + id="z-i63Bv0scOb" colab_type="code" colab={}
data='https://github.com/restrepo/rest-api-doc/raw/master/data'
ln=pd.read_json('{}/lens_udea_2018-12-31.json.gz'.format(data),
compression='gzip')
time.sleep(1)
ln=ln.append(
pd.read_json('{}/lens_udea_2019-01-01_2019-12-31.json'.format(data)))
time.sleep(1)
ln=ln.append(
pd.read_json('{}/lens_udea_2020-01-01_2020-12-31.json'.format(data)))
ln=ln.reset_index(drop=True)
# + id="pnue73AlscOe" colab_type="code" outputId="c2096bbc-7732-4ab8-f29e-9fad84746e63" colab={"base_uri": "https://localhost:8080/", "height": 34}
ln.shape
# + id="Kww3_nlCscOi" colab_type="code" outputId="95318a83-2d1d-4932-b5a5-e3f480e98b84" colab={"base_uri": "https://localhost:8080/", "height": 383}
ln[:3]
# + [markdown] id="zRSpsscfGNnf" colab_type="text"
# ### Lens API file
# + id="mrfJO27-GQ22" colab_type="code" colab={}
lnapi=pd.read_json('https://github.com/restrepo/lensapi/raw/master/data/udea.json.gz')
# + id="FArknaKyGtEG" colab_type="code" outputId="f4dda540-87fe-44c6-8f82-4b2ea5ca76eb" colab={"base_uri": "https://localhost:8080/", "height": 120}
ln.columns
# + id="UFrgu6jdGvZ0" colab_type="code" outputId="d5d6c069-e444-4234-b190-a36c1fb0dafc" colab={"base_uri": "https://localhost:8080/", "height": 165}
lnapi.columns
# + id="wQw0BBSMNGha" colab_type="code" outputId="a6775101-2a2a-4d7b-b8e4-a26e97c809d9" colab={"base_uri": "https://localhost:8080/", "height": 171}
import numpy as np  # pd.np was deprecated and removed in pandas 1.0
new_cols=list(np.setdiff1d(lnapi.columns,ln.columns))
new_cols
# + id="P_Poo2XwP-u0" colab_type="code" outputId="ffc85547-397d-4333-e627-abbd8d0f1a9f" colab={"base_uri": "https://localhost:8080/", "height": 222}
ln.lens_id
# + [markdown] id="GEae4a_EGLwP" colab_type="text"
# ### Merge web+api
# + id="0Pw-nwcbQMsG" colab_type="code" colab={}
udea=ln.merge(lnapi[['lens_id']+new_cols ],on='lens_id',how='left')
# + id="LuRA0tyRRCFv" colab_type="code" outputId="ebd98d5a-4d3b-40e8-80c5-3e6a587c4439" colab={"base_uri": "https://localhost:8080/", "height": 34}
ln.shape[0],udea.shape[0]
# + id="FSqhPEYSRtsc" colab_type="code" colab={}
udea.to_json('udea_web_merge_api_records_True.json.gz',orient='records',
lines=True,compression='gzip')
# + id="x6wkAYOUSc76" colab_type="code" outputId="b3e27021-310d-47b0-9703-f4b8665da2cb" colab={"base_uri": "https://localhost:8080/", "height": 34}
# ls -lh udea_web_merge_api_records_True.json.gz
# + [markdown] id="3sC2Z4TERCvC" colab_type="text"
# # Load merged file from Google Drive
# + id="abNHJmihWXjK" colab_type="code" colab={}
# !pip install wosplus 2> /dev/null > /dev/null
# + id="EzccBOwBWp-e" colab_type="code" colab={}
import wosplus as wp
# + id="U8f0d0ZWXsia" colab_type="code" outputId="fe2baa59-1e49-4b73-a2fc-b583404bd5de" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %%writefile drive.cfg
[FILES]
udea_web_merge_api_records_True.json = 1_0jxpFlM_VqwguubL6vMMmi9Mbeslokp
# + id="ZiANrq-kWs-5" colab_type="code" colab={}
u=wp.wosplus('drive.cfg')
# + id="ASEx9wXdYMyv" colab_type="code" colab={}
u.load_biblio('udea_web_merge_api_records_True.json',orient='records',lines=True)
# + id="HKQARcNDYwNx" colab_type="code" colab={}
udeaapi=u.WOS.drop('Tipo',axis='columns').reset_index(drop=True)
# + id="MfTPmhJZRS01" colab_type="code" outputId="93b5e798-7e2f-4ab8-b1d9-ad49827b501a" colab={"base_uri": "https://localhost:8080/", "height": 181}
udeaapi.columns
# + id="Ep7lmhyaULQb" colab_type="code" outputId="014d8d9a-fd3c-4802-af92-4eb9294a657f" colab={"base_uri": "https://localhost:8080/", "height": 138}
columns=[
#{'Colav':'','Lens':'','WOS' :'','SCP':'','Google_Scholar':'','CrossRef':''},
{'Colav' :{'name' :'title',
'type' :'list',
'value' :[{'name' :None,
'type' :'dict',
'value':{'name_1':'en',
'type_1':'str',
'name_2':'es',
'type_2':'str'
}
}]},
'Lens' :{'name' :'title',
'type' :'str'},
'WOS' :{'name' :'TI',
'type' :'str'},
'SCP' :{'name' :'Title',
'type' :'str',
'format':'en (es) OR en (e[s])'},
'Google_Scholar':{'name':'title',
'type':'str'}
},
{'Colav' :{'name':'abstract',
'type':'str'},
'Lens' :{'name':'abstract',
'type':'str'},
'WOS' :{'name':'AB',
'type':'str'},
'SCP' :{'name':'Abstract',
'type':'str'},
'Google_Scholar':{'name':'abstract',
'type':'str'}
},
{'Colav' :{'name' :'authors',
'type' :'list',
'value' :[{'name' :None,
'type' :'dict',
'value':{'name_1':'en',
'type_1':'str',
'name_2':'es',
'type_2':'str'
}
}]},
'Lens' :{'name' :'authors',
'type' :'list',
'value' :[{'name' :None,
'type' :'dict',
'value':{'name_1' :'first name',
'type_1' :'str',
'format_1':'Name1 Name2',
'name_2':'es',
'type_2':'str'
}
}]}
}
]
data=pd.DataFrame(columns)
data
# + id="lRWKoJkfMMfQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="a2cbdc89-0566-43ac-d220-5144c0702dfe"
udeaapi.authors.loc[0]
# + id="HuXFfoP8ehcY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="34d21d29-2c45-4277-ac88-35c111a1e998"
udeaapi[udeaapi.authors.apply(lambda l: [ d.get('collective_name')#[dd for dd in d.get('collective_name') if isinstance(d.get('collective_name'),list)]
for d in l if isinstance(d,dict)] if isinstance(l,list) else None
).apply(lambda l: [n for n in l if n is not None] if isinstance(l,list) else [] ).apply(len)>0]
# + id="aYZyN6Z9TOfF" colab_type="code" colab={}
data.to_json('kk.json')
# + id="5fq5cRxtTTL2" colab_type="code" outputId="a300a6d6-22b9-4e05-ffa1-18b61d153af5" colab={"base_uri": "https://localhost:8080/", "height": 110}
data=pd.read_json('kk.json')
data
# + id="sOeK3JiqVoR7" colab_type="code" outputId="25724a5b-8fef-4832-bb9e-1c667ec80368" colab={"base_uri": "https://localhost:8080/", "height": 34}
data.Colav.apply(lambda d: d.get('value') if isinstance(d,dict) else None ).loc[0]
# + id="MQYEP9MPTzpE" colab_type="code" outputId="effa0ee0-2da9-473c-9038-6fa5d95285b3" colab={"base_uri": "https://localhost:8080/", "height": 34}
data.Colav.apply(lambda d: [dd.get('value') for dd in d.get('value')] if isinstance(d,dict) else None ).loc[0]
# + id="Vel9ay_d9-cc" colab_type="code" outputId="d6890107-0f87-4f5e-9c50-19565b0b208a" colab={"base_uri": "https://localhost:8080/", "height": 34}
udeaapi.title.loc[26014]
# + [markdown] id="ErDarKc4scOl" colab_type="text"
# # Affiliation analysis
# + id="sAgizVtLscOm" colab_type="code" outputId="aa748d08-4cbc-46c1-bc82-723220be7b5e" colab={}
i=200
print(i,end='\r')
print(i,end='\r')
# + id="zGHjhp8lscOo" colab_type="code" colab={}
# Extract dictionary columns from a list of dictionaries
key={}
key[0]='authors' # level 0. Must be a string
key[1]=['affiliations','first_name','last_name','initials']
def nesteddf(df,key):
aff=pd.DataFrame()
#split df
dfy=df[~df[key[0]].isna()].reset_index(drop=True)
dfn=df[df[key[0]].isna()].reset_index(drop=True)
#dfy[key] is a list of dictionaries
lenmax=dfy[key[0]].apply(len).max()
for i in range(lenmax):
if i%100==0:print(i,end='\r')
i_aff=pd.DataFrame()
#select dictionary i from list
i_key=dfy[~dfy[key[0]].str[i].isna()].reset_index(drop=True)
for k in key[1]:
i_aff[k]=i_key[key[0]].str[i].apply( lambda x: x.get(k) )
#i_aff now has the key[1] columns
aff=aff.append(i_aff).reset_index(drop=True)
return aff
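# `nesteddf` hand-rolls the explosion of a list-of-dicts column. In pandas ≥ 0.25 the same shape of result can usually be reached with `explode` plus `json_normalize` (a sketch on toy data, not a drop-in replacement — it flattens every key rather than only `key[1]` and does not preserve the NaN split):

```python
import pandas as pd

df_demo = pd.DataFrame({'authors': [
    [{'first_name': 'A', 'last_name': 'X'}, {'first_name': 'B', 'last_name': 'Y'}],
    [{'first_name': 'C', 'last_name': 'Z'}],
]})
exploded = df_demo.explode('authors').dropna(subset=['authors'])
flat = pd.json_normalize(exploded['authors'].tolist())  # one row per author dict
```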
# + id="VSAJN78MscOr" colab_type="code" outputId="9d42f532-a8f7-44a6-c7c7-bae158f690e5" colab={}
aff=nesteddf(ln,key)
# + [markdown] id="wLMWBaqRscOu" colab_type="text"
# ## Extract affiliations
# + id="7o9lSlbOscOu" colab_type="code" colab={}
daff=aff[aff.affiliations.apply(len)>0].reset_index(drop=True)
daff['name_0']=daff.affiliations.str[0].apply( lambda x: x.get('name') )
# + [markdown] id="x2JEx100scOx" colab_type="text"
# Drop repeated full names with same primary affiliation
# + id="BtSc4GN_scOy" colab_type="code" colab={}
daff=daff.drop_duplicates(['name_0','first_name','last_name','initials']).sort_values('last_name').reset_index(drop=True)
# + [markdown] id="pkoTcCk5scO2" colab_type="text"
# Number of "different" authors of the full set of papers:
# + id="lWgNhfOlscO3" colab_type="code" outputId="9795116d-b01f-408d-f166-df84496b8b02" colab={}
daff.shape
# + [markdown] id="t7G9wNdascO6" colab_type="text"
# normalize authors
# + id="6KPI6GCRscO7" colab_type="code" colab={}
daff['full_name']=daff['first_name']+' '+daff['last_name']
# + [markdown] id="qhHMy3RrscO9" colab_type="text"
# Checking only primary affiliation
# + id="mVbW6AyqscO-" colab_type="code" outputId="9dcd625d-e94d-407c-f711-c9d3e3e5a9ab" colab={}
tmp=daff[daff.name_0=='<NAME>']
tmp.shape
# + [markdown] id="O3G5CTn3scPA" colab_type="text"
# Check some author
# + id="-JdL71lQscPB" colab_type="code" outputId="f03578ba-3499-4f83-e644-0842e106f57b" colab={}
daff[ ( daff.full_name.str.contains('Restrepo') ) & ( daff.full_name.str.contains('Diego') ) ].sort_values('full_name')
# + id="FXEK8JhcscPF" colab_type="code" outputId="59a96a62-5003-4371-a3be-86c8c06e3530" colab={}
daff[ ( daff.full_name.str.contains(r'D[\.\s]Restrepo') ) & ( daff.full_name.str.contains(r'^D[\.\s]') ) ].sort_values('full_name')
# + [markdown] id="BrfJlk9mscPI" colab_type="text"
# Search all declared affiliations
# + id="CQhX-tvescPJ" colab_type="code" outputId="71a7f85e-7151-4363-d57d-faed46fafcb9" colab={}
key={}
key[0]='affiliations' # level 0. Must be an string
key[1]=['grid','name','name_raw']
aff=nesteddf(daff,key)
# + id="mz8-hoHBscPL" colab_type="code" outputId="7a8cfa5c-af0f-4b12-f4f6-492fe524e3e3" colab={}
aff[:3]
# + id="AzDStBvXscPN" colab_type="code" outputId="d781487c-c3eb-4c8c-d10c-ea93a3cdbfc1" colab={}
aff [ ( aff.name=='University of Antioquia' ) & (
(aff.name_raw.str.contains('F[íi]sica')) | (aff.name_raw.str.contains('Physics'))
) ].shape
# + id="l4v3FKQXscPP" colab_type="code" outputId="bcc303b4-c2c0-4f4d-c0ca-6b55d1afce51" colab={}
aff[ ( aff.name=='University of Antioquia' ) & ( aff.name_raw.str.contains('Physics') ) ].shape
# + id="hYLyR_2_scPU" colab_type="code" colab={}
rdf=pd.DataFrame()
# + id="-HHUtAIVscPX" colab_type="code" colab={}
rdf['institution']=aff.name.value_counts().keys()
# + id="N5b0MudJscPZ" colab_type="code" colab={}
rdf['colaborators']=aff.name.value_counts().values
# + [markdown] id="2-tLPV8bscPb" colab_type="text"
# Recover full information of institution
# + id="2xtXj7iMscPc" colab_type="code" colab={}
rdf=rdf.merge(aff[['grid','name']].drop_duplicates('name').reset_index(drop=True),
left_on='institution',right_on='name',how='left').drop('name',axis='columns')
# + id="Y-Xd6zL2scPe" colab_type="code" outputId="3258f8eb-c4fa-4cd9-ed3f-4adbca9f0017" colab={}
rdf[:1]
# + id="uVYG4QmpscPh" colab_type="code" colab={}
key={}
key[0]='grid'
key[1]=['addresses','wikipedia_url','id','links','types']
def extract_nested_columns(df,key,col_type='dict'):
import sys
if col_type=='dict':
sdf=df[key[0]]
elif col_type=='list':
sdf=df[key[0]].str[0]
else:
sys.exit('bad column type')
dfy=df[~sdf.isna()].reset_index(drop=True)
dfn=df[ sdf.isna()].reset_index(drop=True)
for k in key[1]:
if col_type=='dict':
sdfy=dfy[key[0]]
elif col_type=='list':
sdfy=dfy[key[0]].str[0]
dfy[k]=sdfy.apply( lambda x: x.get(k) )
dfn[k]=None
dft=dfy.append(dfn).reset_index(drop=True)
return dft
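# `extract_nested_columns` pulls the selected keys out of a dict-valued column one `apply` at a time; `pd.json_normalize` flattens every key in a single pass. A hedged sketch under the same assumptions (a dict column with occasional missing values):

```python
import pandas as pd

rdf_demo = pd.DataFrame({'grid': [{'id': 'grid.1', 'wikipedia_url': 'http://example.org'},
                                  None]})
has_grid = rdf_demo[rdf_demo['grid'].notna()]
grid_cols = pd.json_normalize(has_grid['grid'].tolist())  # one column per key
```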
# + id="OtqkdLOjscPk" colab_type="code" colab={}
df=extract_nested_columns(rdf,key)
# + id="Kno4v20TscPn" colab_type="code" outputId="1be91704-5d93-4867-cd35-967135ebd0d2" colab={}
df[:2]
# + id="49Hm7kOUscPp" colab_type="code" colab={}
key={}
key[0]='addresses'
key[1]=['city','country_code','lat','lon']
dff=extract_nested_columns(df,key,col_type='list')
# + id="tjPvfrAsscPt" colab_type="code" colab={}
dff=dff.drop(['grid','addresses'],axis='columns')
# + id="Qw0NPLSwscPw" colab_type="code" outputId="1d1f8036-a8d6-4e8c-bce2-c0e1bd738f33" colab={}
dff
# + id="hsYnoeJRscP0" colab_type="code" colab={}
dff[1:].to_excel('colab.xlsx',index=False)
# + id="xCTg66RQscP3" colab_type="code" outputId="d656b75f-043c-4dff-b87d-cb59e817189a" colab={}
pd.read_excel('colab.xlsx')
# + id="irZOLGogscP5" colab_type="code" outputId="f5a41fbd-4918-4383-cf27-620fc2e22ccf" colab={}
dff.country_code.value_counts()
# + [markdown] id="2hwyc42tscP7" colab_type="text"
# ## Checks
# + id="141Nz-L1scP8" colab_type="code" outputId="0f1738bb-b674-45d2-867a-b30888c139ab" colab={}
dff[dff.country_code=='CO'].reset_index(drop=True)
# + id="jRhce3ZMscP_" colab_type="code" colab={}
autores_udea=daff[daff.name_0=='University of Antioquia'].reset_index(drop=True)
# + id="TDppvcFJscQC" colab_type="code" colab={}
autores_udea_norm=autores_udea[~autores_udea.first_name.isna()].reset_index(drop=True)
autores_udea_norm=autores_udea_norm[~autores_udea_norm.last_name.isna()].reset_index(drop=True)
# + id="uhsnvK3EscQE" colab_type="code" outputId="5977b6e7-4255-4879-b054-72e4e39622d6" colab={}
autores_udea_norm[autores_udea_norm.first_name.str.contains('Restrepo')].reset_index(drop=True)[29:]
# + id="43lulqjPscQG" colab_type="code" outputId="61971a74-c1c2-4bcc-ccdb-2b6b35b709f6" colab={}
daff[daff.name_0!='University of Antioquia'][:10]
# + id="j2u54oT4scQI" colab_type="code" outputId="32e2bc1a-929c-4a41-a655-acfd9c1a70cd" colab={}
autores_udea_norm[( autores_udea_norm.last_name.str.contains('Restrepo') ) &
( autores_udea_norm.first_name.str.contains('D') )].reset_index(drop=True)
# + id="u1l8kqYZscQM" colab_type="code" outputId="313c7cd4-bcc9-48e7-cb85-34cc307c40d3" colab={}
autores_udea_norm[( autores_udea_norm.first_name.str.contains('Restrepo') ) &
( autores_udea_norm.first_name.str.contains('D') )].reset_index(drop=True)
# + id="Y0j4cwi0scQQ" colab_type="code" outputId="006f9370-c4ce-4d99-b769-2249714c69eb" colab={}
aff.grid[0]
# + id="qdt27-s3scQR" colab_type="code" outputId="2852a980-1b4f-43b9-84a9-d53577976528" colab={}
aff[:1].grid.apply( lambda x: x.get('addresses') )
# + id="HcovEJrOscQT" colab_type="code" outputId="eab35085-21b1-42ad-8193-6266fea74535" colab={}
# Extract dictionary columns from a list of dictionaries
key={}
key[0]='authors' # level 0. Must be a string
key[1]=['affiliations','first_name','last_name','initials']
df=ln
aff=pd.DataFrame()
if True:
#split df
dfy=df[~df[key[0]].isna()].reset_index(drop=True)
dfn=df[df[key[0]].isna()].reset_index(drop=True)
#dfy[key] is a list of dictionaries
lenmax=dfy[key[0]].apply(len).max()
for i in range(lenmax):
print(i)
i_aff=pd.DataFrame()
#select dictionary i from list
i_key=dfy[~dfy[key[0]].str[i].isna()].reset_index(drop=True)
for k in key[1]:
i_aff[k]=i_key[key[0]].str[i].apply( lambda x: x.get(k) )
#i_aff now has the key[1] columns
aff=aff.append(i_aff).reset_index(drop=True)
# + id="mwTTcAxmscQZ" colab_type="code" outputId="d5fb7fd0-4e38-4502-9b73-cff6033c93c9" colab={}
i_aff['affiliations'][0]
# + id="hSNrv961scQa" colab_type="code" outputId="b8b11756-b55f-4014-8ff7-1bb2357835df" colab={}
i_aff[5:6]
# + id="jLpf9HbYscQd" colab_type="code" colab={}
lenmax=dfy[key[0]].apply(len).max()
for i in [100]:#range(1):
s=dfy[key[0]].str[i]#.apply( lambda x: x.get(getkey) )
# + id="pI34QHIkscQe" colab_type="code" outputId="807f305c-3882-43e9-902f-0061c92157f9" colab={}
len( dfy[~dfy[key[0]].str[i].isna()][key[0]][553] )
# + id="SXMUFv7HscQg" colab_type="code" colab={}
ln['aff']=ln.authors.str[0].apply(lambda x: x.get('affiliations')).str[0]
# + id="JZNTCfdAscQh" colab_type="code" colab={}
lny=ln[~ln.aff.isna()].reset_index(drop=True)
lnn=ln[ln.aff.isna()].reset_index(drop=True)
# + id="NvRO35QyscQk" colab_type="code" colab={}
lnn['aff_name']=None
# + id="X6jp4LxKscQl" colab_type="code" colab={}
lny['aff_name']=lny.aff.apply(lambda x: x.get('name'))
# + id="h1tESY7nscQn" colab_type="code" colab={}
ln=lny.append(lnn).reset_index(drop=True)
# + id="gMjFfryoscQo" colab_type="code" outputId="88090140-2c20-490a-9f3d-df668fd7657d" colab={}
ln.aff.apply(lambda x: x.get('grid'))[:6].apply(lambda x: x.get('addresses'))
# + id="QukHsDTxscQp" colab_type="code" colab={}
x={'ids': [], 'grid': None}
# + id="W5XxooydscQr" colab_type="code" outputId="979ba9ac-da82-4107-a3e5-231933f8861c" colab={}
x.get('grid') is None
# + id="mE0PaJYascQs" colab_type="code" outputId="094ff526-08c4-4950-c5b0-33feff1478b0" colab={}
ln.aff[:6]
# + id="sxuOQNxIscQu" colab_type="code" colab={}
lnn=ln[ln.aff.apply(lambda x: x.get('grid') is None)].reset_index(drop=True)
lny=ln[~ln.aff.apply(lambda x: x.get('grid') is None)].reset_index(drop=True)
# + id="sbLIx91EscQv" colab_type="code" colab={}
lny['country_code']=lny.aff.apply(
lambda x: x.get('grid') ).apply(
lambda x: x.get('addresses') ).str[0].apply(
lambda x: x.get('country_code') )
# + id="N5L_awTAscQy" colab_type="code" outputId="a66f0d76-2f07-4ab6-dd15-2338499e4e15" colab={}
lny.aff.apply(
lambda x: x.get('grid') ).apply(
lambda x: x.get('addresses') )
# + id="VOhciGdZscQ0" colab_type="code" outputId="1d594964-e688-4d04-a0b6-bb8f3d5d1ab4" colab={}
ln.citation_ids[12]
# + id="WXNespiYscQ3" colab_type="code" outputId="2c43f33e-1b35-47d6-c818-6c1efc149625" colab={}
ln.title[12]
# + id="pHG0btoWscQ5" colab_type="code" outputId="7dea4bef-03d9-4f12-a524-976a104a2a1a" colab={}
ln.references[12]
# + id="t8ISEF6lscQ7" colab_type="code" outputId="5a32dd05-ead6-4ad8-a5bf-4a55e9f6c1d0" colab={}
ln[ln.record_lens_id.str.contains('269-340-095-748')]
# + id="Sp0Ckc4oscQ8" colab_type="code" outputId="2<PASSWORD>-2<PASSWORD>" colab={}
pd.DataFrame( pd.DataFrame( ln.authors[0] ).affiliations[0] ).grid[0]
# + id="hr1q2cn3scQ-" colab_type="code" colab={}
| code/lens.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import tensorflow as tf
# things we need for NLP
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
# things we need for Tensorflow
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import SGD
import pandas as pd
import pickle
import random
# -
import json
with open('intents edited.json') as file:
intents = json.load(file)
data = pickle.load( open( "HC-assistant-data.pkl", "rb" ) )
words = data['words']
classes = data['classes']
with open('HC-assistant-model.pkl', 'rb') as f:
model = pickle.load(f)
# +
def clean_up_sentence(sentence):
# tokenize the pattern - split words into array
sentence_words = nltk.word_tokenize(sentence)
# stem each word - create short form for word
sentence_words = [stemmer.stem(word.lower()) for word in sentence_words]
return sentence_words
# return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
def bow(sentence, words):
# tokenize the pattern
sentence_words = clean_up_sentence(sentence)
# bag of words - matrix of N words, vocabulary matrix
bag = [0]*len(words)
for s in sentence_words:
for i,w in enumerate(words):
if w == s:
# assign 1 if current word is in the vocabulary position
bag[i] = 1
    return np.array(bag)
# -
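# The bag-of-words encoding above can be illustrated on a toy vocabulary. This is a minimal, dependency-free sketch: a plain `split()` tokenizer stands in for the NLTK tokenize/stem pipeline, but the 0/1 encoding idea is identical.

```python
import numpy as np

# Toy vocabulary standing in for the `words` list loaded from the pickle.
vocab = ["hello", "price", "package", "internet"]

def toy_bow(sentence, words):
    # mark each vocabulary word present in the (lowercased) sentence
    tokens = sentence.lower().split()
    return np.array([1 if w in tokens else 0 for w in words])

encoded = toy_bow("Hello what is the PRICE", vocab)
print(encoded)  # → [1 1 0 0]
```

# Only "hello" and "price" appear in the sentence, so only their positions are set.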
def chat():
print("Chat with HEIGHTS COMMUNICATIONS bot assistant! (type quit to exit)\n\n")
while True:
inp = input("You: ")
if inp.lower() == "quit":
break
input_data = pd.DataFrame([bow(inp, words)], dtype=float, index=['input'])
results = model.predict([input_data])[0]
results_index = np.argmax(results)
tag = classes[results_index]
if results[results_index] > 0.7:
for tg in intents["intents"]:
if tg['tag'] == tag:
responses = tg["responses"]
print("Assistant:", random.choice(responses))
else:
print("Assistant:","I am sorry. I am not aware of what you are asking. Please call us at +923232779999")
chat()
| AI chatbot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:part_counting]
# language: python
# name: conda-env-part_counting-py
# ---
# %load_ext autoreload
# %autoreload 2
# +
import cv2
import matplotlib.pyplot as plt
import numpy as np
import open3d as o3d
from dotenv import load_dotenv, find_dotenv
from pathlib import Path
from src.data.rgbd import load_rgbd
from src.data.pcd import load_pcd
# find .env automagically by walking up directories until it's found
dotenv_path = find_dotenv()
project_dir = Path(dotenv_path).parent
# load up the entries as environment variables
load_dotenv(dotenv_path)
o3d.visualization.webrtc_server.enable_webrtc()
# +
raw_data_dir = project_dir/'data/raw/render_results_imov_cam_mist_simple'
img_fpath = np.random.choice(list(raw_data_dir.glob('*/*.exr')))
# -
# # Fix distortion
#
# Our depth channel stores the Euclidean distance from each point to the camera center, rather than the depth along the camera's optical axis. This was causing distortions in the point clouds.
# +
f = 711.1111
cx = cy = 256
# `depth` is assumed to hold the depth channel of the EXR at `img_fpath`,
# loaded in an earlier (omitted) cell.
rows, cols = depth.shape
c, r = np.meshgrid(np.arange(cols), np.arange(rows), sparse=True)
c = c - cx
r = r - cy
valid = (depth > 0) & (depth < 1.5)
d_px = np.where(valid, np.sqrt(np.square(c) + np.square(r)), np.nan) # pixel distance to (cx,cy) (image center)
z_px = np.where(valid, np.sqrt(np.square(d_px) + np.square(f)), np.nan)
z = np.where(valid, depth * f / z_px, np.nan)
x = np.where(valid, z * c / f, 0)
y = np.where(valid, z * r / f, 0)
pcd = np.dstack((x, y, z))
pcd.resize(rows*cols, 3)
pcd
# -
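# The distance-to-depth formula above can be sanity-checked on a tiny synthetic map. This sketch uses a 3x3 "image" with constant ray distance 1.0 and a made-up focal length: the central pixel's planar depth should equal the stored distance, while off-axis rays are foreshortened.

```python
import numpy as np

# Small synthetic check of z = dist * f / sqrt(d_px^2 + f^2)
f = 2.0
cx = cy = 1
depth = np.ones((3, 3))  # constant ray distance

c, r = np.meshgrid(np.arange(3) - cx, np.arange(3) - cy, sparse=True)
d_px = np.sqrt(np.square(c) + np.square(r))        # pixel distance to the center
z = depth * f / np.sqrt(np.square(d_px) + np.square(f))

print(z[1, 1])            # central pixel: planar depth equals the ray distance
print(z[1, 2] < z[1, 1])  # off-center rays yield smaller planar depth
```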
o3d.visualization.draw([
o3d.geometry.PointCloud(o3d.utility.Vector3dVector(
np.array([p for p in pcd if not np.isnan(p[-1])])
))
])
# # Conversion from (fixed) RGBD image
# +
rgbd_img = load_rgbd(img_fpath, (256,256), 711.1111)
fig, axs = plt.subplots(1,2)
fig.set_size_inches(10,10)
axs[0].imshow(rgbd_img.color, cmap='gray')
axs[1].imshow(rgbd_img.depth, cmap='gray')
axs[0].set_axis_off()
axs[1].set_axis_off()
fig.show()
# +
f = 711.1111
t = 512
camera_params = o3d.camera.PinholeCameraIntrinsic(t,t,f,f,t/2,t/2)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_img, camera_params)
np.asarray(pcd.points)
# -
o3d.visualization.draw([pcd])
# # Refactored
# +
pcd = load_pcd(img_fpath)
o3d.visualization.draw([pcd])
| notebooks/00.02-bmp-rgbd-to-point-cloud.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # Self-Driving Car Engineer Nanodegree
#
# ## Deep Learning
#
# ## Project: Build a Traffic Sign Recognition Classifier
#
# In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
#
# > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
#
# In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.
#
# The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
#
#
# >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
# ---
# ## Step 0: Load The Data
# +
# Load pickled data
import pickle
import numpy as np
training_file = 'traffic-signs-data/train.p'
validation_file= 'traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# -
# ---
#
# ## Step 1: Dataset Summary & Exploration
#
# The pickled data is a dictionary with 4 key/value pairs:
#
# - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
# - `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
# - `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.
# - `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
#
# Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
# ### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
# +
n_train = len(X_train)
n_validation = len(X_valid)
n_test = len(X_test)
image_shape = X_train[0].shape
n_classes = y_train.max()+1
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# +
def convertToOneHot(y, num_classes=None):
    # use a fixed class count so the train/valid/test encodings share the same width
    if num_classes is None:
        num_classes = y.max() + 1
    oneHot = np.zeros((len(y), num_classes))
    oneHot[np.arange(len(y)), y] = 1
    return oneHot
Y_train = convertToOneHot(y_train, n_classes)
Y_test = convertToOneHot(y_test, n_classes)
Y_valid = convertToOneHot(y_valid, n_classes)
# -
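# A quick sanity check of the one-hot encoding on toy labels (the function is re-defined here so this cell runs standalone): each row has exactly one 1, and `argmax` recovers the original label.

```python
import numpy as np

def convert_to_one_hot(y, num_classes):
    # one row per example, a single 1 in the column of its class
    one_hot = np.zeros((len(y), num_classes))
    one_hot[np.arange(len(y)), y] = 1
    return one_hot

labels = np.array([0, 2, 1, 2])
encoded = convert_to_one_hot(labels, 3)
print(encoded.shape)           # (4, 3)
print(encoded.argmax(axis=1))  # recovers the original labels
```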
# ### Include an exploratory visualization of the dataset
# Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
#
# The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
#
# **NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
# +
import matplotlib.pyplot as plt
# %matplotlib inline
def show_images(image_dataset, n_rows, n_cols, graph_title='Training Images'):
plt.figure(figsize=(8, 6.5))
    selected_classes = np.random.randint(n_classes, size=n_rows)
image_number = 1
for row in selected_classes:
for col in range(1, n_cols + 1):
plt.subplot(n_rows, n_cols, image_number)
image_number += 1
x_selected = X_train[y_train == row]
random_index = np.random.randint(x_selected.shape[0])
plt.imshow(x_selected[random_index, :, :, :])
plt.axis('off')
plt.title('class: {}'.format(row))
plt.suptitle(graph_title)
plt.show()
def show_class_distribution(class_labels):
plt.figure(figsize=(10, 4))
examples_per_class = np.bincount(class_labels)
num_classes = len(examples_per_class)
plt.bar(np.arange(num_classes), examples_per_class, 0.8, color='green', label='Inputs per class')
plt.xlabel('Class number')
plt.ylabel('Examples per class')
plt.title('Distribution of Training Examples Amongst Classes')
plt.show()
show_images(X_train, 5, 5)
show_class_distribution(y_train)
# -
# ----
#
# ## Step 2: Design and Test a Model Architecture
#
# Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
#
# The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
#
# With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
#
# There are various aspects to consider when thinking about this problem:
#
# - Neural network architecture (is the network over or underfitting?)
# - Play around preprocessing techniques (normalization, rgb to grayscale, etc)
# - Number of examples per label (some have more than others).
# - Generate fake data.
#
# Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
# ### Pre-process the Data Set (normalization, grayscale, etc.)
# Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
#
# Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
#
# Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
# ### Preprocessing
# +
from skimage import exposure
def preprocessing(X):
#Scaling features
X = (X / 255.).astype(np.float32)
#Converting to grayscale using BT.601 recommendation
X = 0.299 * X[:, :, :, 0] + 0.587 * X[:, :, :, 1] + 0.114 * X[:, :, :, 2]
# X -= np.mean(X, axis = 0)
# X /= np.std(X, axis = 0)
for i in range(X.shape[0]):
X[i] = exposure.equalize_adapthist(X[i])
# Add a single grayscale channel
X = X.reshape(X.shape + (1,))
return X
# -
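# The scaling and BT.601 luma steps of the pipeline above can be illustrated on a tiny fake RGB batch. This sketch omits the CLAHE step to avoid the skimage dependency; the shapes and value ranges behave the same way.

```python
import numpy as np

# Fake batch of two 4x4 RGB images with 8-bit pixel values
X = np.random.randint(0, 256, size=(2, 4, 4, 3))

Xs = (X / 255.).astype(np.float32)                 # scale to [0, 1]
gray = 0.299 * Xs[..., 0] + 0.587 * Xs[..., 1] + 0.114 * Xs[..., 2]
gray = gray.reshape(gray.shape + (1,))             # add back a channel axis

print(gray.shape)  # (2, 4, 4, 1)
```

# The BT.601 coefficients sum to 1, so the grayscale values stay in [0, 1].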
X_train = preprocessing(X_train)
X_valid = preprocessing(X_valid)
X_test = preprocessing(X_test)
from sklearn.utils import shuffle
X_train, Y_train = shuffle(X_train, Y_train)
# ### Q1 - Preprocessing techniques used
# - All pixel values were scaled down, since this helps SGD converge faster. In my experiments, plain scaling worked better than mean normalization, so I ultimately did not use mean normalization.
# - Images were converted to grayscale using [BT.601 recommendation](https://en.wikipedia.org/wiki/Rec._601).
# - Contrast Limited Adaptive Histogram Equalization (CLAHE) was used to improve the contrast in images. It is suitable for improving the local contrast and enhancing the definitions of edges in each region of an image.
# - One-hot encoding was used to convert labels to vectors.
# - The dataset was shuffled.
#
# ### Model Architecture
# +
from collections import namedtuple
HyperParam = namedtuple('Parameters', [
'num_classes', 'batch_size', 'max_epochs',
'learning_rate', 'l2_lambda',
'conv1_k', 'conv1_d', 'conv1_p',
'conv2_k', 'conv2_d', 'conv2_p',
'conv3_k', 'conv3_d', 'conv3_p',
'fc4_size', 'fc4_p'
])
# -
# ### Defining layer factories
# +
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
def conv_relu_factory(input, kernel_size, depth):
weights = tf.get_variable( 'weights',
shape = [kernel_size, kernel_size, input.get_shape()[3], depth],
initializer = tf.contrib.layers.xavier_initializer()
)
biases = tf.get_variable( 'biases',
shape = [depth],
initializer = tf.constant_initializer(0.0)
)
conv = tf.nn.conv2d(input, weights,
strides = [1, 1, 1, 1], padding = 'SAME')
return tf.nn.relu(conv + biases)
def fully_connected_factory(input, size):
weights = tf.get_variable( 'weights',
shape = [input.get_shape()[1], size],
initializer = tf.contrib.layers.xavier_initializer()
)
biases = tf.get_variable( 'biases',
shape = [size],
initializer = tf.constant_initializer(0.0)
)
return tf.matmul(input, weights) + biases
def fully_connected_relu_factory(input, size):
return tf.nn.relu(fully_connected_factory(input, size))
def pool_factory(input, size):
return tf.nn.max_pool(
input,
ksize = [1, size, size, 1],
strides = [1, size, size, 1],
padding = 'SAME'
)
# -
# ### Define model architecture
def model(input, params, predict_flag):
with tf.variable_scope('conv1'):
conv1 = conv_relu_factory(input, kernel_size = params.conv1_k, depth = params.conv1_d)
pool1 = pool_factory(conv1, size = 2)
pool1 = tf.cond(predict_flag,
lambda: pool1,
lambda: tf.nn.dropout(pool1, keep_prob = params.conv1_p)
)
with tf.variable_scope('conv2'):
conv2 = conv_relu_factory(pool1, kernel_size = params.conv2_k, depth = params.conv2_d)
pool2 = pool_factory(conv2, size = 2)
pool2 = tf.cond(predict_flag,
lambda: pool2,
lambda: tf.nn.dropout(pool2, keep_prob = params.conv2_p),
)
with tf.variable_scope('conv3'):
conv3 = conv_relu_factory(pool2, kernel_size = params.conv3_k, depth = params.conv3_d)
pool3 = pool_factory(conv3, size = 2)
pool3 = tf.cond(predict_flag,
lambda: pool3,
lambda: tf.nn.dropout(pool3, keep_prob = params.conv3_p)
)
# 1st conv-relu output
pool1 = pool_factory(pool1, size = 4)
shape = pool1.get_shape().as_list()
pool1 = tf.reshape(pool1, [-1, shape[1] * shape[2] * shape[3]])
# 2nd conv-relu output
pool2 = pool_factory(pool2, size = 2)
shape = pool2.get_shape().as_list()
pool2 = tf.reshape(pool2, [-1, shape[1] * shape[2] * shape[3]])
# 3rd conv-relu output
shape = pool3.get_shape().as_list()
pool3 = tf.reshape(pool3, [-1, shape[1] * shape[2] * shape[3]])
flattened = tf.concat([pool1, pool2, pool3], 1)
with tf.variable_scope('fc4'):
fc4 = fully_connected_relu_factory(flattened, size = params.fc4_size)
fc4 = tf.cond(predict_flag,
lambda: fc4,
lambda: tf.nn.dropout(fc4, keep_prob = params.fc4_p),
)
with tf.variable_scope('out'):
logits = fully_connected_factory(fc4, size = params.num_classes)
softmax = tf.nn.softmax(logits)
return logits, softmax
# ### Q-2 Model architecture
#
# I used [<NAME> and <NAME>'s multiscale ConvNet architecture](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It has 3 convolutional layers for feature extraction, each followed by a pooling layer, and finally a fully connected layer for softmax classification. The unique characteristic of multiscale features is that each conv layer's output is not only forwarded to the subsequent layer, but is also branched off and fed into the classifier (e.g. the fully connected layer).
#
# 
# The size of each layer is defined in the HyperParam namedtuple below.
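# The size of the concatenated multiscale feature vector can be checked with quick arithmetic. This sketch assumes a 32x32 input, 'SAME' padding, 2x2 pooling per stage, and the conv depths used in the HyperParam cell (32, 64, 128), mirroring the extra subsampling applied to the branched outputs in `model`.

```python
# Spatial side length after each 2x2 pooling stage, starting from 32x32
side = 32
s1 = side // 2   # after pool1
s2 = side // 4   # after pool2
s3 = side // 8   # after pool3

# Extra subsampling brings every branch to 4x4 before flattening
b1 = (s1 // 4) ** 2 * 32    # 4*4*32  = 512
b2 = (s2 // 2) ** 2 * 64    # 4*4*64  = 1024
b3 = s3 ** 2 * 128          # 4*4*128 = 2048

print(b1 + b2 + b3)  # size of `flattened` fed into fc4 → 3584
```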
# ### Train, Validate and Test the Model
# A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
# sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
# +
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
import time
params = parameters  # NOTE: run the HyperParam cell below first so that `parameters` is defined
# Build the graph
graph = tf.Graph()
with graph.as_default():
tf_x = tf.placeholder(tf.float32, shape = (None, image_shape[0],
image_shape[1],
1))
tf_y = tf.placeholder(tf.float32, shape = (None, n_classes))
predict_flag = tf.placeholder(tf.bool)
current_epoch = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(params.learning_rate,
current_epoch,
decay_steps = params.max_epochs,
decay_rate = 0.01)
logits, predictions = model(tf_x, params, predict_flag)
with tf.variable_scope('fc4', reuse = True):
l2_loss = tf.nn.l2_loss(tf.get_variable('weights'))
softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=tf_y)
loss = tf.reduce_mean(softmax_cross_entropy) + params.l2_lambda * l2_loss
optimizer = tf.train.AdamOptimizer(
learning_rate = learning_rate).minimize(loss)
with tf.Session(graph = graph) as session:
session.run(tf.global_variables_initializer())
def calculate_accuracy(X, y):
all_predictions = []
all_losses = []
sce = []
for offset in range(0, len(y), params.batch_size):
end = offset + params.batch_size
x_batch, y_batch = X[offset:end], y[offset:end]
feed_dict = {
tf_x : x_batch,
tf_y : y_batch,
predict_flag : True
}
[pred_batch, loss_batch] = session.run([predictions, loss], feed_dict)
all_predictions.extend(pred_batch)
all_losses.append(loss_batch)
all_predictions = np.array(all_predictions)
all_losses = np.array(all_losses)
accuracy = 100.0 * np.sum(np.argmax(all_predictions, 1) == np.argmax(y, 1)) / all_predictions.shape[0]
all_losses = np.mean(all_losses)
return (accuracy, all_losses)
for epoch in range(params.max_epochs):
current_epoch = epoch
X_train, Y_train = shuffle(X_train, Y_train)
for offset in range(0, len(Y_train), params.batch_size):
end = offset + params.batch_size
x_batch, y_batch = X_train[offset:end], Y_train[offset:end]
session.run([optimizer], feed_dict = {
tf_x : x_batch,
tf_y : y_batch,
predict_flag : False
}
)
valid_accuracy, valid_loss = calculate_accuracy(X_valid, Y_valid)
train_accuracy, train_loss = calculate_accuracy(X_train, Y_train)
print("-------------- EPOCH %4d/%d --------------" % (epoch, params.max_epochs))
print(" Train loss: %.8f, accuracy: %.2f%%" % (train_loss, train_accuracy))
print("Validation loss: %.8f, accuracy: %.2f%%" % (valid_loss, valid_accuracy))
saver = tf.train.Saver()
if current_epoch % 10 == 0:
save_path = saver.save(session, "./" + str(current_epoch) + "_exp2_model.ckpt")
print("Model saved in file: %s" % save_path)
# -
parameters = HyperParam(
num_classes = n_classes,
batch_size = 512,
max_epochs = 50,
learning_rate = 0.001,
l2_lambda = 0.001,
conv1_k = 5, conv1_d = 32, conv1_p = 0.8,
conv2_k = 5, conv2_d = 64, conv2_p = 0.6,
conv3_k = 5, conv3_d = 128, conv3_p = 0.6,
fc4_size = 1024, fc4_p = 0.5
)
# ### Q4 - Model training
#
# The above parameters and hyperparameters were used for model training. The Adam optimizer was used with L2 regularization, and dropout with keep probabilities of 0.8, 0.6, 0.6 and 0.5 in the consecutive convolutional and FC layers.
# ### Q5 - Solution Approach
# As visible from the above training logs, this network with the choice of hyperparameters was able to achieve 98.57% validation accuracy. There is definitely a possibility of improving the accuracy further by exploring other choices of hyperparameters and adding synthetic data.
# I would like to highlight that the use of **multiscale convnets** greatly attributed to achieving an **accuracy of 98.57%** on the validation set without any data augmentation.
# Usual ConvNets are organized in strict feed-forward layered architectures in which the output of one layer is fed only to the layer above.
# However, in this model, the output of the first stage is branched out and fed to the classifier, in addition to the output of the second stage. Additionally, applying a second subsampling stage on the branched output yielded higher accuracies than with just one. This multiscale technique especially helps in recognizing the features of same classes that look similar but vary in scale [[1]](https://www.researchgate.net/profile/Yann_Lecun/publication/224260345_Traffic_sign_recognition_with_multi-scale_Convolutional_Networks/links/0912f50f9e763201ab000000/Traffic-sign-recognition-with-multi-scale-Convolutional-Networks.pdf). _Using the same network without multiscale features yielded an accuracy of <95% on the validation set._
# I also tried to use the [Spatial Transformer Network](https://arxiv.org/abs/1506.02025) module but could not get it to work. STNs have demonstrated the ability of introducing spatial invariance in the networks.
# ### Calculating accuracy on test set
with tf.Session(graph = graph) as sess:
saver = tf.train.Saver()
saver.restore(sess, './models/40_exp2_model.ckpt')
session = sess
test_accuracy, test_loss = calculate_accuracy(X_test, Y_test)
print("Test loss: %.8f, accuracy: %.2f%%" % (test_loss, test_accuracy))
# ---
#
# ## Step 3: Test a Model on New Images
#
# To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
#
# You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
# ### Load and Output the Images
# +
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import scipy.misc
NEW_IMAGES_FOLDER = './images/'
images = ['1.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg']
def resize_image(image_file):
image = plt.imread(NEW_IMAGES_FOLDER + image_file)
return scipy.misc.imresize(image, (32, 32))
resized_image_data = [resize_image(image) for image in images]
def display_images(imgs_data, grey = False):
index = 1
plt.figure(figsize=(11,7))
for img in imgs_data:
plt.subplot(1, 5, index)
plt.imshow(img[:,:,0]) if grey else plt.imshow(img,)
plt.axis('off')
index += 1
plt.show()
display_images(resized_image_data)
# -
# As visible from the plots above, all images appear to be of good quality except the last one, i.e. "Slippery road ahead". As that image is quite blurred, the model might misclassify it. Moreover, there is a label below the sign in the image.
preprocessed_images = preprocessing(np.array(resized_image_data))
# ### Predict the Sign Type for Each Image
# +
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
import pandas as pd
from matplotlib import gridspec
def print_result(top_k_prob, top_k_indices):
class_names = pd.read_csv('./signnames.csv')['SignName'].values
index = 0
img_index = 0
plt.figure(figsize=(14, 11))
gs = gridspec.GridSpec(5, 2, width_ratios=[1, 0.45])
for i in range(5):
img = resized_image_data[img_index]
img_index += 1
plt.subplot(gs[index])
plt.imshow(img)
plt.axis('off')
# plt.title(tag)
index += 1
plt.subplot(gs[index])
plt.barh(np.arange(1, 6, 1),
top_k_prob[i, :],
0.8,
color='green')
plt.yticks(np.arange(1, 6, 1), class_names[top_k_indices[i, :]])
index += 1
plt.suptitle('Test Images and their Softmax Probabilities')
plt.show()
with tf.Session(graph = graph) as sess:
saver = tf.train.Saver()
saver.restore(sess, './models/40_exp2_model.ckpt')
x_batch = preprocessed_images
feed_dict = {
tf_x : x_batch,
predict_flag : True
}
ps = sess.run(predictions, feed_dict)
top_k_op = tf.nn.top_k(ps, k=5)
top_k_results = sess.run(top_k_op)
print_result(top_k_results.values, top_k_results.indices)
# -
# ### Analyze Performance
# As visible from the above visualization, the model performs accurately for all the images. Compared to the rest, the model is less confident for the slippery road image due to the blurriness of the image.
#
# **As calculated before, the accuracy on the original test set was 97.64% whereas it is 100% on these 5 images taken from the web.**
# ### Output Top 5 Softmax Probabilities For Each Image Found on the Web
# This was already done in the visualization above.
# ### Project Writeup
#
# Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
# > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
| Traffic_Sign_Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Binary tree
# +
class Node:
    # a tree node; BST.add below relies on this class
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

class BST:
    def __init__(self, node):
        self.root = node
def add(self, val):
add_node = Node(val)
node = self.root
prev_node = None
last = ""
while node:
if node.val < add_node.val:
prev_node = node
node = node.right
last = 'r'
elif node.val > add_node.val:
prev_node = node
node = node.left
last = 'l'
else:
return
if last == "l":
prev_node.left = add_node
else:
prev_node.right = add_node
# -
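# A quick standalone check that this style of insertion preserves search-tree order: an in-order traversal must visit the values in sorted order, with duplicates dropped. The `Node`/`insert`/`in_order` helpers below are re-defined so the cell runs on its own.

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def insert(root, val):
    # iterative BST insert, mirroring BST.add above
    node = root
    while True:
        if val < node.val:
            if node.left is None:
                node.left = Node(val)
                return
            node = node.left
        elif val > node.val:
            if node.right is None:
                node.right = Node(val)
                return
            node = node.right
        else:
            return  # duplicates are ignored

def in_order(node):
    # left subtree, node, right subtree → sorted order for a valid BST
    if node is None:
        return []
    return in_order(node.left) + [node.val] + in_order(node.right)

root = Node(5)
for v in [3, 8, 1, 4, 8]:
    insert(root, v)
print(in_order(root))  # → [1, 3, 4, 5, 8]
```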
# # Binary heap
# It is guaranteed that every element is greater than both of its children (a max-heap).
# We want each operation to run in $O(\log n)$.
# The heap is filled top to bottom, left to right.
# The children of node $i$ are stored at positions $2i+1$ and $2i+2$.
# +
class MaxHeap:
def __init__(self):
self.data = []
self.n = 0
    def sift_down(self, node):
        left_child = node * 2 + 1
        right_child = node * 2 + 2
        # swap with the larger child when it beats the current node, then recurse
        if right_child < len(self.data) and self.data[right_child] > max(self.data[node], self.data[left_child]):
            self.data[node], self.data[right_child] = self.data[right_child], self.data[node]
            self.sift_down(right_child)
        elif left_child < len(self.data) and self.data[left_child] > self.data[node]:
            self.data[node], self.data[left_child] = self.data[left_child], self.data[node]
            self.sift_down(left_child)
def sift_up(self, node):
if node == 0:
return
parent = (node - 1) // 2
if self.data[parent] < self.data[node]:
self.data[parent], self.data[node] = self.data[node], self.data[parent]
self.sift_up(parent)
def add(self, value):
self.data.append(value)
self.n += 1
self.sift_up(self.n-1)
def get_max(self):
self.data[0], self.data[self.n - 1] = self.data[self.n - 1], self.data[0]
max_val = self.data.pop()
self.n -= 1
self.sift_down(0)
return max_val
heap = MaxHeap()
values = list(map(int, input().split()))[:-1]
for val in values:
heap.add(val)
print(heap.data)
heap.add(23)
print(heap.data)
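# The same max-heap behavior is available from Python's standard library: `heapq` implements a min-heap, so pushing negated values and negating again on pop yields largest-first extraction, matching `MaxHeap.get_max` above.

```python
import heapq

# heapq is a min-heap; negate values to simulate a max-heap
heap = []
for val in [7, 1, 9, 3]:
    heapq.heappush(heap, -val)

extracted = [-heapq.heappop(heap) for _ in range(len(heap))]
print(extracted)  # → [9, 7, 3, 1]
```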
| 02 Algorithms/Lecture 7/Lecture 7 data structures 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Credit Risk Classification
#
# Credit risk poses a classification problem that’s inherently imbalanced. This is because healthy loans easily outnumber risky loans. In this Challenge, you’ll use various techniques to train and evaluate models with imbalanced classes. You’ll use a dataset of historical lending activity from a peer-to-peer lending services company to build a model that can identify the creditworthiness of borrowers.
#
# ## Instructions:
#
# This challenge consists of the following subsections:
#
# * Split the Data into Training and Testing Sets
#
# * Create a Logistic Regression Model with the Original Data
#
# * Predict a Logistic Regression Model with Resampled Training Data
#
# ### Split the Data into Training and Testing Sets
#
# Open the starter code notebook and then use it to complete the following steps.
#
# 1. Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.
#
# 2. Create the labels set (`y`) from the “loan_status” column, and then create the features (`X`) DataFrame from the remaining columns.
#
# > **Note** A value of `0` in the “loan_status” column means that the loan is healthy. A value of `1` means that the loan has a high risk of defaulting.
#
# 3. Check the balance of the labels variable (`y`) by using the `value_counts` function.
#
# 4. Split the data into training and testing datasets by using `train_test_split`.
#
# ### Create a Logistic Regression Model with the Original Data
#
# Employ your knowledge of logistic regression to complete the following steps:
#
# 1. Fit a logistic regression model by using the training data (`X_train` and `y_train`).
#
# 2. Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.
#
# 3. Evaluate the model’s performance by doing the following:
#
# * Calculate the accuracy score of the model.
#
# * Generate a confusion matrix.
#
# * Print the classification report.
#
# 4. Answer the following question: How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
#
# ### Predict a Logistic Regression Model with Resampled Training Data
#
# Did you notice the small number of high-risk loan labels? Perhaps, a model that uses resampled data will perform better. You’ll thus resample the training data and then reevaluate the model. Specifically, you’ll use `RandomOverSampler`.
#
# To do so, complete the following steps:
#
# 1. Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points.
#
# 2. Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.
#
# 3. Evaluate the model’s performance by doing the following:
#
# * Calculate the accuracy score of the model.
#
# * Generate a confusion matrix.
#
# * Print the classification report.
#
# 4. Answer the following question: How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
#
# ### Write a Credit Risk Analysis Report
#
# For this section, you’ll write a brief report that includes a summary and an analysis of the performance of both machine learning models that you used in this challenge. You should write this report as the `README.md` file included in your GitHub repository.
#
# Structure your report by using the report template that `Starter_Code.zip` includes, and make sure that it contains the following:
#
# 1. An overview of the analysis: Explain the purpose of this analysis.
#
#
# 2. The results: Using bulleted lists, describe the balanced accuracy scores and the precision and recall scores of both machine learning models.
#
# 3. A summary: Summarize the results from the machine learning models. Compare the two versions of the dataset predictions. Include your recommendation for the model to use, if any, on the original vs. the resampled data. If you don’t recommend either model, justify your reasoning.
# +
# Import the modules
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import confusion_matrix
from imblearn.metrics import classification_report_imbalanced
import warnings
warnings.filterwarnings('ignore')
# -
# ---
# ## Split the Data into Training and Testing Sets
# ### Step 1: Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.
# +
# Read the CSV file from the Resources folder into a Pandas DataFrame
lending_df = pd.read_csv('Resources/lending_data.csv')
# Review the DataFrame
lending_df
# -
# ### Step 2: Create the labels set (`y`) from the “loan_status” column, and then create the features (`X`) DataFrame from the remaining columns.
# +
# Separate the data into labels and features
# Separate the y variable, the labels
y = lending_df['loan_status']
# Separate the X variable, the features
X = lending_df[['loan_size','interest_rate','borrower_income','debt_to_income','num_of_accounts','derogatory_marks','total_debt']]
# -
# Review the y variable Series
y.head()
# Review the X variable DataFrame
X.head()
# ### Step 3: Check the balance of the labels variable (`y`) by using the `value_counts` function.
# Check the balance of our target values
y.value_counts()
# ### Step 4: Split the data into training and testing datasets by using `train_test_split`.
# +
# Import the train_test_learn module
from sklearn.model_selection import train_test_split
# Split the data using train_test_split
# Assign a random_state of 1 to the function
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# -
# ---
# ## Create a Logistic Regression Model with the Original Data
# ### Step 1: Fit a logistic regression model by using the training data (`X_train` and `y_train`).
# +
# Import the LogisticRegression module from SKLearn
from sklearn.linear_model import LogisticRegression
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
logistic_regression_model = LogisticRegression(random_state=1)
# Fit the model using training data
logistic_regression_model.fit(X_train, y_train)
# -
# ### Step 2: Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.
# Make a prediction using the testing data
testing_predictions = logistic_regression_model.predict(X_test)
# ### Step 3: Evaluate the model’s performance by doing the following:
#
# * Calculate the accuracy score of the model.
#
# * Generate a confusion matrix.
#
# * Print the classification report.
# Print the balanced_accuracy score of the model
balanced_accuracy_score(y_test, testing_predictions)
# Generate a confusion matrix for the model
confusion_matrix(y_test, testing_predictions)
# Print the classification report for the model
print(classification_report_imbalanced(y_test, testing_predictions))
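# The three evaluation quantities above can also be reproduced by hand from a confusion matrix. A minimal numpy sketch (the matrix entries here are made up for illustration, not taken from this dataset):

```python
import numpy as np

# hypothetical confusion matrix: rows = true labels, columns = predictions
cm = np.array([[1800,  20],   # class 0 (healthy)
               [  10,  60]])  # class 1 (high-risk)

recall = cm.diagonal() / cm.sum(axis=1)      # per-class recall
precision = cm.diagonal() / cm.sum(axis=0)   # per-class precision
balanced_accuracy = recall.mean()            # mean of per-class recalls
print(recall, precision, balanced_accuracy)
```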
# ### Step 4: Answer the following question.
# **Question:** How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
#
# **Answer:** The model predicts the `0` label better than the `1` label, i.e., healthy loans are predicted more reliably than high-risk loans.
# ---
# ## Predict a Logistic Regression Model with Resampled Training Data
# ### Step 1: Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points.
# +
# Import the RandomOverSampler module from imbalanced-learn
from imblearn.over_sampling import RandomOverSampler
# Instantiate the random oversampler model
# Assign a random_state parameter of 1 to the model
random_oversampler = RandomOverSampler(random_state=1)
# Fit the original training data to the random_oversampler model
X_resampled, y_resampled = random_oversampler.fit_resample(X_train, y_train)
# -
# Count the distinct values of the resampled labels data
y_resampled.value_counts()
# ### Step 2: Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.
# +
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
logistic_regression_model = LogisticRegression(random_state=1)
# Fit the model using the resampled training data
logistic_regression_model.fit(X_resampled, y_resampled)
# Make a prediction using the testing data
resampled_testing_predictions = logistic_regression_model.predict(X_test)
# -
# ### Step 3: Evaluate the model’s performance by doing the following:
#
# * Calculate the accuracy score of the model.
#
# * Generate a confusion matrix.
#
# * Print the classification report.
# Print the balanced_accuracy score of the model
balanced_accuracy_score(y_test, resampled_testing_predictions)
# Generate a confusion matrix for the model
confusion_matrix(y_test, resampled_testing_predictions)
# Print the classification report for the model
print(classification_report_imbalanced(y_test, resampled_testing_predictions))
# ### Step 4: Answer the following question
# **Question:** How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
#
# **Answer:** After random oversampling, the model predicts both the `0` (healthy loan) and `1` (high-risk loan) labels equally well, with very high accuracy.
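# What `RandomOverSampler` does can be sketched in plain numpy: minority-class rows are duplicated (sampled with replacement) until both classes have the same count. A toy illustration with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([0] * 95 + [1] * 5)   # imbalanced toy labels
X = rng.normal(size=(100, 2))      # hypothetical features

minority = np.flatnonzero(y == 1)
majority = np.flatnonzero(y == 0)
# resample minority rows with replacement until the classes match
extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
idx = np.concatenate([majority, minority, extra])
X_res, y_res = X[idx], y[idx]
print((y_res == 0).sum(), (y_res == 1).sum())
```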
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# * statistical tests (distribution tests) per sensor, with numeric values
# * different data types per sensor
# * outlier detection via quantiles (Stack Overflow)
#
# missing values, faulty data, different data types
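# A per-sensor distribution test can be sketched with numpy alone: the two-sample Kolmogorov-Smirnov statistic is just the maximum gap between two empirical CDFs. Toy data below, not the Festo sensor streams:

```python
import numpy as np

def ks_statistic(a, b):
    # empirical two-sample KS statistic: max distance between the two ECDFs
    a, b = np.sort(a), np.sort(b)
    all_vals = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, all_vals, side='right') / len(a)
    cdf_b = np.searchsorted(b, all_vals, side='right') / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
same = ks_statistic(rng.normal(size=500), rng.normal(size=500))        # same distribution
diff = ks_statistic(rng.normal(size=500), rng.normal(3, 1, size=500))  # shifted distribution
print(same, diff)
```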
# +
{"broker_timestamp":1576770850896,"iCarrierID":"20","Mes.iResourceId":"0",
"Mes.diPNo":"0","Mes.diONo":"0","iCode":"0","iPar4":"0","iPar1":"0","iPar3":"0",
"iPar2":"0","Mes.iOpNo":"0","Mes.iOPos":"0","timestamp":1576770825945,"modul":"ASR32",
"subpart":"Band2","sensor":"2","topic":"Festo/ASR32/Band2/Stopper2/RFID/2"}
{"broker_timestamp":1576771321127,"value":false,"timestamp":1576771321127,
"modul":"ASR32","subpart":"connected","sensor":"connected","topic":"Festo/ASR32/connected"}
# -
# MQTT Topic: `Festo/CECC_Energy4/rActivePowerInt`
# Festo = manufacturer of the plant
#
# `Festo` = main topic
#
# `CECC_Energy4` = module
#
# `rActivePowerInt` = sensor/variable (e.g. temperature, humidity; here electric current)
#
# `value` = 11.417114 (electric current) = 11 mA
#
# +
{
"_id": "000000fb-254a-32f0-8a0a-de2c4b99c318",
"_rev": "1-5ec01a352817f926cce32abdac1793d8",
"timestamp": 1520431262359, //Wann Sensor die Daten erzeugt hat
"modul": "CECC_Energy1",
"value": 11.417114, //je Sensor unterschiedlich, kann boolean oder float wert sein
"topic": "Festo/CECC_Energy1/rActivePowerInt", //immer Festo (main topic)/Anlage(Modul)/Sensor
"Sensor": "rActivePowerInt" //Elektrische Kennzahlen, Spannung/Stromstärke in 3 Phasen + Luftdruck bei Modul Energy
}
{
"_id": "0000057b-7107-3939-abb2-86eb0d781444",
"_rev": "1-0d01667a3ca5317dd9e5526b47222f8d",
"subpart0": "Robotino",
"timestamp": 1515656242170,
"modul": "ShuntMagazineBack",
"value": false,
"topic": "Festo/ShuntMagazineBack/Robotino/xW1_GF38", //befindet sich ein Obj. unter mir
"Sensor": "xW1_GF38"
}
//BG Snesoren Boolean: befindet sich ein Produkt vor mir --> aber nur bei Veränderung
//xW1_GF38 --> GF38 Sensor
//grafischer Verlauf
# -
# ```
# |-- Mes.diONo: string (nullable = true) - order number
# |-- Mes.diPNo: string (nullable = true) - product number
# |-- Mes.iOPos: string (nullable = true) - order position (position among e.g. 10 identical products - position within the order)
# |-- Mes.iOpNo: string (nullable = true) - operation number (work step to be executed) - next step
# |-- Mes.iResourceId: string (nullable = true) - next application/station
# |-- value: boolean (nullable = true) - is the RFID scanner occupied
# |-- iCarrierID: string (nullable = true)
# |-- iCode: string (nullable = true)
# |-- iPar1: string (nullable = true) - should be EMPTY
# |-- iPar2: string (nullable = true) - should be EMPTY
# |-- iPar3: string (nullable = true) - should be EMPTY
# |-- iPar4: string (nullable = true) - should be EMPTY
# ```
train.groupby('Age').agg({'Purchase': 'mean'}).show()
# Output:
# ```
# +-----+-----------------+
# |  Age|    avg(Purchase)|
# +-----+-----------------+
# |51-55|9534.808030960236|
# |46-50|9208.625697468327|
# | 0-17|8933.464640444974|
# |36-45|9331.350694917874|
# |26-35|9252.690632869888|
# |  55+|9336.280459449405|
# +-----+-----------------+
# ```
import findspark
findspark.init() #necessary to find the local spark
import pandas as pd
from tqdm import tqdm_notebook as tqdm
from pyspark.sql import Row, SparkSession
from pyspark import SparkContext
from pyspark.sql.types import IntegerType, StringType, DoubleType, BooleanType, TimestampType, StructField, StructType
from pyspark.sql.functions import col, sum
sc = SparkContext()
spark = SparkSession.builder\
.getOrCreate()
# ### Check duplicates
# check functionality
d = [{'name': 'Alice', 'age': 1},{'name': 'Bob', 'age': 1},{'name': 'Bob', 'age': 1},{'name': 'Alice', 'age': 3},{'name': 'Alice', 'age': 1}]
df2 = spark.createDataFrame(d)
df2.createOrReplaceTempView("df2")
df2.show()
df2.groupBy(df2.columns)\
.count()\
.where(col('count') > 1)\
.show()
# .select(sum('count'))\
# +
# def rename_columns(df, columns):
#     for col in columns:
#         new_name = "{}_{}".format(col.split(".")[0], col.split(".")[1])
#         # withColumnRenamed returns a new DataFrame; reassign instead of discarding it
#         df = df.withColumnRenamed(col, new_name)
#     return df
# df3 = rename_columns(df3, ['Mes.diONo'])
# -
spark.sql("SELECT age, name, COUNT(*) FROM df2 GROUP BY age, name HAVING COUNT(*) > 1").show(1000, False)
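# For comparison, the same duplicate check on the toy rows can be done in pandas without a Spark session — a quick sketch:

```python
import pandas as pd

# same toy rows as the Spark example above
d = [{'name': 'Alice', 'age': 1}, {'name': 'Bob', 'age': 1},
     {'name': 'Bob', 'age': 1}, {'name': 'Alice', 'age': 3},
     {'name': 'Alice', 'age': 1}]
pdf = pd.DataFrame(d)

# group by all columns and keep the groups that occur more than once
dupes = pdf.groupby(['name', 'age']).size().reset_index(name='count')
dupes = dupes[dupes['count'] > 1]
print(dupes)
```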
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Concise Implementation of Multilayer Perceptrons
# :label:`sec_mlp_concise`
#
# As you might expect, by relying on the high-level APIs,
# we can implement MLPs even more concisely.
#
# + origin_pos=3 tab=["tensorflow"]
from d2l import tensorflow as d2l
import tensorflow as tf
# + [markdown] origin_pos=4
# ## Model
#
# Compared with our concise implementation
# of softmax regression
# (:numref:`sec_softmax_concise`),
# the only difference is that we add
# *two* fully-connected layers
# (previously, we added *one*).
# The first is our hidden layer,
# which contains 256 hidden units
# and applies the ReLU activation function.
# The second is our output layer.
#
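# The shape bookkeeping of this two-layer architecture can be checked with a plain numpy forward pass (random weights and a fake batch — purely illustrative, not the trained model):

```python
import numpy as np

# toy forward pass: 784 -> 256 (ReLU) -> 10
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))                  # a fake batch of flattened images
W1 = rng.normal(scale=0.01, size=(784, 256))
b1 = np.zeros(256)
W2 = rng.normal(scale=0.01, size=(256, 10))
b2 = np.zeros(10)

h = np.maximum(x @ W1 + b1, 0)                 # hidden layer with ReLU
logits = h @ W2 + b2                           # output layer (no softmax: from_logits=True)
print(logits.shape)
```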
# + origin_pos=7 tab=["tensorflow"]
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dense(10)])
# + [markdown] origin_pos=8
# The training loop is exactly the same
# as when we implemented softmax regression.
# This modularity enables us to separate
# matters concerning the model architecture
# from orthogonal considerations.
#
# + origin_pos=11 tab=["tensorflow"]
batch_size, lr, num_epochs = 256, 0.1, 10
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
# + origin_pos=12 tab=["tensorflow"]
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
# + [markdown] origin_pos=13
# ## Summary
#
# * Using high-level APIs, we can implement MLPs much more concisely.
# * For the same classification problem, the implementation of an MLP is the same as that of softmax regression except for additional hidden layers with activation functions.
#
# ## Exercises
#
# 1. Try adding different numbers of hidden layers (you may also modify the learning rate). What setting works best?
# 1. Try out different activation functions. Which one works best?
# 1. Try different schemes for initializing the weights. What method works best?
#
# + [markdown] origin_pos=16 tab=["tensorflow"]
# [Discussions](https://discuss.d2l.ai/t/262)
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Guide 1: Working with the Lisa cluster
#
# This tutorial explains how to work with the Lisa cluster for the Deep Learning course at the University of Amsterdam. Every student will receive an account to have resources for training deep neural networks and get familiar with working on a cluster. It is recommended to have listened to the presentation by the SURFsara team before going through this tutorial.
#
# ## First steps
#
# ### How to connect to Lisa
#
# You can login to Lisa using a secure shell (SSH):
#
# ```bash
# ssh -X <EMAIL>
# ```
#
# Replace `lgpu___` by your username. You will be connected to one of Lisa's login nodes and see a standard Linux view of your home directory. Note that you should only use the login node as an interface, not as a compute unit. Do not run any training on this node: it will be killed after 15 minutes, and it slows down the communication with Lisa for everyone. Instead, Lisa uses a SLURM scheduler to handle computationally expensive jobs (see below).
#
# If you want to transfer files between Lisa and your local computer, you can use standard Unix commands such as `scp` or `rsync`, or graphical interfaces such as [FileZilla](https://filezilla-project.org/) (use port 22 in FileZilla) or [WinSCP](https://winscp.net/eng/index.php) (for Windows PC).
# A copy operation from Lisa to your local computer with `rsync`, started from your local computer, could look as follows:
#
# ```
# rsync -av <EMAIL>:~/source destination
# ```
#
# Replace `lgpu___` by your username, `source` by the directory/file on Lisa you want to copy on your local machine, and `destination` by the directory/file it should be copied to. Note that `source` is referenced from your home directory on Lisa. If you want to copy a file from your local computer to Lisa, use:
#
# ```
# rsync -av source <EMAIL>:~/destination
# ```
#
# Again, replace `source` with the directory/file on your local computer you want to copy to Lisa, and `destination` by the directory/file it should be copied to.
#
# ### Modules
#
# Lisa uses modules to provide you various pre-installed software. This includes simple Python, but also the NVIDIA libraries CUDA and cuDNN that are necessary to use GPUs in PyTorch. A standard pack of software we use is the following:
#
# ```bash
# module load 2019
# module load Python/3.7.5-foss-2019b
# module load CUDA/10.1.243
# module load cuDNN/7.6.5.32-CUDA-10.1.243
# module load NCCL/2.5.6-CUDA-10.1.243
# ```
#
# When working on the login node, it is sufficient to load the `2019` software pack and the `Python/...` module. CUDA and cuDNN is only required when you run a job on a node.
#
# *Update from March 2021*: If you install PyTorch using the cudatoolkits in conda, it is usually not needed to load the separate modules for CUDA and cuDNN anymore. You can remove those from your job file, and only need to load Anaconda.
#
# ### Install the environment
#
# To run the Deep Learning assignments and other code like the notebooks on Lisa, you need to install the [provided environment for Lisa](https://github.com/uvadlc/uvadlc_practicals_2020/blob/master/environment_Lisa.yml). Lisa provides an Anaconda module, which you can load via `module load Anaconda3/2018.12` (remember to load the `2019` module beforehand). Install the environment with the following command from the login node:
#
# ```bash
# conda env create -f environment.yml
# ```
#
# If you experience issues with the Anaconda module, you can also install Anaconda yourself ([download link](https://docs.anaconda.com/anaconda/install/linux/)) or ask your TA for help.
#
# ## The SLURM scheduler
#
# Lisa relies on a SLURM scheduler to organize the jobs on the cluster. When logging into Lisa, you cannot just start a python script with your training, but instead submit a job to the scheduler. The scheduler will decide when and on which node to run your job, based on the number of nodes available and other jobs submitted.
#
# ### Job files
#
# We provide a template for a job file that you can use on Lisa. Create a file with any name you like, for example `template.job`, and start the job by executing the command `sbatch template.job`.
#
# ```bash
# #!/bin/bash
#
# #SBATCH --partition=gpu_shared_course
# #SBATCH --gres=gpu:1
# #SBATCH --job-name=ExampleJob
# #SBATCH --ntasks=1
# #SBATCH --cpus-per-task=3
# #SBATCH --time=04:00:00
# #SBATCH --mem=32000M
# #SBATCH --output=slurm_output_%A.out
#
# module purge
# module load 2019
# module load Python/3.7.5-foss-2019b
# module load CUDA/10.1.243
# module load cuDNN/7.6.5.32-CUDA-10.1.243
# module load NCCL/2.5.6-CUDA-10.1.243
# module load Anaconda3/2018.12
#
# # Your job starts in the directory where you call sbatch
# # cd $HOME/...
# # Activate your environment
# source activate ...
# # Run your code
# srun python -u ...
# ```
#
# #### Job arguments
#
# You might have to change the `#SBATCH` arguments depending on your needs. We describe the arguments below:
#
# * `partition`: The partition of Lisa on which you want to run your job. As a student, you only have access to the partition gpu_shared_course, which provides you nodes with NVIDIA GTX1080Ti GPUs (11GB).
# * `gres`: Generic resources include the GPU which is crucial for deep learning jobs. You can select up to two GPUs with your account, but if you haven't designed your code to explicitly run on multiple GPUs, please use only one GPU (so no need to change what we have above).
# * `job-name`: Name of the job to pop up when you list your jobs with squeue (see below).
# * `ntasks`: Number of tasks to run with the job. In our case, we will always use 1 task.
# * `cpus-per-task`: Number of CPUs you request from the nodes. The gpu_shared_course partition restricts you to max. 3 CPUs per job/GPU.
# * `time`: Estimated time your job needs to finish. It is no problem if your job finishes earlier than the specified time. However, if your job takes longer, it will be instantaneously killed after the specified time. Still, don't specify unnecessarily long times as this causes your job to be scheduled later (you need to wait longer in the queue if other people also want to use the cluster). A good rule of thumb is to specify ~20% more than what you would expect.
# * `mem`: RAM of the node you need. Note that this is *not* the GPU memory, but the random access memory of the node. On gpu_shared_course, you are restricted to 64GB per job/GPU which is more than you need for the assignments.
# * `output`: Output file to which the slurm output should be written. The tag "%A" is automatically replaced by the job ID. Note that if you specify the output file to be in a directory that does not exist, no output file will be created.
#
# SLURM allows you to specify many more arguments, but the ones above are the important ones for us. If you are interested in a full list, see [here](https://slurm.schedmd.com/sbatch.html).
#
# #### Scratch
#
# If you work with a lot of data, or a larger dataset, it is advised to copy your data to the `/scratch` directory of the node. Otherwise, the read/write operation might become a bottleneck of your job. To do this, simply use your copy operation of choice (`cp`, `rsync`, ...), and copy the data to the directory `$TMPDIR`. You should add this command to your job file before calling `srun ...`. Remember to point to this data when you are running your code. In case you also write something on the scratch, you need to copy it back to your home directory before finishing the job.
# ### Starting and organizing jobs
#
# To start a job, you simply have to run `sbatch jobfile` where you replace `jobfile` by the filename of the job. Note that no specific file postfix like `.job` is necessary for the job (you can use `.txt` or any other you prefer). After your job has been submitted, it will be first placed into a waiting queue. The SLURM scheduler decides when to start your job based on the time of your job, all other jobs currently running or waiting, and available nodes.
#
# Besides `sbatch`, you can interact with the SLURM scheduler via the following commands:
#
# * `squeue`: Lists all jobs that are currently submitted to Lisa. This can be a lot of jobs as it includes all partitions. You can make it partition-specific using `squeue -p gpu_shared_course`, or only list the jobs of your account: `squeue -u lgpu___` (again, replace `lgpu___` by your username). See the [slurm documentation](https://slurm.schedmd.com/squeue.html) for details.
# * `scancel JOBID`: Cancels and stops a job, independent of whether it is running or pending. The job ID can be found using `squeue`, and is printed when submitting the job via `sbatch`.
# * `scontrol show job JOBID`: Shows additional information of a specific job, like the estimated start time.
#
# ## Troubleshooting
#
# It can happen that you encounter some issues when interacting with Lisa. A short FAQ is provided on the [SURFSara website](https://userinfo.surfsara.nl/systems/lisa/faq), and here we provide a list of common questions/situations we have experienced from past students.
#
# ### Lisa is refusing connection
#
# It can occasionally happen that Lisa refuses the connection when you try to ssh into it. If this happens, you can first try to login to different login nodes. Specifically, try the following three login nodes:
#
# ```bash
# ssh -X lgpu___@login3.lisa.surfsara.nl
# ssh -X lgpu___@login4.lisa.surfsara.nl
# ssh -X lgpu___@login-gpu.lisa.surfsara.nl
# ```
#
# If none of those work, you can try to use the [Pulse Secure UvA VPN](https://student.uva.nl/en/content/az/uvavpn/download/download-uvavpn-software.html?cb) before connecting to Lisa. If this still does not work, the connection issue is likely not on your side. The problem often resolves after 2-3 hours, after which Lisa lets you log in again. If the problem does not resolve after a couple of hours, please contact your TA, and eventually the SURFSara helpdesk.
#
# ### Slurm output file missing
#
# If a job of yours is running, but no slurm output file is created, check whether the path to the output file specified in your job file actually exists. If the specified file points to a non-existing directory, no output file will be created. Note that this is not an issue by default, but you are running your job "blind" without seeing the stdout or stderr channels.
#
# ### Slurm output file is empty for a long time
#
# The slurm output file can lag behind in showing the outputs of your running job. If your job has been running for a couple of minutes and you would have expected a few print statements to have happened, try to flush your stdout stream ([how to flush the output in python](https://stackoverflow.com/questions/230751/how-to-flush-output-of-print-function)).
#
# ### All my jobs are pending
#
# With your student account, the SLURM scheduler restricts you to run only two jobs in parallel at a time. However, you can still queue more jobs that will run in sequence. This is done because with more than 200 students, Lisa could get crowded very fast if we don't guarantee a fair share of resources. If all of your jobs are pending, you can check the reason for pending in the last column of `squeue`. All reasons are listed in the squeue [documentation](https://slurm.schedmd.com/squeue.html) under *JOB REASON CODES*. The following ones are common:
#
# * `Priority`: There are other jobs on Lisa with a higher priority that are also waiting to be run. This means you just have to be patient.
# * `QOSResourceLimit`: The job is requesting more resources than allowed. Check your job file as you are only allowed to have at max. 2 GPUs, 6 CPU cores and 125GB RAM.
# * `Resources`: All nodes on Lisa are currently busy, yours will be scheduled soon.
#
# You can also see the estimated start time of a job by running `scontrol show job JOBID`. However, note that this is the "worst case" scenario for the current number of submitted jobs, as in if all currently running jobs would need their maximum runtime. At the same time, if more people would submit their job with higher priority, yours can fall back in the queue and get a later start time.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import os
import csv
def read_data(data_file, types_file, miss_file, true_miss_file):
    # Read types of data from data file
    with open(types_file) as f:
        types_dict = [{k: v for k, v in row.items()}
                      for row in csv.DictReader(f, skipinitialspace=True)]
    # Read data from input file
    with open(data_file, 'r') as f:
        data = [[float(x) for x in rec] for rec in csv.reader(f, delimiter=',')]
        data = np.array(data)
    raw_data = data
    # Substitute NaN values by something (we assume we have the real missing value mask)
    if true_miss_file:
        with open(true_miss_file, 'r') as f:
            missing_positions = [[int(x) for x in rec] for rec in csv.reader(f, delimiter=',')]
            missing_positions = np.array(missing_positions)
        true_miss_mask = np.ones([np.shape(data)[0], len(types_dict)])
        true_miss_mask[missing_positions[:, 0] - 1, missing_positions[:, 1] - 1] = 0  # Indexes in the csv start at 1
        data_masked = np.ma.masked_where(np.isnan(data), data)
        # We need to fill the data depending on the given data...
        data_filler = []
        for i in range(len(types_dict)):
            if types_dict[i]['type'] == 'cat' or types_dict[i]['type'] == 'ordinal':
                aux = np.unique(data[:, i])
                data_filler.append(aux[0])  # Fill with the first element of the cat (0, 1, or whatever)
            else:
                data_filler.append(0.0)
        data = data_masked.filled(data_filler)
    else:
        true_miss_mask = np.ones([np.shape(data)[0], len(types_dict)])  # It doesn't affect our data
    # Construct the data matrices
    data_complete = []
    for i in range(np.shape(data)[1]):
        if types_dict[i]['type'] == 'cat':
            # Get categories
            cat_data = [int(x) for x in data[:, i]]
            categories, indexes = np.unique(cat_data, return_inverse=True)
            # Transform categories to a vector of 0:n_categories
            new_categories = np.arange(int(types_dict[i]['dim']))
            cat_data = new_categories[indexes]
            # Create one hot encoding for the categories
            aux = np.zeros([np.shape(data)[0], len(new_categories)])
            aux[np.arange(np.shape(data)[0]), cat_data] = 1
            data_complete.append(aux)
        elif types_dict[i]['type'] == 'ordinal':
            # Get categories
            cat_data = [int(x) for x in data[:, i]]
            categories, indexes = np.unique(cat_data, return_inverse=True)
            # Transform categories to a vector of 0:n_categories
            new_categories = np.arange(int(types_dict[i]['dim']))
            cat_data = new_categories[indexes]
            # Create thermometer encoding for the categories
            aux = np.zeros([np.shape(data)[0], 1 + len(new_categories)])
            aux[:, 0] = 1
            aux[np.arange(np.shape(data)[0]), 1 + cat_data] = -1
            aux = np.cumsum(aux, 1)
            data_complete.append(aux[:, :-1])
        elif types_dict[i]['type'] == 'count':
            if np.min(data[:, i]) == 0:
                aux = data[:, i] + 1
                data_complete.append(np.transpose([aux]))
            else:
                data_complete.append(np.transpose([data[:, i]]))
        else:
            data_complete.append(np.transpose([data[:, i]]))
    data = np.concatenate(data_complete, 1)
    # Read Missing mask from csv (contains positions of missing values)
    n_samples = np.shape(data)[0]
    n_variables = len(types_dict)
    miss_mask = np.ones([np.shape(data)[0], n_variables])
    # If there is no mask, assume all data is observed
    if os.path.isfile(miss_file):
        with open(miss_file, 'r') as f:
            missing_positions = [[int(x) for x in rec] for rec in csv.reader(f, delimiter=',')]
            missing_positions = np.array(missing_positions)
        miss_mask[missing_positions[:, 0] - 1, missing_positions[:, 1] - 1] = 0  # Indexes in the csv start at 1
    return data, types_dict, miss_mask, true_miss_mask, n_samples, raw_data
data, types_dict, miss_mask, true_miss_mask, n_samples, raw_data = read_data('data.csv', 'data_types.csv', 'Missing10-50_4.csv', None)
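# To see what the 'ordinal' branch of `read_data` produces, here is the thermometer encoding applied to a toy column (dim = 3, made-up values): category k becomes a row with k+1 leading ones.

```python
import numpy as np

cat_data = np.array([0, 1, 2, 1])   # toy ordinal column, dim = 3
n, dim = len(cat_data), 3
aux = np.zeros([n, 1 + dim])
aux[:, 0] = 1
aux[np.arange(n), 1 + cat_data] = -1
# cumulative sum turns the +1/-1 markers into a run of leading ones
encoded = np.cumsum(aux, 1)[:, :-1]
print(encoded)
```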
df = pd.DataFrame(data)
miss_mask.shape, data.shape
raw_data.shape
miss_mask[:, -1].sum()
types_dict
df.head()
# +
path = '../Results/model_HIVAE_inputDropout_Wine_Missing10-50_1_z2_y5_s10_batch100/model_HIVAE_inputDropout_Wine_Missing10-50_1_z2_y5_s10_batch100_data'
dt = np.array(pd.read_csv(path + '_true.csv', header=None))
dr = np.array(pd.read_csv(path + '_reconstruction.csv', header=None))
# -
dft = pd.DataFrame(dt)
dft.head()
dfr = pd.DataFrame(dr)
dfr.head()
dt_norm = dt / np.linalg.norm(dt)
dr_norm = dr / np.linalg.norm(dr)
req_loss = dt_norm - dr_norm
np.linalg.norm(req_loss)
np.sum(np.abs(dt[:,-1] - dr[:,-1]))/dt.shape[0]
# +
path = '../Results_csv/model_HIVAE_inputDropout_Wine_Missing10-50_1_z2_y5_s10_batch100/model_HIVAE_inputDropout_Wine_Missing10-50_1_z2_y5_s10_batch100'
tre = np.array(pd.read_csv(path + '_train_error.csv', header=None))
tee = np.array(pd.read_csv(path + '_test_error.csv', header=None))
# +
path = '../Results_test_csv/model_HIVAE_inputDropout_Wine_Missing10-50_1_z2_y5_s10_batch100/model_HIVAE_inputDropout_Wine_Missing10-50_1_z2_y5_s10_batch100'
test_tre = np.array(pd.read_csv(path + '_train_error.csv', header=None))
test_tee = np.array(pd.read_csv(path + '_test_error.csv', header=None))
# -
import matplotlib.pyplot as plt
plt.plot(tre.mean(1), label='Observed')
plt.plot(tee.mean(1), label='Missing')
plt.title('Mean train error on observed and missing data')
plt.legend()
plt.ylabel('Error')
plt.xlabel('Epochs');
print('At final epoch:')
print(' Mean train error on observed data:', tre[-1,:].mean())
print(' Mean train error on missing data:', tee[-1,:].mean())
print('Mean test error on observed data:', test_tre.mean())
print('Mean test error on missing data:', test_tee.mean())
A = np.random.randint(2, size=(4000, 4100))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import ee
import io
import time
import requests
import datetime
import itertools
import urllib.request
import numpy as np
import pandas as pd
import rsfuncs as rs
import geopandas as gp
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from pandas.tseries.offsets import MonthEnd
from dateutil.relativedelta import relativedelta
ee.Initialize()
# %load_ext autoreload
# %autoreload 2
# -
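# The monthly windowing that `get_ims` builds below can be sanity-checked offline with pandas alone (no Earth Engine connection needed) — a small sketch with example dates:

```python
import pandas as pd
from datetime import date
from dateutil.relativedelta import relativedelta

startdate, enddate = '2003-01-01', '2003-06-01'
dt_idx = pd.date_range(startdate, enddate, freq='MS')  # month-start stamps

# each image window is [month start, next month start)
windows = [(d.date(), (d + relativedelta(months=1)).date()) for d in dt_idx]
print(windows)
```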
# +
def get_ims(dataset, startdate,enddate, area, scale_factor, return_dates = False, table = False, monthly_mean = False, monthly_sum = False):
'''
Returns gridded images for EE datasets
'''
if monthly_mean:
if monthly_sum:
raise ValueError("cannot perform mean and sum reduction at the same time")
ImageCollection = dataset[0]
var = dataset[1]
native_res = dataset[3]
dt_idx = pd.date_range(startdate,enddate, freq='MS')
ims = []
seq = ee.List.sequence(0, len(dt_idx)-1)
num_steps = seq.getInfo()
# TODO: Make this one loop ?
print("processing:")
print("{}".format(ImageCollection.first().getInfo()['id']))
for i in tqdm(num_steps):
start = ee.Date(startdate).advance(i, 'month')
end = start.advance(1, 'month');
if monthly_mean:
im1 = ee.ImageCollection(ImageCollection).select(var).filterDate(start, end).mean().set('system:time_start', end.millis())
im = ee.ImageCollection(im1)
elif monthly_sum:
im1 = ee.ImageCollection(ImageCollection).select(var).filterDate(start, end).sum().set('system:time_start', end.millis())
im = ee.ImageCollection(im1)
else:
im = ee.ImageCollection(ImageCollection).select(var).filterDate(start, end).set('system:time_start', end.millis())
# This try / catch is probably not great, but needs to be done for e.g. grace which is missing random months
try:
result = im.getRegion(area,native_res,"epsg:4326").getInfo()
ims.append(result)
        except Exception:
            continue
results = []
dates = []
    print("postprocessing")
for im in tqdm(ims):
header, data = im[0], im[1:]
df = pd.DataFrame(np.column_stack(data).T, columns = header)
df.latitude = pd.to_numeric(df.latitude)
df.longitude = pd.to_numeric(df.longitude)
df[var] = pd.to_numeric(df[var])
images = []
for idx,i in enumerate(df.id.unique()):
t1 = df[df.id==i]
arr = rs.array_from_df(t1,var)
arr[arr == 0] = np.nan
images.append(arr*scale_factor)# This is the only good place to apply the scaling factor.
            if return_dates:
                date = df.time.iloc[idx]
                dates.append(datetime.datetime.fromtimestamp(date/1000.0))
        results.append(images)
    flat = [item for sublist in results for item in sublist]
    if return_dates:
        return flat, dates
    return flat
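# The last step of `get_ims` flattens the per-month lists of arrays into a single list
# with a nested comprehension; a minimal, self-contained illustration of that idiom
# (the month labels below are made up):

```python
# Flatten a list of lists into one list, as get_ims does with its results.
results = [["jan_a", "jan_b"], ["feb_a"], ["mar_a", "mar_b"]]
flat = [item for sublist in results for item in sublist]
print(flat)  # ['jan_a', 'jan_b', 'feb_a', 'mar_a', 'mar_b']
```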
def get_scale_factor(area, var = "SCALE_FACTOR"):
img = ee.ImageCollection(ee.Image("NASA/GRACE/MASS_GRIDS/LAND_AUX_2014").select(var).clip(area)).getRegion(area,50000,"epsg:4326").getInfo()
header, data = img[0], img[1:]
df = pd.DataFrame(np.column_stack(data).T, columns = header)
df.latitude = pd.to_numeric(df.latitude)
df.longitude = pd.to_numeric(df.longitude)
df[var] = pd.to_numeric(df[var])
sfim = []
for idx,i in enumerate(df.id.unique()):
t1 = df[df.id==i]
arr = rs.array_from_df(t1,var)
arr[arr == 0] = np.nan
sfim.append(arr)
return sfim[0]
def get_grace_unc(dataset, startdate, enddate, area):
col = dataset[0]
var = "uncertainty"
scaling_factor = dataset[2]
dt_idx = pd.date_range(startdate,enddate, freq='MS')
sums = []
seq = ee.List.sequence(0, len(dt_idx)-1)
num_steps = seq.getInfo()
print("processing:")
print("{}".format(col.first().getInfo()['id']))
for i in tqdm(num_steps):
start = ee.Date(startdate).advance(i, 'month')
end = start.advance(1, 'month');
try:
im = ee.ImageCollection(col).select(var).filterDate(start, end).sum().set('system:time_start', end.millis())
t2 = im.multiply(ee.Image.pixelArea()).multiply(scaling_factor).multiply(1e-6) # Multiply by pixel area in km^2
scale = t2.projection().nominalScale()
sumdict = t2.reduceRegion(
reducer = ee.Reducer.sum(),
geometry = area,
scale = scale)
            result = sumdict.getInfo()[var] * 1e-5  # cm to km
sums.append(result)
        except Exception:
sums.append(np.nan) # If there is no grace data that month, append a np.nan
sumdf = pd.DataFrame(np.array(sums), dt_idx+MonthEnd(0))
sumdf.columns = [var]
df = sumdf.astype(float)
return df
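# The unit handling in `get_grace_unc` can be sanity-checked in isolation: a
# liquid-water-equivalent thickness in cm over an area in km^2 gives a volume in
# km^3 after multiplying by 1e-5 (1 cm = 1e-5 km). A toy check with hypothetical numbers:

```python
# Hypothetical values: 2.5 cm of water over 100 km^2.
thickness_cm = 2.5
area_km2 = 100.0
volume_km3 = thickness_cm * 1e-5 * area_km2  # 1 cm = 1e-5 km
print(volume_km3)  # ~0.0025
```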
# +
# Select area
shp = gp.read_file("../shape/argus_grace.shp")
area = rs.gdf_to_ee_poly(shp)
# Load data
data = rs.load_data()
# Set GRACE study period
strstart = '2002-04-01'
strend = '2017-03-31'
startgrace = datetime.datetime.strptime(strstart, "%Y-%m-%d")
endgrace = datetime.datetime.strptime(strend, "%Y-%m-%d")
# -
# +
# Plot the spatial scaling coefficients for sanity
scaling_factor = get_scale_factor(area)
plt.imshow(scaling_factor)
plt.colorbar()
# rs.write_raster(scaling_factor, shp, "grace_scaling_factor.tif")
# -
# Get grace products
cri = rs.get_grace(data['cri'], startgrace, endgrace, area)
mas = rs.get_grace(data['mas'], startgrace, endgrace, area)
gfz = rs.get_grace(data['gfz'], startgrace, endgrace, area)
csr = rs.get_grace(data['csr'], startgrace, endgrace, area)
jpl = rs.get_grace(data['jpl'], startgrace, endgrace, area)
# Get uncertainty
cri_unc = get_grace_unc(data['cri_unc'], startgrace, endgrace, area)
mas_unc = get_grace_unc(data['mas_unc'], startgrace, endgrace, area)
mas_upper = pd.DataFrame(mas['lwe_thickness'] + mas_unc['uncertainty'], columns = ['mas_upper'])
mas_lower = pd.DataFrame(mas['lwe_thickness'] - mas_unc['uncertainty'], columns = ['mas_lower'])
cri_upper = pd.DataFrame(cri['lwe_thickness'] + cri_unc['uncertainty'], columns = ['cri_upper'])
cri_lower = pd.DataFrame(cri['lwe_thickness'] - cri_unc['uncertainty'], columns = ['cri_lower'])
# +
# Now the Hydrology data
# +
# Add an extra month beforehand, since we difference from the starting point (i.e. dS/dt)
strstart = '2002-03-01'
strend = '2017-03-31'
startdate = datetime.datetime.strptime(strstart, "%Y-%m-%d")
enddate = datetime.datetime.strptime(strend, "%Y-%m-%d")
# -
# Soil moisture
tc_sm = rs.calc_monthly_mean(data['tc_sm'], startdate, enddate, area)
tc_sm.rename(columns = {"soil" : "tc_sm"}, inplace = True)
# +
# Get alldepths gldas
gldas_gsm1 = rs.calc_monthly_mean(data['gsm1'], startdate, enddate, area)
gldas_gsm2 = rs.calc_monthly_mean(data['gsm2'], startdate, enddate, area)
gldas_gsm3 = rs.calc_monthly_mean(data['gsm3'], startdate, enddate, area)
gldas_gsm4 = rs.calc_monthly_mean(data['gsm4'], startdate, enddate, area)
# Compile GLDAS
gldas_sm = pd.DataFrame(pd.concat([gldas_gsm1,gldas_gsm2,gldas_gsm3,gldas_gsm4], axis = 1).sum(axis =1))
gldas_sm.columns=['sm_gldas']
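# Compiling the four GLDAS layers relies on `pd.concat(..., axis=1).sum(axis=1)` to add
# the per-layer columns row-wise; a toy sketch of that pattern with made-up values:

```python
import pandas as pd

# Two hypothetical soil-moisture layers on the same monthly index.
layer1 = pd.DataFrame({"sm": [10.0, 12.0]}, index=["2002-03", "2002-04"])
layer2 = pd.DataFrame({"sm": [3.0, 4.0]}, index=["2002-03", "2002-04"])

# Row-wise total across layers.
total = pd.concat([layer1, layer2], axis=1).sum(axis=1)
print(total.tolist())  # [13.0, 16.0]
```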
# +
# Get smos / smap data
smos_ssm = rs.calc_monthly_mean(data['smos_ssm'], "2010-01-01", enddate, area)
smos_susm = rs.calc_monthly_mean(data['smos_susm'],"2010-01-01", enddate, area)
smos_sm = pd.concat([smos_ssm, smos_susm], axis = 1).sum(axis =1)
smap_ssm = rs.calc_monthly_mean(data['smap_ssm'], '2015-04-01', enddate, area)
smap_susm = rs.calc_monthly_mean(data['smap_susm'],'2015-04-01', enddate, area)
smap_sm = pd.concat([smap_ssm, smap_susm], axis = 1).sum(axis =1)
# Make dfs and rename cols
smap_sm = pd.DataFrame(smap_sm, columns = ['sm_smap'])
smos_sm = pd.DataFrame(smos_sm, columns = ['sm_smos'])
# -
# SWE-E
fldas_swe = rs.calc_monthly_mean(data['fldas_swe'], startdate,enddate, area)
gldas_swe = rs.calc_monthly_mean(data['gldas_swe'], startdate,enddate, area)
tc_swe = rs.calc_monthly_mean(data['tc_swe'], startdate,enddate, area)
dmet_swe = rs.calc_monthly_mean(data['dmet_swe'], startdate,enddate, area)
# Preprocessed swe data from UCB LRM / SNODAS
lrm_swe = rs.get_lrm_swe("../shape/argus_grace.shp", data_dir = "../data/LRM_SWE_monthly/")
snodas_swe = rs.get_snodas_swe("../shape/argus_grace.shp", data_dir ="../data/SNODAS/SNODAS_processed/" )
fig, ax = plt.subplots()
lrm_swe.plot(ax = ax)
snodas_swe.plot(ax = ax)
fldas_swe.plot(ax = ax)
dmet_swe.plot(ax = ax)
gldas_swe.plot(ax = ax)
tc_swe.plot(ax = ax)
# Rename swe columns
dmet_swe.rename(columns = {'swe' : 'swe_dmet'}, inplace = True)
fldas_swe.rename(columns= {'SWE_inst':'swe_fldas'}, inplace = True)
gldas_swe.rename(columns = {'SWE_inst':'swe_gldas'}, inplace = True)
tc_swe.rename(columns = {'swe':'swe_tc'}, inplace = True)
# +
# Get the Reservoir data in the grace domain
shpfile = "../shape/argus_grace.shp"
print("**** Begin Fetching CDEC Reservoir Storage Data for {} ****".format(shpfile))
# Read the shapefile
gdf = gp.read_file(shpfile)
# Spatial join cdec reservoirs to supplied gdf
reservoirs = gp.read_file("../shape/cdec_reservoirs.shp")
within_gdf = gp.sjoin(reservoirs, gdf, how='inner', op='within')
# Download Storage (SensorNums = 15) data by query str:
start = datetime.datetime(1997, 1, 1)
end = datetime.datetime(2020, 1, 1)
dt_idx = pd.date_range(start,end, freq='M')
data = {}
for i in tqdm(within_gdf.ID):
print("processing " + i )
url = "https://cdec.water.ca.gov/dynamicapp/req/CSVDataServlet?Stations={}&SensorNums=15&dur_code=M&Start=1997-01-01&End=2020-01-01".format(i)
urlData = requests.get(url).content
resdf = pd.read_csv(io.StringIO(urlData.decode('utf-8')))
if resdf.empty:
pass
else:
data[i] = resdf
storage = []
for k,v in data.items():
    dat = pd.to_numeric(v.VALUE, errors = "coerce")
    # keep only stations with a complete monthly record
    if len(dat) >= len(dt_idx):
        storage.append(dat)
storage_sum = np.nansum(np.column_stack(storage), axis = 1) * 1.23348e-6 # acre ft to km^3
Sres = pd.DataFrame(zip(dt_idx,storage_sum), columns = ['date',"Sres"])
Sres.set_index('date', inplace = True)
print("Mean reservoir storage = {} km^3".format(np.mean(Sres)))
print("Reservoir Storage DONE ====================================== ")
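# The acre-foot conversion used above is easy to verify in isolation
# (1 acre-ft ~ 1.23348e-6 km^3); hypothetical storage value:

```python
# Hypothetical total: one million acre-feet of reservoir storage.
storage_af = 1_000_000
storage_km3 = storage_af * 1.23348e-6  # acre-ft to km^3
print(storage_km3)  # ~1.23348
```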
# +
# Calc means for each station with data
stn_means = {}
for k,v in data.items():
stn_means[k] = np.nanmean(pd.to_numeric(v.VALUE, errors = "coerce"))* 1.23348e-6 # acre ft to km^3
# Reassemble a gdf to write out
sdfs=[]
for k,v in stn_means.items():
    sdf = within_gdf[within_gdf['ID'] == k].copy()  # copy to avoid SettingWithCopyWarning
    sdf.loc[:,'Smean'] = v
sdfs.append(sdf)
# -
# Write reservoirs in the grace domain
pd.concat(sdfs).to_file("../shape/reservoirs_grace.shp")
# +
# Rename all df columns and format for ensemble runs
# SWE
swecoldict = {'swe_gldas':'GLDAS',
'swe_fldas':'FLDAS',
'swe_dmet':"Daymet",
'swe_tc':'TC',
'swe_lrm': "UCB LRM",
'swe_snodas': "SNODAS"}
swedf = pd.concat([dmet_swe,gldas_swe,fldas_swe,tc_swe,lrm_swe,snodas_swe], axis = 1)
swedf.rename(columns = swecoldict, inplace = True)
# SM
smcoldict = { 'sm_gldas':'GLDAS',
'tc_sm':'TC',
'sm_smos': "SMOS",
'sm_smap': "SMAP"}
smdf = pd.concat([gldas_sm,tc_sm,smos_sm,smap_sm], axis = 1)
smdf.rename(columns = smcoldict, inplace = True)
# Grace
cri.rename(columns = {'lwe_thickness': 'lwe_thickness_cri'}, inplace = True)
mas.rename(columns = {'lwe_thickness': 'lwe_thickness_mas'}, inplace = True)
grdf = pd.concat([cri, cri_lower, cri_upper, mas, mas_lower, mas_upper, jpl, gfz, csr], axis = 1)
gm_col_dict = { 'lwe_thickness_cri':'CRI',
'lwe_thickness_mas':'MAS',
'lwe_thickness_jpl':"JPL",
'lwe_thickness_gfz':'GFZ',
'lwe_thickness_csr':"CSR",
'cri_lower' : "CRI Lower Bound",
'cri_upper' : "CRI Upper Bound",
'mas_upper' : "MAS Upper Bound",
'mas_lower' : "MAS Lower Bound",
}
grdf.rename(columns = gm_col_dict, inplace = True)
# +
# Make the ensemble
swelist = ['UCB LRM','SNODAS', "TC","FLDAS"]
smlist = ['SMOS','SMAP','TC','GLDAS']
glist = ["CRI Upper Bound", "CRI", "CRI Lower Bound"]
combolist = list(itertools.product(*[swelist,smlist,glist]))
print("Ensemble has {} members".format(str(len(combolist))))
# -
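# `itertools.product` builds every (SWE, SM, GRACE) combination, so the ensemble size is
# the product of the list lengths (4 x 4 x 3 = 48 above). A toy check with shorter lists:

```python
import itertools

swelist = ["LRM", "SNODAS"]
smlist = ["SMOS", "SMAP", "TC"]
glist = ["CRI"]
combolist = list(itertools.product(swelist, smlist, glist))
print(len(combolist))  # 6 (= 2 * 3 * 1)
```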
grace_ens = []
for swe,sm,grace in combolist:
ds = (grdf[grace] - swedf[swe].diff() - smdf[sm].diff() - Sres['Sres'].diff())
end = grdf['CRI'].dropna().index[-1]
mask = (ds.index < end)
grace_ens.append(ds.loc[mask].interpolate(method="polynomial", order = 3))
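# Each ensemble member is a water-balance residual: GRACE total storage minus the
# month-to-month change (`.diff()`) of the other stores. A minimal sketch with made-up
# numbers and only one non-GRACE store:

```python
import pandas as pd

grace = pd.Series([10.0, 12.0, 15.0])  # total storage anomaly (km^3)
swe = pd.Series([1.0, 2.0, 2.0])       # snow store (km^3)

# Residual change: subtract the change in the snow store from the total.
ds_gw = grace - swe.diff()
print(ds_gw.tolist()[1:])  # [11.0, 15.0] (first entry is NaN from diff)
```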
# +
fig, ax1 = plt.subplots()
# Increase alpha below to show individual ensemble members
for d in grace_ens:
ax1.plot(d / 155000 * 1e6, alpha = 0.0, color = 'gray')
p25 = np.nanquantile(np.vstack(grace_ens)/ 155000 * 1e6,0.15, axis =0)
p75 = np.nanquantile(np.vstack(grace_ens)/ 155000 * 1e6,0.85, axis =0)
ax1.plot(grace_ens[0].index,np.nanmean(np.vstack(grace_ens)/ 155000 * 1e6, axis = 0), linewidth = 2, color = 'red')
ax1.set_ylabel('ΔSgw (mm)')
# ax1.set_ylim([np.nanmin(p25),np.nanmax(p75)])
ax2 = ax1.twinx()
# Increase alpha below to show individual ensemble members
for d in grace_ens:
ax2.plot(d, alpha = 0.0, color = 'gray')
# ensemble min / max envelope
p25 = np.nanmin(np.vstack(grace_ens), axis =0)
p75 = np.nanmax(np.vstack(grace_ens), axis =0)
ax2.plot(grace_ens[0].index,np.nanmean(np.vstack(grace_ens), axis = 0), alpha = 0)
ax2.fill_between(grace_ens[0].index,p25,p75, alpha = 0.3, color = 'red')
ax2.tick_params(axis='y')
ax2.set_ylabel('ΔSgw ($km^3$)')
ax2.set_ylim([np.nanmin(p25),np.nanmax(p75)])
ax2.set_title("third degree polynomial fit")
plt.show()
# -
grace_df_out = pd.DataFrame([np.nanmin(np.vstack(grace_ens), axis =0),
np.nanmean(np.vstack(grace_ens), axis =0),
np.nanmax(np.vstack(grace_ens), axis =0)]).T
grace_df_out.index = grace_ens[0].index
grace_df_out.columns = ['grace_min', 'grace_mean', 'grace_max']
grace_df_out.dropna().to_csv("../data/grace.csv")
| code/03_GRACE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# name: python3
# ---
# # Simple steps and procedures followed in the implementation of accuracy assessment
# * 1. import libraries,
# * 2. import the dataset,
# * 3. view the dataset,
# * 4. Do a pairplot of the dataset
# * 5. Extract the output variable into y
# * 6. Examine y
# * 7. Import the library for splitting
# * 8. View the Pearson correlation coefficient (R)
from matplotlib import pyplot as plt
import seaborn as sns
# Tell matplotlib to render plots inline in this notebook
# %matplotlib inline
import pandas as pd
from pandas import *
data = pd.read_csv('Correlation2010.csv', index_col = 0)  # To remove the unnamed index column
data.head(5)
sns.pairplot(data, x_vars = ['Ch-a Estimates'], y_vars= 'Chl-a Reference', height= 4, aspect = 1.0, kind='reg')
plt.title("Correlation Between Chl-a Observed and Estimates 2010")
# ## Pearson Correlation co-efficient (R)
corrDf = data[["Chl-a Reference", "Ch-a Estimates"]]
corrDf.corr()
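# `DataFrame.corr()` defaults to the Pearson coefficient; a toy example (made-up data)
# where the relationship is perfectly linear, so R = 1:

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0], "y": [2.0, 4.0, 6.0, 8.0]})
print(round(df.corr().loc["x", "y"], 6))  # 1.0
```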
| 1.Simple Linear Regression/3.Model Accurcay/Regression Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from gs_quant.instrument import IRSwaption, IRSwap
from gs_quant.risk import IRDeltaParallel, IRDeltaParallelLocalCcy, IRDelta
from gs_quant.common import AggregationLevel
from gs_quant.session import Environment, GsSession
# + pycharm={"name": "#%%\n"}
# external users should substitute their client id and secret; please skip this step if using internal jupyterhub
GsSession.use(client_id=None, client_secret=None, scopes=('run_analytics',))
# -
# ### Solve for Delta
# Size swap to match swaption delta
# + pycharm={"name": "#%%\n"}
# create 1y5y payer swaption and calculate delta
payer_swaption = IRSwaption('Pay', '5y', 'EUR', expiration_date='1y', buy_sell='Sell')
swaption_delta = payer_swaption.calc((IRDeltaParallel, IRDeltaParallelLocalCcy))
# + pycharm={"name": "#%%\n"}
# solve for $ delta as IRDeltaParallel is in USD
hedge1 = IRSwap('Receive', '5y', 'USD', fixed_rate='atm', notional_amount='${}/bp'.format(swaption_delta[IRDeltaParallel]))
swap_delta = hedge1.calc(IRDeltaParallel)
# check that delta is approximately equal
swaption_delta[IRDeltaParallel] - swap_delta
# + pycharm={"name": "#%%\n"}
# solve for local ccy delta
hedge2 = IRSwap('Receive', '5y', 'EUR', fixed_rate='atm', notional_amount='{}/bp'.format(swaption_delta[IRDeltaParallelLocalCcy]))
parallel_local_delta = IRDelta(aggregation_level=AggregationLevel.Type, currency='local')
swap_delta_local = hedge2.calc(parallel_local_delta)
# check that delta is approximately equal
swaption_delta[IRDeltaParallelLocalCcy] - swap_delta_local
| gs_quant/documentation/02_pricing_and_risk/00_instruments_and_measures/examples/01_rates/000118_solve_delta.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
from bs4 import BeautifulSoup
import re
import json
import time
# +
with open('negeri.json') as fopen:
negeris = json.load(fopen)
max_page = 101
processed = ['selangor']
negeris = [r['slugId'] for r in negeris['data']['places']['popular']['items'] if r['slugId'] not in processed]
negeris
# -
headers = """
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding: gzip, deflate, br
accept-language: en-MY,en;q=0.9,en-US;q=0.8,ms;q=0.7
cache-control: max-age=0
cookie: KP_UID=35430865-8c0a-d297-d836-4abd6c36ca88; feature-RWD-6082=false; _gcl_au=1.1.1750258107.1591344254; _ga=GA1.3.1576862493.1591344255; _gid=GA1.3.820041376.1591344255; gdpr_notification=yes; scarab.visitor=%226DBDAEC813BAD72D%22; ASP.NET_SessionId=o2dtvxcqrim4y0fxg4byty0j; anonymoususerid=47733004-62ab-4c77-bf4d-804f23e527d9; scarab.mayAdd=%5B%7B%22i%22%3A%228775647%22%7D%5D; scarab.profile=%228775647%7C1591344450%22; KP_UIDz=3u1wq3j%2BxrKi5q0goXvjoA%3D%3D%3A%3AZJJj906tLlDktNPhXOkOKF2RVcWoavII5OaFeNm%2Fd4J0%2FURLl2EdOM6xzlOazuNcm8LbGdyXVxBw5xsVq3%2BWVeX6mriRnDkTaIoAS%2BLYefSpqE%2BSWVx4qCmmRyOLQbmvb9u0O134eI%2BoRo5IFSpZSAPJ1GA3tmnQJKYn3%2BGQvtCdxDAiwQL9FJcrHX5wZ6riv8icKx3HOcdJXCAtsni9whlOQ9BlIFl6uYO%2BJ3yS6I74lbbWgGpEWaVlKPaXXBQ7MCPEdGCi6zAiwW4Aa4Ud6sk0U1VYC4qtGwxHny5prjsPgmgnzH8bIbM97w%2FXA1zFtVxOLN0XuNmhZQDV7xKhLbFwasTW5Z4Xht6UZRwxMkX5z7x5A3EfT2SnWrEhOVUKQkb7dlpCMig3HmtZ1%2BBLmz4wJMolVSdWIog23R1g6DOtVRKsi0eM%2BJJo%2FADZuB4uKIuXX4NhtSQvYpfmkBqaGl29kNiwkNoPVX6sCpwP4D3DgfwrUFY0Yas%2FQydtg5bUOWccWuYnHh2v4T9EpoZlK6Bbv%2BC3IotlB9ztlL%2F091D%2F5%2FPh5soWv0ojRXmUi0vb
sec-fetch-dest: document
sec-fetch-mode: navigate
sec-fetch-site: same-origin
sec-fetch-user: ?1
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36
"""
# +
combined = {}
for row in headers.split('\n')[1:-1]:
    # split on the first colon only, so values containing ':' survive intact
    key, val = row.split(':', 1)
    combined[key] = val.strip()
combined
# +
from tqdm import tqdm
for negeri in negeris:
print(negeri)
for i in tqdm(range(max_page)):
try:
url = f'https://www.iproperty.com.my/sale/{negeri}/all-residential/?l1&page={i + 1}'
r = requests.get(url,headers = combined)
bs = BeautifulSoup(r.content)
scripts = bs.findAll('script')
r = [script.get_text() for script in scripts if 'window.__INITIAL_STATE__' in script.get_text()][0]
r = re.sub(r'[ ]+', ' ', r).strip()
r = r.strip().replace('window.__INITIAL_STATE__ = ', '').replace(';\n window.__RENDER_APP_ERROR__ = false;\n window.__SERVICE_ENV__ = "production";\n window.__PUBLIC_PATH__= "https://assets-cdn.iproperty.com.my/assets/";', '')
with open(f'{negeri}-page-{i + 1}.json', 'w') as fopen:
json.dump(json.loads(r), fopen)
time.sleep(2)
except Exception as e:
print(i)
print(e)
# -
| crawl/iproperty/notebook/sales-residential.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:wildfires] *
# language: python
# name: conda-env-wildfires-py
# ---
# ## Setup
from specific import *
# ### Retrieve previous results from the 'model' notebook
X_train, X_test, y_train, y_test = data_split_cache.load()
rf = get_model()
X_train.shape[0], X_train.shape[0] / 2000, shap_params["max_index"], shap_params[
"job_samples"
]
# ### SHAP values - via PBS array jobs
# #### Normal SHAP values - single thread timing
#
# ~ ? s / sample, ? GB mem
# +
# %%time
# get_shap_values(rf, X_train[0:100], interaction=False)
# -
plt.plot([10, 30, 50, 100], [19.6, 57.2, 95, (3 * 60) + 18], marker="o")
plt.xlabel("Samples")
plt.ylabel("time (s)")
# ### SHAP values - see PBS (array) job folder 'shap'!
# #### Load the SHAP values (keeping track of missing chunks)
# +
@shap_cache
def load_shap_from_chunks():
# Load the individual data chunks.
shap_chunks = []
missing = []
for index in tqdm(range(shap_params["max_index"] + 1), desc="Loading chunks"):
try:
shap_chunks.append(
SimpleCache(
f"tree_path_dependent_shap_{index}_{shap_params['job_samples']}",
cache_dir=os.path.join(CACHE_DIR, "shap"),
verbose=0,
).load()
)
except NoCachedDataError:
missing.append(index)
if missing:
print("missing:", missing)
print("nr missing:", len(missing))
raise Exception("missing data")
return np.vstack(shap_chunks)
shap_values = load_shap_from_chunks()
# -
# ### Manual recalculation
# +
# import multiprocessing
# def cached_get_shap_values(index, model, X):
# tree_path_dependent_shap_cache = SimpleCache(
# f"tree_path_dependent_shap_{index}_{shap_params['job_samples']}",
# cache_dir=os.path.join(CACHE_DIR, "shap"),
# )
# @tree_path_dependent_shap_cache
# def _shap_vals():
# return get_shap_values(model, X, interaction=False)
# return _shap_vals()
# with multiprocessing.Pool(processes=get_ncpus()) as pool:
# async_results = [
# pool.apply_async(
# cached_get_shap_values, (
# index,
# rf,
# X_train[index * shap_params['job_samples'] : (index + 1) * shap_params['job_samples']]
# )
# )
# for index in []
# ]
# results = [async_result.get() for async_result in async_results]
# -
# %%time
with figure_saver("SHAP"):
shap.summary_plot(
shap_values,
shorten_columns(X_train[: shap_params["total_samples"]]),
title="SHAP Feature Importances",
show=False,
)
mean_abs_shap = np.mean(np.abs(shap_values), axis=0)
mean_shap_importances = pd.DataFrame(
[shorten_features(X_train.columns), mean_abs_shap], index=["column", "shap"]
)
mean_shap_importances = mean_shap_importances.T
mean_shap_importances.sort_values("shap", ascending=False, inplace=True)
mean_shap_importances
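# The ranking above is just the mean absolute SHAP value per feature; the same
# computation on a toy matrix (hypothetical values, feature names f1/f2):

```python
import numpy as np
import pandas as pd

# Toy SHAP values: 3 samples (rows) x 2 features (columns).
shap_vals = np.array([[0.1, -0.5],
                      [-0.2, 0.4],
                      [0.3, -0.3]])
ranking = pd.Series(np.abs(shap_vals).mean(axis=0), index=["f1", "f2"])
ranking = ranking.sort_values(ascending=False)
print(ranking.index.tolist())  # ['f2', 'f1']
```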
# ### Interaction SHAP values - see PBS (array) job folder 'shap_interaction'!
#
# ~ ? s / sample !!, ? GB mem
# +
# %%time
# get_shap_values(rf, X_train[0:5], interaction=True)
# +
@shap_interact_cache
def load_shap_interact_from_chunks():
# Load the individual data chunks.
shap_interact_chunks = []
missing = []
for index in tqdm(
range(0, shap_interact_params["max_index"] + 1), desc="Loading chunks"
):
try:
shap_interact_chunks.append(
SimpleCache(
f"tree_path_dependent_shap_interact_{index}_{shap_interact_params['job_samples']}",
cache_dir=os.path.join(CACHE_DIR, "shap_interaction"),
verbose=0,
).load()
)
except NoCachedDataError:
missing.append(index)
if missing:
print("missing:", missing)
print("nr missing:", len(missing))
raise Exception("missing data")
return np.vstack(shap_interact_chunks)
shap_interact_values = load_shap_interact_from_chunks()
# -
# ### Manual recalculation
# +
# import multiprocessing
# def cached_get_interact_shap_values(index, model, X):
# tree_path_dependent_shap_interact_cache = SimpleCache(
# f"tree_path_dependent_shap_interact_{index}_{shap_interact_params['job_samples']}",
# cache_dir=os.path.join(CACHE_DIR, "shap_interaction"),
# )
# @tree_path_dependent_shap_interact_cache
# def _shap_vals():
# return get_shap_values(model, X, interaction=True)
# return _shap_vals()
# with multiprocessing.Pool(processes=get_ncpus()) as pool:
# async_results = [
# pool.apply_async(
# cached_get_interact_shap_values, (
# index,
# rf,
# X_train[index * shap_interact_params['job_samples'] : (index + 1) * shap_interact_params['job_samples']]
# )
# )
# for index in []
# ]
# results = [async_result.get() for async_result in async_results]
# -
with figure_saver("SHAP_interaction"):
shap.summary_plot(
shap_interact_values,
shorten_columns(X_train[: shap_interact_params["total_samples"]]),
title="SHAP Feature Interactions",
show=False,
)
with figure_saver("SHAP_interaction_compact"):
shap.summary_plot(
shap_interact_values,
shorten_columns(X_train[: shap_interact_params["total_samples"]]),
title="SHAP Feature Interactions",
show=False,
plot_type="compact_dot",
)
mean_interact = np.mean(np.abs(shap_interact_values), axis=0)
fig, ax = plt.subplots(figsize=(10, 10))
ax = sns.heatmap(
np.log(mean_interact),
square=True,
xticklabels=shorten_features(X_train.columns),
yticklabels=shorten_features(X_train.columns),
ax=ax,
)
ax.xaxis.set_tick_params(rotation=90)
ax.yaxis.set_tick_params(rotation=0)
_ = fig.suptitle("log(SHAP Interaction Values)")
# ### Get the most significant interactions
def get_highest_interact_index(index, mean_interact):
indices = np.argsort(-mean_interact[index])
return indices[indices != index][0]
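# A quick toy check of `get_highest_interact_index` (reproduced here so the snippet is
# self-contained): for row 0 of the matrix below, the largest off-diagonal entry sits
# in column 2.

```python
import numpy as np

def get_highest_interact_index(index, mean_interact):
    # Sort row `index` descending and return the strongest partner feature.
    indices = np.argsort(-mean_interact[index])
    return indices[indices != index][0]

m = np.array([[9.0, 1.0, 3.0],
              [1.0, 9.0, 0.5],
              [3.0, 0.5, 9.0]])
print(get_highest_interact_index(0, m))  # 2
```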
# +
masked_mean_interact = mean_interact.copy()
masked_mean_interact[np.triu_indices(mean_interact.shape[0])] = np.nan
fig, ax = plt.subplots(figsize=(10, 10))
ax = sns.heatmap(
np.log(masked_mean_interact),
square=True,
xticklabels=shorten_features(X_train.columns),
yticklabels=shorten_features(X_train.columns),
ax=ax,
)
ax.xaxis.set_tick_params(rotation=90)
ax.yaxis.set_tick_params(rotation=0)
_ = fig.suptitle("log(SHAP Interaction Values)")
# +
interact_indices = np.tril_indices(mean_interact.shape[0])
interact_values = mean_interact[interact_indices]
interact_data = {}
for i, j, interact_value in zip(*interact_indices, interact_values):
if i == j:
continue
interact_data[
tuple(
sorted(
(
shorten_features(X_train.columns[i]),
shorten_features(X_train.columns[j]),
)
)
)
] = interact_value
interact_data = pd.Series(interact_data).sort_values(ascending=False, inplace=False)
@interact_data_cache
def dummy_func():
return interact_data
dummy_func()
print(interact_data[:40].to_latex())
# -
interact_data[:20]
interact_data[-10:]
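# The pairwise ranking above walks the lower triangle of the interaction matrix; an
# equivalent toy extraction uses `np.tril_indices` with `k=-1`, which skips the diagonal
# directly instead of testing `i == j` inside the loop (made-up values below):

```python
import numpy as np

m = np.array([[0.0, 0.0, 0.0],
              [5.0, 0.0, 0.0],
              [1.0, 2.0, 0.0]])
rows, cols = np.tril_indices(m.shape[0], k=-1)  # strictly below the diagonal
pairs = {(int(a), int(b)): m[a, b] for a, b in zip(rows, cols)}
print(max(pairs, key=pairs.get))  # (1, 0)
```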
# ### Compare the approximate and 'exact' interactions
# +
length = 18
header = f"{'Feature':>{length}} : {'Approx':>{length}} {'Non Approx':>{length}}"
print(header)
print("-" * len(header))
for i in range(len(X_train.columns)):
approx = shap.common.approximate_interactions(
X_train.columns[i], shap_values, X_train[: shap_params["total_samples"]]
)[0]
non_approx = get_highest_interact_index(i, mean_interact)
features = shorten_features(
X_train.columns[index] for index in (i, approx, non_approx)
)
print(f"{features[0]:>{length}} : {features[1]:>{length}} {features[2]:>{length}}")
# -
# ### Dependence plots with the approximate dependence metric
# +
def approx_dependence_plot(
i, shap_values=shap_values, X=X_train[: shap_params["total_samples"]]
):
fig, ax = plt.subplots(figsize=(7, 5))
shap.dependence_plot(
i, shap_values, X, alpha=0.1, ax=ax,
)
figure_saver.save_figure(
fig,
f"shap_dependence_{X_train.columns[i]}_approx",
sub_directory="shap_dependence_approx",
)
with concurrent.futures.ProcessPoolExecutor(max_workers=32) as executor:
plot_fs = []
for i in range(len(X_train.columns)):
plot_fs.append(executor.submit(approx_dependence_plot, i))
for plot_f in tqdm(
concurrent.futures.as_completed(plot_fs),
total=len(plot_fs),
desc="Plotting approx SHAP dependence plots",
):
pass
# +
def dependence_plot(
i,
shap_values=shap_values,
X=X_train[: shap_params["total_samples"]],
mean_interact=mean_interact,
):
fig, ax = plt.subplots(figsize=(7, 5))
shap.dependence_plot(
i,
shap_values,
X,
interaction_index=get_highest_interact_index(i, mean_interact),
alpha=0.1,
ax=ax,
)
figure_saver.save_figure(
fig, f"shap_dependence_{X_train.columns[i]}", sub_directory="shap_dependence"
)
with concurrent.futures.ProcessPoolExecutor(max_workers=32) as executor:
plot_fs = []
for i in range(len(X_train.columns)):
plot_fs.append(executor.submit(dependence_plot, i))
for plot_f in tqdm(
concurrent.futures.as_completed(plot_fs),
total=len(plot_fs),
desc="Plotting SHAP dependence plots",
):
pass
# -
# ### SHAP force plot
# #### force_plot memory usage
# ~N(N-1)/2 pairwise values at 8 bytes each for N = 5e4 samples, in GB
(5e4 * (5e4 - 1) // 2) * 8 / 1e9
shap.initjs()
rf.n_jobs = 32
N = int(1e3)
shap.force_plot(
np.mean(rf.predict(X_train[:N])), shap_values[:N], shorten_columns(X_train[:N])
)
| analyses/seasonality_paper_st/vod_only/model_analysis_shap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Preprocessing
#
# In this lab, we will be exploring how to preprocess tweets for sentiment analysis. We will provide a function for preprocessing tweets during this week's assignment, but it is still good to know what is going on under the hood. By the end of this lecture, you will see how to use the [NLTK](http://www.nltk.org) package to perform a preprocessing pipeline for Twitter datasets.
# ## Setup
#
# You will be doing sentiment analysis on tweets in the first two weeks of this course. To help with that, we will be using the [Natural Language Toolkit (NLTK)](http://www.nltk.org/howto/twitter.html) package, an open-source Python library for natural language processing. It has modules for collecting, handling, and processing Twitter data, and you will be acquainted with them as we move along the course.
#
# For this exercise, we will use a Twitter dataset that comes with NLTK. This dataset has been manually annotated and serves to establish baselines for models quickly. Let us import them now as well as a few other libraries we will be using.
import nltk # Python library for NLP
from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK
import matplotlib.pyplot as plt # library for visualization
import random # pseudo-random number generator
# ## About the Twitter dataset
#
# The sample dataset from NLTK is separated into positive and negative tweets. It contains 5000 positive tweets and 5000 negative tweets exactly. The exact match between these classes is not a coincidence. The intention is to have a balanced dataset. That does not reflect the real distributions of positive and negative classes in live Twitter streams. It is just because balanced datasets simplify the design of most computational methods that are required for sentiment analysis. However, it is better to be aware that this balance of classes is artificial.
#
# The dataset is already downloaded in the Coursera workspace. In a local computer however, you can download the data by doing:
# +
# downloads sample twitter dataset. uncomment the line below if running on a local machine.
# nltk.download('twitter_samples')
# -
# We can load the text fields of the positive and negative tweets by using the module's `strings()` method like this:
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
# Next, we'll print a report with the number of positive and negative tweets. It is also essential to know the data structure of the datasets.
# +
print('Number of positive tweets: ', len(all_positive_tweets))
print('Number of negative tweets: ', len(all_negative_tweets))
print('\nThe type of all_positive_tweets is: ', type(all_positive_tweets))
print('The type of a tweet entry is: ', type(all_negative_tweets[0]))
# -
# We can see that the data is stored in a list and as you might expect, individual tweets are stored as strings.
#
# You can make a more visually appealing report by using Matplotlib's [pyplot](https://matplotlib.org/tutorials/introductory/pyplot.html) library. Let us see how to create a [pie chart](https://matplotlib.org/3.2.1/gallery/pie_and_polar_charts/pie_features.html#sphx-glr-gallery-pie-and-polar-charts-pie-features-py) to show the same information as above. This simple snippet will serve you in future visualizations of this kind of data.
# +
# Declare a figure with a custom size
fig = plt.figure(figsize=(5, 5))
# labels for the two classes
labels = 'Positives', 'Negative'
# Sizes for each slide
sizes = [len(all_positive_tweets), len(all_negative_tweets)]
# Declare pie chart, where the slices will be ordered and plotted counter-clockwise:
plt.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle.
plt.axis('equal')
# Display the chart
plt.show()
# -
# ## Looking at raw texts
#
# Before anything else, we can print a couple of tweets from the dataset to see how they look. Understanding the data accounts for much of the success or failure of a data science project. We can use this time to observe aspects we'd like to consider when preprocessing our data.
#
# Below, you will print one random positive and one random negative tweet. We have added a color mark at the beginning of the string to further distinguish the two. (Warning: This is taken from a public dataset of real tweets and a very small portion has explicit content.)
# +
# print a random positive tweet in green
print('\033[92m' + all_positive_tweets[random.randint(0, len(all_positive_tweets) - 1)])
# print a random negative tweet in red
print('\033[91m' + all_negative_tweets[random.randint(0, len(all_negative_tweets) - 1)])
# -
# One observation you may have is the presence of [emoticons](https://en.wikipedia.org/wiki/Emoticon) and URLs in many of the tweets. This info will come in handy in the next steps.
# ## Preprocess raw text for Sentiment analysis
# Data preprocessing is one of the critical steps in any machine learning project. It includes cleaning and formatting the data before feeding into a machine learning algorithm. For NLP, the preprocessing steps are comprised of the following tasks:
#
# * Tokenizing the string
# * Lowercasing
# * Removing stop words and punctuation
# * Stemming
#
# The videos explained each of these steps and why they are important. Let's see how to apply them to a given tweet. We will choose just one and see how it is transformed by each preprocessing step.
# Our selected sample. Complex enough to exemplify each step
tweet = all_positive_tweets[2277]
print(tweet)
# Let's import a few more libraries for this purpose.
# download the stopwords from NLTK
nltk.download('stopwords')
# +
import re # library for regular expression operations
import string # for string operations
from nltk.corpus import stopwords # module for stop words that come with NLTK
from nltk.stem import PorterStemmer # module for stemming
from nltk.tokenize import TweetTokenizer # module for tokenizing strings
# -
# ### Remove hyperlinks, Twitter marks and styles
#
# Since we have a Twitter dataset, we'd like to remove some substrings commonly used on the platform like the hashtag, retweet marks, and hyperlinks. We'll use the [re](https://docs.python.org/3/library/re.html) library to perform regular expression operations on our tweet. We'll define our search pattern and use the `sub()` method to remove matches by substituting with an empty character (i.e. `''`)
# +
print('\033[92m' + tweet)
print('\033[94m')
# remove old style retweet text "RT"
tweet2 = re.sub(r'^RT[\s]+', '', tweet)
# remove hyperlinks
tweet2 = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet2)
# remove hashtags
# only removing the hash # sign from the word
tweet2 = re.sub(r'#', '', tweet2)
print(tweet2)
# -
# ### Tokenize the string
#
# To tokenize means to split the strings into individual words without blanks or tabs. In this same step, we will also convert each word in the string to lower case. The [tokenize](https://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual) module from NLTK allows us to do these easily:
# +
print()
print('\033[92m' + tweet2)
print('\033[94m')
# instantiate tokenizer class
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
reduce_len=True)
# tokenize tweets
tweet_tokens = tokenizer.tokenize(tweet2)
print()
print('Tokenized string:')
print(tweet_tokens)
# -
# ### Remove stop words and punctuations
#
# The next step is to remove stop words and punctuation. Stop words are words that don't add significant meaning to the text. You'll see the list provided by NLTK when you run the cells below.
# +
# Import the English stop words list from NLTK
stopwords_english = stopwords.words('english')
print('Stop words\n')
print(stopwords_english)
print('\nPunctuation\n')
print(string.punctuation)
# -
# We can see that the stop words list above contains some words that could be important in some contexts.
# These could be words like _i, not, between, because, won, against_. You might need to customize the stop words list for some applications. For our exercise, we will use the entire list.
#
# For the punctuation, we saw earlier that certain groupings like ':)' and '...' should be retained when dealing with tweets because they are used to express emotions. In other contexts, like medical analysis, these should also be removed.
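# As a sketch of such customization (the word lists here are illustrative stand-ins, not NLTK's actual list), one might keep negations while filtering the rest:

```python
# Illustrative stand-in for a stop word list (NLTK's real list is longer).
stopword_list = ['i', 'me', 'the', 'a', 'between', 'no', 'not', 'nor']

# Keep negations: they often flip the sentiment of a tweet.
negations = {'no', 'not', 'nor'}
custom_stopwords = [w for w in stopword_list if w not in negations]

tokens = ['i', 'did', 'not', 'like', 'the', 'movie']
kept = [w for w in tokens if w not in custom_stopwords]
print(kept)  # the negation survives, the filler words do not
```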
#
# Time to clean up our tokenized tweet!
# +
print()
print('\033[92m')
print(tweet_tokens)
print('\033[94m')
tweets_clean = []
for word in tweet_tokens: # Go through every word in your tokens list
if (word not in stopwords_english and # remove stopwords
word not in string.punctuation): # remove punctuation
tweets_clean.append(word)
print('removed stop words and punctuation:')
print(tweets_clean)
# -
# Please note that the words **happy** and **sunny** in this list are correctly spelled.
# ### Stemming
#
# Stemming is the process of converting a word to its most general form, or stem. This helps in reducing the size of our vocabulary.
#
# Consider the words:
# * **learn**
# * **learn**ing
# * **learn**ed
# * **learn**t
#
# All these words are stemmed from their common root **learn**. However, in some cases, the stemming process produces words that are not correct spellings of the root word. For example, **happi** and **sunni**. That's because it chooses the most common stem for related words. For example, we can look at the set of words that comprises the different forms of happy:
#
# * **happ**y
# * **happi**ness
# * **happi**er
#
# We can see that the prefix **happi** is more commonly used. We cannot choose **happ** because it is the stem of unrelated words like **happen**.
#
# NLTK has different modules for stemming and we will be using the [PorterStemmer](https://www.nltk.org/api/nltk.stem.html#module-nltk.stem.porter) module which uses the [Porter Stemming Algorithm](https://tartarus.org/martin/PorterStemmer/). Let's see how we can use it in the cell below.
# +
print()
print('\033[92m')
print(tweets_clean)
print('\033[94m')
# Instantiate stemming class
stemmer = PorterStemmer()
# Create an empty list to store the stems
tweets_stem = []
for word in tweets_clean:
stem_word = stemmer.stem(word) # stemming word
tweets_stem.append(stem_word) # append to the list
print('stemmed words:')
print(tweets_stem)
# -
# That's it! Now we have a set of words we can feed into the next stage of our machine learning project.
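# The whole pipeline can be condensed into a single helper. The sketch below is a simplified, NLTK-free approximation (plain `str.split()` instead of `TweetTokenizer`, no stemming, and a toy stop word list), just to show the control flow; the example tweet and URL are made up:

```python
import re
import string

def simple_process_tweet(tweet, stop_words=('the', 'a', 'is', 'for')):
    """Simplified preprocessing sketch; the real pipeline uses NLTK."""
    tweet = re.sub(r'^RT[\s]+', '', tweet)        # old-style retweet mark
    tweet = re.sub(r'https?://\S+', '', tweet)    # hyperlinks
    tweet = re.sub(r'#', '', tweet)               # hash sign only
    tokens = tweet.lower().split()                # crude tokenization
    return [t for t in tokens
            if t not in stop_words and t not in string.punctuation]

print(simple_process_tweet('RT the beach is sunny :) http://t.co/xyz #beach'))
```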
# ## process_tweet()
#
# As shown above, preprocessing consists of multiple steps before you arrive at the final list of words. We will not ask you to replicate these, however. In this week's assignment, you will use the function `process_tweet(tweet)` available in _utils.py_. We encourage you to open the file; you'll see that its implementation is very similar to the steps above.
#
# To obtain the same result as in the previous code cells, you will only need to call the function `process_tweet()`. Let's do that in the next cell.
# +
from utils import process_tweet # Import the process_tweet function
# choose the same tweet
tweet = all_positive_tweets[2277]
print()
print('\033[92m')
print(tweet)
print('\033[94m')
# call the imported function
tweets_stem = process_tweet(tweet)  # Preprocess a given tweet
print('preprocessed tweet:')
print(tweets_stem) # Print the result
# -
# That's it for this lab! You now know what is going on when you call the preprocessing helper function in this week's assignment. Hopefully, this exercise has also given you some insights on how to tweak this for other types of text datasets.
| week 1/utf-8''NLP_C1_W1_lecture_nb_01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## <NAME> (empirical distribution function)
#
# Let $F_0(x)$ be continuous on $\mathbb{R}$.
#
# The Kolmogorov statistic: $D_{n}(X_{[n]}) = \sup_{x\in \mathbb{R}}|F^*_n(x) - F_0(x)|$
#
# If $H_0$ holds, then $D_n(X_{[n]}) \xrightarrow[n \to \infty]{\text{a.s.}} 0$
#
# If $H_1$ holds, then $D_n(X_{[n]}) \xrightarrow[n \to \infty]{\text{a.s.}} \sup_{x\in \mathbb{R}}|G(x) - F_0(x)| > 0$
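# A minimal numpy sketch of computing $D_n$ (testing a Uniform(0,1) sample against its own CDF is an illustrative assumption). The sup over $x$ is attained at the sample points, so it suffices to compare $F_0$ with the ECDF on both sides of each jump:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(size=500))  # sample drawn under H_0: F_0 = Uniform(0,1)
n = len(x)

f0 = x  # F_0(x) = x for the Uniform(0,1) CDF
# The ECDF jumps at the sample points; check |F*_n - F_0| just after
# and just before every jump.
d_plus = np.max(np.arange(1, n + 1) / n - f0)
d_minus = np.max(f0 - np.arange(n) / n)
d_n = max(d_plus, d_minus)
print(d_n)  # small, since H_0 is true and D_n -> 0 a.s.
```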
| prac10/ostatok/kolmogorov.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tf_env]
# language: python
# name: conda-env-tf_env-py
# ---
# +
"""
This notebook allows inspecting the label frequency, as well as the
amount of utterances and tokens for a given dataset and its splits.
"""
from utils import statistics_helper as helper
# -
helper.plot_all_statistics('../datasets/clean/nps')
helper.print_all_statistics('../datasets/clean/nps')
helper.plot_all_statistics('../datasets/clean/maptask')
helper.print_all_statistics('../datasets/clean/maptask')
helper.plot_all_statistics('../datasets/clean/icsi', 7)
helper.print_all_statistics('../datasets/clean/icsi')
helper.plot_all_statistics('../datasets/clean/swda', plot_width=20, plot_height=7)
helper.print_all_statistics('../datasets/clean/swda')
| experiments/dataset_statistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
from nbdt381.core import *
# # nbdt381
#
# > Test case for nbdev issue #381
# ## Logbook nbdt_issue381
#
# This test case was assembled on a MacOS system (OS version 10.14.6).
#
# To reproduce my findings, you might want to follow the steps below:
# pyenv virtualenv 3.7.9 iss381
#
# git clone https://github.com/urspb/nbdt_issue381.git
# cd nbdt_issue381
# pyenv local iss381
# pip install --upgrade pip
# pip install nbdev
# pip install matplotlib
#
# # lookup version of nbdev
# pip list --local | grep nbdev
#
# # ToDo: edit settings.ini
# nbdev_install_git_hooks
# ## Test #381
#
# For the test of `nbdev_nb2md` please look into notebook `00_core`. There are also some explanations and findings.
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (herschelhelp_internal)
# language: python
# name: helpint
# ---
# # GAMA-12 master catalogue
# ## Preparation of KIDS/VST data
#
# Kilo Degree Survey/VLT Survey Telescope catalogue: the catalogue comes from `dmu0_KIDS`.
#
# In the catalogue, we keep:
#
# - The identifier (it's unique in the catalogue);
# - The position;
# - The stellarity;
# - The aperture corrected aperture magnitude in each band (10 pixels = 2")
# - The Petrosian magnitude to be used as total magnitude (no “auto” magnitude is provided).
#
# We take 2014 as the observation year from a typical image header.
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
# +
# %matplotlib inline
# #%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux, flux_to_mag
# +
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "kids_ra"
DEC_COL = "kids_dec"
# -
# ## I - Column selection
# +
imported_columns = OrderedDict({
'ID': "kids_id",
'RAJ2000': "kids_ra",
'DECJ2000': "kids_dec",
'CLASS_STAR': "kids_stellarity",
'MAG_AUTO_U': "m_kids_u",
'MAGERR_AUTO_U': "merr_kids_u",
'MAG_AUTO_G': "m_kids_g",
'MAGERR_AUTO_G': "merr_kids_g",
'MAG_AUTO_R': "m_kids_r",
'MAGERR_AUTO_R': "merr_kids_r",
'MAG_AUTO_I': "m_kids_i",
'MAGERR_AUTO_I': "merr_kids_i",
'FLUX_APERCOR_10_U': "f_ap_kids_u",
'FLUXERR_APERCOR_10_U': "ferr_ap_kids_u",
'FLUX_APERCOR_10_G': "f_ap_kids_g",
'FLUXERR_APERCOR_10_G': "ferr_ap_kids_g",
'FLUX_APERCOR_10_R': "f_ap_kids_r",
'FLUXERR_APERCOR_10_R': "ferr_ap_kids_r",
'FLUX_APERCOR_10_I': "f_ap_kids_i",
'FLUXERR_APERCOR_10_I': "ferr_ap_kids_i"
})
catalogue = Table.read("../../dmu0/dmu0_KIDS/data/KIDS-DR3_GAMA-12.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2014 #A range of observation dates from 2011 to 2015.
# Clean table metadata
catalogue.meta = None
# -
# Adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('m_'):
errcol = "merr{}".format(col[1:])
flux, error = mag_to_flux(np.array(catalogue[col]), np.array(catalogue[errcol]))
# Fluxes are added in µJy
catalogue.add_column(Column(flux * 1.e6, name="f{}".format(col[1:])))
catalogue.add_column(Column(error * 1.e6, name="f{}".format(errcol[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
if col.startswith('f_'):
errcol = "ferr{}".format(col[1:])
#Convert fluxes in maggies to uJy
catalogue[col] *= 3631. * 1.e6
catalogue[col].unit = 'uJy'
catalogue[errcol] *= 3631. * 1.e6
catalogue[errcol].unit = 'uJy'
mag, mag_error = flux_to_mag(np.array(catalogue[col]) * 1.e-6,
np.array(catalogue[errcol]) * 1.e-6)
# Magnitudes are added
catalogue.add_column(Column(mag, name="m{}".format(col[1:])))
catalogue.add_column(Column(mag_error, name="m{}".format(errcol[1:])))
catalogue[:10].show_in_notebook()
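# The conversions above hinge on the AB zero point of 3631 Jy. A standalone sketch of the relation (the helper names here are hypothetical; `herschelhelp_internal`'s `mag_to_flux`/`flux_to_mag` may differ in details such as error propagation):

```python
import numpy as np

def ab_mag_to_flux_ujy(mag):
    """AB magnitude -> flux in uJy: f = 3631e6 * 10**(-0.4 * m)."""
    return 3631e6 * 10 ** (-0.4 * np.asarray(mag, dtype=float))

def flux_ujy_to_ab_mag(flux_ujy):
    """Flux in uJy -> AB magnitude (inverse of the above)."""
    return -2.5 * np.log10(np.asarray(flux_ujy, dtype=float) / 3631e6)

# m = 23.9 corresponds to almost exactly 1 uJy.
print(ab_mag_to_flux_ujy(23.9))
```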
# ## II - Removal of duplicated sources
# We remove duplicated objects from the input catalogues.
# +
SORT_COLS = ['merr_ap_kids_u',
'merr_ap_kids_g',
'merr_ap_kids_r',
'merr_ap_kids_i']
FLAG_NAME = 'kids_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS,flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
# -
# ## III - Astrometry correction
#
# We match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_GAMA-12.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
# +
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
# -
catalogue[RA_COL] += delta_ra.to(u.deg)
catalogue[DEC_COL] += delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
# ## IV - Flagging Gaia objects
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
# +
GAIA_FLAG_NAME = "kids_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
# -
# ## V - Flagging objects near bright stars
# # VI - Saving to disk
catalogue.write("{}/KIDS.fits".format(OUT_DIR), overwrite=True)
| dmu1/dmu1_ml_GAMA-12/1.3_KIDS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Ritz-Galerkin method** - Visualisation of approximate solutions given different sets of approach functions
# <NAME>, 15.06.2020
#
#
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from sympy.parsing.sympy_parser import parse_expr
from sympy.plotting import plot
from sympy import symbols
from sympy import init_printing
init_printing()
from sympy import *
import rg_algorithm as rg
# -
# Enter DEQ here: add right hand side f in between the quotes. You might use exp(x), sin(x), cos(x)
#
#
# For the boundary conditions, np.exp(x) etc.
# If condition i is on u, p_i is False. It's True for a condition on u'.
#
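# Before entering a problem, it may help to see the method on a toy case. The sketch below (an illustrative stand-in, not `rg_algorithm`) solves $-u'' = 2$, $u(0)=u(1)=0$ with the single approach function $v_1 = x(1-x)$; the weak form reduces to one equation $c \int v_1'^2 \,dx = \int f\, v_1 \,dx$:

```python
import numpy as np

# Midpoint-rule quadrature on [0, 1].
n = 4000
h = 1.0 / n
xm = (np.arange(n) + 0.5) * h

a11 = np.sum((1 - 2 * xm) ** 2) * h   # stiffness entry: integral of v1'^2
b1 = np.sum(2 * xm * (1 - xm)) * h    # load entry: integral of f * v1
c1 = b1 / a11

# With this basis the Galerkin solution coincides with the exact
# solution u(x) = x(1 - x), i.e. c1 is (numerically) 1.
print(c1)
```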
#inhomogeneous part
f = parse_expr("74*cos(3*x) + x**2")
#Lu = a2*u'' + a1*u' + a0*u, (a2 = 1)
a2 = 4
a1 = 0
a0 = -1
#boundary conditions: False - on u; True - on u'
x0 = -np.pi
y0 = 10
p0 = False
x1 = np.pi
y1 = 10
p1 = False
# Enter approach functions v_i and sets of them V_i
# p_i are the desired plots.
#
# Labels of the plot are rounded on two digits.
# +
#choice of approach functions
v0 = parse_expr("x**2 - pi**2")
v1 = parse_expr("cos(3*x)+1")
v2 = parse_expr("x")
v3 = parse_expr("x**2-x")
v4 = parse_expr("cos(2*x)-1")
v5 = parse_expr("x-5")
v6 = parse_expr("sin(x)")
v7 = parse_expr("x**3")
v8 = parse_expr("x**2")
v9 = parse_expr("cos(x)-1")
V0 = [v0]
V = [v1]
V1 = [v1,v0]
V2 = [v2]
V3 = [v3]
V4 = [v2,v3,v7]
V5 = [v8]
V6 = [v2,v8,v9]
plt.figure(figsize=(18,5))
#find exact solution
u = Function('u')
x = Symbol('x')
real_sol = rg.exact(u,x,a2*u(x).diff(x,2)+a1*u(x).diff(x,1) + a0*u(x)-f, x0,y0,p0,x1,y1,p1,a2)
pr = plot(real_sol, (x,x0,x1),show = False, line_color = 'green', legend = True)
pr[0].label = "$"+latex(rg.pretty_label(real_sol))+"$ (exact)"
f_1,p_1 = rg.ritz_galerkin(x,a1,a0,x0,y0,p0,x1,y1,p1,f,V2,a2=a2)
#f_2,p_2 = rg.ritz_galerkin(x,a1,a0,x0,y0,p0,x1,y1,p1,f,V,col ='blue',a2=a2)
#f_3,p_3 = rg.ritz_galerkin(x,a1,a0,x0,y0,p0,x1,y1,p1,f,V1,col ='yellow',a2=a2)
pr.append(p_1[0])
#pr.append(p_2[0])
#pr.append(p_3[0])
pr.show() # show plot with everything inside
# -
#Example 3.2.3
#inhomogeneous part
f = parse_expr("5+8*x - 2*x**2")
#Lu = a2*u'' + a1*u' + a0*u, (a2 = 1)
a2 = 1
a1 = 2
a0 = -1
#boundary conditions: False - on u; True - on u'
x0 = 0
y0 = -1
p0 = False
x1 = 1
y1 = 1
p1 = False
#Exercise 3.2.4
#inhomogeneous part
f = parse_expr("2*exp(x)")
#Lu = a2*u'' + a1*u' + a0*u, (a2 = 1)
a2 = 1
a1 = 0
a0 = 1
#boundary conditions: False - on u; True - on u'
x0 = 0
y0 = 1
p0 = False
x1 = 2
y1 = np.exp(2)
p1 = True
| Ritz-Galerkin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import shutil

import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
O7_folder = './data/O7'
card_folder = './data/cards'
os.makedirs(O7_folder, exist_ok=True)
os.makedirs(card_folder, exist_ok=True)
# -
for f1 in os.listdir('data/Fondazione Luigi Einaudi/'):
for f2 in os.listdir(os.path.join('data/Fondazione Luigi Einaudi/',f1)):
for fn in os.listdir(os.path.join('data/Fondazione Luigi Einaudi/',f1,f2)):
path = os.path.join('data/Fondazione Luigi Einaudi/',f1,f2,fn)
if 'O7' in fn:
shutil.copy(path, os.path.join(O7_folder, fn))
else:
shutil.copy(path, os.path.join(card_folder, fn))
# +
import rawpy
import imageio
import cv2
path = './data/cards/36388_1a.arw'
with rawpy.imread(path) as raw:
rgb = raw.postprocess()
# +
rgb1 = rgb[:,:,:1].copy()
rgb2 = rgb[:,:,1:2].copy()
rgb3 = rgb[:,:,2:3].copy()
rgb_f = np.concatenate([rgb2, rgb1, rgb3], axis=2)
rgb_f.shape
# -
plt.imshow(rgb_f)
# plt.imshow(rgb)
img = plt.imread('./data/Movavi Library/36388_1b.jpg')
plt.imshow(img)
path = './data/cards/36388_1a.arw'
A = np.fromfile(path, dtype='uint8', sep="")
A = A.reshape(4024, -1, 4)
plt.imshow(A)
# +
import numpy as np
from PIL import Image
Image.open(path)
# -
# !pip install rawkit
24967936
4024 * 6024
# !pip install rawpy
plt.imshow(scene_image)
| .ipynb_checkpoints/Exploration-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# =====================================================================
# Spectro-temporal receptive field (STRF) estimation on continuous data
# =====================================================================
#
# This demonstrates how an encoding model can be fit with multiple continuous
# inputs. In this case, we simulate the model behind a spectro-temporal receptive
# field (or STRF). First, we create a linear filter that maps patterns in
# spectro-temporal space onto an output, representing neural activity. We fit
# a receptive field model that attempts to recover the original linear filter
# that was used to create this data.
#
# References
# ----------
# Estimation of spectro-temporal and spatio-temporal receptive fields using
# modeling with continuous inputs is described in:
#
# .. [1] <NAME>. et al. Estimating spatio-temporal receptive
# fields of auditory and visual neurons from their responses to
# natural stimuli. Network 12, 289-316 (2001).
#
# .. [2] <NAME>. & <NAME>. Methods for first-order kernel
# estimation: simple-cell receptive fields from responses to
# natural scenes. Network 14, 553-77 (2003).
#
# .. [3] <NAME>., <NAME>., <NAME>. & <NAME>. (2016).
# The Multivariate Temporal Response Function (mTRF) Toolbox:
# A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli.
# Frontiers in Human Neuroscience 10, 604.
# doi:10.3389/fnhum.2016.00604
#
# .. [4] Holdgraf, <NAME>. et al. Rapid tuning shifts in human auditory cortex
# enhance speech intelligibility. Nature Communications, 7, 13654 (2016).
# doi:10.1038/ncomms13654
#
# +
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.decoding import ReceptiveField, TimeDelayingRidge
from scipy.stats import multivariate_normal
from scipy.io import loadmat
from sklearn.preprocessing import scale
rng = np.random.RandomState(1337) # To make this example reproducible
# -
# Load audio data
# ---------------
#
# We'll read in the audio data from [3]_ in order to simulate a response.
#
# In addition, we'll downsample the data along the time dimension in order to
# speed up computation. Note that, depending on the input, this may not be
# desirable: an input stimulus that varies more quickly than half the target
# sampling rate will be aliased by the downsampling.
#
#
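# A quick numeric illustration of that warning (the frequencies are chosen just for this example): after naive decimation by 2, a tone above the new Nyquist frequency becomes indistinguishable from a lower-frequency tone.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 1, 1 / fs)
x30 = np.sin(2 * np.pi * 30 * t)   # 30 Hz tone
x20 = np.sin(2 * np.pi * 20 * t)   # 20 Hz tone

# Drop every other sample (no anti-alias filter): new Nyquist is 25 Hz.
x30_dec, x20_dec = x30[::2], x20[::2]

# 30 Hz folds to 50 - 30 = 20 Hz (up to sign): the tones are now aliases.
print(np.allclose(x30_dec, -x20_dec))
```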
# Read in audio that's been recorded in epochs.
path_audio = mne.datasets.mtrf.data_path()
data = loadmat(path_audio + '/speech_data.mat')
audio = data['spectrogram'].T
sfreq = float(data['Fs'][0, 0])
n_decim = 2
audio = mne.filter.resample(audio, down=n_decim, npad='auto')
sfreq /= n_decim
# Create a receptive field
# ------------------------
#
# We'll simulate a linear receptive field for a theoretical neural signal. This
# defines how the signal will respond to power in this receptive field space.
#
#
# +
n_freqs = 20
tmin, tmax = -0.1, 0.4
# To simulate the data we'll create explicit delays here
delays_samp = np.arange(np.round(tmin * sfreq),
np.round(tmax * sfreq) + 1).astype(int)
delays_sec = delays_samp / sfreq
freqs = np.linspace(50, 5000, n_freqs)
grid = np.array(np.meshgrid(delays_sec, freqs))
# We need data to be shaped as n_epochs, n_features, n_times, so swap axes here
grid = grid.swapaxes(0, -1).swapaxes(0, 1)
# Simulate a temporal receptive field with a Gabor filter
means_high = [.1, 500]
means_low = [.2, 2500]
cov = [[.001, 0], [0, 500000]]
gauss_high = multivariate_normal.pdf(grid, means_high, cov)
gauss_low = -1 * multivariate_normal.pdf(grid, means_low, cov)
weights = gauss_high + gauss_low # Combine to create the "true" STRF
kwargs = dict(vmax=np.abs(weights).max(), vmin=-np.abs(weights).max(),
cmap='RdBu_r', shading='gouraud')
fig, ax = plt.subplots()
ax.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax.set(title='Simulated STRF', xlabel='Time Lags (s)', ylabel='Frequency (Hz)')
plt.setp(ax.get_xticklabels(), rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
# -
# Simulate a neural response
# --------------------------
#
# Using this receptive field, we'll create an artificial neural response to
# a stimulus.
#
# To do this, we'll create a time-delayed version of the receptive field, and
# then calculate the dot product between this and the stimulus. Note that this
# is effectively doing a convolution between the stimulus and the receptive
# field. See `here <https://en.wikipedia.org/wiki/Convolution>`_ for more
# information.
#
#
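# As a tiny sanity check of that claim (a standalone sketch with made-up weights, not the STRF above), applying weights to time-delayed copies of a 1-D stimulus reproduces `np.convolve`:

```python
import numpy as np

rng = np.random.RandomState(0)
stim = rng.randn(50)
w = np.array([0.5, -1.0, 0.25])   # one weight per delay of 0, 1, 2 samples

# Delayed design matrix: row d holds the stimulus shifted right by d samples.
X_del = np.zeros((len(w), len(stim)))
for d in range(len(w)):
    X_del[d, d:] = stim[:len(stim) - d]

y_dot = w @ X_del                          # weights dotted with delayed stimulus
y_conv = np.convolve(stim, w)[:len(stim)]  # direct convolution
print(np.allclose(y_dot, y_conv))
```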
# +
# Reshape audio to split into epochs, then make epochs the first dimension.
n_epochs, n_seconds = 16, 5
audio = audio[:, :int(n_seconds * sfreq * n_epochs)]
X = audio.reshape([n_freqs, n_epochs, -1]).swapaxes(0, 1)
n_times = X.shape[-1]
# Delay the spectrogram according to delays so it can be combined w/ the STRF
# Lags will now be in axis 1, then we reshape to vectorize
delays = np.arange(np.round(tmin * sfreq),
np.round(tmax * sfreq) + 1).astype(int)
# Iterate through indices and append
X_del = np.zeros((len(delays),) + X.shape)
for ii, ix_delay in enumerate(delays):
# These arrays will take/put particular indices in the data
take = [slice(None)] * X.ndim
put = [slice(None)] * X.ndim
if ix_delay > 0:
take[-1] = slice(None, -ix_delay)
put[-1] = slice(ix_delay, None)
elif ix_delay < 0:
take[-1] = slice(-ix_delay, None)
put[-1] = slice(None, ix_delay)
X_del[ii][tuple(put)] = X[tuple(take)]
# Now set the delayed axis to the 2nd dimension
X_del = np.rollaxis(X_del, 0, 3)
X_del = X_del.reshape([n_epochs, -1, n_times])
n_features = X_del.shape[1]
weights_sim = weights.ravel()
# Simulate a neural response to the sound, given this STRF
y = np.zeros((n_epochs, n_times))
for ii, iep in enumerate(X_del):
# Simulate this epoch and add random noise
noise_amp = .002
y[ii] = np.dot(weights_sim, iep) + noise_amp * rng.randn(n_times)
# Plot the first 2 trials of audio and the simulated electrode activity
X_plt = scale(np.hstack(X[:2]).T).T
y_plt = scale(np.hstack(y[:2]))
time = np.arange(X_plt.shape[-1]) / sfreq
_, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)
ax1.pcolormesh(time, freqs, X_plt, vmin=0, vmax=4, cmap='Reds')
ax1.set_title('Input auditory features')
ax1.set(ylim=[freqs.min(), freqs.max()], ylabel='Frequency (Hz)')
ax2.plot(time, y_plt)
ax2.set(xlim=[time.min(), time.max()], title='Simulated response',
xlabel='Time (s)', ylabel='Activity (a.u.)')
mne.viz.tight_layout()
# -
# Fit a model to recover this receptive field
# -------------------------------------------
#
# Finally, we'll use the :class:`mne.decoding.ReceptiveField` class to recover
# the linear receptive field of this signal. Note that properties of the
# receptive field (e.g. smoothness) will depend on the autocorrelation in the
# inputs and outputs.
#
#
# +
# Create training and testing data
train, test = np.arange(n_epochs - 1), n_epochs - 1
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
X_train, X_test, y_train, y_test = [np.rollaxis(ii, -1, 0) for ii in
(X_train, X_test, y_train, y_test)]
# Model the simulated data as a function of the spectrogram input
alphas = np.logspace(-3, 3, 7)
scores = np.zeros_like(alphas)
models = []
for ii, alpha in enumerate(alphas):
rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=alpha)
rf.fit(X_train, y_train)
# Now make predictions about the model output, given input stimuli.
scores[ii] = rf.score(X_test, y_test)
models.append(rf)
times = rf.delays_ / float(rf.sfreq)
# Choose the model that performed best on the held out data
ix_best_alpha = np.argmax(scores)
best_mod = models[ix_best_alpha]
coefs = best_mod.coef_[0]
best_pred = best_mod.predict(X_test)[:, 0]
# Plot the original STRF, and the one that we recovered with modeling.
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 3), sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, coefs, **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Reconstructed STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
# Plot the actual response and the predicted response on a held out stimulus
time_pred = np.arange(best_pred.shape[0]) / sfreq
fig, ax = plt.subplots()
ax.plot(time_pred, y_test, color='k', alpha=.2, lw=4)
ax.plot(time_pred, best_pred, color='r', lw=1)
ax.set(title='Original and predicted activity', xlabel='Time (s)')
ax.legend(['Original', 'Predicted'])
plt.autoscale(tight=True)
mne.viz.tight_layout()
# -
# Visualize the effects of regularization
# ---------------------------------------
#
# Above we fit a :class:`mne.decoding.ReceptiveField` model for one of many
# values for the ridge regularization parameter. Here we will plot the model
# score as well as the model coefficients for each value, in order to
# visualize how coefficients change with different levels of regularization.
# These issues as well as the STRF pipeline are described in detail
# in [1]_, [2]_, and [4]_.
#
#
# +
# Plot model score for each ridge parameter
fig = plt.figure(figsize=(10, 4))
ax = plt.subplot2grid([2, len(alphas)], [1, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores, marker='o', color='r')
ax.annotate('Best parameter', (ix_best_alpha, scores[ix_best_alpha]),
(ix_best_alpha, scores[ix_best_alpha] - .1),
arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Ridge regularization value", ylabel="Score ($R^2$)",
xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()
# Plot the STRF of each ridge parameter
for ii, (rf, i_alpha) in enumerate(zip(models, alphas)):
ax = plt.subplot2grid([2, len(alphas)], [0, ii], 1, 1)
ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
plt.xticks([], [])
plt.yticks([], [])
plt.autoscale(tight=True)
fig.suptitle('Model coefficients / scores for many ridge parameters', y=1)
mne.viz.tight_layout()
# -
# Using different regularization types
# ------------------------------------
# In addition to the standard ridge regularization, the
# :class:`mne.decoding.TimeDelayingRidge` class also exposes
# `Laplacian <https://en.wikipedia.org/wiki/Laplacian_matrix>`_ regularization
# term as:
#
# \begin{align}\left[\begin{matrix}
# 1 & -1 & & & & \\
# -1 & 2 & -1 & & & \\
# & -1 & 2 & -1 & & \\
# & & \ddots & \ddots & \ddots & \\
# & & & -1 & 2 & -1 \\
# & & & & -1 & 1\end{matrix}\right]\end{align}
#
# This imposes a smoothness constraint of nearby time samples and/or features.
# Quoting [3]_:
#
# Tikhonov [identity] regularization (Equation 5) reduces overfitting by
# smoothing the TRF estimate in a way that is insensitive to
# the amplitude of the signal of interest. However, the Laplacian
# approach (Equation 6) reduces off-sample error whilst preserving
# signal amplitude (Lalor et al., 2006). As a result, this approach
# usually leads to an improved estimate of the system’s response (as
# indexed by MSE) compared to Tikhonov regularization.
#
#
#
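# The displayed matrix can be built in a couple of lines (a sketch for inspection only; `TimeDelayingRidge` constructs its regularizer internally):

```python
import numpy as np

def laplacian_matrix(n):
    """1-D Laplacian with the boundary entries shown in the display above."""
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1
    return L

print(laplacian_matrix(4))
```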
# +
scores_lap = np.zeros_like(alphas)
models_lap = []
for ii, alpha in enumerate(alphas):
estimator = TimeDelayingRidge(tmin, tmax, sfreq, reg_type='laplacian',
alpha=alpha)
rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=estimator)
rf.fit(X_train, y_train)
# Now make predictions about the model output, given input stimuli.
scores_lap[ii] = rf.score(X_test, y_test)
models_lap.append(rf)
ix_best_alpha_lap = np.argmax(scores_lap)
# -
# Compare model performance
# -------------------------
# Below we visualize the model performance of each regularization method
# (ridge vs. Laplacian) for different levels of alpha. As you can see, the
# Laplacian method performs better in general, because it imposes a smoothness
# constraint along the time and feature dimensions of the coefficients.
# This matches the "true" receptive field structure and results in a better
# model fit.
#
#
# +
fig = plt.figure(figsize=(10, 6))
ax = plt.subplot2grid([3, len(alphas)], [2, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores_lap, marker='o', color='r')
ax.plot(np.arange(len(alphas)), scores, marker='o', color='0.5', ls=':')
ax.annotate('Best Laplacian', (ix_best_alpha_lap,
scores_lap[ix_best_alpha_lap]),
(ix_best_alpha_lap, scores_lap[ix_best_alpha_lap] - .1),
arrowprops={'arrowstyle': '->'})
ax.annotate('Best Ridge', (ix_best_alpha, scores[ix_best_alpha]),
(ix_best_alpha, scores[ix_best_alpha] - .1),
arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Laplacian regularization value", ylabel="Score ($R^2$)",
xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()
# Plot the STRFs for each regularization value
xlim = times[[0, -1]]
for ii, (rf_lap, rf, i_alpha) in enumerate(zip(models_lap, models, alphas)):
ax = plt.subplot2grid([3, len(alphas)], [0, ii], 1, 1)
ax.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax.set(xticks=[], yticks=[], xlim=xlim)
if ii == 0:
ax.set(ylabel='Laplacian')
ax = plt.subplot2grid([3, len(alphas)], [1, ii], 1, 1)
ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax.set(xticks=[], yticks=[], xlim=xlim)
if ii == 0:
ax.set(ylabel='Ridge')
fig.suptitle('Model coefficients / scores for Laplacian regularization', y=1)
mne.viz.tight_layout()
# -
# Plot the original STRF, and the one that we recovered with modeling.
#
#
rf = models[ix_best_alpha]
rf_lap = models_lap[ix_best_alpha_lap]
_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3),
sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax3.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Ridge STRF')
ax3.set_title('Best Laplacian STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2, ax3]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
| dev/_downloads/12e915738bb5b40ffcb1157f1c2dee72/plot_receptive_field.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Summed Likelihood Analysis with Python
#
# This sample analysis shows a way of performing joint likelihood on two data selections using the same XML model. This is useful if you want to do the following:
#
# * Coanalysis of Front and Back selections (not using the combined IRF)
# * Coanalysis of separate time intervals
# * Coanalysis of separate energy ranges
# * Pass 8 PSF type analysis
# * Pass 8 EDISP type analysis
#
# This tutorial also assumes that you've gone through the standard [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) thread using the combined front + back events, to which we will compare.
# # Get the data
#
# For this thread the original data were extracted from the [LAT data server](https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi) with the following selections (these selections are similar to those in the paper):
#
# ```
# Search Center (RA,Dec) = (193.98,-5.82)
# Radius = 15 degrees
# Start Time (MET) = 239557417 seconds (2008-08-04T15:43:37)
# Stop Time (MET) = 302572802 seconds (2010-08-04T00:00:00)
# Minimum Energy = 100 MeV
# Maximum Energy = 500000 MeV
# ```
#
# For more information on how to download LAT data please see the [Extract LAT Data](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extract_latdata.html) tutorial.
#
# These are the event files. Run the code cell below to retrieve them:
# ```
# L181126210218F4F0ED2738_PH00.fits (5.4 MB)
# L181126210218F4F0ED2738_PH01.fits (10.8 MB)
# L181126210218F4F0ED2738_PH02.fits (6.9 MB)
# L181126210218F4F0ED2738_PH03.fits (9.8 MB)
# L181126210218F4F0ED2738_PH04.fits (7.8 MB)
# L181126210218F4F0ED2738_PH05.fits (6.6 MB)
# L181126210218F4F0ED2738_PH06.fits (4.8 MB)
# L181126210218F4F0ED2738_SC00.fits (256 MB spacecraft file)
# ```
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH00.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH01.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH02.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH03.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH04.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH05.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH06.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_SC00.fits
# !mkdir data
# !mv *.fits ./data
# You'll first need to make a file list with the names of your input event files:
# !ls ./data/*_PH*.fits > ./data/binned_events.txt
# !cat ./data/binned_events.txt
# In the following analysis we've assumed that you've named your list of data files `binned_events.txt`.
# # Perform Event Selections
#
# You could follow the unbinned likelihood tutorial to perform your event selections using **gtselect**, **gtmktime**, etc. directly from the command line, and then use pyLikelihood later.
#
# But we're going to go ahead and use python. The `gt_apps` module provides methods to call these tools from within python. This'll get us used to using python.
#
# So, let's jump into python:
import gt_apps as my_apps
# We need to run **gtselect** (called `filter` in python) twice. Once, we select only the front events and the other time we select only back events. You do this with `evtype=1` (front) and `evtype=2` (back).
my_apps.filter['evclass'] = 128
my_apps.filter['evtype'] = 1
my_apps.filter['ra'] = 193.98
my_apps.filter['dec'] = -5.82
my_apps.filter['rad'] = 15
my_apps.filter['emin'] = 100
my_apps.filter['emax'] = 500000
my_apps.filter['zmax'] = 90
my_apps.filter['tmin'] = 239557417
my_apps.filter['tmax'] = 302572802
my_apps.filter['infile'] = '@./data/binned_events.txt'
my_apps.filter['outfile'] = './data/3C279_front_filtered.fits'
# Once this is done, we can run **gtselect**:
my_apps.filter.run()
# Now, we select the back events and run it again:
my_apps.filter['evtype'] = 2
my_apps.filter['outfile'] = './data/3C279_back_filtered.fits'
my_apps.filter.run()
# Now, we need to find the GTIs for each data set (front and back). This is accessed within python via the `maketime` object:
# Front
my_apps.maketime['scfile'] = './data/L181126210218F4F0ED2738_SC00.fits'
my_apps.maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
my_apps.maketime['roicut'] = 'no'
my_apps.maketime['evfile'] = './data/3C279_front_filtered.fits'
my_apps.maketime['outfile'] = './data/3C279_front_filtered_gti.fits'
my_apps.maketime.run()
# Similar for the back:
# Back
my_apps.maketime['evfile'] = './data/3C279_back_filtered.fits'
my_apps.maketime['outfile'] = './data/3C279_back_filtered_gti.fits'
my_apps.maketime.run()
# # Livetime and Counts Cubes
#
# ### Livetime Cube
#
# We can now compute the livetime cube. We only need to do this once since in this case we made the exact same time cuts and used the same GTI filter on front and back datasets.
my_apps.expCube['evfile'] = './data/3C279_front_filtered_gti.fits'
my_apps.expCube['scfile'] = './data/L181126210218F4F0ED2738_SC00.fits'
my_apps.expCube['outfile'] = './data/3C279_front_ltcube.fits'
my_apps.expCube['zmax'] = 90
my_apps.expCube['dcostheta'] = 0.025
my_apps.expCube['binsz'] = 1
my_apps.expCube.run()
# ### Counts Cube
#
# The counts cube is the counts from our data file binned in space and energy. All of the steps above use a circular ROI (or a cone, really).
#
# Once you switch to binned analysis, you start doing things in squares. Your counts cube can only be as big as the biggest square that can fit in the circular ROI you already selected.
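# A quick back-of-envelope check (ours, not an output of the Fermitools) that
# the 100 x 100 pixel, 0.2 deg/pixel counts cube below fits inside the
# 15-degree circular ROI selected earlier:

```python
import math

roi_radius = 15.0                      # degrees, from the data selection above
max_side = roi_radius * math.sqrt(2)   # side of the largest inscribed square, ~21.2 deg
cube_side = 100 * 0.2                  # nxpix * binsz = 20 degrees
print(cube_side, '<=', round(max_side, 1))
```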
#
# We start with front events:
my_apps.evtbin['evfile'] = './data/3C279_front_filtered_gti.fits'
my_apps.evtbin['outfile'] = './data/3C279_front_ccube.fits'
my_apps.evtbin['algorithm'] = 'CCUBE'
my_apps.evtbin['nxpix'] = 100
my_apps.evtbin['nypix'] = 100
my_apps.evtbin['binsz'] = 0.2
my_apps.evtbin['coordsys'] = 'CEL'
my_apps.evtbin['xref'] = 193.98
my_apps.evtbin['yref'] = -5.82
my_apps.evtbin['axisrot'] = 0
my_apps.evtbin['proj'] = 'AIT'
my_apps.evtbin['ebinalg'] = 'LOG'
my_apps.evtbin['emin'] = 100
my_apps.evtbin['emax'] = 500000
my_apps.evtbin['enumbins'] = 37
my_apps.evtbin.run()
# And then for the back events:
my_apps.evtbin['evfile'] = './data/3C279_back_filtered_gti.fits'
my_apps.evtbin['outfile'] = './data/3C279_back_ccube.fits'
my_apps.evtbin.run()
# # Exposure Maps
#
# The binned exposure map is an exposure map binned in space and energy.
#
# We first need to import the python version of `gtexpcube2`, which doesn't have a gtapp version by default. This is easy to do (you can import any of the command line tools into python this way). Then, you can check out the parameters with the `pars` function.
from GtApp import GtApp
expCube2= GtApp('gtexpcube2','Likelihood')
expCube2.pars()
# Here, we generate exposure maps for the entire sky.
expCube2['infile'] = './data/3C279_front_ltcube.fits'
expCube2['cmap'] = 'none'
expCube2['outfile'] = './data/3C279_front_BinnedExpMap.fits'
expCube2['irfs'] = 'P8R3_SOURCE_V2'
expCube2['evtype'] = '1'
expCube2['nxpix'] = 1800
expCube2['nypix'] = 900
expCube2['binsz'] = 0.2
expCube2['coordsys'] = 'CEL'
expCube2['xref'] = 193.98
expCube2['yref'] = -5.82
expCube2['axisrot'] = 0
expCube2['proj'] = 'AIT'
expCube2['ebinalg'] = 'LOG'
expCube2['emin'] = 100
expCube2['emax'] = 500000
expCube2['enumbins'] = 37
expCube2.run()
expCube2['infile'] = './data/3C279_front_ltcube.fits'
expCube2['outfile'] = './data/3C279_back_BinnedExpMap.fits'
expCube2['evtype'] = '2'
expCube2.run()
# # Compute Source Maps
#
# The source maps step convolves the LAT response with your source model, generating maps for each source in the model for use in the likelihood calculation.
#
# We use the same [XML](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_input_model.xml) file as in the standard [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) analysis.
#
# You should also download the recommended models for a normal point source analysis `gll_iem_v07.fits` and `iso_P8R3_SOURCE_V2_v1.txt`.
#
# These three files can be downloaded by running the code cell below:
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/gll_iem_v07.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/iso_P8R3_SOURCE_V2_v1.txt
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_input_model.xml
# !mv *.xml ./data
# Note that the files `gll_iem_v07.fits` and `iso_P8R3_SOURCE_V2_v1.txt` must be in your current working directory for the next steps to work.
#
# We compute the front events:
my_apps.srcMaps['expcube'] = './data/3C279_front_ltcube.fits'
my_apps.srcMaps['cmap'] = './data/3C279_front_ccube.fits'
my_apps.srcMaps['srcmdl'] = './data/3C279_input_model.xml'
my_apps.srcMaps['bexpmap'] = './data/3C279_front_BinnedExpMap.fits'
my_apps.srcMaps['outfile'] = './data/3C279_front_srcmap.fits'
my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V2'
my_apps.srcMaps['evtype'] = '1'
my_apps.srcMaps.run()
# And similarly, the back events:
my_apps.srcMaps['expcube'] = './data/3C279_front_ltcube.fits'
my_apps.srcMaps['cmap'] = './data/3C279_back_ccube.fits'
my_apps.srcMaps['srcmdl'] = './data/3C279_input_model.xml'
my_apps.srcMaps['bexpmap'] = './data/3C279_back_BinnedExpMap.fits'
my_apps.srcMaps['outfile'] = './data/3C279_back_srcmap.fits'
my_apps.srcMaps['irfs'] = 'P8R3_SOURCE_V2'
my_apps.srcMaps['evtype'] = '2'
my_apps.srcMaps.run()
# # Run the Likelihood Analysis
#
# First, import the BinnedAnalysis and SummedAnalysis libraries. Then, create a likelihood object for both the front and the back datasets. For more details on the pyLikelihood module, check out the [pyLikelihood Usage Notes](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python_usage_notes.html).
# +
import pyLikelihood
from BinnedAnalysis import *
from SummedLikelihood import *
front = BinnedObs(srcMaps='./data/3C279_front_srcmap.fits',binnedExpMap='./data/3C279_front_BinnedExpMap.fits',expCube='./data/3C279_front_ltcube.fits',irfs='CALDB')
likefront = BinnedAnalysis(front,'./data/3C279_input_model.xml',optimizer='NewMinuit')
back = BinnedObs(srcMaps='./data/3C279_back_srcmap.fits',binnedExpMap='./data/3C279_back_BinnedExpMap.fits',expCube='./data/3C279_front_ltcube.fits',irfs='CALDB')
likeback = BinnedAnalysis(back,'./data/3C279_input_model.xml',optimizer='NewMinuit')
# -
# Then, create the summedlikelihood object and add the two likelihood objects, one for the front selection and the second for the back selection.
summed_like = SummedLikelihood()
summed_like.addComponent(likefront)
summed_like.addComponent(likeback)
# Perform the fit and print out the results:
summedobj = pyLike.NewMinuit(summed_like.logLike)
summed_like.fit(verbosity=0,covar=True,optObject=summedobj)
# Print TS for 3C 279 (4FGL J1256.1-0547):
summed_like.Ts('4FGL J1256.1-0547')
# We can now compare to the standard [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) analysis that uses only one data set containing both Front and Back event types that are represented by a single, combined IRF set. You will need to download the files created in that analysis thread or rerun this python tutorial with the combined dataset `(evtype=3)`.
#
# For your convenience, the files can be obtained from the code cell below:
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_srcmaps.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_allsky_expcube.fits
# !wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_ltcube.fits
# !mv *.fits ./data
all = BinnedObs(srcMaps='./data/3C279_binned_srcmaps.fits',binnedExpMap='./data/3C279_binned_allsky_expcube.fits',expCube='./data/3C279_binned_ltcube.fits',irfs='CALDB')
likeall = BinnedAnalysis(all,'./data/3C279_input_model.xml',optimizer='NewMinuit')
# Perform the fit and print out the results:
likeallobj = pyLike.NewMinuit(likeall.logLike)
likeall.fit(verbosity=0,covar=True,optObject=likeallobj)
# Print TS for 3C 279 (4FGL J1256.1-0547):
likeall.Ts('4FGL J1256.1-0547')
# The TS for the combined front + back analysis is 29261.558, a bit lower than the 30191.550 we found for the separate front and back analysis.
#
# The important difference is that in the separated version of the analysis each event type has a dedicated response function set instead of using the averaged Front+Back response. This should increase the sensitivity, and therefore, the TS value.
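# As a rough rule of thumb (an approximation via Wilks' theorem, not an output
# of the tools above), sqrt(TS) approximates the detection significance in
# sigma, so both fits correspond to a very strong detection:

```python
import math

ts_summed, ts_combined = 30191.550, 29261.558
sig_summed = math.sqrt(ts_summed)      # ~174 sigma
sig_combined = math.sqrt(ts_combined)  # ~171 sigma
print(round(sig_summed, 1), round(sig_combined, 1))
```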
| SourceAnalysis/4.SummedPythonLikelihood/4.SummedPythonLikelihood.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scanners
# +
from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=17)
# -
# ## Basic Scanner
#
# To create a scanner, create a `ScannerSubscription` to submit to the `reqScannerData` method. For any scanner to work, at least these three fields must be filled: `instrument` (the what), `locationCode` (the where), and `scanCode` (the ranking).
#
# For example, to find the top ranked US stock percentage gainers:
# +
sub = ScannerSubscription(
instrument='STK',
locationCode='STK.US.MAJOR',
scanCode='TOP_PERC_GAIN')
scanData = ib.reqScannerData(sub)
print(f'{len(scanData)} results, first one:')
print(scanData[0])
# -
# *The displayed error 162 can be ignored*
#
# The scanner returns a list of contract details, without current market data (this can be obtained via separate market data requests).
#
# ## Filtering scanner results, the old way
#
# The `ScannerSubscription` object has additional parameters that can be set to filter the results, such as `abovePrice`, `aboveVolume`, `marketCapBelow` or `spRatingAbove`.
#
# For example, to reuse the previous `sub` and query only for stocks with a price above 200 dollar:
# +
sub.abovePrice = 200
scanData = ib.reqScannerData(sub)
symbols = [sd.contractDetails.contract.symbol for sd in scanData]
print(symbols)
# -
# ## Filtering, the new way
#
# In the new way there is a truly vast number of parameters available to use for filtering.
# These new scanner parameters map directly to the options available through the TWS "Advanced Market Scanner." The parameters
# are dynamically available from a huge XML document that is returned by ``reqScannerParameters``:
# +
xml = ib.reqScannerParameters()
print(len(xml), 'bytes')
# -
# To view the XML in a web browser:
# +
path = 'scanner_parameters.xml'
with open(path, 'w') as f:
f.write(xml)
import webbrowser
webbrowser.open(path)
# +
# parse XML document
import xml.etree.ElementTree as ET
tree = ET.fromstring(xml)
# find all tags that are available for filtering
tags = [elem.text for elem in tree.findall('.//AbstractField/code')]
print(len(tags), 'tags:')
print(tags)
# -
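# The query pattern above is plain ElementTree. On a tiny hand-made fragment
# (the element names mirror the real scanner document, but the content is
# invented) it behaves like this:

```python
import xml.etree.ElementTree as ET

sample = """<ScanParameterResponse>
  <FilterList>
    <RangeFilter><AbstractField><code>priceAbove</code></AbstractField></RangeFilter>
    <RangeFilter><AbstractField><code>priceBelow</code></AbstractField></RangeFilter>
  </FilterList>
</ScanParameterResponse>"""
tree = ET.fromstring(sample)
# './/AbstractField/code' matches the <code> elements at any nesting depth
codes = [e.text for e in tree.findall('.//AbstractField/code')]
print(codes)
```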
# Notice how ``abovePrice`` is now called ``priceAbove``...
#
# Using three of these filter tags, let's perform a query to find all US stocks that went up 20% and have a current price between 5 and 50 dollar, sorted by percentage gain:
# +
sub = ScannerSubscription(
instrument='STK',
locationCode='STK.US.MAJOR',
scanCode='TOP_PERC_GAIN')
tagValues = [
TagValue("changePercAbove", "20"),
TagValue('priceAbove', 5),
TagValue('priceBelow', 50)]
# the tagValues are given as 3rd argument; the 2nd argument must always be an empty list
# (IB has not documented the 2nd argument and it's not clear what it does)
scanData = ib.reqScannerData(sub, [], tagValues)
symbols = [sd.contractDetails.contract.symbol for sd in scanData]
print(symbols)
# -
# Any scanner query that TWS can do can also be done through the API. The `scanCode` parameter maps directly to the "Parameter" window in the TWS "Advanced Market Scanner." We can verify this by printing out the `scanCode` values available:
# +
scanCodes = [e.text for e in tree.findall('.//scanCode')]
print(len(scanCodes), 'scan codes, showing the ones starting with "TOP":')
print([sc for sc in scanCodes if sc.startswith('TOP')])
# -
# Queries are not limited to stocks. To get a list of all supported instruments:
instrumentTypes = set(e.text for e in tree.findall('.//Instrument/type'))
print(instrumentTypes)
# To find all location codes:
locationCodes = [e.text for e in tree.findall('.//locationCode')]
print(locationCodes)
ib.disconnect()
| notebooks/scanners.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# #### Loss (error) function:
# $$L(\widehat{y},y)=-(ylog(\widehat{y})+(1-y)log(1-\widehat{y}))$$
# #### Cost function:
# $$J(w,b)=\frac{1}{m}\sum_{i=1}^{m}L(\widehat{y}^{(i)},y^{(i)})=-\frac{1}{m}\sum_{i=1}^{m}[{y}^{(i)}log(\widehat{y}^{(i)})+(1-{y}^{(i)})log(1-\widehat{y}^{(i)})]$$
# #### Gradient Descent:
# To find $w, b$ that minimize $J(w,b)$, we use gradient descent method:
# $$w:=w-\alpha \cdot \frac{\partial J(w,b)}{\partial w}$$
# $$b:=b-\alpha \cdot \frac{\partial J(w,b)}{\partial b}$$
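# A minimal NumPy sketch of these updates for logistic regression on synthetic,
# linearly separable data (illustrative only; the data, learning rate, and
# iteration count are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic labels

w, b, alpha = np.zeros(2), 0.0, 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(200):
    yhat = sigmoid(X @ w + b)
    dw = X.T @ (yhat - y) / len(y)   # dJ/dw
    db = np.mean(yhat - y)           # dJ/db
    w -= alpha * dw                  # w := w - alpha * dJ/dw
    b -= alpha * db                  # b := b - alpha * dJ/db

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(accuracy)
```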
#
# #### Activation functions:
# Activation functions introduce non-linearity. Here $f(z)$ denotes the activation function; a neural network can use various activation functions.
# 1. Tanh(z):
# $$tanh(z)=\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
# and its derivative is
# $$f'(z)=1-(f(z))^2$$
# Tanh(z) is often used in hidden layers.
#
# 2. Sigmoid(z):
# $$Sigmoid(z)=\frac{1}{1+exp(-z)}$$
# and its derivative is
# $$f'(z)=f(z)(1-f(z))$$
# Never use this except when you are doing binary classification.
#
# 3. ReLU:
# $$ReLU(z)=max(0,z)$$
# $$f'(z) = \left\{ \begin{array}{ll}
# 1 & \textrm{if $z>0$}\\
# 0 & \textrm{if $z<0$}\\
# undefined & \textrm{if $z=0$}
# \end{array} \right .$$
# But in practice z=0 is usually folded into the region z>0, so
# $$f'(z) = \left\{ \begin{array}{ll}
# 1 & \textrm{if $z>0$}\\
# 0 & \textrm{if $z\le0$}\\
# \end{array} \right .$$
# Converges faster than the others and is clipped to zero for negative inputs. Usually the default choice.
#
# 4. Leaky ReLU
# Similar to ReLU, but not exactly equal to zero when z<0:
# $$ReLU(z)=max(0.01z,z)$$
# $$f'(z) = \left\{ \begin{array}{ll}
# 1 & \textrm{if $z>0$}\\
# 0.01 & \textrm{if $z<0$}\\
# undefined & \textrm{if $z=0$}
# \end{array} \right .$$
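# The activation functions above, as a NumPy sketch (derivatives noted in the
# comments):

```python
import numpy as np

def tanh_act(z):
    return np.tanh(z)               # derivative: 1 - tanh(z)**2

def sigmoid(z):
    return 1 / (1 + np.exp(-z))     # derivative: f(z) * (1 - f(z))

def relu(z):
    return np.maximum(0, z)         # derivative: 1 if z > 0 else 0

def leaky_relu(z):
    return np.maximum(0.01 * z, z)  # derivative: 1 if z > 0 else 0.01

z = np.array([-2.0, 0.0, 2.0])
print(relu(z), leaky_relu(z), sigmoid(0.0))
```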
| WangYi_public_Ng/Neural_Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import os
from googletrans import Translator
translator = Translator()
# PANDAS DISPLAY PARAMETERS
# Setting the number of maximum columns and rows to display
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
# pd.set_option('display.width', None)
# pd.set_option('display.max_colwidth', 1)
path_to_cc_data = '../data/raw_data/Consolidated Consumer Feedback Data'
os.listdir(path_to_cc_data)
# +
# cc = pd.DataFrame()
# for file in os.listdir(path_to_cc_data):
# temp = pd.read_excel(os.path.join(path_to_cc_data, file))
# cc = pd.concat([cc, temp])
# cc.to_csv(os.path.join(path_to_cc_data, 'consolidated_cc_14_20.csv'), index=False)
# -
cc = pd.read_csv(os.path.join(path_to_cc_data, 'consolidated_cc_14_20.csv'))
cc.columns
# +
products = [
'E0KK01',
'E0KK02',
'EFAA01',
'EFX401',
'EH9A05',
'EJ0P01',
'EJ0R01',
'EH2110',
'EH2119']
# starting from `None` would raise a TypeError on the first `|`,
# so seed the filter with an all-False Series
product_filter = pd.Series(False, index=cc.index)
for prod in products:
    product_filter = product_filter | cc['Product Code'].str.startswith(prod)
# -
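# Note (a pandas idiom in recent pandas versions, shown here on toy data):
# `Series.str.startswith` also accepts a tuple of prefixes, which can replace a
# prefix-by-prefix loop with a single call:

```python
import pandas as pd

codes = pd.Series(['E0KK0199', 'EFAA0142', 'ZZZZ0001'])
# one call instead of OR-ing one mask per prefix
mask = codes.str.startswith(('E0KK01', 'EFAA01'))
print(mask.tolist())
```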
r1_filter = cc['R1']=='Complaints'
date_filter = cc['Date Month'] >= '2017'
r3_list = [
    'Alleged Adverse Event',
    'Miscellaneous',
    'Packaging']
r3_filter = cc['R3'].isin(r3_list)
filtered_cc = cc[product_filter & r1_filter & date_filter].dropna(subset=['Verbatim'])
filtered_cc.groupby('R4')['R4'].count()
for index, row in filtered_cc.iterrows():
print('\n')
print(row['Case ID'])
print(translator.translate(row['Verbatim']).text)
| ratings_and_reviews_wrangling/old_files/adhoc_reports.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from f_drosophila_infer import *
from f_train import *
from f_data_prep import *
from sklearn import linear_model
# +
data_all = np.loadtxt('../data_complete.txt')
# data_all = data_all - np.median(data_all, axis=0)
all_bin = np.vsplit(data_all, 6)
all_init = np.vstack([all_bin[i] for i in range(5)])
all_diff = np.vstack([all_bin[i+1]-all_bin[i] for i in range(5)])
complete_all = ([int(x) - 1 for x in open('../indices_complete.txt','r').readline().split()])
comp_ind = list(map(int, list((np.array(complete_all)[::6]-3)/6)))
data_comp = np.copy(data_all[:, comp_ind])
comp_bin = np.vsplit(data_comp, 6)
comp_init = np.vstack([comp_bin[i] for i in range(5)])
comp_diff = np.vstack([comp_bin[i+1] - comp_bin[i] for i in range(5)])
all_init, all_diff, comp_init, comp_diff = shuffle(all_init, all_diff, comp_init, comp_diff)
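# On a toy array, the split-and-difference pattern above works like this
# (illustrative only; shapes are invented):

```python
import numpy as np

data = np.arange(12.0).reshape(6, 2)   # 6 stacked "time bins", 2 genes
bins = np.vsplit(data, 6)
init = np.vstack([bins[i] for i in range(5)])                # bins 0..4
diff = np.vstack([bins[i + 1] - bins[i] for i in range(5)])  # change to next bin
print(init.shape, diff[0])
```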
# +
# def ER_cv_quad(X, y, gene_comp=comp_ind, kf=10, power_=0):
# quad = np.copy(X)
# kfold = KFold(n_splits=kf, shuffle=False, random_state=1)
# error_list=[]
# if len(gene_comp) == 0:
# for i in range(X.shape[1]-1):
# for j in range(i+1, X.shape[1]):
# quad = np.hstack((quad, (X[:,i]*X[:,j])[:,None]))
# else:
# for i in range(len(comp_ind)-1):
# for j in range(i+1, len(comp_ind)):
# quad = np.hstack((quad, (X[:,comp_ind[i]]*X[:,comp_ind[j]])[:,None]))
# for (tr, te) in (kfold.split(y)):
# X_tr, quad_tr, y_tr = X[tr], quad[tr], y[tr]
# X_te, quad_te, y_te = X[te], quad[te], y[te]
# if y.shape[1] == len(gene_comp):
# X_init = np.copy(X_te[:, comp_ind])
# else:
# X_init = np.copy(X_te)
# y_actual = X_init + y_te
# y_actual_sum_sq = np.sum(np.abs(y_actual)**2, axis=0)
# w,sigma,bias = infer_drosophila(quad_tr, y_tr, power=power_, l=10)
# noise = sigma*npr.normal(size=(1,w.shape[1]))
# if power_ == 0:
# y_pred = X_init + (bias + quad_te.dot(w) + noise)
# if power_ == 1:
# y_pred = X_init + (np.tanh(bias + quad_te.dot(w)) + noise)
# if power_ == 3:
# y_pred = X_init + (odd_power(bias + quad_te.dot(w), power_) + noise)
# error = (np.sum(np.abs(y_pred - y_actual)**2, axis=0)/y_actual_sum_sq)**(1/2)
# error_list.append(error)
# return [np.mean(error_list, axis=0), np.std(error_list, axis=0)]
# def infer_all_ER(X_all, X_comp, y_all, y_comp, func=0):
# res=[]
# results = ER_cv_quad(X_all, y_all, gene_comp=comp_ind, kf=10, power_=func)
# res.append(results)
# results = ER_cv_quad(X_all, y_comp, gene_comp=comp_ind, kf=10, power_=func)
# res.append(results)
# results = ER_cv_quad(X_comp, y_comp, gene_comp=[], kf=10, power_=func)
# res.append(results)
# return res
# +
def skl_cv_quad(X, y, gene_comp=comp_ind, kf=10):
kfold = KFold(n_splits=kf, shuffle=False)
if len(gene_comp) > 0:
X_in = np.copy(X[:, comp_ind])
else:
X_in = np.copy(X)
quad = np.copy(X_in)
error_list=[]
w_list = []
bias_list = []
yp_list = []
if len(gene_comp) > 0:
for i in range(X_in.shape[1]-1):
for j in range(i+1, X_in.shape[1]):
quad = np.hstack((quad, (X_in[:,i]*X_in[:,j])[:,None]))
else:
for i in range(len(comp_ind)-1):
for j in range(i+1, len(comp_ind)):
quad = np.hstack((quad, (X_in[:,comp_ind[i]]*X_in[:,comp_ind[j]])[:,None]))
for (cell_tr, cell_te) in (kfold.split(range(6078))):
te = np.hstack([cell_te+(6078*i) for i in range(5)])
tr = np.hstack([cell_tr+(6078*i) for i in range(5)])
X_tr, quad_tr, y_tr = X[tr], quad[tr], y[tr]
X_te, quad_te, y_te = X[te], quad[te], y[te]
regr = linear_model.LinearRegression()
regr.fit(quad_tr, y_tr)
w_list.append(regr.coef_.T)
bias_list.append(regr.intercept_)
if y.shape[1] == 27:
X_init = np.copy(X[:, comp_ind][te])
else:
X_init = np.copy(X[te])
y_actual = X_init + y_te
y_actual_sum_sq = np.sum(np.abs(y_actual)**2, axis=0)
y_pred = X_init + regr.predict(quad_te)
# yp_list.append(np.copy(y_pred))
y_pred[y_pred < 0] = 0
error = (np.sum(np.abs(y_pred - y_actual)**2, axis=0)/y_actual_sum_sq)**(1/2)
error_list.append(error)
print(X_init.shape[1], quad_te.shape[1], regr.coef_.shape)
return [np.mean(error_list, axis=0), np.std(error_list, axis=0), w_list, bias_list]
def infer_all_skl(X_all, y_all, y_comp):
res=[]
results = skl_cv_quad(X_all, y_all, gene_comp=[], kf=10)
res.append(results)
results = skl_cv_quad(X_all, y_comp, gene_comp=[], kf=10)
res.append(results)
results = skl_cv_quad(X_all, y_all, gene_comp=comp_ind, kf=10)
res.append(results)
results = skl_cv_quad(X_all, y_comp, gene_comp=comp_ind, kf=10)
res.append(results)
return res
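# An equivalent (and less error-prone) way to build the linear-plus-pairwise-
# interaction design matrix assembled by the loops above is scikit-learn's
# `PolynomialFeatures` with `interaction_only=True` (a sketch on toy data):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.arange(6.0).reshape(2, 3)
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
# columns: x0, x1, x2, x0*x1, x0*x2, x1*x2 -- same terms as the manual loops
quad = poly.fit_transform(X)
print(quad)
```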
# +
# ER_quad = infer_all_ER(all_init, comp_init, all_diff, comp_diff, func=1)
# skl_quad = infer_all_skl(all_init, all_diff, comp_diff)
# +
# with open('./pickles/ER_quad.pkl', 'wb') as f:
# pickle.dump(ER_quad, f)
# with open('./pickles/skl_quad.pkl', 'wb') as f:
# pickle.dump(skl_quad, f)
# -
# with open('./pickles/ER_quad.pkl', 'rb') as f:
# ER_quad = pickle.load(f)
with open('./pickles/skl_quad.pkl', 'rb') as f:
skl_quad = pickle.load(f)
# +
# %matplotlib inline
labels=['(99,27) to 99', '(99,27) to 27', '(27,27) to 99', '(27,27) to 27']
for i in range(4):
plt.figure(figsize=(12,6))
if i == 0 or i == 2:
# plt.plot(comp_ind, ER_quad[i][0][comp_ind], 'o--', label='ER')
plt.plot(comp_ind, skl_quad[i][0][comp_ind], 'o--', label='skl')
else:
# plt.plot(comp_ind, ER_quad[i][0], 'o--', label='ER')
plt.plot(comp_ind, skl_quad[i][0], 'o--', label='skl')
plt.title(labels[i])
plt.legend(bbox_to_anchor=(1,0.5))
plt.xlabel('gene')
plt.ylabel('fractional error')
# plt.ylim(0,1)
plt.show()
plt.figure(figsize=(12,6))
# plt.plot(range(99), ER_quad[0][0], 'o--', label='ER')
plt.plot(range(99), skl_quad[0][0], 'o--', label='skl')
plt.legend(bbox_to_anchor=(1,0.5))
plt.title('(99,27) to 99')
plt.xlabel('gene')
plt.ylabel('fractional error')
# plt.ylim(0,1)
plt.show()
# -
def skl_cv_quad(X, y, kf=10):
kfold = KFold(n_splits=kf, shuffle=False)
quad = np.zeros((int(X.shape[0]), int(X.shape[1]+(X.shape[1]*(X.shape[1]-1))/2)))
quad[:,:X.shape[1]] = np.copy(X)
    col = X.shape[1]  # linear terms occupy the first X.shape[1] columns
for i in range(X.shape[1]-1):
for j in range(i+1, X.shape[1]):
quad[:,col] = (X[:,i]*X[:,j])
col += 1
error_list=[]
w_list = []
bias_list = []
yp_list = []
for (cell_tr, cell_te) in (kfold.split(range(6078))):
te = np.hstack([cell_te+(6078*i) for i in range(5)])
tr = np.hstack([cell_tr+(6078*i) for i in range(5)])
X_tr, quad_tr, y_tr = X[tr], quad[tr], y[tr]
X_te, quad_te, y_te = X[te], quad[te], y[te]
regr = linear_model.LinearRegression()
regr.fit(quad_tr, y_tr)
w_list.append(regr.coef_.T)
bias_list.append(regr.intercept_)
X_init = np.copy(X[te])
y_actual = X_init + y_te
y_actual_sum_sq = np.sum(np.abs(y_actual)**2, axis=0)
y_pred = X_init + regr.predict(quad_te)
# yp_list.append(np.copy(y_pred))
y_pred[y_pred < 0] = 0
error = (np.sum(np.abs(y_pred - y_actual)**2, axis=0)/y_actual_sum_sq)**(1/2)
error_list.append(error)
print(X_init.shape[1], quad_te.shape[1], regr.coef_.T.shape)
return [np.mean(error_list, axis=0), np.std(error_list, axis=0), w_list, bias_list]
# +
# results = skl_cv_quad(all_init, all_diff, kf=10)
# with open('./pickles/skl_(99,99)_tt.pkl', 'wb') as f:
# pickle.dump(results, f)
# -
with open('./pickles/validation_cells.pkl', 'rb') as f:
cells_v = pickle.load(f)
# + code_folding=[]
data_all = np.loadtxt('../data_complete.txt')
# data_all = data_all - np.median(data_all, axis=0)
all_bin = np.vsplit(data_all, 6)
all_init = np.vstack([all_bin[i] for i in range(5)])
all_diff = np.vstack([all_bin[i+1]-all_bin[i] for i in range(5)])
def skl_cv_quad(X, y, kf=10):
kfold = KFold(n_splits=kf, shuffle=False)
val = np.hstack([cells_v+(6078*i) for i in range(5)])
cells_tt = np.delete(range(6078), cells_v)
tr_te = np.hstack([cells_tt+(6078*i) for i in range(5)])
quad = np.zeros((int(X.shape[0]), int(X.shape[1]+(X.shape[1]*(X.shape[1]-1))/2)))
quad[:,:X.shape[1]] = np.copy(X)
    col = X.shape[1]  # linear terms occupy the first X.shape[1] columns
for i in range(X.shape[1]-1):
for j in range(i+1, X.shape[1]):
quad[:,col] = (X[:,i]*X[:,j])
col += 1
X_v = X[val]
y_v = y[val]
quad_v = quad[val]
quad_tt = quad[tr_te]
y_tt = y[tr_te]
error_list=[]
w_list = []
bias_list = []
yp_list = []
for (cell_tr, cell_te) in (kfold.split(range(5470))):
te = np.hstack([cell_te+(5470*i) for i in range(5)])
tr = np.hstack([cell_tr+(5470*i) for i in range(5)])
quad_tr, y_tr = quad_tt[tr], y_tt[tr]
quad_te, y_te = quad_tt[te], y_tt[te]
regr = linear_model.LinearRegression()
regr.fit(quad_tr, y_tr)
w_list.append(regr.coef_.T)
bias_list.append(regr.intercept_)
w = np.mean(w_list, axis=0)
bias = np.mean(bias_list, axis=0)
yp = X_v + bias + quad_v.dot(w)
yp[yp<0] = 0
ya = X_v + y_v
ferror = (np.sum(np.abs(yp - ya)**2, axis=0)/np.sum(np.abs(ya)**2, axis=0))**(1/2)
error1 = np.abs(yp - ya)
dic = {
'ferror': ferror,
'error': error1,
'w': w,
'bias': bias
}
return dic
# result = skl_cv_quad(all_init, all_diff, kf=10)
# with open('./pickles/skl_(99,99)_ttv.pkl', 'wb') as f:
# pickle.dump(result, f)
# -
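# As an aside, the hand-rolled interaction loop in `skl_cv_quad` can be cross-checked
# against scikit-learn's `PolynomialFeatures`. A standalone sketch on toy data, not
# wired into the pipeline above:
# +
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_toy = np.arange(12, dtype=float).reshape(4, 3)
# interaction_only=True yields the linear terms followed by the pairwise
# products x_i*x_j (i < j), the same column order as the loop in skl_cv_quad
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
quad_toy = poly.fit_transform(X_toy)
print(quad_toy.shape)  # (4, 6): 3 linear + 3 interaction columns
# -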
with open('./pickles/skl_(99,99)_tv.pkl', 'rb') as f:
skl_tv = pickle.load(f)
with open('./pickles/skl_(99,99)_ttv.pkl', 'rb') as f:
skl_ttv = pickle.load(f)
plt.figure(figsize=(12,6))
plt.plot(range(99), skl_tv['ferror'], 'o-', label='tv')
plt.plot(range(99), skl_ttv['ferror'], 'o-', label='ttv')
plt.legend()
plt.show()
| skl +quad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# +
# !wget https://physionet.org/files/challenge-2012/1.0.0/set-a.zip
# !wget https://physionet.org/files/challenge-2012/1.0.0/set-b.zip
# !wget https://physionet.org/files/challenge-2012/1.0.0/Outcomes-a.txt
# !wget https://physionet.org/files/challenge-2012/1.0.0/Outcomes-b.txt
# -
# !unzip -u set-a.zip
# !unzip -u set-b.zip
# pick a set
# dataset = 'set-a'
dataset = 'set-b'
# +
# load in outcomes
if dataset == 'set-a':
y = pd.read_csv('Outcomes-a.txt')
elif dataset == 'set-b':
y = pd.read_csv('Outcomes-b.txt')
# y.set_index('RecordID', inplace=True)
# y.index.name = 'recordid'
# -
y
# +
# load all files into list of lists
txt_all = list()
for f in os.listdir(dataset): #iterating through list of all .txt files in the folder
with open(os.path.join(dataset, f), 'r') as fp:
txt = fp.readlines()
# get recordid to add as a column via txt parsing
recordid = txt[1].rstrip('\n').split(',')[-1]
txt = [t.rstrip('\n').split(',') + [int(recordid)] for t in txt]
txt_all.extend(txt[1:])
# convert to pandas dataframe
df = pd.DataFrame(txt_all, columns=['time', 'parameter', 'value', 'recordid'])
# -
df.head()
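# The dataframe above is in long format (one row per measurement). For modelling,
# it is often pivoted to one column per parameter; a minimal sketch on hypothetical
# toy rows (not the PhysioNet data itself):
# +
import pandas as pd

toy = pd.DataFrame({
    'time': ['00:00', '00:00', '01:00'],
    'parameter': ['HR', 'Temp', 'HR'],
    'value': [80.0, 36.6, 82.0],
    'recordid': [1, 1, 1],
})
# one row per (recordid, time), one column per measured parameter
wide = toy.pivot_table(index=['recordid', 'time'], columns='parameter', values='value')
print(wide)
# -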
df.to_csv('../data/{}_predictors.csv'.format(dataset), index=False)
y.to_csv('../data/{}_outcomes.csv'.format(dataset), index=False)
| making_data/data_extract.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import the modules
# -
# ### Load the dataset
#
# - Load the train data and using all your knowledge try to explore the different statistical properties of the dataset.
# +
# import the modules
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
# +
# Code starts here
train = pd.read_csv("train.csv")
# drop serial number
train.drop(columns=['customerID', 'Id'],inplace=True)
print(train.head())
# Code ends here
# -
# ### Visualize the data
#
# - Replace the missing values and modify some column values(as required by you).
# - Check out the best plots for plotting between categorical target and continuous features and try making some inferences from these plots.
# - Clean the data, apply some data preprocessing and engineering techniques.
# +
# Code starts here
# Split the data into X and y
X = train.drop(columns = ['Churn'])
y = train[['Churn']]
#Replacing spaces with 'NaN' in train dataset
X['TotalCharges'].replace(' ',np.NaN, inplace=True)
#Converting the type of column from X_train to float
X['TotalCharges'] = X['TotalCharges'].astype(float)
#Filling missing values
X['TotalCharges'].fillna(X['TotalCharges'].mean(),inplace=True)
# test['TotalCharges'].fillna(train['TotalCharges'].mean(), inplace=True)
#Check value counts
print(X.isnull().sum())
cat_cols = X.select_dtypes(include='O').columns.tolist()
#Label encoding train data
for x in cat_cols:
le = LabelEncoder()
X[x] = le.fit_transform(X[x])
#Encoding target data
y = y.replace({'No':0, 'Yes':1})
# -
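# One caveat with label encoding: an encoder fitted separately on the test set can map
# the same category to a different integer. Reusing the train-fitted encoder keeps the
# mapping consistent; a standalone sketch:
# +
from sklearn.preprocessing import LabelEncoder

train_vals = ['No', 'Yes', 'No', 'Yes']
test_vals = ['Yes', 'No']

le = LabelEncoder()
le.fit(train_vals)                    # classes_ are sorted: ['No', 'Yes']
print(list(le.transform(test_vals)))  # [1, 0] -- same mapping as on train
# -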
# ### Model building
#
# - Try to predict the churning of customers using AdaBoost
# - Try and implement XGBoost for our customer churn problem and see how it performs in comparison to AdaBoost. Use the different techniques you have learned to improve the performance of the model.
# - Try improving upon the `accuracy_score` ([Accuracy Score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html))
# +
# Code Starts here
# Split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=3)
# Initialising AdaBoostClassifier model
ada_model = AdaBoostClassifier(random_state=0)
#Fitting the model on train data
ada_model.fit(X_train,y_train)
#Making prediction on test data
y_pred = ada_model.predict(X_test)
#Finding the accuracy score
ada_score = accuracy_score(y_test,y_pred)
print("Accuracy: ",ada_score)
#Finding the confusion matrix
ada_cm=confusion_matrix(y_test,y_pred)
print('Confusion matrix: \n', ada_cm)
#Finding the classification report
ada_cr=classification_report(y_test,y_pred)
print('Classification report: \n', ada_cr)
# Code ends here
# +
# Let's see if hyperparameter tuning improves the XGBoost model's accuracy. We will use grid search to find the optimal combination.
#Parameter list
parameters={'learning_rate':[0.1,0.15,0.2,0.25,0.3],
'max_depth':range(1,3)}
# Code starts here
#Initializing the model
xgb_model = XGBClassifier(random_state=0)
#Fitting the model on train data
xgb_model.fit(X_train,y_train)
#Making prediction on test data
y_pred = xgb_model.predict(X_test)
#Finding the accuracy score
xgb_score = accuracy_score(y_test,y_pred)
print("Accuracy: ",xgb_score)
#Finding the confusion matrix
xgb_cm=confusion_matrix(y_test,y_pred)
print('Confusion matrix: \n', xgb_cm)
#Finding the classification report
xgb_cr=classification_report(y_test,y_pred)
print('Classification report: \n', xgb_cr)
### GridSearch CV
#Initialsing Grid Search
clf = GridSearchCV(xgb_model, parameters)
#Fitting the model on train data
clf.fit(X_train,y_train)
#Making prediction on test data
y_pred = clf.predict(X_test)
#Finding the accuracy score
clf_score = accuracy_score(y_test,y_pred)
print("Accuracy: ",clf_score)
#Finding the confusion matrix
clf_cm=confusion_matrix(y_test,y_pred)
print('Confusion matrix: \n', clf_cm)
#Finding the classification report
clf_cr=classification_report(y_test,y_pred)
print('Classification report: \n', clf_cr)
# -
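# After fitting, `GridSearchCV` exposes the winning combination via `best_params_` and
# the cross-validated score via `best_score_`. A toy sketch on hypothetical data (not
# the churn set):
# +
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X_toy = rng.rand(60, 3)
y_toy = (X_toy[:, 0] > 0.5).astype(int)  # label depends only on feature 0

gs = GridSearchCV(DecisionTreeClassifier(random_state=0),
                  {'max_depth': [1, 2, 3]}, cv=3)
gs.fit(X_toy, y_toy)
print(gs.best_params_, round(gs.best_score_, 3))
# -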
# ### Prediction on the test data and creating the sample submission file.
#
# - Load the test data and store the `Id` column in a separate variable.
# - Perform the same operations on the test data that you have performed on the train data.
# - Create the submission file as a `csv` file consisting of the `Id` column from the test data and your prediction as the second column.
# +
# Code Starts here
# Prediction on test data
# Read the test data
test = pd.read_csv('test.csv')
# Storing the id from the test file
id_ = test['Id']
# Apply the transformations on test
test.drop(columns=['customerID', 'Id'],inplace=True)
#Replacing spaces with 'NaN' in test dataset
test['TotalCharges'].replace(' ',np.NaN, inplace=True)
#Converting the type of the column to float
test['TotalCharges'] = test['TotalCharges'].astype(float)
#Filling missing values with the train mean, mirroring the treatment of the train data
test['TotalCharges'].fillna(X['TotalCharges'].mean(), inplace=True)
#Label encoding test data
# note: refitting a LabelEncoder on the test set assumes it contains exactly the
# same categories as train; ideally, reuse the encoders fitted on the train data
for x in cat_cols:
    le = LabelEncoder()
    test[x] = le.fit_transform(test[x])
# Predict on the test data
y_pred_test = clf.predict(test)
y_pred_test = y_pred_test.flatten()
# Create a sample submission file
sample_submission = pd.DataFrame({'Id':id_,'Churn':y_pred_test})
print(sample_submission.head())
# Replacing the values of sample_submission
sample_submission.replace({1:'Yes', 0: 'No'},inplace=True)
# Convert the sample submission file into a csv file
# sample_submission.to_csv('sample_submission_test.csv',index=False)
# Code ends here
| Boosting/Telecom-Churn-Prediction_Student_Template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-3db1b7eb6aade580", "locked": true, "schema_version": 1, "solution": false}
# # BLU06 - Exercise Notebook
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-d9b262915f7cb1e5", "locked": true, "schema_version": 1, "solution": false}
import pandas as pd
import matplotlib.pyplot as plt
import warnings
idx = pd.IndexSlice
warnings.simplefilter(action='ignore', category=FutureWarning)
from random import seed
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
import numpy as np
from sklearn.metrics import mean_squared_error
import math
from sklearn.ensemble import GradientBoostingRegressor
import itertools
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa import stattools
import hashlib # for grading purposes
from sklearn.model_selection import ParameterGrid
from pandas.plotting import lag_plot
plt.rcParams['figure.figsize'] = (12, 4)
from utils import *
# %matplotlib inline
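# + [markdown]
# For orientation only (not a solution to any graded cell below): lag features for
# one-step-ahead forecasting are typically built with `pandas.Series.shift`, e.g. on a
# toy series:
# +
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0], name='y')
toy = pd.DataFrame({
    'lag_1': s.shift(1),   # value one step back
    'lag_2': s.shift(2),   # value two steps back
    'target': s,
}).dropna()                # the first rows lack a full lag history
print(toy)
# -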
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-c9a3e566cc624451", "locked": true, "schema_version": 1, "solution": false}
# # Let's predict wind power production!
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-c89f8456e9e70392", "locked": true, "schema_version": 1, "solution": false}
df = pd.read_csv('data/wind_power.csv')
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')
df = df.sort_index()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-5db0587a41ba75ab", "locked": true, "schema_version": 1, "solution": false}
# ##### Plot the series to get an idea of what's going on
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-846a66e59e645d8e", "locked": true, "schema_version": 1, "solution": false}
df.plot(figsize=(16, 4));
plt.ylabel('wind power')
plt.title('Wind power production for the initial months of 2010')
plt.show()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-ef1184e061dcce01", "locked": true, "schema_version": 1, "solution": false}
# ### Q1: Formulate it as time series one-step-ahead prediction
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-52ebd77841c0b4ce", "locked": true, "schema_version": 1, "solution": false}
# ### Q1.1 Create the target, the relevant lags and drop the missing values.
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-b2bfb6431053bbb1", "locked": false, "schema_version": 1, "solution": true}
# Note: By relevant lags we expect you to assume the top 3 lags from the PACF (including negatively correlated ones).
# Remember from the previous BLU to look at the PACF you only need to run plot_pacf(df)
#df_features =
#df_features['lag_a'] =
#df_features['lag_b'] =
#df_features['lag_c'] =
#df_features['target'] =
#df_features =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-ec83e113b13f6716", "locked": true, "points": 2, "schema_version": 1, "solution": false}
expected_hash = '6e7c3f337212b8e01b67b3d63a2597b35850076095277e4cd323c07a33d02b93'
assert hashlib.sha256(str(df_features.shape).encode()).hexdigest() == expected_hash
expected_hash = 'e5351d4cf12cd01d360938a341d09da9cf5728ac1902133344147fbcab933fdc'
assert hashlib.sha256(str(df_features.iloc[0]).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-7ab428009d677b3c", "locked": true, "schema_version": 1, "solution": false}
# ### Q1.2 Separate the training and test set. The test set consists of the last 24 values, while the training set consists of the rest.
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-0aa5b9495555cd7d", "locked": false, "schema_version": 1, "solution": true}
# note: this is a very straightforward question. But you may think: "isn't this one-step-ahead forecasting?
# Why does the test have 24 values" Well, basically this just means we are doing 24 one-step-ahead forecasts.
# This way we obtain a better estimate of how our one-step-ahead model would perform in the real world.
# df_train =
# df_test =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-46d22f610161d58e", "locked": true, "points": 1, "schema_version": 1, "solution": false}
expected_hash = '4645890e7eab6608a2037bae3d98cba5a58ed02bf55ca030aa721ed1c68f7403'
assert hashlib.sha256(str(df_train.index[-1]).encode()).hexdigest() == expected_hash
expected_hash = '49a4f69687c306f42ebbc05f79ad78ab8c963cbe20e98fc11cb77629fa6b7259'
assert hashlib.sha256(str(df_test.index[0]).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-328a6be750d6b545", "locked": true, "schema_version": 1, "solution": false}
# ### Q1.3 Fit a linear regression to the training set
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-c10406518994a44d", "locked": false, "schema_version": 1, "solution": true}
# X_df_train =
# y_df_train =
# model =
# model.fit()
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-ba2d8a4a2f417e25", "locked": true, "points": 2, "schema_version": 1, "solution": false}
expected_hash = '1f7448e98bdb7d0ec574b34ef1478a0a170a96448be5ef3dfa3ba426fd2a4e9d'
assert hashlib.sha256(str(X_df_train.shape).encode()).hexdigest() == expected_hash
expected_hash = '0e39cad2ee7a31366699e40633f45b99f0211f1734d40dab9d2c5fb461ab7aa7'
assert hashlib.sha256(str(y_df_train.shape).encode()).hexdigest() == expected_hash
expected_hash = '2211e7e3e265cdb6e1337bbc7deac40ed9cd0c3203e70fded127c801e783b5f2'
assert hashlib.sha256(str(np.round(model.coef_,1)).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f91888ec4ca37707", "locked": true, "schema_version": 1, "solution": false}
# ### Q1.4 Predict the test set and calculate the MAE
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-c702242ee9013f2d", "locked": false, "schema_version": 1, "solution": true}
# X_df_test =
# y_df_test =
# y_predict =
# test_mae =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-fa53ac00c701b6ad", "locked": true, "points": 2, "schema_version": 1, "solution": false}
expected_hash = '4b259a01aeaa99e670a865121f78c1d76f029cd5158b51c72089c0bd254d17a1'
assert hashlib.sha256(str(X_df_test.shape).encode()).hexdigest() == expected_hash
expected_hash = '46ff29167351b465ba887b52b52ebddfd407ee7c8781d11225ad285541d89be5'
assert hashlib.sha256(str(y_df_test.shape).encode()).hexdigest() == expected_hash
expected_hash = '64b1c7c2dc78c85eb25fc8ebbbe7b3f27095df61f7b0621c64be58a55a042cc6'
assert hashlib.sha256(str(np.round(test_mae,5)).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-1b64863f837f65e9", "locked": true, "schema_version": 1, "solution": false}
# ### Q2 Let's go into multi-step prediction!
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-6cecdca4023a9b09", "locked": true, "schema_version": 1, "solution": false}
# ### Q2.1 Separate into train, val and test. Test corresponds to the last 24 values and Val corresponds to the 24 steps before test.
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-b4676deaf08fb5b8", "locked": false, "schema_version": 1, "solution": true}
# df_multistep_train =
# df_multistep_val =
# df_multistep_test =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-144aa010340df42d", "locked": true, "points": 1, "schema_version": 1, "solution": false}
expected_hash = 'b9202ef2e8ac88d956f6a0fc0b5c83540a0568fc2a19b5941796f3b140d84419'
assert hashlib.sha256(str(df_multistep_train.shape).encode()).hexdigest() == expected_hash
expected_hash = '17511d2ef2dbcb5be2ccb6880d9c364c156770484a5cf8292dfe283ea2405125'
assert hashlib.sha256(str(df_multistep_val.shape).encode()).hexdigest() == expected_hash
expected_hash = '17511d2ef2dbcb5be2ccb6880d9c364c156770484a5cf8292dfe283ea2405125'
assert hashlib.sha256(str(df_multistep_test.shape).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-a252157fed7db039", "locked": true, "schema_version": 1, "solution": false}
# ### Q2.2 Let's test some hyperparameter optimization!
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-16a7b564ac712979", "locked": false, "schema_version": 1, "solution": true}
# %%time
# Create a parameter grid with the following conditions:
# - Include the linear regression and gradient boosting regressor models.
# For the gradient boosting regressor use n_estimators=20 and random_state=10
# - Test using 3 and 26 lags
# - Test using 0 or 15 periods diffed
# - Set the weekday, month and holidays to False. These shouldn't affect wind power.
# - Don't use rollings.
# Use a for cycle to find the group of params that minimizes the MAE on the validation set.
# hint: to have no rollings in the predict_n_periods you should put an empty lists of lists in the param grid: [[]]
#param_grid =
# grid =
# for params in grid:
# predictions =
# best_params =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-27e82c63c7934897", "locked": true, "points": 3, "schema_version": 1, "solution": false}
expected_hash = '0c7f50ff83b085cd342a206e2c60d557601bddd1f378de106098f9350640d61e'
assert hashlib.sha256(str(best_params['model']).encode()).hexdigest() == expected_hash
expected_hash = '4e07408562bedb8b60ce05c1decfe3ad16b72230967de01f640b7e4729b49fce'
assert hashlib.sha256(str(best_params['num_periods_lagged']).encode()).hexdigest() == expected_hash
expected_hash = '5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9'
assert hashlib.sha256(str(best_params['num_periods_diffed']).encode()).hexdigest() == expected_hash
expected_hash = '1eafc175324ea36dbf3d3b7887e3887739219034fb5e3a9fb62759d6e9262b09'
assert hashlib.sha256(str(best_params).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-cd3355c62f859eeb", "locked": true, "schema_version": 1, "solution": false}
# ### Q2.3 Train a model with the best combination and predict the test set. Calculate the corresponding MAE.
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-a8fdbd5c2eaf09dc", "locked": false, "schema_version": 1, "solution": true}
# We expect you to train the final model with train and val together.
# df_multistep_train_val =
# predictions =
# test_mae =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-cc1f893477248c54", "locked": true, "points": 3, "schema_version": 1, "solution": false}
expected_hash = '389b9a7d9112bcd9a1b488224b030633d4e420fcabb1ebf9c839072046c2c4c4'
assert hashlib.sha256(str(np.round(test_mae,5)).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-daf726703636e01a", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q3 Finally, we'll add exogenous features to improve model performance!
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-6885d1d366eaf8b6", "locked": true, "schema_version": 3, "solution": false, "task": false}
exog = pd.read_csv('data/wind_speed_forecast.csv')
exog['date'] = pd.to_datetime(exog['date'])
exog = exog.set_index('date')
exog = exog.sort_index()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-e286550a8feef7b7", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q3.1 Add the exogenous feature to the dataset and the corresponding leads
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-9af98b0cd68282c7", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Remember you want to build a number of leads corresponding to the 24 values of the next day
# df["exog"] =
# df =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-af91c0f26c838bb3", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
expected_hash = 'dc81d4b4ac58b278e08f7c97d999d52f0b6b2416fa6ee85c37371b65194b92e1'
assert hashlib.sha256(str(df.shape).encode()).hexdigest() == expected_hash
expected_hash = 'c958a8701a36c5c8de6af0e4be7643e2a11f603d1c7e08da1cbcc0717955b278'
assert hashlib.sha256(str(df).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-c86dc8a19a1e1d45", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q3.2 Separate into train, val and test. Test corresponds to the last 24 values and Val corresponds to the 24 steps before test.
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-2f59883c88de8c94", "locked": false, "schema_version": 3, "solution": true, "task": false}
# df_multistep_train =
# df_multistep_val =
# df_multistep_test =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-354c4b243c40059e", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
expected_hash = '2837155d68248a6d1b420e270ceed5dc0a593693ad57948e1fc4df01ecf97695'
assert hashlib.sha256(str(df_multistep_train.shape).encode()).hexdigest() == expected_hash
expected_hash = '87dc6044b1bf391af8f8cd3b7bc9022b8d2a4df90c38ca4b30cae58843788eb8'
assert hashlib.sha256(str(df_multistep_val.shape).encode()).hexdigest() == expected_hash
expected_hash = '87dc6044b1bf391af8f8cd3b7bc9022b8d2a4df90c38ca4b30cae58843788eb8'
assert hashlib.sha256(str(df_multistep_test.shape).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-a02c44df99f817f2", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q3.3 Let's test some hyperparameter optimisation (again)!
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-f2026723066117d0", "locked": false, "schema_version": 3, "solution": true, "task": false}
# %%time
# Create a parameter grid with the following conditions:
# - Include the linear regression and gradient boosting regressor models.
# For the gradient boosting regressor use n_estimators=20 and random_state=10
# - Test using 3 and 26 lags
# - Set the periods diffed to 0
# - Set the weekday, month and holidays to False. These shouldn't affect wind power.
# - Don't use rollings.
# Use a for cycle to find the group of params that minimizes the MAE on the validation set.
# hint: This should be pretty much a copy paste from exercise 2.2. Remember that now our dataset has
# exogenous features so we want to understand if the hyperparameter optimisation result changes.
#param_grid =
# grid =
# for params in grid:
# predictions =
# best_params =
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-8843fc085fd17f70", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
expected_hash = '4a4e3897467aca700e8b7f6bad8af525326b1299498c701bc3313b8056750bbb'
assert hashlib.sha256(str(best_params['model']).encode()).hexdigest() == expected_hash
expected_hash = '4e07408562bedb8b60ce05c1decfe3ad16b72230967de01f640b7e4729b49fce'
assert hashlib.sha256(str(best_params['num_periods_lagged']).encode()).hexdigest() == expected_hash
expected_hash = '5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9'
assert hashlib.sha256(str(best_params['num_periods_diffed']).encode()).hexdigest() == expected_hash
expected_hash = '36ea53ab868bf8f53bf8cb79c37e4174c942c1c9a8fa847c8c9bb5dc338c057f'
assert hashlib.sha256(str(best_params).encode()).hexdigest() == expected_hash
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-a825d87a0ecbb67f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q3.4 Train a model with the best combination and predict the test. Did the exogenous feature improve the MAE?
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-ba499840582e0370", "locked": false, "schema_version": 3, "solution": true, "task": false}
# We expect you to train the final model with train and val together.
# df_multistep_train_val =
# predictions =
# test_mae =
# hint: Again, this should be mostly a copy paste from the exercise 2.3
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-ea25874a3f4da77f", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
expected_hash = '5b0c19587a858f320eda9f4a1e31017027fb7f5a970424aa7e00b6e38eeac273'
assert hashlib.sha256(str(np.round(test_mae,5)).encode()).hexdigest() == expected_hash
| S03 - Time Series/BLU06 - Machine Learning for Time Series/Exercise notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The MNIST Dataset
# ## 1. Load the dataset
# +
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# -
# ## 2. Data exploration
# +
import matplotlib.pyplot as plt
def plot_figure(im, interp = False): # use linear interpolation when True
f = plt.figure(figsize = (3, 6))
plt.gray()
plt.imshow(im, interpolation = None if interp else 'none')
# -
plot_figure(train_images[0])
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
test_labels
# ## 3. Data preprocessing
# ### 3.1 Normalization
# +
train_images_dense = train_images.reshape((60000, 28 * 28))
train_images_dense = train_images_dense.astype('float32') / 255
test_images_dense = test_images.reshape((10000, 28 * 28))
test_images_dense = test_images_dense.astype('float32') / 255
# +
train_images_conv = train_images.reshape((60000, 28, 28, 1))
train_images_conv = train_images_conv.astype('float32') / 255
test_images_conv = test_images.reshape((10000, 28, 28, 1))
test_images_conv = test_images_conv.astype('float32') / 255
# -
# ### 3.2 one-hot encoding
# +
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# -
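# `to_categorical` turns integer labels into one-hot rows; the same mapping can be
# written with a NumPy identity matrix (an equivalent sketch, not what Keras runs
# internally):
# +
import numpy as np

labels = np.array([5, 0, 4])
one_hot = np.eye(10)[labels]  # row i is all zeros except position labels[i]
print(one_hot[0])
# -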
train_labels[0]
print(train_labels.shape)
# ## 4. Build the models
# ### 4.1 Build a fully connected network
# +
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
model.add(layers.Dense(10, activation='softmax'))
# -
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
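# The parameter count reported by `model.summary()` can be verified by hand: each
# Dense layer has `inputs * units` weights plus `units` biases.
# +
# Dense(512) on 784 inputs, then Dense(10) on 512 inputs:
dense1 = 28 * 28 * 512 + 512   # 401,920
dense2 = 512 * 10 + 10         # 5,130
print(dense1 + dense2)         # 407,050 total trainable parameters
# -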
# ### 4.1.1 Train the network
history = model.fit(train_images_dense, train_labels, epochs=10, batch_size=128)
# ### 4.1.2 Evaluate the trained model
test_loss, test_acc = model.evaluate(test_images_dense, test_labels)
print('test_acc:', test_acc)
# ### 4.1.3 Train with a validation set
history = model.fit(train_images_dense, train_labels,
epochs=10,
batch_size=128,
validation_data=(test_images_dense, test_labels))
# Our test set accuracy turns out to be 97.8% -- that's quite a bit lower than the training set accuracy.
# This gap between training accuracy and test accuracy is an example of "overfitting",
# the fact that machine learning models tend to perform worse on new data than on their training data.
# ### 4.1.4 Plot performance on the training and validation sets
# +
import matplotlib.pyplot as plt
# %matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
# -
val_loss_min = val_loss.index(min(val_loss))
val_acc_max = val_acc.index(max(val_acc))
print('epoch with minimum validation loss: ', val_loss_min)
print('epoch with maximum validation accuracy: ', val_acc_max)
# ## 4.2 Build a convolutional neural network
# +
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
# -
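# To see where the Flatten layer's input size comes from, trace the feature-map side
# length through the stack above: each valid 3x3 convolution shrinks it by 2 and each
# 2x2 pooling halves it (integer division).
# +
size = 28
size -= 2    # Conv2D 3x3 -> 26
size //= 2   # MaxPooling 2x2 -> 13
size -= 2    # Conv2D 3x3 -> 11
size //= 2   # MaxPooling 2x2 -> 5
size -= 2    # Conv2D 3x3 -> 3
print(size, size * size * 64)  # 3x3 maps, 64 channels -> 576 flattened values
# -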
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
# ### 4.2.1 Train the model (with a validation set)
history = model.fit(train_images_conv, train_labels,
epochs=10,
batch_size=64,
validation_data=(test_images_conv, test_labels))
# ### 4.2.2 Evaluate the model
test_loss, test_acc = model.evaluate(test_images_conv, test_labels)
test_acc
# +
import matplotlib.pyplot as plt
# %matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# -
val_loss_min = val_loss.index(min(val_loss))
val_acc_max = val_acc.index(max(val_acc))
print('epoch with minimum validation loss: ', val_loss_min)
print('epoch with maximum validation accuracy: ', val_acc_max)
# +
import itertools
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.datasets import mnist
import keras.backend as K
import matplotlib.pyplot as plt
from matplotlib.colors import colorConverter, ListedColormap
np.random.seed(1)
# %matplotlib inline
# -
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
# Flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
num_pixels
X_train.shape
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255.0
X_test = X_test / 255.0
y_train[:10]
# one-hot
num_classes = len(set(y_train))
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
y_train
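# What np_utils.to_categorical does, sketched in plain Python (the real function
# returns a NumPy array):

```python
def to_one_hot(labels, n_classes):
    out = []
    for y in labels:
        row = [0.0] * n_classes
        row[y] = 1.0          # a single 1 at the label's index
        out.append(row)
    return out

oh = to_one_hot([5, 0, 4], 10)
print(oh[0])  # 1.0 in position 5, zeros elsewhere
```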
# ### Fully connected network
# +
model = Sequential()
model.add(Dense(num_pixels, input_dim = num_pixels, activation = 'relu'))
model.add(Dense(num_classes, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = 'adam',
metrics = ['accuracy'])
# -
model.fit(X_train, y_train,
validation_data = (X_test, y_test),
epochs = 10,
batch_size = 128,
verbose = True)
model.summary()
# 784*10 weights plus the 10 bias terms give the 7,850 parameters of the output layer
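# Where those counts come from (sketch): a Dense layer holds
# inputs*units weights plus units biases.

```python
def dense_params(n_in, n_units):
    return n_in * n_units + n_units  # weights + biases

hidden = dense_params(784, 784)  # the Dense(num_pixels) layer: 615,440
output = dense_params(784, 10)   # the softmax layer: 7,850
print(hidden, output)
```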
# ### Convolutional network
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
X_train.shape
# +
model = Sequential()
model.add(Conv2D(32, 3, input_shape = (28, 28, 1), activation = 'relu'))
model.add(Conv2D(32, 3, activation = 'relu'))
model.add(MaxPooling2D(pool_size = 2))
model.add(Conv2D(64, 3, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
model.add(MaxPooling2D(pool_size = 2))
model.add(Flatten())
model.add(Dense(128, activation = 'relu'))
model.add(Dense(num_classes, activation = 'softmax'))
# Compile model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# +
# Fit the model
model.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs = 10,
batch_size = 128)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Large CNN Error: %.2f%%" % (100 - scores[1] * 100))
# -
model.summary()
# The convolutional network is deeper, yet has fewer parameters
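# A Conv2D layer's parameter count depends only on the kernel size and channel
# counts, never the image size — a sketch reproducing the per-layer counts for
# the stack above:

```python
def conv2d_params(k, in_ch, filters):
    return (k * k * in_ch + 1) * filters  # each filter: k*k*in_ch weights + 1 bias

p1 = conv2d_params(3, 1, 32)    # Conv2D(32, 3) on (28, 28, 1): 320
p2 = conv2d_params(3, 32, 32)   # Conv2D(32, 3): 9,248
p3 = conv2d_params(3, 32, 64)   # Conv2D(64, 3): 18,496
p4 = conv2d_params(3, 64, 64)   # Conv2D(64, 3): 36,928
print(p1, p2, p3, p4)
```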
# Source notebook: NN&CNN/Part II MNIST.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p27)
# language: python
# name: conda_tensorflow_p27
# ---
# !ls
# !git clone https://github.com/eriknes/delta-gans.git
# +
import os
import zipfile
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# !pip install tqdm
# !pip install keras
os.chdir('delta-gans')
with zipfile.ZipFile('fluvialStylesData.csv.zip') as zip_ref:
    zip_ref.extractall('.')
# !ls
# -
# %run deltas_dcgan_3_0.py
# Source notebook: aws_run.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory Data Analysis (EDA) and Machine Learning Model Applications of Telco-Customer Dataset
# <a id=section101></a>
# ### 1. Introduction
# + active=""
# ... This involves predicting behavior to retain customers and can be used to develop focused customer retention programs.
# -
# <a id=section2></a>
# ### 2. Load the packages and data
import numpy as np
import pandas as pd
import statsmodels.api as sm
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale, StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.metrics import confusion_matrix, accuracy_score, mean_squared_error, r2_score, roc_auc_score, roc_curve, classification_report, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
import warnings
warnings.filterwarnings("ignore")
import lux
# <a id=section3></a>
# ### 3. Data Profiling
# Read Dataset
df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")
# Review the data types and sample data to understand which variables we are
# dealing with, and which need to be transformed before they can be analyzed.
df.head()
df.isnull().sum() # Display the sum of null values
df.info() # Display data types along with non-null counts
df.dtypes # Display the data type for each variable
df.describe() # Descriptive statistics for the numerical variables
df.describe().T
# ### 4. Observations
fig, ax = plt.subplots(3, 1, figsize=(10,7))
ax[0].set_title("SeniorCitizen")
ax[1].set_title("tenure")
ax[2].set_title("MonthlyCharges")
sns.boxplot(data = df.SeniorCitizen, orient="v", ax=ax[0], color = "b")
sns.boxplot(data = df.tenure, orient="v", ax=ax[1], color = "r")
sns.boxplot(data = df.MonthlyCharges, orient="v", ax=ax[2], color = "g")
plt.tight_layout()
for i in df.columns:
print(df[i].value_counts())
print("---------------------------------------------------------------------------------------------")
# <a id=section307></a>
# ### 5. Data Processing
#
# - Variables to transform prior to analysis:
#     - Standardize all column headers to lower case (to prevent typos!)
#     - Resolve duplicate records
#     - Split combined columns into separate ones
#     - Decide how to handle null values.
#
# <a id=section403></a>
#
#
#
# ### 5.1. Missing data and its imputation
df = df.drop(labels="customerID", axis=1)
df.info()
# +
# df[["TotalCharges"]] = df[["TotalCharges"]].apply(pd.to_numeric)
# TotalCharges holds numeric values but is stored as object dtype,
# because some entries are blank strings (removed below).
# -
a = df[df["TotalCharges"] == " "].index
a
df.drop(index = a, inplace = True)
df[["TotalCharges"]] = df[["TotalCharges"]].apply(pd.to_numeric)
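# An alternative to locating and dropping the blank rows by hand:
# `pd.to_numeric(errors='coerce')` turns non-numeric entries into NaN in one
# step (sketch on a hypothetical mini-frame, not the notebook's df):

```python
import pandas as pd

demo = pd.DataFrame({"TotalCharges": ["29.85", " ", "108.15"]})
demo["TotalCharges"] = pd.to_numeric(demo["TotalCharges"], errors="coerce")
print(demo["TotalCharges"].isna().sum())  # the blank string became NaN
```

# The NaN rows can then be dropped with dropna() or imputed.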
X = df.drop(["Churn"], axis=1)
y = pd.DataFrame(df["Churn"], columns = ["Churn"])
X = pd.get_dummies(X, drop_first=True)
X.head()
y.head()
df = pd.concat([X,y], axis=1)
df.head()
df.info()
plt.figure(figsize = (20,20))
sns.heatmap(df.corr(), annot = False, cmap = "viridis")
df["Churn"] = df["Churn"].apply(lambda x: 1 if x == "Yes" else 0)
df
df.corr()["Churn"].sort_values()
plt.figure(figsize = (20,20))
df.corr()["Churn"].sort_values().plot.barh()
# # Machine Learning Model
# ## Random Forest
# +
X = df.drop(["Churn"], axis=1)
y = df["Churn"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, stratify=y, random_state=42)
rf_model=RandomForestClassifier().fit(X_train, y_train)
y_pred = rf_model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# -
# ## Random forest tune
rf=RandomForestClassifier()
rf_params = {"n_estimators":[50, 100, 300], 'max_depth':[3,5,7],'max_features': [2,4,6,8],'min_samples_split': [2,4,6]}
rf_cv_model = GridSearchCV(rf, rf_params, cv = 5, n_jobs = -1, verbose = 2).fit(X_train, y_train)
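# The grid search above is expensive: the number of fits is the product of the
# parameter-list lengths times the number of CV folds (rf_params_demo below
# mirrors rf_params):

```python
import itertools

rf_params_demo = {"n_estimators": [50, 100, 300], "max_depth": [3, 5, 7],
                  "max_features": [2, 4, 6, 8], "min_samples_split": [2, 4, 6]}
n_candidates = len(list(itertools.product(*rf_params_demo.values())))
print(n_candidates, n_candidates * 5)  # 108 candidates x 5 folds = 540 fits
```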
rf_cv_model.best_params_
rf_tuned = RandomForestClassifier(max_depth = 7, max_features = 6, min_samples_split = 6, n_estimators = 300).fit(X_train, y_train)
y_pred = rf_tuned.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
rf_tuned = RandomForestClassifier(max_depth = 7, max_features = 8, min_samples_split = 2, n_estimators = 400).fit(X_train, y_train)
y_pred = rf_tuned.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# ## XGBOOST
# +
from xgboost import XGBClassifier
xgb_model = XGBClassifier().fit(X_train, y_train)
y_pred = xgb_model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# -
# ## Tuning XGBoost
# +
xgb = XGBClassifier()
xgb_params = {"n_estimators": [50, 100, 300], "subsample":[0.5,0.8,1], "max_depth":[3,5,7], "learning_rate":[0.1,0.01,0.3]}
xgb_cv_model = GridSearchCV(xgb, xgb_params, cv = 3, n_jobs = -1, verbose = 2).fit(X_train, y_train)
xgb_cv_model.best_params_
# +
xgb_tuned = XGBClassifier(learning_rate= 0.01, max_depth= 5, n_estimators= 450, subsample= 0.5).fit(X_train, y_train)
y_pred = xgb_tuned.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# -
# ## Support Vector Classifier
# +
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_s = sc.fit_transform(X_train)
X_test_s = sc.transform(X_test)
svc_model_sc = SVC().fit(X_train_s, y_train)
y_pred = svc_model_sc.predict(X_test_s)
cnf_matrix = confusion_matrix(y_test, y_pred)
print(confusion_matrix(y_test, y_pred))
sns.heatmap(cnf_matrix, annot=True, cmap="YlGnBu",fmt='d')
plt.ylabel('Actual Label')
plt.xlabel('Predicted Label')
plt.show()
print(classification_report(y_test, y_pred))
# -
param_grid = {'C': [0.1,1, 10, 100, 1000], 'gamma': [1,0.1,0.01,0.001,0.0001], 'kernel': ['rbf', 'linear']}
from sklearn.model_selection import GridSearchCV
svc_tuned = GridSearchCV(SVC(),param_grid, verbose=3, refit=True)
svc_tuned.fit(X_train_s, y_train)
print(svc_tuned.best_params_)
print(svc_tuned.best_estimator_)
y_pred = svc_tuned.predict(X_test_s)
cnf_matrix = confusion_matrix(y_test, y_pred)
print(cnf_matrix)
sns.heatmap(cnf_matrix, annot=True, cmap="YlGnBu",fmt='d')
plt.ylabel('Actual Label')
plt.xlabel('Predicted Label')
plt.show()
print(classification_report(y_test,y_pred))
# ## Logistic Regression
log_model = LogisticRegression()
log_model.fit(X_train, y_train)
y_pred = log_model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=42)
X_train_s = sc.fit_transform(X_train)
X_test_s = sc.transform(X_test)
log_model_sc = LogisticRegression()
log_model_sc.fit(X_train_s, y_train)
y_pred = log_model_sc.predict(X_test_s)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# ## KNN
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, stratify=y, random_state=42)
X_train_s = sc.fit_transform(X_train)
X_test_s = sc.transform(X_test)
knn_model_sc = KNeighborsClassifier(n_neighbors=1).fit(X_train_s, y_train)
y_pred = knn_model_sc.predict(X_test_s)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# -
error_rate = []
for i in range(1, 40):
model = KNeighborsClassifier(n_neighbors = i)
model.fit(X_train_s, y_train)
y_pred_i = model.predict(X_test_s)
error_rate.append(np.mean(y_pred_i != y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,40),error_rate,color='blue', linestyle='dashed', marker='o',
markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
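# The elbow plot can also be read off programmatically; a sketch with synthetic
# error rates (error_rate_demo is illustrative, not the list computed above):

```python
error_rate_demo = [0.28, 0.24, 0.25, 0.21, 0.22]
# index i corresponds to k = i + 1, matching the loop above
best_k = min(range(len(error_rate_demo)), key=error_rate_demo.__getitem__) + 1
print(best_k)  # 4
```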
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=42)
X_train_s = sc.fit_transform(X_train)
X_test_s = sc.transform(X_test)
knn_model_sc_tuned = KNeighborsClassifier(n_neighbors=38).fit(X_train_s, y_train)
y_pred = knn_model_sc_tuned.predict(X_test_s)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
# -
import sweetviz
import pandas as pd
train = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")
test = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")
my_report = sweetviz.compare([train, "Train"], [test, "Test"], "Churn")
my_report.show_html("Report.html") # Not providing a filename will default to SWEETVIZ_REPORT.html
# Source notebook: 14-ML_TelcoCustomerChurn_project/Telco-Customer-Churn.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Requirements:
# 2 players should be able to play the game (both sitting at the same computer)
# The board should be printed out every time a player makes a move
# You should be able to accept input of the player position and then place a symbol on the board
# use numpad input to match numbers to cells
# 1 2 3
# 4 5 6
# 7 8 9
# +
# init
from IPython.display import clear_output
from random import randint
def display_board(board):
clear_output()
print(f' {board[1]}|{board[2]}|{board[3]}')
print('-------')
print(f' {board[4]}|{board[5]}|{board[6]}')
print('-------')
print(f' {board[7]}|{board[8]}|{board[9]}')
def player_select():
global p1
global p2
while p1 != 'O' and p1 != 'X':
p1 = input("Please pick a Marker 'X' or 'O': ")
if p1.upper() == 'X':
p1 = 'X'
p2 = 'O'
elif p1.upper() == 'O':
p1 = 'O'
p2 = 'X'
else:
print("Invalid input.")
def place_marker(board, marker, position):
board[position] = marker
def win_check(board,mark):
#for a board and marker, check if that marker has won on that board
if (board[1]==board[2]==board[3]==mark) or \
(board[1]==board[5]==board[9]==mark) or \
(board[1]==board[4]==board[7]==mark) or \
(board[2]==board[5]==board[8]==mark) or \
(board[3]==board[5]==board[7]==mark) or \
(board[3]==board[6]==board[9]==mark) or \
(board[4]==board[5]==board[6]==mark) or \
(board[7]==board[8]==board[9]==mark):
return True
else:
return False
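# A hedged alternative: the eight winning lines can live in a lookup table,
# which makes the check easier to audit (win_check_v2 is a sketch, not wired
# into the game loop):

```python
WIN_LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),   # rows
             (1, 4, 7), (2, 5, 8), (3, 6, 9),   # columns
             (1, 5, 9), (3, 5, 7)]              # diagonals

def win_check_v2(board, mark):
    return any(board[a] == board[b] == board[c] == mark for a, b, c in WIN_LINES)

demo = ['#', 'X', 'X', 'X'] + [' '] * 6   # X across the top row
print(win_check_v2(demo, 'X'))  # True
```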
def space_check(board,position):
# return boolean indicating if a space on the board is available
return board[position] == ' '
def full_board_check(board):
# return bool indicating the board is completely full
for x in range(1,10):
if space_check(board,x):
return False
return True
def player_choice(board):
# ask for player's next move
# space_check, then if valid, return chosen move (1-9)
move = ''
while True:
move = input('Select your move: ')
try:
move = int(move)
if move not in range(1,10):
print('Invalid input.')
elif space_check(board,move):
break
else:
print('Invalid input.')
except ValueError:
print('Invalid input.')
return move
def replay():
# asks players if they want to play again, returns boolean
while True:
again = input('Do you want to play again (y or n)? ')
if again.lower() == 'y':
return True
elif again.lower() == 'n':
return False
else:
print('Invalid input.')
# +
# run the game
print('Welcome to Tic Tac Toe!')
while True:
# game instance variables
board = ['#',' ',' ',' ',' ',' ',' ',' ',' ',' ']
p1 = ''
p2 = ''
turn = 1
# run the game steps
player_select()
#the game loop
while True:
if turn%2==0:
print('Player 2: ')
place_marker(board, p2, int(player_choice(board)))
else:
print('Player 1: ')
place_marker(board, p1, int(player_choice(board)))
display_board(board)
if win_check(board,p1):
print('Player 1 wins!')
break
elif win_check(board,p2):
print('Player 2 wins!')
break
elif full_board_check(board):
print('Board is full. Tie game!')
break
#increment turn counter for next player's turn
turn += 1
#play again?
if not replay():
break
# -
# Source notebook: Exercises/Tic Tac Toe - 2 player.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # This notebook produces Figure 2, 3 and 4:
# **Figure 2: CDF of pairwise cosine similarity of traffic profiles across <span style="color:blue; font-size:large">device types</span> (vertical lines denote medians).
# Figure 3: CDF of pairwise cosine similarity of traffic profiles in <span style="color:blue; font-size:large">weekdays and weekends</span> (vertical lines denote medians).
# Figure 4: CDF of pairwise cosine similarity of traffic profiles with different <span style="color:blue; font-size:large">encounter durations</span> (vertical lines denote medians).**
# __Note: Since this notebook works with a sample of anonymized data, the numbers and figures are bound to be slightly different from the paper.__
# Code License: [Apache License 2.0](https://spdx.org/licenses/Apache-2.0.html)
# #### IMPORTS:
import sys, os, math, re, warnings, gc
from collections import defaultdict
import pandas as pd
import numpy as np
from scipy import stats as st
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
from tqdm import tqdm, tqdm_notebook
from sklearn.feature_extraction.text import TfidfTransformer
# %matplotlib inline
print("pandas v" + pd.__version__)
print("numpy v" + np.__version__)
print("Python v" + sys.version)
# On test machine:
# pandas v0.23.4
# numpy v1.15.3
# Python v3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51)
# [GCC 7.2.0]
# %load_ext blackcellmagic
# #### VARIABLES:
random_state = 1000 #seed for sampling
np.random.seed(random_state)
# Ignoring Friday 06 because it is incomplete, and the last few days of the month, which fall during exams
weekdays = ["09", "10", "11", "12", "13", "16", "17", "18", "19", "20", "23", "24", "25"] #, "26", "27", "30"]
weekdays_int = [9, 10, 11, 12, 13, 16, 17, 18, 19, 20, 23, 24, 25] #, "26", "27", "30"]
weekends = ["07", "08", "14", "15", "21", "22" ] # , "28", "29"]
weekends_int = [7, 8, 14, 15, 21, 22] # , "28", "29"]
# Set of weekdays and weekends.
wd = set(weekdays_int)
we = set(weekends_int)
n_min_count = 20
# Returns the cosine similarity of two input vectors.
# Enable @jit if numba is installed for a modest speedup.
# @jit
def getCosineSim(u1, u2):
return np.dot(u1, u2) / (np.linalg.norm(u1, 2) * np.linalg.norm(u2, 2))
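# The same formula worked on toy vectors in plain Python, to make the geometry
# concrete:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine([1, 0], [1, 0]))  # 1.0: identical directions
print(cosine([1, 0], [0, 1]))  # 0.0: orthogonal profiles
```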
# # Input Processing:
# ###### Load traffic profiles of encounters:
# %%time
df = pd.read_parquet(
"../Data/Traffic/encounter_profiles_2k_public.br.parquet",
engine="pyarrow",
)
# Use fastparquet or pyarrow engine. pyarrow is faster here (pip install -U pyarrow).
print("Number of users in sample:", len(df["uid"].unique()))
print("Number of data points in sample:", "{:,}".format(len(df)))
# Ignore the traffic profiles that are not in the available set of buildings:
df.head()
# #### Explanation of columns in encounter profiles:
# | Column name | Description |
# |:------------: |:-----------------------------------------: |
# | day | day of month |
# | bucket | time bucket in the day |
# | ip | 32-bit IPv4 address |
# | bytes | total bytes exchanged with ip |
# | uid | user ID |
# | device | device type (f for Flute and c for Cello) |
# | apid | access point ID |
# | bldgid | building ID |
# **Create sets of flutes and cellos:**
flutes = set(df[df["device"] == "f"]["uid"])
cellos = set(df[df["device"] == "c"]["uid"])
print("Flutes:", len(flutes))
print("Cellos:", len(cellos))
len(cellos.intersection(flutes))
# # Generate the TF-IDF matrix
# Create mapping from ip->index for user profiles
# +
ips = np.sort(df['ip'].unique())
ip_to_index = {ip: i for i, ip in enumerate(ips)}
# -
print("Number of IPs in this sample:", len(ips))
# #### Create user profiles i.e. a vector of bytes transferred to/from websites for each tuple:
# **A profile is created for each user, at each building, on every day**
groupingKey = ["uid", "bldgid", "day"]
grouped = df.groupby(by = groupingKey)
len(grouped.groups)
# +
## Big loop, avg ~700it/s on test machine.
# keep track of each groupingKey->row index in the matrix.
key2index = {}
n_ips = len(ips)
n_groups = len(grouped.groups)
print("n_ips", n_ips)
print("n_groups", n_groups)
# Create an empty matrix, one row for each groupingKey, one column for each IP.
tfidf = np.zeros((n_groups, n_ips), dtype=np.float32)
idx = 0
for g in tqdm_notebook(grouped):
# Populate the non-zero elements of current row.
for row in zip(g[1]['ip'], g[1]['bytes']):
tfidf[idx, ip_to_index[row[0]]] = tfidf[idx, ip_to_index[row[0]]] + row[1]
# Create mapping from key to row idx and update idx.
key2index[g[0]] = idx; # g[0] corresponds to groupingKey.
idx = idx + 1;
# -
# Apply log elementwise; out= keeps the zero entries at zero (with where= alone,
# the unselected entries of the freshly allocated output stay uninitialized)
tfidf = np.log(tfidf, out=np.zeros_like(tfidf), where=(tfidf > 0))
tfidf = tfidf.astype(np.float32)
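# A caution on the np.log(..., where=...) pattern, sketched on a toy array:
# without out=, the masked-out entries of the freshly allocated result are
# left uninitialized.

```python
import numpy as np

x = np.array([0.0, 1.0, np.e], dtype=np.float32)
safe = np.log(x, out=np.zeros_like(x), where=(x > 0))
print(safe)  # zero entries stay exactly zero instead of holding garbage
```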
# Apply TF-IDF transformation.
tfidf_transformer = TfidfTransformer(norm='l1')
tfidf = tfidf_transformer.fit_transform(tfidf)
tfidf = tfidf.astype(np.float32) # TfidfTransformer automatically changes to float64, bring it to float32 to save RAM.
tfidf.dtype # it is now float32.
# Some examples of the keys, 3-tuples of (user, building, day).
list(key2index.keys())[:2]
# Convert sparse matrix to dense ndarray for easy processing with numpy.
# keep dense ndarray.
tfidf = tfidf.toarray()
tfidf.shape # Should be (n_groups, n_ips).
# ##### Read the encounters:
# Parquet files already have **day, bucket and prefix** columns.
# Read the list of encounter files, each filename is a building, each building can have multiple prefixes:
encounters_dir = "../Data/Encounters_bldgs_noTSnoAP_day_dur_parquet_FOR_PUBLIC/"
encounter_files = sorted(
[os.path.join(encounters_dir,f)
for f in os.listdir(encounters_dir) if os.path.isfile(os.path.join(encounters_dir,f))])
encounter_files[:10]
# ##### Populate encounter pairs' stats.
# Big loop, runtime ~2 minutes.
tfidf.shape
# +
# %%time
encountered = {} # u1, u2, day, bldg.
# HashMap for Encountered pairs' stats.
enc_cos = defaultdict(list)
# Populate encounter pairs' stats.
for f in tqdm_notebook(encounter_files):
gc.collect()
### INPUT:
enc = pd.read_parquet(f, engine="pyarrow")
# Take the filename only, remove '.br.parquet'.
bldg = int(f.split("/")[-1][:-11])
### PROCESSING:
ff = 0
cc = 0
fc = 0
aa = 0
# Get cosine sim stats for encountered pairs.
for row in zip(
enc["uid1"], enc["uid2"], enc["day"], enc["uid1_device"], enc["uid2_device"]
):
# Unpack the row for readability.
uid1 = row[0]
uid2 = row[1]
day = row[2]
dev1 = row[3]
dev2 = row[4]
# An encounter between u1,u2 in this day and building has already been counted, skip it.
memo_key = str(uid1) + "#" + str(uid2) + "#" + str(day) + "#" + str(bldg)
if memo_key in encountered:
continue
# Determine device types, devType is a number from 0 to 3.
if dev1 == "f" and dev2 == "f": # (flute, flute) case.
devType = 0
ff += 1
elif dev1 == "c" and dev2 == "c": # (cellos, cellos) case.
devType = 1
cc += 1
elif (dev1 == "f" and dev2 == "c") or (
dev1 == "c" and dev2 == "f"
): # (f, c) or (c, f)
devType = 2
fc += 1
else: # One or both devices are neither in cellos nor flutes (any, any), shouldn't happen in sample.
devType = 3
aa += 1
tuple1 = (uid1, bldg, day)
tuple2 = (uid2, bldg, day)
if tuple1 in key2index and tuple2 in key2index:
u1 = tfidf[key2index[tuple1],]
u2 = tfidf[key2index[tuple2],]
enc_cos[(bldg, day, devType)].append(getCosineSim(u1, u2))
encountered[memo_key] = True
# -
all_users = sorted(df["uid"].unique())
uc = len(all_users) # user count.
days = sorted(df["day"].unique())
# ##### Populate non-encountered pairs' stats.
# +
# %%time
# HashMap for All (non-encountered) pairs' stats.
not_enc_cos = defaultdict(list)
# Populate non-encountered pairs' stats.
for f in tqdm_notebook(encounter_files):
# Take the filename only, remove '.br.parquet'.
bldg = int(f.split("/")[-1][:-11])
for day in days:
ss = 0; ll = 0; sl = 0; aa = 0;
for i in range(0, uc):
tuple1 = (all_users[i], bldg, day)
if tuple1 in key2index:
u1 = tfidf[key2index[tuple1], ]
for j in range(i+1, uc):
tuple2 = (all_users[j], bldg, day)
if tuple2 in key2index:
u2 = tfidf[key2index[tuple2], ]
# If u1, u2 have NOT encountered in this day and building
uid1 = all_users[i]
uid2 = all_users[j]
memo_key = str(uid1) + "#" + str(uid2) + "#" + str(day) + "#" + str(bldg)
if memo_key not in encountered:
# Determine device types.
if (uid1 in flutes and uid2 in flutes): # (f, f) case.
devType = 0;
ss += 1
elif (uid1 in cellos and uid2 in cellos): # (c, c) case.
devType = 1;
ll += 1
elif ( (uid1 in flutes and uid2 in cellos) or (uid1 in cellos and uid2 in flutes) ): # (f, c) or (c, f)
devType = 2;
sl += 1
else: # One or both devices are neither in flutes nor cellos.
devType = 3;
aa += 1
# Update the according key.
key = (bldg, day, devType)
not_enc_cos[key].append(getCosineSim(u1, u2))
# -
# ##### Turn both stats into dataframes for ease of filtering:
keys_0 = [key[0] for key in enc_cos.keys()]
keys_1 = [key[1] for key in enc_cos.keys()]
keys_2 = [key[2] for key in enc_cos.keys()]
vals = [val for val in enc_cos.values()]
enc_df = pd.DataFrame({'name': keys_0, 'day': keys_1, 'devType': keys_2, 'cos_sims_list': vals})
enc_df.head()
keys_0 = [key[0] for key in not_enc_cos.keys()]
keys_1 = [key[1] for key in not_enc_cos.keys()]
keys_2 = [key[2] for key in not_enc_cos.keys()]
vals = [val for val in not_enc_cos.values()]
not_enc_df = pd.DataFrame({'name': keys_0, 'day': keys_1, 'devType': keys_2, 'cos_sims_list': vals})
not_enc_df.head()
enc_df.devType.unique()
print("#keys in enc group:", len(enc_cos))
print("#keys in not_enc group:", len(not_enc_cos))
intersect = set(not_enc_cos.keys()).intersection(set(enc_cos.keys()))
print("#keys shared between enc & not_enc:", len(intersect))
# # Compare average cosine similarity:
results = {}
# For every building, day:
for key in intersect:
enc = np.array(enc_cos[key])
nenc = np.array(not_enc_cos[key])
results[key] = (np.mean(enc), np.mean(nenc))
# +
bigger_count_wd = 0
count_wd = 0
bigger_count_we = 0
count_we = 0
for k, v in results.items():
if (k[1] in wd):
count_wd += 1;
elif (k[1] in we):
count_we += 1;
if v[0] >= v[1]:
if (k[1] in wd):
bigger_count_wd += 1
elif (k[1] in we):
bigger_count_we += 1
print("#WEEKDAY keys:", count_wd)
print("%WEEKDAY keys where avg(enc) > avg(nenc):", bigger_count_wd/count_wd)
print("#WEEKEND keys:", count_we)
print("%WEEKEND keys where avg(enc) > avg(nenc):", bigger_count_we/count_we)
# +
# What if we ONLY CONSIDER AN ITEM WHEN THERE ARE > n_min_count values in the cos list.
from collections import defaultdict
count_days_bigger_per_bldg = {}
days_bigger_per_bldg = defaultdict(list)
bigger_count = 0
total_count = 0
for k, v in results.items():
# IGNORE IF NOT ENOUGH SAMPLES.
enc = np.array(enc_cos[k])
nenc = np.array(not_enc_cos[k])
if len(enc) < n_min_count or len(nenc) < n_min_count:
continue;
# IGNORE IF NOT IN OUR WD/WE RANGE.
if k[1] not in wd and k[1] not in we:
continue;
if v[0] >= v[1]:
bigger_count += 1
count_days_bigger_per_bldg[k[0]] = count_days_bigger_per_bldg.get(k[0], 0) + 1;
days_bigger_per_bldg[k[0]].append(k[1])
else:
print(k , len(enc), len(nenc))
total_count += 1;
# -
print(
"Percentage of keys where average similarity of encountered is higher than non-encountered: \n",
bigger_count / total_count,
"\n (this ignores the cases where there are less than n_min_count samples)"
)
# +
bigger_count_wd = 0
count_wd = 0
bigger_count_we = 0
count_we = 0
for k, v in results.items():
enc = np.array(enc_cos[k])
nenc = np.array(not_enc_cos[k])
if len(enc) < n_min_count or len(nenc) < n_min_count:
continue;
# IGNORE IF NOT IN OUR WD/WE RANGE.
if k[1] not in wd and k[1] not in we:
continue;
if (k[1] in wd):
count_wd += 1;
elif (k[1] in we):
count_we += 1;
if v[0] >= v[1]:
if (k[1] in wd):
bigger_count_wd += 1
elif (k[1] in we):
bigger_count_we += 1
print("with both enc/nenc at least n_min_count=" + str(n_min_count) + " samples:")
print("#WEEKDAY keys:", count_wd)
print("%WEEKDAY keys where avg(enc) > avg(nenc):", bigger_count_wd/count_wd)
print("#WEEKEND keys:", count_we)
print("%WEEKEND keys where avg(enc) > avg(nenc):", bigger_count_we/count_we)
# -
# # Separate different devTypes
# (f, f) -> 0, (c, c) -> 1, (f, c) -> 2, at least one device is unclassified -> 3
def getAverageOfList(l):
return sum(l)/len(l)
# +
# %%time
enc_df['avg_cos'] = enc_df.cos_sims_list.apply(getAverageOfList)
not_enc_df['avg_cos'] = not_enc_df.cos_sims_list.apply(getAverageOfList)
enc_df['med_cos'] = enc_df.cos_sims_list.apply(np.median)
not_enc_df['med_cos'] = not_enc_df.cos_sims_list.apply(np.median)
# -
enc_df.head()
ff_enc = enc_df[enc_df.devType==0]
cc_enc = enc_df[enc_df.devType==1]
fc_enc = enc_df[enc_df.devType==2]
aa_enc = enc_df[enc_df.devType==3]
ff_enc.describe()
cc_enc.describe()
fc_enc.describe()
ff_nenc = not_enc_df[not_enc_df.devType==0]
cc_nenc = not_enc_df[not_enc_df.devType==1]
fc_nenc = not_enc_df[not_enc_df.devType==2]
aa_nenc = not_enc_df[not_enc_df.devType==3]
## Valid line styles: '-' | '--' | '-.' | ':' ,'steps'
def cdf_plot(ser,ax=None,figsize=(7,5), label=None, fontSize = 15, lineWidth=2, lineStyle='-', ylabel='CDF'):
print(len(ser))
ser = ser.sort_values()
cum_dist = np.linspace(0.,1.,len(ser))
ser_cdf = pd.Series(cum_dist, index=ser)
ax = ser_cdf.plot(drawstyle='steps',figsize=figsize,yticks=np.arange(0.,1.001,0.1),ax=ax, label=label,
linewidth=lineWidth, linestyle=lineStyle)
    ## Set the tick-label font size on both axes
    ax.tick_params(axis='both', labelsize=fontSize)
ax.set_ylabel(ylabel, fontsize=18)
return ax
def plotEncAndNencAvg(enc, nenc, legend):
ax = cdf_plot(enc.avg_cos)
ax = cdf_plot(nenc.avg_cos, ax = ax)
ax.set_xlim([0, 0.3])
ax.legend(legend)
plt.show()
plotEncAndNencAvg(enc_df, not_enc_df, ["enc", "nenc"]);
plotEncAndNencAvg(ff_enc, ff_nenc, ["ff_enc", "ff_nenc"])
plotEncAndNencAvg(cc_enc, cc_nenc, ["cc_enc", "cc_nenc"])
plotEncAndNencAvg(fc_enc, fc_nenc, ["fc_enc", "fc_nenc"])
def flatten(df, field):
return list(itertools.chain.from_iterable(df[field]))
field = "cos_sims_list"
# Just keep all. --> No difference in getting significant results, either way p-value is <<< 0.05
FF = pd.Series(flatten(ff_enc, field))
CC = pd.Series(flatten(cc_enc, field))
FC = pd.Series(flatten(fc_enc, field))
NN = pd.Series(flatten(not_enc_df, field)) # NOT ENC GROUP.
# NN is big, subsample it.
NN = NN.sample(frac = 0.2, random_state = random_state)
print("FF vs *:")
print(st.mannwhitneyu(FF, CC).pvalue)
print(st.mannwhitneyu(FF, FC).pvalue)
print(st.mannwhitneyu(FF, NN).pvalue)
print("CC vs *:")
print(st.mannwhitneyu(CC, FF).pvalue)
print(st.mannwhitneyu(CC, FC).pvalue)
print(st.mannwhitneyu(CC, NN).pvalue)
print("FC vs *:")
print(st.mannwhitneyu(FC, FF).pvalue)
print(st.mannwhitneyu(FC, CC).pvalue)
print(st.mannwhitneyu(FC, NN).pvalue)
# ## Fig 2, Device Types:
upper = pd.concat([FF, CC, FC, NN]).quantile(q = 0.99)
print(upper)
# +
# %%time
# sns.set(font_scale=1.6)
# sns.set_style("white")
sns.reset_defaults()
sample_size = 1000
ax = cdf_plot(FF.sample(n = sample_size, random_state = random_state), lineStyle='-')
ax = cdf_plot(CC.sample(n = sample_size, random_state = random_state), ax = ax, lineStyle='--')
ax = cdf_plot(FC.sample(n = sample_size, random_state = random_state), ax = ax, lineStyle='-.')
ax = cdf_plot(NN.sample(n = sample_size, random_state = random_state), ax = ax, lineStyle=':',
ylabel = 'Fraction of pairs')
upper_xlim = 0.3;
ax.set_xlim([0, upper_xlim])
ax.legend(["Flute-Flute $(FF)$", "Cello-Cello $(CC)$",
'Flute-Cello $(FC)$', 'Non-encountered $pair$'], fontsize = 16)
# plt.axhspan(0.5, 0.51, facecolor='blue', alpha = 0.5)
# plt.axvspan(FF.quantile(q = 0.5), FF.quantile(q = 0.51), facecolor='gray', alpha = 0.5)
ymax = 0.5
# Color values come from inspection of values pandas plotting uses!
xmax = FF.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, linestyle='-', xmax=xmax/upper_xlim) # set_xlim messes this up, have to scale...!
plt.axvline(x = xmax, color='#1f77b4', ymin=0, ymax=ymax, linestyle='-')
xmax = CC.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, linestyle='-', xmax=xmax/upper_xlim)
plt.axvline(x = xmax, color='#ff7f0e', ymin=0, ymax=ymax, linestyle='--')
xmax = FC.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, linestyle='-', xmax=xmax/upper_xlim)
plt.axvline(x = xmax, color='#2ca02c', ymin=0, ymax=ymax, linestyle='-.')
xmax = NN.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, linestyle='-', xmax=xmax/upper_xlim)
plt.axvline(x = xmax, color='#dc4647', ymin=0, ymax=ymax, linestyle=':')
ax.set_xlabel("Pairwise cosine similarity of traffic profiles", fontsize=18)
plt.tight_layout()
plt.savefig("Fig2_cdf_pairwise_devType_v501.pdf", dpi=160)
plt.show()
# -
def getAngularSimFromCos(cos):
return 1 - ( math.acos(cos - 2.2e-16) / math.pi );
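# Toy values for the mapping above (angular_sim mirrors getAngularSimFromCos;
# the small epsilon guards acos against cosines that round slightly above 1):

```python
import math

def angular_sim(cos):
    return 1 - math.acos(cos - 2.2e-16) / math.pi

print(angular_sim(0.0))  # ~0.5: orthogonal profiles sit halfway
print(angular_sim(1.0))  # ~1.0: identical profiles
```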
# +
# %%time
# Angular version of the above!
sns.reset_defaults()
sample_size = 2000
ax = cdf_plot(FF.sample(n = sample_size, random_state = random_state).apply(getAngularSimFromCos), lineStyle='-')
ax = cdf_plot(CC.sample(n = sample_size, random_state = random_state).apply(getAngularSimFromCos), ax = ax, lineStyle='--')
ax = cdf_plot(FC.sample(n = sample_size, random_state = random_state).apply(getAngularSimFromCos), ax = ax, lineStyle='-.')
ax = cdf_plot(NN.sample(n = sample_size, random_state = random_state).apply(getAngularSimFromCos), ax = ax, lineStyle=':',
ylabel = 'Fraction of pairs')
upper_xlim = getAngularSimFromCos(0.3)
ax.set_xlim([0.5, upper_xlim])
ax.legend(['Flute-Flute', 'Cello-Cello', 'Flute-Cello', 'Non-encountered pair'], prop={'size': 14})
# plt.axhspan(0.5, 0.51, facecolor='blue', alpha = 0.5)
# plt.axvspan(FF.quantile(q = 0.5), FF.quantile(q = 0.51), facecolor='gray', alpha = 0.5)
ymax = 0.5
# Color values come from inspection of values pandas plotting uses!
xmax = getAngularSimFromCos(FF.quantile(q = ymax))
plt.axvline(x = xmax, color='#1f77b4', ymin=0, ymax=ymax, lineStyle='-')
xmax = getAngularSimFromCos(CC.quantile(q = ymax))
plt.axvline(x = xmax, color='#ff7f0e', ymin=0, ymax=ymax, lineStyle='--')
xmax = getAngularSimFromCos(FC.quantile(q = ymax))
plt.axvline(x = xmax, color='#2ca02c', ymin=0, ymax=ymax, lineStyle='-.')
xmax = getAngularSimFromCos(NN.quantile(q = ymax))
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, lineStyle='-', xmax=upper_xlim)
plt.axvline(x = xmax, color='#dc4647', ymin=0, ymax=ymax, lineStyle=':')
ax.set_xlabel("Pairwise angular similarity of traffic profiles", fontsize=18)
plt.tight_layout()
plt.show()
# -
# # Separate Weekdays and Weekends:
# +
wd_enc = enc_df[enc_df['day'].isin(wd)]
wd_nenc = not_enc_df[not_enc_df['day'].isin(wd)]
we_enc = enc_df[enc_df['day'].isin(we)]
we_nenc = not_enc_df[not_enc_df['day'].isin(we)]
# +
field = "cos_sims_list"
print("Weekday FF", len(pd.Series(flatten(wd_enc[wd_enc['devType']==0], field))))
print("Weekday CC", len(pd.Series(flatten(wd_enc[wd_enc['devType']==1], field))))
print("Weekday FC", len(pd.Series(flatten(wd_enc[wd_enc['devType']==2], field))))
print("Weekend FF", len(pd.Series(flatten(we_enc[we_enc['devType']==0], field))))
print("Weekend CC", len(pd.Series(flatten(we_enc[we_enc['devType']==1], field))))
print("Weekend FC", len(pd.Series(flatten(we_enc[we_enc['devType']==2], field))))
# -
field = "cos_sims_list"
flat_wd_enc = pd.Series(flatten(wd_enc, field))
flat_wd_nenc = pd.Series(flatten(wd_nenc, field))
flat_we_enc = pd.Series(flatten(we_enc, field))
flat_we_nenc = pd.Series(flatten(we_nenc, field)) # NOT ENC GROUP.
# wd_nenc is big, subsample it.
flat_wd_nenc = flat_wd_nenc.sample(frac = 0.05, random_state = random_state)
np.median(flat_wd_enc)
np.median(flat_wd_nenc)
np.median(flat_we_enc)
np.median(flat_we_nenc)
len(flat_wd_nenc)
print("flat_wd_enc vs *:")
print(st.mannwhitneyu(flat_wd_enc, flat_wd_nenc).pvalue)
print(st.mannwhitneyu(flat_wd_enc, flat_we_enc).pvalue)
print(st.mannwhitneyu(flat_wd_enc, flat_we_nenc).pvalue)
print("flat_wd_nenc vs *:")
print(st.mannwhitneyu(flat_wd_nenc, flat_wd_enc).pvalue)
print(st.mannwhitneyu(flat_wd_nenc, flat_we_enc).pvalue)
print(st.mannwhitneyu(flat_wd_nenc, flat_we_nenc).pvalue)
print("flat_we_enc vs *:")
print(st.mannwhitneyu(flat_we_enc, flat_wd_enc).pvalue)
print(st.mannwhitneyu(flat_we_enc, flat_wd_nenc).pvalue)
print(st.mannwhitneyu(flat_we_enc, flat_we_nenc).pvalue)
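The Mann-Whitney U test used above is non-parametric: it asks whether values from one sample tend to exceed values from the other, based only on pairwise comparisons. A self-contained sketch of the U statistic itself on hypothetical similarity samples (scipy's `mannwhitneyu`, used in the notebook, additionally converts U to a p-value):

```python
def mann_whitney_u(a, b):
    # U statistic: number of pairs (x, y) with x > y, counting ties as 1/2.
    return sum(0.5 if x == y else float(x > y) for x in a for y in b)

# Hypothetical samples: one group of similarities clearly shifted above the other.
enc = [0.30, 0.25, 0.28, 0.31]
nenc = [0.05, 0.10, 0.08, 0.07]
print(mann_whitney_u(enc, nenc))  # 16.0 -> every enc value exceeds every nenc value
```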
# ## Fig 3, Weekday vs Weekend:
# +
# %%time
sample_size = 2000
ax = cdf_plot(flat_wd_enc.sample(n = sample_size, random_state = random_state), lineStyle='-')
ax = cdf_plot(flat_wd_nenc.sample(n = sample_size, random_state = random_state), ax = ax, lineStyle='--')
ax = cdf_plot(flat_we_enc.sample(n = sample_size, random_state = random_state), ax = ax, lineStyle='-.')
ax = cdf_plot(flat_we_nenc.sample(n = sample_size, random_state = random_state), ax = ax, lineStyle=':',
ylabel = 'Fraction of pairs')
upper_xlim = 0.3
ax.set_xlim([0, upper_xlim])
ax.legend(['Weekday encountered pair', 'Weekday non-encountered pair',
'Weekend encountered pair', 'Weekend non-encountered pair'], prop={'size': 13})
# plt.axhspan(0.5, 0.51, facecolor='blue', alpha = 0.5)
# plt.axvspan(FF.quantile(q = 0.5), FF.quantile(q = 0.51), facecolor='gray', alpha = 0.5)
ymax = 0.5
# Color values come from inspection of values pandas plotting uses!
xmax = flat_wd_enc.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, lineStyle='-', xmax=xmax/upper_xlim) # set_xlim messes this up, have to scale...!
plt.axvline(x = xmax, color='#1f77b4', ymin=0, ymax=ymax, lineStyle='-')
xmax = flat_wd_nenc.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, lineStyle='-', xmax=xmax/upper_xlim) # set_xlim messes this up, have to scale...!
plt.axvline(x = xmax, color='#ff7f0e', ymin=0, ymax=ymax, lineStyle='--')
xmax = flat_we_enc.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, lineStyle='-', xmax=xmax/upper_xlim) # set_xlim messes this up, have to scale...!
plt.axvline(x = xmax, color='#2ca02c', ymin=0, ymax=ymax, lineStyle='-.')
xmax = flat_we_nenc.quantile(q = ymax)
plt.axhline(y = ymax, color='#D0D0D0', xmin=0, lineStyle='-', xmax=xmax/upper_xlim) # set_xlim messes this up, have to scale...!
plt.axvline(x = xmax, color='#dc4647', ymin=0, ymax=ymax, lineStyle=':')
ax.set_xlabel("Pairwise cosine similarity of traffic profiles", fontsize=18)
plt.tight_layout()
plt.savefig("Fig3_cdf_pairwise_wdwe_v501.pdf", dpi=160)
plt.show()
# -
# # Separate Encounter durations
# Re-populate the encountered pairs' stats, this time creating encounter-duration buckets.
# The non-encountered pairs' stats do not need updating, since the not_enc group has no duration category.
# +
encountered = {} # u1, u2, day, bldg.
# HashMap for Encountered pairs' stats.
enc_cos = defaultdict(list)
# Populate encounter pairs' stats.
for f in tqdm_notebook(encounter_files):
gc.collect()
### INPUT:
enc = pd.read_parquet(f, engine="pyarrow")
# Take the filename only, remove '.br.parquet'.
bldg = int(f.split("/")[-1][:-11])
### PROCESSING:
# devType counters
ff = 0
cc = 0
fc = 0
aa = 0
# Duration counters
short = 0
med = 0
long = 0
# Get cosine sim stats for encountered pairs.
for row in zip(
enc["uid1"],
enc["uid2"],
enc["day"],
enc["uid1_device"],
enc["uid2_device"],
enc["dur"],
):
# Unpack the row for readability.
uid1 = row[0]
uid2 = row[1]
day = row[2]
dev1 = row[3]
dev2 = row[4]
dur = row[5]
# An encounter between u1,u2 in this day and building has already been counted, skip it.
memo_key = str(uid1) + "#" + str(uid2) + "#" + str(day) + "#" + str(bldg)
if memo_key in encountered:
continue
# Determine device types, devType is a number from 0 to 3.
if dev1 == "f" and dev2 == "f": # (flute, flute) case.
devType = 0
ff += 1
elif dev1 == "c" and dev2 == "c": # (cellos, cellos) case.
devType = 1
cc += 1
elif (dev1 == "f" and dev2 == "c") or (
dev1 == "c" and dev2 == "f"
): # (f, c) or (c, f)
devType = 2
fc += 1
else: # One or both devices are neither in cellos nor flutes (any, any), shouldn't happen in sample.
devType = 3
aa += 1
# Determine duration category, the cut-offs are based on separate analysis of encounter durations.
        if dur < 38:
            durCat = 0  # Short encounter.
            short += 1
        elif dur < 317:
            durCat = 1  # Medium encounter.
            med += 1
        else:
            durCat = 2  # Long encounter.
            long += 1
tuple1 = (uid1, bldg, day)
tuple2 = (uid2, bldg, day)
if tuple1 in key2index and tuple2 in key2index:
u1 = tfidf[key2index[tuple1],]
u2 = tfidf[key2index[tuple2],]
enc_cos[(bldg, day, devType, durCat)].append(getCosineSim(u1, u2))
encountered[memo_key] = True
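`getCosineSim` is defined earlier in the notebook and operates on TF-IDF rows; for dense vectors it reduces to the usual dot-product formula. A minimal sketch under that dense, non-zero-vector assumption (the name is illustrative, not the notebook's actual implementation):

```python
import math

def cosine_sim(u, v):
    # cos(u, v) = u.v / (|u| |v|); assumes non-zero dense vectors of equal length.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

print(cosine_sim([1, 0], [1, 0]))            # 1.0 -> identical direction
print(round(cosine_sim([1, 0], [0, 1]), 6))  # 0.0 -> orthogonal profiles
```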
# +
keys_name = [key[0] for key in enc_cos.keys()]
keys_day = [key[1] for key in enc_cos.keys()]
keys_devType = [key[2] for key in enc_cos.keys()]
keys_durCat = [key[3] for key in enc_cos.keys()]
vals = [val for val in enc_cos.values()]
enc_df = pd.DataFrame(
{
"name": keys_name,
"day": keys_day,
"devType": keys_devType,
"durCat": keys_durCat,
"cos_sims_list": vals,
}
)
enc_df['avg_cos'] = enc_df.cos_sims_list.apply(getAverageOfList)
enc_df['med_cos'] = enc_df.cos_sims_list.apply(np.median)
enc_df.head()
# -
not_enc_df.head()
# ###### Separate the different duration categories:
s_enc = enc_df[enc_df.durCat==0]
m_enc = enc_df[enc_df.durCat==1]
l_enc = enc_df[enc_df.durCat==2]
print("short, medium, long counts:")
print(len(s_enc), len(m_enc), len(l_enc), sep=",")
# Let's plot the averages:
ax = cdf_plot(s_enc.avg_cos)
ax = cdf_plot(m_enc.avg_cos, ax = ax)
ax = cdf_plot(l_enc.avg_cos, ax = ax)
ax = cdf_plot(not_enc_df.avg_cos, ax = ax, lineStyle='--', ylabel = 'Fraction of pairs')
ax.set_xlim([0, 0.3])
ax.set_xlabel("Average cosine similarity of pairs across all buildings and days")
ax.legend(["Short encounter", "Medium encounter", "Long encounter", "Non-encountered pair"])
plt.tight_layout()
plt.show()
# One could also look at individual buildings:
ax = cdf_plot(s_enc[s_enc.name==17].avg_cos)
ax = cdf_plot(m_enc[m_enc.name==17].avg_cos, ax = ax)
ax = cdf_plot(l_enc[l_enc.name==17].avg_cos, ax = ax)
ax = cdf_plot(not_enc_df[not_enc_df.name==17].avg_cos, ax = ax, lineStyle='--')
ax.set_xlim([0, 0.35])
ax.set_xlabel("Average cosine similarity of pairs in a big building across all days")
ax.legend(["short_enc", "med_enc", "long_enc", "nenc"])
plt.show()
# Flatten the list of similarities and analyze them:
# Define groups of encounter duration.
N = 5000
short = pd.Series(flatten(s_enc, 'cos_sims_list')).sample(n = N, random_state = random_state)
medium = pd.Series(flatten(m_enc, 'cos_sims_list')).sample(n = N, random_state = random_state)
long = pd.Series(flatten(l_enc, 'cos_sims_list')).sample(n = N, random_state = random_state)
nenc = pd.Series(flatten(not_enc_df, 'cos_sims_list')).sample(n = N, random_state = random_state)
print("short vs *:")
print(st.mannwhitneyu(short, medium).pvalue)
print(st.mannwhitneyu(short, long).pvalue)
print(st.mannwhitneyu(short, nenc).pvalue)
print("medium vs *:")
print(st.mannwhitneyu(medium, short).pvalue)
print(st.mannwhitneyu(medium, long).pvalue)
print(st.mannwhitneyu(medium, nenc).pvalue)
print("long vs *:")
print(st.mannwhitneyu(long, short).pvalue)
print(st.mannwhitneyu(long, medium).pvalue)
print(st.mannwhitneyu(long, nenc).pvalue)
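Since many pairwise tests are run in this section, a multiple-comparison correction is worth considering before reading the raw p-values. A Bonferroni sketch (an illustration only; the notebook itself reports uncorrected p-values):

```python
def bonferroni(pvals, alpha=0.05):
    # Adjust each p-value for the number of tests performed, capped at 1.0.
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

raw = [0.001, 0.02, 0.04]  # hypothetical raw p-values from 3 tests
print(bonferroni(raw))
```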
# ## Fig 4, Encounter Duration:
# +
# %%time
ax = cdf_plot(short, lineStyle="-")
ax = cdf_plot(medium, ax=ax, lineStyle="--")
ax = cdf_plot(long, ax=ax, lineStyle="-.")
ax = cdf_plot(nenc, ax=ax, lineStyle=":", ylabel="Fraction of pairs")
upper_xlim = 0.3
ax.set_xlim([0, upper_xlim])
ax.legend(
["Short encounter", "Medium encounter", "Long encounter", "Non-encountered pair"],
fontsize=15,
)
# plt.axhspan(0.5, 0.51, facecolor='blue', alpha = 0.5)
# plt.axvspan(FF.quantile(q = 0.5), FF.quantile(q = 0.51), facecolor='gray', alpha = 0.5)
ymax = 0.5
# Color values come from inspection of values pandas plotting uses!
xmax = short.quantile(q=ymax)
plt.axhline(
y=ymax, color="#D0D0D0", xmin=0, lineStyle="-", xmax=xmax / upper_xlim
) # set_xlim messes this up, have to scale...!
plt.axvline(x=xmax, color="#1f77b4", ymin=0, ymax=ymax, lineStyle="-")
xmax = medium.quantile(q=ymax)
plt.axhline(
y=ymax, color="#D0D0D0", xmin=0, lineStyle="-", xmax=xmax / upper_xlim
) # set_xlim messes this up, have to scale...!
plt.axvline(x=xmax, color="#ff7f0e", ymin=0, ymax=ymax, lineStyle="--")
xmax = long.quantile(q=ymax)
plt.axhline(
y=ymax, color="#D0D0D0", xmin=0, lineStyle="-", xmax=xmax / upper_xlim
) # set_xlim messes this up, have to scale...!
plt.axvline(x=xmax, color="#2ca02c", ymin=0, ymax=ymax, lineStyle="-.")
xmax = nenc.quantile(q=ymax)
plt.axhline(
y=ymax, color="#D0D0D0", xmin=0, lineStyle="-", xmax=xmax / upper_xlim
) # set_xlim messes this up, have to scale...!
plt.axvline(x=xmax, color="#dc4647", ymin=0, ymax=ymax, lineStyle=":")
ax.set_xlabel("Pairwise cosine similarity of traffic profiles", fontsize=18)
plt.tight_layout()
plt.savefig("Fig4_cdf_pairwise_encDurCat_v601.pdf", dpi=160)
plt.show()
# -
# ###### Mannwhitney U test of short_enc/med_enc/long_enc vs nenc on same (bldg, day):
bldgs = sorted( set(enc_df.name).intersection(set(not_enc_df.name)) )
days = sorted( set(enc_df.day).intersection(set(not_enc_df.day)) )
# +
not_enough_samples = [0, 0, 0, 0] # short, med, long, nenc.
total = len(bldgs) * len(days)
mann_bldgs = []
mann_days = []
mann_durCat = [] # 0, 1, 2 for short, med, long.
# pval of comparison vs nenc.
mann_pval = []
for bldg in tqdm_notebook(bldgs):
for day in days:
short = list(itertools.chain.from_iterable(s_enc[( (s_enc.name==bldg) & (s_enc.day==day) )]['cos_sims_list']))
med = list(itertools.chain.from_iterable(m_enc[( (m_enc.name==bldg) & (m_enc.day==day) )]['cos_sims_list']))
long = list(itertools.chain.from_iterable(l_enc[( (l_enc.name==bldg) & (l_enc.day==day) )]['cos_sims_list']))
nenc = list(itertools.chain.from_iterable(not_enc_df[( (not_enc_df.name==bldg) & (not_enc_df.day==day) )]['cos_sims_list']))
if (len(nenc) < n_min_count):
not_enough_samples[3] += 1;
continue;
#
if (len(short) < n_min_count):
not_enough_samples[0] += 1
else:
mann_bldgs.append(bldg)
mann_days.append(day)
mann_durCat.append(0)
t, p = st.mannwhitneyu(short, nenc)
mann_pval.append(p)
#
if (len(med) < n_min_count):
not_enough_samples[1] += 1
else:
mann_bldgs.append(bldg)
mann_days.append(day)
mann_durCat.append(1)
t, p = st.mannwhitneyu(med, nenc)
mann_pval.append(p)
#
if (len(long) < n_min_count):
not_enough_samples[2] += 1
else:
mann_bldgs.append(bldg)
mann_days.append(day)
mann_durCat.append(2)
t, p = st.mannwhitneyu(long, nenc)
mann_pval.append(p)
print("total, short, med, long, nenc not enough samples : ")
print(total, not_enough_samples, sep=",")
print("in % of total with not enough samples: ")
print(np.array(not_enough_samples)/total*100)
# -
comparison_df = pd.DataFrame(
{"bldg": mann_bldgs, "day": mann_days, "durCat": mann_durCat, "pval": mann_pval}
)
total = len(comparison_df)
significance_level = 0.05
print(
"%Significant pval of all (bldg, day, durCat):",
len(comparison_df[comparison_df.pval < significance_level]) / total * 100,
"%",
)
tmp = comparison_df[comparison_df.durCat==0]
print("%Significant pval of all (bldg, day, durCat==0 or short) tuples:",
len(tmp[tmp.pval < significance_level])/len(tmp)*100,"%")
tmp = comparison_df[comparison_df.durCat==1]
print("%Significant pval of all (bldg, day, durCat==1 or med) tuples:",
len(tmp[tmp.pval < significance_level])/len(tmp)*100,"%")
tmp = comparison_df[comparison_df.durCat==2]
print("%Significant pval of all (bldg, day, durCat==2 or long) tuples:",
len(tmp[tmp.pval < significance_level])/len(tmp)*100,"%")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Exception handling
#Syntax
#Runtime error
x = 20
if x==20
print('The value of x is', x)
# -
#Runtime error
print(20/0)
print(10/'a')
x = int(input('Enter some number'))
print(x)
# +
#What is an exception?
#An unwanted and unexpected event which is going to disturb our regular flow of the program
# +
#What is the meaning of exception handling?
#Defining alternative way to continue rest of the program normally
# -
# Pseudocode (not runnable Python), illustrating the idea of an alternative path:
try:
    read data from the Nagpur server
except:
    read data from GGN
# +
#If exception handling is not there, Python terminates the program abnormally and prints the exception error message
# -
print('Hello')
print(20/0)
print('Hi')
print('Hello')
try:
print(20/3)
except:
    print('Error occurred')
print('Hi')
# # Python Exception Hierarchy
# 
try:
statement1
statement2
statement3
except:
statement4
statement5
# +
#Case1 - Statement1, statement2, Statement3, Statement5
# +
#Case2 - Statement1, Statement4, Statement5
# +
#Case3 - Statement1 , Statement2, Statement4 (something wrong happened in statement4 as well)
#Abnormal termination
# -
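The cases above can be verified by running a try block whose failure point is controlled by a flag; a small sketch that records which statements execute:

```python
def run(fail_in_try):
    # Trace which "statements" execute, matching Case1/Case2 above.
    trace = []
    try:
        trace.append('statement1')
        if fail_in_try:
            raise ValueError('simulated failure')
        trace.append('statement2')
        trace.append('statement3')
    except ValueError:
        trace.append('statement4')
    trace.append('statement5')
    return trace

print(run(False))  # Case1: ['statement1', 'statement2', 'statement3', 'statement5']
print(run(True))   # Case2: ['statement1', 'statement4', 'statement5']
```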
# 
try:
print('Outer Try Block')
try:
print('Inner Try Block')
print(20/0)
except:
        print('Error occurred')
print('Inner Except Block')
print('Statement2')
except:
print('Outer Except block')
print('Statement 3')
try:
print('statement1')
print(10/2)
except:
print('Statement2')
finally:
print('Statement3')
try:
statement1
statement2
statement3
try:
statement4
statement5
statement6
except:
statement7
finally:
statement8
statement9
print()
except:
statement10
finally:
statement11
statement12
# +
#Case1 - When there will not be any exception what all statements will execute?
#statement 1,2,3,4,5,6,8,9,11,12, Normal
# +
#Case2 - If an exception raised at statement2 and corresponding except block matched
#Statement 1,10,11,12, Normal
# +
#Case3 - If an exception raised at statement2 and corresponding except block not matched
#statement1,11 and abnormal termination
# +
#Case4 - If an exception raised at statement5 and corresponding except block matched
#1,2,3,4,7,8,9,11,12 - Normal
# +
#Case5 - If an exception raised at statement5 and corresponding except block not matched but outer except block matched
#statement 1,2,3,4,8,10,11,12 and normal termination
# +
#Case6 - If an exception raised at statement5 and corresponding inner and outer except blocks not matched
#Statement 1,2,3,4,8,11 - Abnormal termination
# +
#Case7 - If an exception raised at statement7 and corresponding except block matched
#Statement 1,2,3,(4,5,6),8,10,11,12 and normal termination
# +
#Case8 - If an exception at statement7 and corresponding except block not matched
#Statement 1,2,3,8,11 and abnormal termination
# +
#Case9 - If an exception at statement8 and corresponding except block matched
#Statement 1,2,3,4(MMT),5(MMT),6(MMT),7(MMT),8,10,11,12 and normal termination
# +
#Case10 - If an exception at statement8 and corresponding except block not matched
#Statement 1,2,3,4(MMT),5(MMT),6(MMT),7(MMT),8,11 and abnormal termination
# +
#Case11 - If an exception at statement9 and corresponding except block matched
#Statement 1,2,3,4(mmt),5(mmt),6(mmt),7(mmt), 8, 10,11,12 and normal termination
# +
#Case12 - If an exception at statement9 and corresponding except block not matched
#Statement 1,2,3,4(mmt),5(mmt),6(mmt),7(mmt), 8, 10,11 and abnormal termination
# +
#Case13 - If an exception raised in statement10
#Statement11 and abnormal termination
# +
#Case14 - If an exception raised in statement11 or statement12
#Abnormal termination
# -
#else
try:
risky code
except:
Will executed if exception will occur in try
else:
Will executed if no exception inside the try
finally:
always run
try-except-else-finally
try:
print('try')
print(10/2)
except:
print('Cannot divide by 0')
else:
print('Calculation completed successfully')
finally:
print('The job ends')
try:
print('Try')
finally:
print('finally')
#Case 1
try:
print('try')
except:
print('Except')
finally:
print('Finally')
#Case 2
#Without try only except block is invalid
except:
print('except')
#Case 3
else:
print('Hello')
#Case 4
finally:
print('Finally')
#Case 5
try:
print('Hi')
except:
print('Hello')
#case 6
try:
print('Hello')
print(10/0)
finally:
print('Hi')
#Try should have at least an except or finally block
#Case 7
try:
print('try')
except:
print('except')
else:
print('else')
#Case 8
try:
print('try')
else:
print('else')
#Case 9
try:
print('try')
else:
print('else')
finally:
print('finally')
#Case 10
try:
print('try')
print(10/'a')
except ZeroDivisionError:
print('Except1')
except TypeError:
print('Except2')
#Case 11
try:
print('try')
except ZeroDivisionError:
print('Except1')
elif:
print('Else1')
else:
print('Else2')
#Case 12
try:
print('try')
except:
print('Except')
finally:
print('F1')
finally:
print('F2')
10/0
#Case 13
try:
print('try')
print('Hello')
except:
print('Except')
#Cannot write any statement between try and except
#Case 14
try:
print('try')
except ZeroDivisionError:
print('Except1')
print('Hello')
except TypeError:
print('Except2')
#Case 15
try:
print('try')
except ZeroDivisionError:
print('Except1')
print('Hello')
finally:
print('Finally')
#Case 16
try:
print('try')
except ZeroDivisionError:
print('Except1')
print('Hello')
else:
print('Else')
#Case 17
try:
print('try1')
except:
print('Except1')
try:
print('Try2')
except:
print('Except2')
#Case 18
try:
print('Try')
except:
print('except')
try:
print('try')
print(10/0)
finally:
print('Finally')
# +
#Case 19
try:
print('try')
except:
print('Except1')
if 10>20:
print('if')
else:
print('else')
# -
#Case 20
try:
print('try')
try:
print('Inner try')
except:
print('Inner except')
finally:
print('Inner finally')
except:
print('Except')
#Case 21
try:
print('try')
except:
print('Except')
try:
print('Inner Try')
except:
print('Inner Except')
finally:
print('Inner finally')
#Case 22
try:
print('try')
except:
print('Except')
finally:
print('Finally')
try:
print('Inner try')
except:
print('Inner Except')
finally:
print('Inner Finally')
#Case 23
try:
print('try')
except:
print('Except')
try:
print('Try')
else:
print('Else')
#Case 24:
try:
print('try')
try:
print('Inner Try')
except:
print('Except')
finally:
print('Finally')
#Case 25
try:
print('Try')
try:
print('Inner try')
except:
print('Except')
finally:
print('Finally')
#Case 26
try:
print('try')
except:
print('Except')
else:
print('Else')
try:
print('Try')
except:
print('Except')
finally:
print('Finally')
num1=10
num2=0
try:
print("first no is {}".format(num1))
print("first no is {}".format(num2))
try:
print('hi')
result = num1 / num2
print("Result is {}".format(result))
except ZeroDivisionError:
print("Division by zero is error !!")
finally:
print("Done with inner try and catch")
print("Done")
except SyntaxError:
    # Note: bad user input raises ValueError, not SyntaxError; a SyntaxError in
    # the source itself cannot be caught at runtime by this handler.
    print("Comma is missing. Enter numbers separated by comma like this 1, 2")
finally:
print("This will execute no matter what")
print("finally Doneeeeeee")
# +
#Types of exceptions
#Predefined Exceptions / Inbuilt exceptions
#User defined exceptions / Customized exceptions
# -
10/'a'
#User defined functions
#Bank ATM example
#18ADC54355443FGHF54566
#18ADC54355443FGHF54566
#Adult Movie
# +
class TooYoungException(Exception):
def __init__(self,arg):
self.msg = arg
class TooOldException(Exception):
def __init__(self,arg):
self.msg = arg
age = int(input('Enter Age'))
if age >60:
    raise TooOldException('Please do not watch the movie, else it will be your last movie')
elif age < 18:
raise TooYoungException('You are too young to see this movie')
else:
print('Your ticket is confirmed')
# -
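Raising, as above, terminates the program if nothing catches the exception; custom exceptions become useful when callers catch them by name. A sketch reusing the same class shape (the `book_ticket` helper is illustrative, not part of the original example):

```python
class TooYoungException(Exception):
    def __init__(self, arg):
        self.msg = arg

def book_ticket(age):
    # Hypothetical helper: raise the custom exception instead of failing silently.
    if age < 18:
        raise TooYoungException('You are too young to see this movie')
    return 'Your ticket is confirmed'

try:
    print(book_ticket(12))
except TooYoungException as e:
    print(e.msg)  # the caller sees the message instead of a traceback
```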
result = None
y = int(input('Number 2:'))
try:
x = int(input('Number 1:'))
result = x/y
except:
print('Error in your code')
print('Result', result)
# +
result = None
try:
x = int(input('Number 1:'))
y = int(input('Number 2:'))
result = x/y
except ZeroDivisionError as e:
print(type(e))
print('Error in your code')
except ValueError as e:
print(type(e))
print('Result', result)
# +
result = None
try:
x = int(input('Number 1:'))
y = int(input('Number 2:'))
result = x/y
except Exception as e:
print(type(e))
print('Error in your code')
print('Result', result)
# +
class TooYoungException(Exception):
def __init__(self,arg):
self.msg = arg
class TooOldException(Exception):
def __init__(self,arg):
self.msg = arg
age = int(input('Enter Age'))
if age >60:
    raise TooOldException('Please do not watch the movie, else it will be your last movie')
elif age < 18:
raise TooYoungException('You are too young to see this movie')
else:
print('Your ticket is confirmed')
# +
result18 = None
try:
Age = int(input('Enter your age:'))
result18 < 1
10/0
except Exception as e:
print(type(e))
print('Error in your code')
print('Result', result)
# -
class TeaException(Exception):
def __init__(self,arg):
self.msg = arg
temp=1
if temp==0:
raise TeaException('Its too cold to drink')
else:
print('You can have your tea')
#Logging priority Levels
#CRITICAL - 50 - Serious problem that needs high attention
#ERROR - 40 - Problem is serious but not that much critical
#WARNING - 30 - Caution must be required , alert to the programmer
#INFO - 20 - Message with some important information
#DEBUG - 10 - With debugging information
#NOTSET - 0 - Logging level not set
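The numbers listed above are the actual integer constants defined by the standard-library `logging` module, which can be checked directly:

```python
import logging

# Each priority level is exposed as an integer constant on the module.
for name in ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'):
    print(name, getattr(logging, name))
# CRITICAL 50, ERROR 40, WARNING 30, INFO 20, DEBUG 10, NOTSET 0
```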
#How to implement
#Need to create the file to store the messages
#Level messages
import logging
logging.basicConfig(filename = 'log.txt', level = logging.DEBUG)
print('Python Logging Demo')
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
import os
os.chdir(r'C:\Users\Admin\Desktop\LoggingDemo')
os.getcwd()
logging.basicConfig(filename = 'log.txt', level = logging.DEBUG)
logging.info('A new request came')
try:
x = int(input('Enter first number'))
y = int(input('Enter second number'))
print(x/y)
except ZeroDivisionError as e:
print('Cannot divide with zero')
logging.exception(e)
logging.critical('This is a critical error')
except ValueError as e:
print('Enter only int value')
logging.exception(e)
logging.info('Request processing completed')
# +
#Debugging
...
...
...
...
..
..
...
..
..
...
....
..
print('x')
..
print('x')
...
..
..
# +
x = 10
if x>10:
print(x*0.25)
elif x<10:
print(x*0.75)
else:
# -
#assert statements
#Types of assert statement
#simple assert
#very simple (Augmented version)
x = 10
x=9
assert x == 10, 'value changed'
print(x)
# +
# Deliberate bug (x**x instead of x**2): the asserts below are meant to catch it
def square(x):
    return x**x
assert square(2) == 4, 'The square of 2 should be 4'
assert square(3) == 9, 'The square of 3 should be 9'
assert square(4) == 16, 'The square of 4 should be 16'
print(square(2))
print(square(3))
print(square(4))
# -
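For contrast, a corrected version in which every assert passes (a sketch; the buggy `x**x` above is the point of the demo):

```python
def square(x):
    return x ** 2  # fixed: the exponent is 2, not x

# With the fix in place, all three assertions hold.
assert square(2) == 4, 'The square of 2 should be 4'
assert square(3) == 9, 'The square of 3 should be 9'
assert square(4) == 16, 'The square of 4 should be 16'
print('all assertions passed')
```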
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 2 - Parallel Data Processing with Task Dependencies
#
# **GOAL:** The goal of this exercise is to show how to pass object IDs into remote functions to encode dependencies between tasks.
#
# In this exercise, we construct a sequence of tasks, each of which depends on the previous one, mimicking a data-parallel application. Within each sequence, tasks are executed serially, but multiple sequences can be executed in parallel.
#
# In this exercise, you will use Ray to parallelize the computation below and speed it up.
#
# ### Concept for this Exercise - Task Dependencies
#
# Suppose we have two remote functions defined as follows.
#
# ```python
# @ray.remote
# def f(x):
# return x
# ```
#
# Arguments can be passed into remote functions as usual.
#
# ```python
# >>> x1_id = f.remote(1)
# >>> ray.get(x1_id)
# 1
#
# >>> x2_id = f.remote([1, 2, 3])
# >>> ray.get(x2_id)
# [1, 2, 3]
# ```
#
# **Object IDs** can also be passed into remote functions. When the function actually gets executed, **the argument will be retrieved as a regular Python object**.
#
# ```python
# >>> y1_id = f.remote(x1_id)
# >>> ray.get(y1_id)
# 1
#
# >>> y2_id = f.remote(x2_id)
# >>> ray.get(y2_id)
# [1, 2, 3]
# ```
#
# So when implementing a remote function, the function should expect a regular Python object regardless of whether the caller passes in a regular Python object or an object ID.
#
# **Task dependencies affect scheduling.** In the example above, the task that creates `y1_id` depends on the task that creates `x1_id`. This has the following implications.
#
# - The second task will not be executed until the first task has finished executing.
# - If the two tasks are scheduled on different machines, the output of the first task (the value corresponding to `x1_id`) will be copied over the network to the machine where the second task is scheduled.
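Outside Ray, the same chaining pattern can be mimicked with standard-library futures. This is a rough analogy only: futures do not give Ray's automatic dependency scheduling or object store, so each stage must wait on the previous result explicitly.

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x

with ThreadPoolExecutor() as pool:
    x1 = pool.submit(f, 1)            # analogous to x1_id = f.remote(1)
    # Unlike Ray, we must pass the resolved *value*; Ray would accept the ID
    # and resolve it to the value before the task runs.
    y1 = pool.submit(f, x1.result())
    print(y1.result())  # 1
```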
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import ray
import time
# -
ray.init(num_cpus=4, redirect_output=True)
# These are some helper functions that mimic an example pattern of a data parallel application.
#
# **EXERCISE:** You will need to turn these functions into remote functions. When you turn these functions into remote function, you do not have to worry about whether the caller passes in an object ID or a regular object. In both cases, the arguments will be regular objects when the function executes. This means that even if you pass in an object ID, you **do not need to call `ray.get`** inside of these remote functions.
# +
@ray.remote
def load_data(filename):
time.sleep(0.1)
return np.ones((1000, 100))
@ray.remote
def normalize_data(data):
time.sleep(0.1)
return data - np.mean(data, axis=0)
@ray.remote
def extract_features(normalized_data):
time.sleep(0.1)
return np.hstack([normalized_data, normalized_data ** 2])
@ray.remote
def compute_loss(features):
num_data, dim = features.shape
time.sleep(0.1)
return np.sum((np.dot(features, np.ones(dim)) - np.ones(num_data)) ** 2)
# -
# **EXERCISE:** The loop below takes too long. Parallelize the four passes through the loop by turning `load_data`, `normalize_data`, `extract_features`, and `compute_loss` into remote functions and then retrieving the losses with `ray.get`.
#
# **NOTE:** You should only use **ONE** call to `ray.get`. For example, the object ID returned by `load_data` should be passed directly into `normalize_data` without needing to be retrieved by the driver.
# +
# Sleep a little to improve the accuracy of the timing measurements below.
time.sleep(2.0)
start_time = time.time()
losses = []
for filename in ['file1', 'file2', 'file3', 'file4']:
data = load_data.remote(filename)
normalized_data = normalize_data.remote(data)
features = extract_features.remote(normalized_data)
loss = compute_loss.remote(features)
losses.append(loss)
loss = sum(ray.get(losses))
end_time = time.time()
duration = end_time - start_time
# -
# **VERIFY:** Run some checks to verify that the changes you made to the code were correct. Some of the checks should fail when you initially run the cells. After completing the exercises, the checks should pass.
# +
assert loss == 4000
assert duration < 0.8, ('The loop took {} seconds. This is too slow.'
.format(duration))
assert duration > 0.4, ('The loop took {} seconds. This is too fast.'
.format(duration))
print('Success! The example took {} seconds.'.format(duration))
# -
# **EXERCISE:** Use the UI to view the task timeline and to verify that the relevant tasks were executed in parallel. After running the cell below, you'll need to click on **View task timeline**.
# - Using the **second** button, you can click and drag to **move** the timeline.
# - Using the **third** button, you can click and drag to **zoom**. You can also zoom by holding "alt" and scrolling.
#
# In the timeline, click on **View Options** and select **Flow Events** to visualize tasks dependencies.
import ray.experimental.ui as ui
ui.task_timeline()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Nearest neighbor for spine injury classification
# In this homework notebook we use **nearest neighbor classification** to classify back injuries for patients in a hospital, based on measurements of the shape and orientation of their pelvis and spine.
#
# The data set contains information from **310** patients. For each patient, there are six measurements (the x) and a label (the y). The label has **3** possible values, `’NO’` (normal), `’DH’` (herniated disk), or `’SL’` (spondylolisthesis).
# **Note:** Before attempting this homework, please go through the <font color="magenta">*Nearest neighbor for handwritten digit recognition*</font> notebook.
# # 1. Setup notebook
# We import all necessary packages for the homework. Notice that we do **NOT** import any of the `sklearn` packages. This is because we want you to implement a nearest neighbor classifier **manually**, as in the <font color="magenta">*Nearest neighbor for handwritten digit recognition*</font> notebook.
#
import numpy as np
# We now load the dataset. We divide the data into a training set of 248 patients and a separate test set of 62 patients. The following arrays are created:
#
# * **`trainx`** : The training data's features, one point per row.
# * **`trainy`** : The training data's labels.
# * **`testx`** : The test data's features, one point per row.
# * **`testy`** : The test data's labels.
#
# We will use the training set (`trainx` and `trainy`), with nearest neighbor classification, to predict labels for the test data (`testx`). We will then compare these predictions with the correct labels, `testy`.
# Notice that we code the three labels as `0. = ’NO’, 1. = ’DH’, 2. = ’SL’`.
# +
# Load data set and code labels as 0 = ’NO’, 1 = ’DH’, 2 = ’SL’
labels = [b'NO', b'DH', b'SL']
data = np.loadtxt('column_3C.dat', converters={6: lambda s: labels.index(s)} )
# Separate features from labels
x = data[:,0:6]
y = data[:,6]
# Divide into training and test set
training_indices = list(range(0,20)) + list(range(40,188)) + list(range(230,310))
test_indices = list(range(20,40)) + list(range(188,230))
trainx = x[training_indices,:]
trainy = y[training_indices]
testx = x[test_indices,:]
testy = y[test_indices]
# -
# ## 2. Nearest neighbor classification with L2 distance
# In this exercise we will build a nearest neighbor classifier based on L2 (*Euclidean*) distance.
#
# <font color="magenta">**For you to do:**</font> Write a function, **NN_L2**, which takes as input the training data (`trainx` and `trainy`) and the test points (`testx`) and predicts labels for these test points using 1-NN classification. These labels should be returned in a `numpy` array with one entry per test point. For **NN_L2**, the L2 norm should be used as the distance metric.
#
# <font style="color:blue"> **Code**</font>
# ```python
# # test function
# testy_L2 = NN_L2(trainx, trainy, testx)
# print( type( testy_L2) )
# print( len(testy_L2) )
# print( testy_L2[40:50] )
# ```
#
# <font style="color:magenta"> **Output**</font>
# ```
# <class 'numpy.ndarray'>
# 62
# [ 2. 2. 1. 0. 0. 0. 0. 0. 0. 0.]
# ```
#
# +
# Squared L2 (Euclidean) distance; monotonic in the true L2 distance, so the argmin (nearest neighbor) is unchanged
def eucl_dist(x,y):
return np.sum(np.square(x-y))
# 1-NN classifier
def NN_L2(trainx, trainy, testx):
# inputs: trainx, trainy, testx <-- as defined above
# output: an np.array of the predicted values for testy
predict = list()
def find_NN(x):
# Compute distances from x to every row in train_data
distances = [eucl_dist(x,trainx[i,]) for i in range(len(trainy))]
# Get the index of the smallest distance
return np.argmin(distances)
for x_test in testx:
index = find_NN(x_test)
predict.append(trainy[index])
return np.array(predict)
# -
# After you are done, run the cell below to check your function. If an error is triggered, you should go back and revise your function.
# +
testy_L2 = NN_L2(trainx, trainy, testx)
assert( type( testy_L2).__name__ == 'ndarray' )
assert( len(testy_L2) == 62 )
assert( np.all( testy_L2[50:60] == [ 0., 0., 0., 0., 2., 0., 2., 0., 0., 0.] ) )
assert( np.all( testy_L2[0:10] == [ 0., 0., 0., 1., 1., 0., 1., 0., 0., 1.] ) )
# -
# # 3. Nearest neighbor classification with L1 distance
# We now compute nearest neighbors using the L1 distance (sometimes called *Manhattan Distance*).
#
# <font color="magenta">**For you to do:**</font> Write a function, **NN_L1**, which again takes as input the arrays `trainx`, `trainy`, and `testx`, and predicts labels for the test points using 1-nearest neighbor classification. For **NN_L1**, the L1 distance metric should be used. As before, the predicted labels should be returned in a `numpy` array with one entry per test point.
#
# Notice that **NN_L1** and **NN_L2** may well produce different predictions on the test set.
#
# <font style="color:blue"> **Code**</font>
# ```python
# # test function
# testy_L2 = NN_L2(trainx, trainy, testx)
# testy_L1 = NN_L1(trainx, trainy, testx)
#
# print( type( testy_L1) )
# print( len(testy_L1) )
# print( testy_L1[40:50] )
# print( all(testy_L1 == testy_L2) )
# ```
#
# <font style="color:magenta"> **Output**</font>
# ```
# <class 'numpy.ndarray'>
# 62
# [ 2. 2. 0. 0. 0. 0. 0. 0. 0. 0.]
# False
# ```
#
# +
def manh_dist(x,y):
return np.sum(np.absolute(x-y))
# 1-NN classifier
def NN_L1(trainx, trainy, testx):
# inputs: trainx, trainy, testx <-- as defined above
# output: an np.array of the predicted values for testy
predict = list()
def find_NN(x):
# Compute distances from x to every row in train_data
distances = [manh_dist(x,trainx[i,]) for i in range(len(trainy))]
# Get the index of the smallest distance
return np.argmin(distances)
for x_test in testx:
index = find_NN(x_test)
predict.append(trainy[index])
return np.array(predict)
# -
# Again, use the following cell to check your code.
# +
testy_L1 = NN_L1(trainx, trainy, testx)
testy_L2 = NN_L2(trainx, trainy, testx)
assert( type( testy_L1).__name__ == 'ndarray' )
assert( len(testy_L1) == 62 )
assert( not all(testy_L1 == testy_L2) )
assert( all(testy_L1[50:60]== [ 0., 2., 1., 0., 2., 0., 0., 0., 0., 0.]) )
assert( all( testy_L1[0:10] == [ 0., 0., 0., 0., 1., 0., 1., 0., 0., 1.]) )
# -
# # 4. Test errors and the confusion matrix
# Let's see if the L1 and L2 distance functions yield different error rates for nearest neighbor classification of the test data.
# +
def error_rate(testy, testy_fit):
return float(sum(testy!=testy_fit))/len(testy)
print("Error rate of NN_L1: ", error_rate(testy,testy_L1) )
print("Error rate of NN_L2: ", error_rate(testy,testy_L2) )
# -
# We will now look a bit more deeply into the specific types of errors made by nearest neighbor classification, by constructing the <font color="magenta">*confusion matrix*</font>.
#
# Since there are three labels, the confusion matrix is a 3x3 matrix whose rows correspond to the true label and whose columns correspond to the predicted label. For example, the entry at row DH, column SL, contains the number of test points whose correct label was DH but which were classified as SL.
#
# <img style="width:200px" src="confusion_matrix.png">
#
#
#
# Write a function, **confusion**, which takes as input the true labels for the test set (that is, `testy`) as well as the predicted labels and returns the confusion matrix. The confusion matrix should be a `np.array` of shape `(3,3)` .
# <font style="color:blue"> **Code**</font>
# ```python
# L2_neo = confusion(testy, testy_L2)
# print( type(L2_neo) )
# print( L2_neo.shape )
# print( L2_neo )
# ```
#
# <font style="color:magenta"> **Output**</font>
# ```
# <class 'numpy.ndarray'>
# (3, 3)
# [[ 17. 1. 2.]
# [ 10. 10. 0.]
# [ 0. 0. 22.]]
# ```
#
print(testy[61])
print(testy_L1)
# +
# Modify this cell 0 = ’NO’, 1 = ’DH’, 2 = ’SL’
def confusion(testy,testy_fit):
# inputs: the correct labels, the fitted NN labels
# output: a 3x3 np.array representing the confusion matrix as above
errMatrix = np.zeros((3, 3))
for i in range(len(testy)):
errMatrix[int(testy[i]), int(testy_fit[i])] += 1
return errMatrix
# -
# Now check your code by running the following cell.
# +
# Test Function
L1_neo = confusion(testy, testy_L1)
assert( type(L1_neo).__name__ == 'ndarray' )
assert( L1_neo.shape == (3,3) )
assert( np.all(L1_neo == [[ 16., 2., 2.],[ 10., 10., 0.],[ 0., 0., 22.]]) )
L2_neo = confusion(testy, testy_L2)
assert( np.all(L2_neo == [[ 17., 1., 2.],[ 10., 10., 0.],[ 0., 0., 22.]]) )
# -
# # 5. Some questions for you
#
# *Note down the answers to these, since you will need to enter them as part of this week's assignment.*
#
# * In the test set, which class (NO, DH, or SL) was **most frequently** misclassified by the L1-based nearest neighbor classifier?
# * In the test set, which class (NO, DH, or SL) was **never** misclassified by the L2-based nearest neighbor classifier?
# * On **how many** of the test points did the two classification schemes (based on L1 and L2 distance) yield *different* predictions?
#
L1_neo = confusion(testy, testy_L1)
print( type(L1_neo) )
print( L1_neo.shape )
print( L1_neo )
L2_neo = confusion(testy, testy_L2)
print( type(L2_neo) )
print( L2_neo.shape )
print( L2_neo )
x = np.sum(testy_L1 != testy_L2)
print("On how many of the test points did the two classification schemes (based on L1 and L2 distance) yield different predictions?")
print(x)
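# The first two questions can also be read off the confusion matrices programmatically: a class's misclassified count is its row sum minus its diagonal entry. A minimal sketch, using the matrix values asserted in the check cell above (0 = NO, 1 = DH, 2 = SL):

```python
import numpy as np

# Confusion matrices from the check cell above (rows = true, cols = predicted)
labels = ['NO', 'DH', 'SL']
L1_cm = np.array([[16., 2., 2.], [10., 10., 0.], [0., 0., 22.]])
L2_cm = np.array([[17., 1., 2.], [10., 10., 0.], [0., 0., 22.]])

# Misclassified count per class = row sum minus the diagonal entry
miscl_L1 = L1_cm.sum(axis=1) - np.diag(L1_cm)
miscl_L2 = L2_cm.sum(axis=1) - np.diag(L2_cm)

print('Most misclassified by L1:', labels[int(np.argmax(miscl_L1))])
print('Never misclassified by L2:', [l for l, m in zip(labels, miscl_L2) if m == 0])
```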
| DSE220X_2019/Week1/NN_spine.Injury/Nearest_neighbor_spine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear SVC
# +
# Libraries used in this code
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
# +
# Training data
# Features [Has antlers?; Neighs?; Has horseshoes?] (0 = No; 1 = Yes)
moose1 = [1, 0, 1]
moose2 = [1, 0, 0]
moose3 = [0, 0 , 0]
moose4 = [1, 1 , 0]
moose5 = [0, 0, 1]
horse1 = [0, 1, 1]
horse2 = [0, 1, 0]
horse3 = [1, 1, 0]
horse4 = [0, 0, 1]
horse5 = [0, 1, 1]
# Labels for the training data (0 = Moose; 1 = Horse)
train_x = [moose1, moose2, moose3, moose4, moose5, horse1, horse2, horse3, horse4, horse5]
train_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
# +
# Training
model = LinearSVC()
model.fit(train_x, train_y)
# +
# Test data
mistery1 = [1, 1, 1]
mistery2 = [0, 0, 1]
mistery3 = [1, 0, 1]
mistery4 = [0, 1, 1]
mistery5 = [0, 0, 0]
test_x = [mistery1, mistery2, mistery3, mistery4, mistery5]
test_y = [1, 0, 0, 1, 0]
predict = model.predict(test_x)
print(predict)
# +
# Scoring our test
scores = accuracy_score(test_y, predict)
scores
# -
# # CSV Files
# +
# Libraries to be used
import pandas as pd
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
# Creating the model object
model = LinearSVC()
# +
# Load the database
uri = "https://gist.githubusercontent.com/guilhermesilveira/2d2efa37d66b6c84a722ea627a897ced/raw/10968b997d885cbded1c92938c7a9912ba41c615/tracking.csv"
db = pd.read_csv(uri)
rename = {
'home': 'principal',
'how_it_works': 'como_funciona',
'contact': 'contatos',
'bought': 'compras'
}
db_alterated = db.rename(columns = rename)
db_alterated
db.describe()
# +
# Split the features X and labels Y so we can train
db_x = db[['home', 'how_it_works', 'contact']]
db_y = db['bought']
print(db_x.shape)
print(db_y.shape)
# +
# Train our model to predict whether a customer will buy something based on their site visits
train_x = db_x[:30]
train_y = db_y[:30]
test_x = db_x[30:]
test_y = db_y[30:]
model.fit(train_x, train_y)
# +
# Testing our model
predict = model.predict(test_x)
# +
# Measuring the model's result
accuracy = accuracy_score(test_y, predict) * 100
print('accuracy: %.2f' % (accuracy))
# -
# # Auto Split
# +
# Libraries for automatic train/test splitting
from sklearn.model_selection import train_test_split
# +
# How to split train_x/y and test_x/y automatically with train_test_split
# Correct order = train_x, test_x, train_y, test_y
train_x, test_x, train_y, test_y = train_test_split(db_x, db_y, test_size = 0.25,
random_state = 57, stratify = db_y)
# +
# Training the model
model.fit(train_x, train_y)
# +
# Testing our model
predict = model.predict(test_x)
# +
# Measuring our model's result
accuracy = accuracy_score(test_y, predict) * 100
print('Accuracy: %.2f' % accuracy)
# -
# # Dimensions
# +
# Libraries to be used
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import collections
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# Model objects
model = LinearSVC()
# +
# Opening our data frame
uri = 'https://gist.githubusercontent.com/guilhermesilveira/1b7d5475863c15f484ac495bd70975cf/raw/16aff7a0aee67e7c100a2a48b676a2d2d142f646/projects.csv'
df = pd.read_csv(uri)
# Modifying our data frame
change_infos = {
0: 1,
1: 0
}
df['finished'] = df.unfinished.map(change_infos)
# Separate test and training data
df_x = df[['expected_hours', 'price']]
df_y = df['finished']
# Analyzing the results graphically
sns.scatterplot (x = 'expected_hours', y = 'price', hue = 'finished', data = df)
sns.relplot (x = 'expected_hours', y = 'price', hue = 'finished', col = 'finished', data = df)
# +
# Split the data
train_x, test_x, train_y, test_y = train_test_split(df_x, df_y, test_size = 0.8, random_state = 57)
# Training our model
model.fit(train_x, train_y)
# Testing our model
predict = model.predict(test_x)
# Baseline
predict_guilherme = np.ones(1726)
# Measuring our model
accuracy = accuracy_score(test_y, predict) * 100
accuracy_guilherme = accuracy_score(test_y, predict_guilherme) * 100
print('Model accuracy was %.1f\n' % accuracy)
print("Guilherme's (baseline) accuracy was %.1f\n" % accuracy_guilherme)
# Counting 0's and 1's
print(collections.Counter(predict_guilherme))
print(collections.Counter(predict))
# +
# Decision Boundary
x_min = test_x.expected_hours.min()
x_max = test_x.expected_hours.max()
y_min = test_x.price.min()
y_max = test_x.price.max()
pixels = 100
axle_x = np.arange(x_min, x_max, (x_max - x_min) / pixels)
axle_y = np.arange(y_min, y_max, (y_max - y_min) / pixels)
xx, yy = np.meshgrid(axle_x, axle_y)
pontos = np.c_[xx.ravel(), yy.ravel()]
Z = model.predict(pontos)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha = 0.3)
plt.scatter(test_x.expected_hours, test_x.price, c = test_y, s = 1)
# +
# !!! THE SYSTEM ABOVE IS USING A LINEAR MODEL, I.E. IT CANNOT LEARN A
# CURVED DECISION BOUNDARY, AS IS NEEDED HERE
# -
# # Curvy Models
# +
# Libraries to be used
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import collections
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
# Model objects
model = SVC()
scaler = StandardScaler()
seed = 57
# Fix the random seed for reproducibility
np.random.seed(seed)
# +
# Opening our data frame
uri = 'https://gist.githubusercontent.com/guilhermesilveira/1b7d5475863c15f484ac495bd70975cf/raw/16aff7a0aee67e7c100a2a48b676a2d2d142f646/projects.csv'
df = pd.read_csv(uri)
# Modifying our data frame
change_infos = {
0: 1,
1: 0
}
df['finished'] = df.unfinished.map(change_infos)
# Separate test and training data
df_x = df[['expected_hours', 'price']]
df_y = df['finished']
# Analyzing the results graphically
#sns.scatterplot (x = 'expected_hours', y = 'price', hue = 'finished', data = df)
#sns.relplot (x = 'expected_hours', y = 'price', hue = 'finished', col = 'finished', data = df)
# +
# Let's split our training and test data
raw_train_x, raw_test_x, train_y, test_y = train_test_split(df_x, df_y, test_size = 0.8)
# Rescale our X
scaler.fit(raw_train_x)
train_x = scaler.transform(raw_train_x)
test_x = scaler.transform(raw_test_x)
# Train our model
model.fit(train_x, train_y)
# Test our model
predict = model.predict(test_x)
# Measure our model
predict_guilherme = np.ones(1726) # BaseLine
accuracy = accuracy_score(test_y, predict) * 100
accuracy_guilherme = accuracy_score(test_y, predict_guilherme) * 100
print('Model accuracy was %.1f' % accuracy, '%\n')
print("Guilherme's (baseline) accuracy was %.1f" % accuracy_guilherme, '%\n')
# Counting 0's and 1's
print(collections.Counter(predict_guilherme))
print(collections.Counter(predict))
# +
# Decision Boundary
data_x = test_x[:,0]
data_y = test_x[:,1]
x_min = data_x.min()
x_max = data_x.max()
y_min = data_y.min()
y_max = data_y.max()
pixels = 100
axle_x = np.arange(x_min, x_max, (x_max - x_min) / pixels)
axle_y = np.arange(y_min, y_max, (y_max - y_min) / pixels)
xx, yy = np.meshgrid(axle_x, axle_y)
pontos = np.c_[xx.ravel(), yy.ravel()]
Z = model.predict(pontos)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha = 0.3)
plt.scatter(data_x, data_y, c = test_y, s = 1)
# -
| Jupyter Lab/library's/sklearn_library/.ipynb_checkpoints/scikit-learn_library-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:metpy-workshop]
# language: python
# name: conda-env-metpy-workshop-py
# ---
# 
# # Introduction to MetPy Prerequisite Lessons
# ## Foundations in Data Access Activity Notebook
#
# <br>
# <p><b>How to use this Notebook:</b><br>
# This notebook pairs with the <i>Foundations in Data Access</i> lesson. <br>
# Follow along with the instructions presented in the lesson, then <br>
# return to this notebook when prompted. After an activity, you <br>
# will be prompted to return to the lesson to proceed. </p>
# ### Activity 0: Import required packages
# +
## CELL 0A
## INSTRUCTIONS: Run this cell
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# Here is where we import the TDSCatalog class from Siphon for obtaining our data
from siphon.catalog import TDSCatalog
# -
# ### Activity 1: Getting started with the THREDDS Data Server
# We can easily view a THREDDS Data Server (TDS) Catalog in a browser. For this activity, we'll examine Unidata's TDS catalog of case studies.
#
# <a href="https://thredds.ucar.edu/thredds/casestudies/catalog.html" target="blank">https://thredds.ucar.edu/thredds/casestudies/catalog.html</a>
# +
## CELL 1A
## INSTRUCTIONS: Open the TDS link above in a new tab in your browser.
## Then browse the folders to find catalog URL to:
## Hurricane Harvey GOES-16 imagery
## Mesoscale-1 extent
## Channel 02
## on August 26, 2017
# Paste the URL here as a string:
url = "https://thredds.ucar.edu/thredds/catalog/casestudies/harvey/goes16/Mesoscale-1/Channel02/20170826/catalog.html"
# Change the URL above to an xml document using Python's built-in str.replace method
xmlurl = url.replace(".html", ".xml")
print(xmlurl)
# -
# Now that we have located the catalog, it's time to create and examine the TDSCatalog object
# +
## CELL 1B
## INSTRUCTIONS: Run this cell
# Create the TDS Catalog object, satcat
satcat = TDSCatalog(xmlurl)
# The catalog itself isn't very useful to us.
# What `is` useful is the datasets property of the object.
# There are a LOT of items in the datasets property, so
# let's just examine the first 10.
satcat.datasets[0:10]
# -
# The `datasets` property of the `satcat` object shows us a list of the .nc4 files that contain the data we'll use.
# +
## CELL 1C
## INSTRUCTIONS: Determine how many total items are in satcat.datasets
# Type your code below:
#answer:
len(satcat.datasets)
# -
# We now have a list of all files available in the catalog, but the data are not yet pulled into memory for visualization or analysis. For this, we need to use the `remote_access()` method from Siphon.
# +
## CELL 1D
## INSTRUCTIONS: Run this cell
# We will arbitrarily choose the 1000th file in the list to explore
# In the next section, we will discuss the use of xarray here
satdata = satcat.datasets[1000].remote_access(use_xarray=True)
# Print the type of object that satdata is
type(satdata)
# -
# Now we have an xarray `Dataset` that we can work with. However, we have not yet pulled back the layers enough to expose a single array we can visualize or do analysis with. To do any further work, we'll need to parse not only the data, but the metadata as well. In the next section, we'll explore this type of multi-dimensional dataset.
# ## When the above activity is complete, save this notebook and return to the course tab
# ### Activity 2: Explore Multi-dimensional data structures
# #### xarray HTML formatted summary tool
# Xarray has an HTML-formatted interactive summary tool for examining datasets. Simply execute the variable name to create the summary.
# +
## CELL 2A
## INSTRUCTIONS: Run this cell to create a formatted exploration tool for the xarray dataset
satdata
# -
# We now see an interactive summary of the dimensions, coordinates, variables, attributes for the dataset. This information can help with plotting, analysis, and generally understanding the data you are working with. Answer the questions below given the information in the HTML formatted summary table above.
# +
## CELL 2B
## INSTRUCTIONS: Find the following information about the dataset:
# 1. The title, or full description of the dataset
# answer: Sectorized Cloud and Moisture Imagery for the TMESO region.
# 2. The name of the variable that contains the satellite imagery
# answer: Sectorized_CMI
# 3. The coordinate system the data were collected in
# answer: Lambert Conformal
# 4. The size of the array (# cells in x and y)
# answer: x=2184 y=2468
# 5. The metadata conventions the dataset uses
# answer: CF-1.6
# -
# <div class="admonition alert alert-info">
# <p class="admonition-title" style="font-weight:bold">More Info</p>
# You may see the CF (Climate and Forecasting) metadata conventions in many popular atmospheric datasets. These conventions provide standardized variable names and units and recommendations on metadata such as projection information and coordinate information. You can read more about CF conventions here: <a href="https://cfconventions.org/" target="blank">https://cfconventions.org/</a>
# </div>
# #### Get the data array
# There are several ways to extract the array containing the satellite imagery from the xarray dataset depending on your specific use case. The method we'll use in this example uses MetPy and the parse_cf() method.
# +
## CELL 2C
## INSTRUCTIONS: set `var` as the name of the data variable from number 2 above as a string
var = "Sectorized_CMI"
# import metpy
import metpy
# extract the data array from the xarray dataset
satarray = satdata.metpy.parse_cf(var)
type(satarray)
# -
# #### Plot on projected axes with Cartopy
# Now we have an array that we can do analysis with or plot. Let's now pull the projection information from the dataset and plot it on projected axes with Cartopy.
# +
## CELL 2D
## INSTRUCTIONS: Set the projection for the data array
# given the information in the satdata object
# Use the Cartopy documentation for syntax
# https://scitools.org.uk/cartopy/docs/latest/crs/projections.html
# Or refer to the Foundations in Cartopy lesson
# Set the projection of the data
proj = ccrs.LambertConformal()
# Plot the data
fig = plt.figure()
ax = fig.add_subplot(projection=proj)
ax.imshow(satarray,transform=proj)
# -
# ## When the above activity is complete, save this notebook and return to the course tab
| notebooks/solutions/dataaccess_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pytorch - Numpy Learning
# ## Torch
# <hr/>
#
# +
# libraries
import torch
import numpy as np
# -
# +
# numpy
zeros = np.zeros((4, 4))
print(zeros)
print()
ones = np.ones((4, 4))
print(ones)
print()
random = np.random.random((4, 4))
print(random)
# +
# Pytorch
zeros = torch.zeros(4, 4)
print(zeros)
print()
ones = torch.ones(4, 4)
print(ones)
print()
random = torch.rand(4, 4)
print(random)
# -
# # Play
tensor = torch.rand(4, 4)
tensor
tensor = torch.tensor([[3,5],[9,6]])
tensor
# +
# create an array
arrayB = np.array([[ 3 , 5 ], [9, 6]])
arrayB
# -
81 / (3 * 9)
54 / 6
30
# [3, 5]  transposed:  [3, 9]
# [9, 6]               [5, 6]
arrayB
# # Matrix multiplication - recreate in 3D graphic -
# +
#Matrix multiplication - recreate in 3D graphic -
arrayB @ arrayB.T
# +
# element-wise
arrayB * arrayB
# +
# Revisit numpy ladder challenge -> apply to torch -> watch Linear Regression
# +
torchA = torch.tensor([[1, 2,],[ 3, 4]])
torchB = torch.tensor([[11,12],[13, 14]])
tensorA = torch.tensor([[1, 2,],[ 3, 4]])
tensorB = torch.tensor([[11,12],[13, 14]])
# -
a = torch.tensor([3,5])
b = torch.tensor([9,6])
m = torch.tensor([7,5,8])
j = torch.tensor([3,9,6])
torchA
torchB
torchA @ torchB
3*4 + 1 *2
# [1, 2] [11, 12]    transposed second factor:  [11, 13]
# [3, 4] [13, 14]                               [12, 14]
# x1, y1   m1, j1
# x2, y2   m2, j2
torchB.shape
torchB.size()
torchB.dim()
a
b
# ## dot product = Scalar Product
print(m)
j
m @ j
# +
# [x1, y1] . [x2, y2] = (x1 * x2) + (y1 * y2)
3 * 9 + 5 * 6  # = 57
# -
m @ j  # dot product: [x1, y1, z1] . [x2, y2, z2]
"""
what does the dot product represent in light?
dot = a "scalar" not a vector
|a| × |b| × cos(θ) = |a| × cos(θ) × |b|
cos(90) = 0
OR
But what is |a|:
|a| = √(4^2 + 8^2 + 10^2)
hypotenuse
"""
# +
# DOT product ::
# a · b = |a| × |b| × cos(θ)
a
# a · b = ax × bx + ay × by
a @ b # dot product
# -
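# Following the |a| × |b| × cos(θ) identity in the notes above, the angle between two vectors can be recovered from their dot product — a quick NumPy sketch using the same values as `a` and `b`:

```python
import numpy as np

a = np.array([3, 5])
b = np.array([9, 6])

# a . b = |a| * |b| * cos(theta)  =>  theta = arccos(a.b / (|a| * |b|))
dot = a @ b  # 3*9 + 5*6 = 57
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta_deg = np.degrees(np.arccos(cos_theta))
print(dot, theta_deg)
```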
# ## cross product = Vector product
m * j
"""
a × b = |a| |b| sin(θ) n
cx = aybz − azby
cy = azbx − axbz
cz = axby − aybx
"""
# ---
arrayB
arrayC = np.array( [[1,1,1],[2,2,2]] )
arrayC
arrayD = arrayB @ arrayC
arrayD
# + jupyter={"outputs_hidden": true}
# Dot product
"""
[3, 5] [1 [2
[9, 6] 1 2
1], 2]
"""
# -
# +
# shape
# +
# shape and dimensions
num_dim = arrayB.ndim
num_dim
# -
shape = arrayB.shape
shape
arrayB
arrayB.reshape((2,2))
torchB.view((2,2))
# ## Determinant
# +
# -0-0-0-0-0-
# -
np.linalg.det(arrayB)
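# For a 2×2 matrix [[a, b], [c, d]] the determinant is a*d - b*c, and a nonzero determinant means the inverse exists — a small check against the same values as `arrayB`:

```python
import numpy as np

arrayB = np.array([[3, 5], [9, 6]])

# det([[a, b], [c, d]]) = a*d - b*c
det = np.linalg.det(arrayB)
print(det)  # 3*6 - 5*9 = -27

# Nonzero determinant => arrayB is invertible, and A @ inv(A) == I
inv = np.linalg.inv(arrayB)
print(np.allclose(arrayB @ inv, np.eye(2)))
```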
# ## Inverse and Moore-Penrose inverse
# +
# inverse
np.linalg.inv(arrayB)
# +
# Moore-Penrose inverse
np.linalg.pinv(arrayB)
# +
# Inverse and Moore-Penrose inverse
# Inverse
print(np.linalg.inv(arrayB))
print()
# Moore-Penrose inverse
np.linalg.pinv(arrayB)
# +
# Inverse
tensor.inverse()
# Moore Pensore inverse
tensor.pinverse()
# +
# .T -->> .t
# -
# # save to disk
# +
np.save('file.npy', arrayB)
print(np.load('file.npy'))
print()
torch.save(tensor, 'file')
torch.load('file')
# -
# Pushing Tensor to GPU
# +
# GPU - GPU - GPU
# put the tensor in GPU memory
gpu_tensor = tensor.cuda()  # or tensor.to('cuda')
# -
# ### It has been firmly established that my_tensor.detach().numpy() is the correct way to get a numpy array from a torch tensor.
# +
# example of ::
import torch
tensor1 = torch.tensor([1.0,2.0],requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.detach().numpy()
print(tensor1)
print(type(tensor1))
# -
# # hmmm
# <hr/>
z = torch.rand(6,6)
z
x = torch.rand(3, 6)
x
y = torch.rand(6, 3)
y
x @ y
y @ x
z @ z
x @ z
z @ y
y @ z
#
x * z
| pytorch/.ipynb_checkpoints/notes-np_torch_1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercises
#
# The first exercise was to read in the IR spectra data for toluene using the `np.loadtxt` function.
# First, we need to check the structure of the file, with the `!head` command.
# !head toluene_ir.csv
# It is a column-ordered, comma-separated values file.
# Therefore, we import this data with the following.
import numpy as np
wavenumber, transmittance = np.loadtxt('toluene_ir.csv', unpack=True, delimiter=',')
# We can then plot the data, using the information in the file header.
import matplotlib.pyplot as plt
plt.plot(wavenumber, transmittance)
plt.xlabel('Wavenumber/cm$^{-1}$')
plt.ylabel('Transmittance')
plt.show()
| CH40208/working_with_data/file_io_exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# dependencies
import tensorflow as tf
print(tf.__version__)
import numpy as np
from sklearn.model_selection import train_test_split
import time
import data_utils
import matplotlib.pyplot as plt
# read dataset
X, Y, en_word2idx, en_idx2word, en_vocab, de_word2idx, de_idx2word, de_vocab = data_utils.read_dataset('data.pkl')
# +
# inspect data
print ('Sentence in English - encoded:', X[0])
print ('Sentence in Hindi - encoded:', Y[0])
print ('Decoded:\n------------------------')
for i in range(len(X[0])):
    print (en_idx2word[X[0][i]],)
print ('\n')
for i in range(len(Y[0])):
    print (de_idx2word[Y[0][i]],)
# +
# data processing
# data padding
def data_padding(x, y, length = 15):
for i in range(len(x)):
x[i] = x[i] + (length - len(x[i])) * [en_word2idx['<pad>']]
y[i] = [de_word2idx['<go>']] + y[i] + [de_word2idx['<eos>']] + (length-len(y[i])) * [de_word2idx['<pad>']]
data_padding(X, Y)
# data splitting
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.1)
del X
del Y
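# To see what `data_padding` produces, here is the same logic on a toy pair of sentences (the index values 0 = `<pad>`, 1 = `<go>`, 2 = `<eos>` are made up for illustration):

```python
# Toy illustration of the padding scheme above
# (hypothetical indices: 0 = <pad>, 1 = <go>, 2 = <eos>)
length = 5
x = [7, 8]   # an encoder input sentence
y = [9]      # a decoder target sentence

x_padded = x + (length - len(x)) * [0]
y_padded = [1] + y + [2] + (length - len(y)) * [0]

print(x_padded)  # [7, 8, 0, 0, 0]
print(y_padded)  # [1, 9, 2, 0, 0, 0, 0]  (length + 2 tokens)
```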
# +
# build a model
input_seq_len = 15
output_seq_len = 17
en_vocab_size = len(en_vocab) + 2 # + <pad>, <ukn>
de_vocab_size = len(de_vocab) + 4 # + <pad>, <ukn>, <eos>, <go>
# placeholders
encoder_inputs = [tf.placeholder(dtype = tf.int32, shape = [None], name = 'encoder{}'.format(i)) for i in range(input_seq_len)]
decoder_inputs = [tf.placeholder(dtype = tf.int32, shape = [None], name = 'decoder{}'.format(i)) for i in range(output_seq_len)]
targets = [decoder_inputs[i+1] for i in range(output_seq_len-1)]
# add one more target
targets.append(tf.placeholder(dtype = tf.int32, shape = [None], name = 'last_target'))
target_weights = [tf.placeholder(dtype = tf.float32, shape = [None], name = 'target_w{}'.format(i)) for i in range(output_seq_len)]
# output projection
size = 512
w_t = tf.get_variable('proj_w', [de_vocab_size, size], tf.float32)
b = tf.get_variable('proj_b', [de_vocab_size], tf.float32)
w = tf.transpose(w_t)
output_projection = (w, b)
outputs, states = tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(
encoder_inputs,
decoder_inputs,
tf.contrib.rnn.BasicLSTMCell(size),
num_encoder_symbols = en_vocab_size,
num_decoder_symbols = de_vocab_size,
embedding_size = 100,
feed_previous = False,
output_projection = output_projection,
dtype = tf.float32)
# +
# define our loss function
# sampled softmax loss - returns: A batch_size 1-D tensor of per-example sampled softmax losses
def sampled_loss(labels, logits):
return tf.nn.sampled_softmax_loss(
weights = w_t,
biases = b,
labels = tf.reshape(labels, [-1, 1]),
inputs = logits,
num_sampled = 512,
num_classes = de_vocab_size)
# Weighted cross-entropy loss for a sequence of logits
loss = tf.contrib.legacy_seq2seq.sequence_loss(outputs, targets, target_weights, softmax_loss_function = sampled_loss)
# +
# let's define some helper functions
# simple softmax function
def softmax(x):
n = np.max(x)
e_x = np.exp(x - n)
return e_x / e_x.sum()
# feed data into placeholders
def feed_dict(x, y, batch_size = 64):
feed = {}
idxes = np.random.choice(len(x), size = batch_size, replace = False)
for i in range(input_seq_len):
feed[encoder_inputs[i].name] = np.array([x[j][i] for j in idxes], dtype = np.int32)
for i in range(output_seq_len):
feed[decoder_inputs[i].name] = np.array([y[j][i] for j in idxes], dtype = np.int32)
feed[targets[len(targets)-1].name] = np.full(shape = [batch_size], fill_value = de_word2idx['<pad>'], dtype = np.int32)
for i in range(output_seq_len-1):
batch_weights = np.ones(batch_size, dtype = np.float32)
target = feed[decoder_inputs[i+1].name]
for j in range(batch_size):
if target[j] == de_word2idx['<pad>']:
batch_weights[j] = 0.0
feed[target_weights[i].name] = batch_weights
feed[target_weights[output_seq_len-1].name] = np.zeros(batch_size, dtype = np.float32)
return feed
# decode output sequence
def decode_output(output_seq):
words = []
for i in range(output_seq_len):
smax = softmax(output_seq[i])
idx = np.argmax(smax)
words.append(de_idx2word[idx])
return words
# +
# ops and hyperparameters
learning_rate = 5e-3
batch_size = 64
steps = 1000
# ops for projecting outputs
outputs_proj = [tf.matmul(outputs[i], output_projection[0]) + output_projection[1] for i in range(output_seq_len)]
# training op
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(loss)
# init op
init = tf.global_variables_initializer()
# forward step
def forward_step(sess, feed):
output_sequences = sess.run(outputs_proj, feed_dict = feed)
return output_sequences
# training step
def backward_step(sess, feed):
sess.run(optimizer, feed_dict = feed)
# +
# let's train the model
# we will use this list to plot losses through steps
losses = []
# save a checkpoint so we can restore the model later
saver = tf.train.Saver()
print ('------------------TRAINING------------------')
with tf.Session() as sess:
sess.run(init)
t = time.time()
for step in range(steps):
feed = feed_dict(X_train, Y_train)
backward_step(sess, feed)
if step % 5 == 4 or step == 0:
loss_value = sess.run(loss, feed_dict = feed)
print ('step: {}, loss: {}'.format(step, loss_value))
losses.append(loss_value)
if step % 20 == 19:
saver.save(sess, 'checkpoints/', global_step=step)
print ('Checkpoint is saved')
print ('Training time for {} steps: {}s'.format(steps, time.time() - t))
# +
# plot losses
with plt.style.context('fivethirtyeight'):
    plt.plot(losses, linewidth = 1)
    plt.xlabel('Steps')
    plt.ylabel('Losses')
    plt.ylim((0, 12))
    plt.show()
# +
# let's test the model
with tf.Graph().as_default():
    # placeholders
    encoder_inputs = [tf.placeholder(dtype = tf.int32, shape = [None], name = 'encoder{}'.format(i)) for i in range(input_seq_len)]
    decoder_inputs = [tf.placeholder(dtype = tf.int32, shape = [None], name = 'decoder{}'.format(i)) for i in range(output_seq_len)]
    # output projection
    size = 512
    w_t = tf.get_variable('proj_w', [de_vocab_size, size], tf.float32)
    b = tf.get_variable('proj_b', [de_vocab_size], tf.float32)
    w = tf.transpose(w_t)
    output_projection = (w, b)
    # change the model so that output at time t can be fed as input at time t+1
    outputs, states = tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(
        encoder_inputs,
        decoder_inputs,
        tf.contrib.rnn.BasicLSTMCell(size),
        num_encoder_symbols = en_vocab_size,
        num_decoder_symbols = de_vocab_size,
        embedding_size = 100,
        feed_previous = True,  # <-----this is changed----->
        output_projection = output_projection,
        dtype = tf.float32)
    # ops for projecting outputs
    outputs_proj = [tf.matmul(outputs[i], output_projection[0]) + output_projection[1] for i in range(output_seq_len)]
    # let's translate these sentences
    en_sentences = ["What' s your name", 'My name is', 'What are you doing', 'I am reading a book',
                    'How are you', 'I am good', 'Do you speak English', 'What time is it', 'Hi', 'Goodbye', 'Yes', 'No', 'Ravi is great']
    # NOTE: the assignment below overrides the list above; keep only one of the two
    en_sentences = ["Teja is a good Girl", "Read something"]
    en_sentences_encoded = [[en_word2idx.get(word, 0) for word in en_sentence.split()] for en_sentence in en_sentences]
    # padding to fit encoder input
    for i in range(len(en_sentences_encoded)):
        en_sentences_encoded[i] += (15 - len(en_sentences_encoded[i])) * [en_word2idx['<pad>']]
    # restore all variables - use the last checkpoint saved
    saver = tf.train.Saver()
    path = tf.train.latest_checkpoint('checkpoints')
    with tf.Session() as sess:
        # restore
        saver.restore(sess, path)
        # feed data into placeholders
        feed = {}
        for i in range(input_seq_len):
            feed[encoder_inputs[i].name] = np.array([en_sentences_encoded[j][i] for j in range(len(en_sentences_encoded))], dtype = np.int32)
        feed[decoder_inputs[0].name] = np.array([de_word2idx['<go>']] * len(en_sentences_encoded), dtype = np.int32)
        # translate
        output_sequences = sess.run(outputs_proj, feed_dict = feed)
        # decode each sequence
        for i in range(len(en_sentences_encoded)):
            print('{}.\n--------------------------------'.format(i+1))
            output_seq = [output_sequences[j][i] for j in range(output_seq_len)]
            # decode output sequence
            words = decode_output(output_seq)
            print(en_sentences[i])
            for k in range(len(words)):  # fresh loop variable so the outer i is not shadowed
                if words[k] not in ['<eos>', '<pad>', '<go>']:
                    print(words[k], end=' ')
            print('\n--------------------------------')
# -
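# The fixed-length right-padding applied to the encoder inputs above can be sketched in isolation (the pad id `0` here is hypothetical):

```python
def pad_right(token_ids, seq_len, pad_id=0):
    # right-pad a token-id list up to seq_len with the pad id
    return token_ids + [pad_id] * (seq_len - len(token_ids))

print(pad_right([7, 3, 11], seq_len=6))  # [7, 3, 11, 0, 0, 0]
```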
# # This model can be improved with more training steps, a better dataset, or a better choice of hyperparameters
| neural_machine_translation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Looking into the `cov` matrix used to make the Metropolis Hastings jumps.
# gully
import h5py
import numpy as np
import emcee
# ### No covariance matrix used during sampling.
# +
f1 = h5py.File('mc_noCov.hdf5', mode='r')
d1 = f1['samples']
print("Acceptance fraction: {:.>10.1%}".format(float(d1.attrs['acceptance'])))
#f1.close()
# -
# On the high side...
# ### With covariance matrix during sampling:
# +
f2 = h5py.File('mc_Cov.hdf5', mode='r')
d2 = f2['samples']
print("Acceptance fraction: {:.>10.1%}".format(float(d2.attrs['acceptance'])))
#f2.close()
# -
# Looks better.
# ### How do the chains compare?
# <img src="walkers_noCov.png" width="500"> No covariance matrix used during sampling.
# <img src="walkers_Cov.png" width="500"> With covariance matrix used during sampling.
# The bottom one is better because it has a lower (but not too low) acceptance ratio.
# ### The path forward
# So in order to provide a guess for the transition probability matrix, we need to have already run the MCMC sampling and then performed the covariance analysis: `chain.py --files mc.hdf5 --cov`.
#
# So we have to sample twice: a preliminary run, then a production run. This strategy already jibes with our plan for tuning the spectral line outliers: first we run `SampleThetaPhi`, then we run `SamplePhiLines`.
# ### Why we should specify a transition probability matrix
# The main advantage of specifying a transition probability (cov) matrix is that it decreases the integrated autocorrelation time, $\tau_{int}$. Let's check that it did that, above.
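# For intuition: $\tau_{int} = 1 + 2\sum_k \rho_k$, where $\rho_k$ is the normalized autocorrelation at lag $k$; for an uncorrelated chain it should be close to 1. A sketch in plain numpy (not the windowed estimator `emcee` uses):

```python
import numpy as np

def tau_int_estimate(chain, window):
    # integrated autocorrelation time, summing lags 1..window
    x = chain - chain.mean()
    var = np.mean(x * x)
    rho = [np.mean(x[:-k] * x[k:]) / var for k in range(1, window + 1)]
    return 1.0 + 2.0 * np.sum(rho)

rng = np.random.RandomState(0)
white = rng.randn(20000)
print(tau_int_estimate(white, window=50))  # close to 1 for uncorrelated samples
```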
d1.shape
d1[()].shape  # h5py removed Dataset.value; d1[()] reads the full dataset
# +
n_disc = 50
chain = d1[n_disc:, 5]
print("{: <8} {: <8} {: <8}".format("window", "tau_int", "x6"))
for w in [10*2**n for n in range(6)]:
    tau_int = emcee.autocorr.integrated_time(chain, window=w)
    print("{:.>8} {:.>8.1f} {:.>8.1f}".format(w, tau_int, 6.0*tau_int))
# -
# It's hard to estimate $\tau_{int}$ from such a small window of data. We need more samples. But at any rate the acceptance fraction has gotten closer to 20-40%, so that seems like a good thing.
f1.close()
f2.close()
# # The end.
| demo3/notebooks/MCMC_step_size_04-Starfish_cov_transition_matrix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Preparation Basics
# ## Segment 1 - Filtering and selecting data
# +
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
# -
# ### Selecting and retrieving data
# You can write an index value in two forms.
# - Label index or
# - Integer index
series_obj = Series(np.arange(8), index=['row 1', 'row 2', 'row 3', 'row 4', 'row 5', 'row 6','row 7', 'row 8'])
series_obj
series_obj['row 7']
series_obj.iloc[[0, 7]]
np.random.seed(25)
DF_obj = DataFrame(np.random.rand(36).reshape((6,6)),
index=['row 1', 'row 2', 'row 3', 'row 4','row 5','row 6'],
columns=['column 1','column 2','column 3','column 4','column 5','column 6'])
DF_obj
DF_obj.loc[['row 2', 'row 5'], ['column 5', 'column 2']]
# ### Data slicing
# You can use slicing to select and return a slice of several values from a data set. Slicing uses index values so you can use the same square brackets when doing data slicing.
#
# How slicing differs, however, is that with slicing you pass in two index values that are separated by a colon. The index value on the left side of the colon should be the first value you want to select. On the right side of the colon, you write the index value for the last value you want to retrieve. When you execute the code, the indexer then simply finds the first record and the last record and returns every record in between them.
series_obj['row 3': 'row 7']
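# Note that label-based slices include *both* endpoints, unlike ordinary Python integer slicing, which excludes the stop position:

```python
import numpy as np
from pandas import Series

s = Series(np.arange(8), index=['row 1', 'row 2', 'row 3', 'row 4',
                                'row 5', 'row 6', 'row 7', 'row 8'])
sliced = s['row 3':'row 7']
print(len(sliced))      # 5 -- 'row 7' itself is included
print(sliced.iloc[-1])  # 6 -- the value stored at 'row 7'
```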
# ### Comparing with scalars
# Now we're going to talk about comparison operators and scalar values. Just in case you don't know what a scalar value is, it's basically just a single numerical value. You can use comparison operators like greater than or less than to return True/False values for all records, indicating how each element compares to a scalar value.
DF_obj < .2
# ### Filtering with scalars
series_obj[series_obj > 6]
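# Comparison masks can also be combined element-wise with `&` (and) and `|` (or); each condition needs its own parentheses because `&` binds more tightly than the comparison operators:

```python
import numpy as np
from pandas import Series

series_obj = Series(np.arange(8), index=['row 1', 'row 2', 'row 3', 'row 4',
                                         'row 5', 'row 6', 'row 7', 'row 8'])
filtered = series_obj[(series_obj > 2) & (series_obj < 6)]
print(filtered.tolist())  # [3, 4, 5]
```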
# ### Setting values with scalars
series_obj[['row 1', 'row 5', 'row 8']] = 8
series_obj
# Filtering and selecting using Pandas is one of the most fundamental things you'll do in data analysis. Make sure you know how to use indexing to select and retrieve records.
| 1 Data Preparation Basics/1 Filtering and selecting.ipynb |